Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration
Demos | Updates | Usage | Model Zoo | Install | Train | FAQ | Contribution
🔥 AnimeVideo-v3 model (anime video model). Please see [anime video models] and [comparisons]
🔥 RealESRGAN_x4plus_anime_6B for anime images (anime illustration model). Please see [anime_model]
Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.
Thanks for your valuable feedback and suggestions. All feedback is collected in feedback.md.
If Real-ESRGAN is helpful, please help to ⭐ this repo or recommend it to your friends.
Other recommended projects:
▶️ GFPGAN: A practical algorithm for real-world face restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection of useful face-related functions
▶️ HandyView: A PyQt5-based image viewer, handy for viewing and comparison
▶️ HandyFigure: Open source of paper figures
[Paper] | [YouTube Video] | [Bilibili explanation] | [Poster] | [PPT slides]
Xintao Wang, Liangbin Xie, Chao Dong, Ying Shan
Tencent ARC Lab; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Add the --outscale argument (it actually further resizes outputs with LANCZOS4). Add the RealESRGAN_x2plus.pth model.

Clone repo
git clone https://github.com/xinntao/Real-ESRGAN.git
cd Real-ESRGAN
Install dependent packages
# Install basicsr - https://github.com/xinntao/BasicSR
# We use BasicSR for both training and inference
pip install basicsr
# facexlib and gfpgan are for face enhancement
pip install facexlib
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
There are usually three ways to run inference with Real-ESRGAN.
You can download Windows / Linux / macOS executable files for Intel/AMD/NVIDIA GPUs.
This executable file is portable and includes all the binaries and models required. No CUDA or PyTorch environment is needed.
You can simply run the following command (the Windows example; more information is in the README.md of each executable file):
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n model_name
We have provided five models:
You can use the -n argument to switch models, for example: ./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png -n realesrnet-x4plus
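When driving the executable from a script, it is convenient to assemble the argument list once and hand it to a process runner. A minimal sketch, assuming a hypothetical helper name; the flag names (-i, -o, -n, -s, -t) mirror the usage text of realesrgan-ncnn-vulkan:

```python
def build_ncnn_cmd(input_path, output_path, model="realesr-animevideov3",
                   scale=4, tile=0, exe="./realesrgan-ncnn-vulkan"):
    """Assemble the argument list for the portable executable.

    The helper itself is hypothetical (not part of the release);
    only the flag names come from the tool's usage text.
    """
    return [exe,
            "-i", str(input_path),
            "-o", str(output_path),
            "-n", model,
            "-s", str(scale),
            "-t", str(tile)]

cmd = build_ncnn_cmd("input.jpg", "output.png", model="realesrnet-x4plus")
# pass `cmd` to subprocess.run(cmd, check=True) once the binary is downloaded
```

Keeping the command as a list (rather than a shell string) avoids quoting issues with paths that contain spaces.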
This executable file does not support the outscale option that is used in the python script inference_realesrgan.py.

Usage: realesrgan-ncnn-vulkan.exe -i infile -o outfile [options]...
-h show this help
-i input-path input image path (jpg/png/webp) or directory
-o output-path output image path (jpg/png/webp) or directory
-s scale upscale ratio (can be 2, 3, 4. default=4)
-t tile-size tile size (>=32/0=auto, default=0) can be 0,0,0 for multi-gpu
-m model-path folder path to the pre-trained models. default=models
-n model-name model name (default=realesr-animevideov3, can be realesr-animevideov3 | realesrgan-x4plus | realesrgan-x4plus-anime | realesrnet-x4plus)
-g gpu-id gpu device to use (default=auto) can be 0,1,2 for multi-gpu
-j load:proc:save thread count for load/proc/save (default=1:2:2) can be 1:2,2,2:2 for multi-gpu
-x enable tta mode
-f format output image format (jpg/png/webp, default=ext/png)
-v verbose output
Note that it may introduce block inconsistency (and also generate slightly different results from the PyTorch implementation), because this executable file first crops the input image into several tiles, processes them separately, and finally stitches them together.
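The tile-based processing described above can be sketched as pure index arithmetic: cover the image with fixed-size crop boxes that share a small overlap so seams can be blended. This is an illustration only; the tile size, overlap, and blending inside the actual executable are not documented here.

```python
import math

def tile_grid(width, height, tile=400, overlap=10):
    """Return (x0, y0, x1, y1) crop boxes covering an image.

    Adjacent boxes share `overlap` pixels so the stitched result can
    hide seams. The values 400 and 10 are illustrative assumptions,
    not the binary's actual internals.
    """
    boxes = []
    nx = math.ceil(width / tile)   # tiles per row
    ny = math.ceil(height / tile)  # tiles per column
    for iy in range(ny):
        for ix in range(nx):
            x0 = max(ix * tile - overlap, 0)
            y0 = max(iy * tile - overlap, 0)
            x1 = min((ix + 1) * tile + overlap, width)
            y1 = min((iy + 1) * tile + overlap, height)
            boxes.append((x0, y0, x1, y1))
    return boxes

boxes = tile_grid(1000, 600)  # 3 x 2 grid of overlapping crops
```

Because each tile is upscaled independently, pixels near a tile border see a different receptive field than they would in a whole-image pass, which is exactly why results can differ slightly from the PyTorch implementation.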
You can control the final upsampling scale with outscale. The program will further perform a cheap resize operation after the Real-ESRGAN output.

Usage: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile -o outfile [options]...
A common command: python inference_realesrgan.py -n RealESRGAN_x4plus -i infile --outscale 3.5 --face_enhance
-h show this help
-i --input Input image or folder. Default: inputs
-o --output Output folder. Default: results
-n --model_name Model name. Default: RealESRGAN_x4plus
-s, --outscale The final upsampling scale of the image. Default: 4
--suffix Suffix of the restored image. Default: out
-t, --tile Tile size, 0 for no tile during testing. Default: 0
--face_enhance Whether to use GFPGAN to enhance faces. Default: False
--fp32 Use fp32 precision during inference. Default: fp16 (half precision).
--ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
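The --outscale behavior described above can be made concrete with a little size arithmetic: the network always upscales by its native factor (4 for RealESRGAN_x4plus), and the result is then resized to the requested scale. The rounding mode below is an assumption for illustration:

```python
def final_size(width, height, outscale=4.0, net_scale=4):
    """Sketch of the two-stage sizing: the network output is always
    net_scale times the input; a cheap resize then brings it to the
    requested outscale. Rounding to nearest is an assumption."""
    up_w, up_h = width * net_scale, height * net_scale  # raw network output
    out_w = round(width * outscale)                     # after cheap resize
    out_h = round(height * outscale)
    return (up_w, up_h), (out_w, out_h)

net, out = final_size(640, 480, outscale=3.5)
# a 640x480 input: the 4x model produces 2560x1920, then the cheap
# resize shrinks it to 2240x1680 for --outscale 3.5
```

This is why fractional scales like 3.5 are cheap to support: the expensive super-resolution pass always runs at the model's fixed factor.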
Download pre-trained models: RealESRGAN_x4plus.pth
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P weights
Inference!
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs --face_enhance
Results are in the results folder.
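Given the defaults listed above (--output results, --suffix out, --ext auto keeping the input extension), each input maps to a predictable output path. A small sketch of that naming rule; the helper itself is illustrative, not part of the repo:

```python
from pathlib import Path

def output_path(input_file, output_dir="results", suffix="out"):
    """Map an input image to the script's default output name, e.g.
    inputs/0014.jpg -> results/0014_out.jpg. Mirrors the documented
    defaults; the exact join logic in the script is an assumption."""
    p = Path(input_file)
    return (Path(output_dir) / f"{p.stem}_{suffix}{p.suffix}").as_posix()

out = output_path("inputs/0014.jpg")
```

This makes it easy to check from a wrapper script whether an image has already been processed before re-running inference on a large folder.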
Pre-trained models: RealESRGAN_x4plus_anime_6B
More details and comparisons with waifu2x are in anime_model.md
# download model
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P weights
# inference
python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i inputs
Results are in the results folder.
@InProceedings{wang2021realesrgan,
author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
    year      = {2021}
}
If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.
If you develop/use Real-ESRGAN in your projects, welcome to let me know.
Thanks for all the contributors.