This repository has been archived by the owner on Oct 1, 2020. It is now read-only.

🔲 TODO

  • Test TV loss/regularization (needs to balance loss weight with other losses).
  • Test HFEN loss (needs to balance loss weight with other losses).
  • Test Partial Convolution based Padding (PartialConv2D).
  • Test PartialConv2D with random masks.
  • Add automatic model scale change (preserve conv layers, estimate upscale layers).
  • Add automatic loading of old models and new ESRGAN models.
  • Downscale images before and/or after inference. Helps clean up some noise or bring images back to the original scale.
  • Import GMFN's recurrent network and add the feature loss to their MSE model; it should give better MSE results with SRGAN's features/textures. (Needs testing.)

Done

  • :white_check_mark: Add on-the-fly augmentations (gaussian noise, blur, JPEG compression).
  • :white_check_mark: Add TV loss/regularization options. Useful for denoising tasks; reduces Total Variation.
  • :white_check_mark: Add HFEN loss. Useful for keeping high-frequency information. A Gaussian filter is used to reduce the effect of noise.
  • :white_check_mark: Add Partial Convolution based Padding (PartialConv2D). It should help prevent edge padding issues. Zero padding remains the default and typically has the best overall performance, while PartialConv2D performs better and converges faster for segmentation and classification (https://arxiv.org/pdf/1811.11718.pdf). The code has been added, but switching makes pretrained models that use Conv2D incompatible. Training new models for testing. (May be able to test inpainting and denoising.)
  • :white_check_mark: Added SSIM and MS-SSIM loss functions.
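The TV regularization listed above penalizes differences between neighboring pixels. A minimal standalone sketch of the idea (numpy here purely for illustration; the repository's actual loss terms are PyTorch modules whose weights must be balanced against the other losses):

```python
import numpy as np

def tv_loss(img: np.ndarray) -> float:
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent pixels.
    img is an (H, W) or (H, W, C) float array."""
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbors
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbors
    return float(dh + dw)

flat = np.ones((8, 8))  # constant image -> zero total variation
noisy = flat + np.random.default_rng(0).normal(0.0, 0.1, (8, 8))
print(tv_loss(flat))   # 0.0
print(tv_loss(noisy))  # > 0: noise raises the penalty, so minimizing it denoises
```

In training, this value is multiplied by a small weight and added to the main loss, which is why the TODO items above stress balancing it against the other terms.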

An image super-resolution toolkit flexible for development. It now provides:

  1. PSNR-oriented SR models (e.g., SRCNN, SRResNet, etc.). You can try different architectures: ResNet Block, ResNeXt Block, Dense Block, Residual Dense Block, Poly Block, Dual Path Block, Squeeze-and-Excitation Block, Residual-in-Residual Dense Block, etc.
  2. Enhanced SRGAN model (it can also train the SRGAN model). Enhanced SRGAN achieves consistently better visual quality, with more realistic and natural textures, than SRGAN, and won first place in the PIRM2018-SR Challenge. For more details, please refer to the paper and the ESRGAN repo. (If you just want to test the model, the ESRGAN repo provides simpler testing codes.)
  3. SFTGAN model. It adopts Spatial Feature Transform (SFT) to effectively incorporate other conditions/priors, such as a semantic prior for image SR represented by segmentation probability maps. For more details, please refer to the paper and the SFTGAN repo.

Table of Contents

  1. Dependencies
  2. Codes
  3. Usage
  4. Datasets
  5. Pretrained models

Dependencies

Optional Dependencies

Codes

We provide a detailed explanation of the code framework in ./codes.

We also provide:

  1. Some useful scripts. More details in ./codes/scripts.
  2. Evaluation codes, e.g., PSNR/SSIM metric.
  3. Wiki pages, e.g., how to make high-quality GIFs with full (true) color, Matlab bicubic imresize, etc.

Usage

Data and model preparation

The common SR datasets can be found in Datasets. Detailed data preparation can be seen in codes/data.

We provide pretrained models in Pretrained models.

How to Test

Test ESRGAN (SRGAN) models

  1. Modify the configuration file options/test/test_esrgan.json
  2. Run command: python test.py -opt options/test/test_esrgan.json

Test SR models

  1. Modify the configuration file options/test/test_sr.json
  2. Run command: python test.py -opt options/test/test_sr.json

Test SPSR models

  1. Modify the configuration file options/test/test_spsr.json
  2. Run command: python test.py -opt options/test/test_spsr.json

Test SFTGAN models

  1. Obtain the segmentation probability maps: python test_seg.py
  2. Run command: python test_sftgan.py

How to Train

Train ESRGAN (SRGAN) models

We use a PSNR-oriented pretrained SR model to initialize the parameters for better quality. According to the authors' paper and some testing, this also stabilizes the GAN training and allows for faster convergence.

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data and (Faster IO speed).
  2. Optional: if the intention is to replicate the original paper, prepare a PSNR-oriented pretrained model here. You can use the original RRDB_PSNR_x4.pth as the pretrained model for that purpose; otherwise, any existing model will work as a pretrained model.
  3. Modify the configuration file options/train/train_esrgan.json
  4. Run command: python train.py -opt options/train/train_esrgan.json

Train SR models

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data.
  2. Modify the configuration file options/train/train_sr.json
  3. Run command: python train.py -opt options/train/train_sr.json

Train SPSR models

  1. Prepare datasets, usually the DIV2K dataset. More details are in codes/data.
  2. Modify the configuration file options/train/train_spsr.json
  3. Run command: python train.py -opt options/train/train_spsr.json

Train SFTGAN models

Pretraining is also important. We use a PSNR-oriented pretrained SR model (trained on DIV2K) to initialize the SFTGAN model.

  1. First prepare the segmentation probability maps for training data: run test_seg.py. We provide a pretrained segmentation model for 7 outdoor categories in Pretrained models. We use Xiaoxiao Li's codes to train our segmentation model and transfer it to a PyTorch model.
  2. Put the images and segmentation probability maps in a folder as described in codes/data.
  3. Transfer the pretrained model parameters to the SFTGAN model.
    1. First train with debug mode and obtain a saved model.
    2. Run transfer_params_sft.py to initialize the model.
    3. We provide an initialized model named sft_net_ini.pth in Pretrained models
  4. Modify the configuration file in options/train/train_sftgan.json
  5. Run command: python train.py -opt options/train/train_sftgan.json
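The parameter-transfer step above (step 3) can be sketched as a name-and-shape match between two parameter dictionaries. This is a simplified, illustrative stand-in for what transfer_params_sft.py does; the key names below are hypothetical, not the real layer names:

```python
import numpy as np

def transfer_matching_params(src: dict, dst: dict) -> dict:
    """Copy parameters from src into dst wherever the parameter name
    exists in both models and the shapes agree; SFT-specific layers
    that have no counterpart in the pretrained model are left as-is."""
    out = dict(dst)
    for name, value in src.items():
        if name in dst and dst[name].shape == value.shape:
            out[name] = value.copy()
    return out

# Hypothetical layer names for illustration only:
pretrained = {"conv1.weight": np.ones((4, 3)), "head.weight": np.zeros((2, 2))}
sft_init = {"conv1.weight": np.zeros((4, 3)), "sft.scale": np.ones((4,))}
merged = transfer_matching_params(pretrained, sft_init)
print(merged["conv1.weight"].sum())  # 12.0 -> copied from the pretrained model
print(merged["sft.scale"].sum())     # 4.0  -> untouched SFT-specific layer
```

In the real scripts the dictionaries are PyTorch state_dicts, but the matching logic follows the same shape.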

Resuming Training

When resuming training, just pass an option named resume_state, like "resume_state": "../experiments/debug_001_RRDB_PSNR_x4_DIV2K/training_state/200.state".

Datasets

Several common SR datasets are listed below.

| Name | Datasets | Short Description | Google Drive | Baidu Drive |
| --- | --- | --- | --- | --- |
| Classical SR Training | T91 | 91 images for training | Google Drive | Baidu Drive |
| | BSDS200 | A subset (train) of BSD500 for training | | |
| | General100 | 100 images for training | | |
| Classical SR Testing | Set5 | Set5 test dataset | | |
| | Set14 | Set14 test dataset | | |
| | BSDS100 | A subset (test) of BSD500 for testing | | |
| | urban100 | 100 building images for testing (regular structures) | | |
| | manga109 | 109 images of Japanese manga for testing | | |
| | historical | 10 gray LR images without the ground-truth | | |
| 2K Resolution | DIV2K | proposed in NTIRE17 (800 train and 100 validation) | Google Drive | Baidu Drive |
| | Flickr2K | 2650 2K images from Flickr for training | | |
| | DF2K | A merged training dataset of DIV2K and Flickr2K | | |
| OST (Outdoor Scenes) | OST Training | images of 7 categories with rich textures | Google Drive | Baidu Drive |
| | OST300 | 300 test images of outdoor scenes | | |
| PIRM | PIRM | PIRM self-val, val, test datasets | Google Drive | Baidu Drive |

Any dataset can be augmented to expose the model to information that might not be available in the images, such as noise and blur. For this reason, Data Augmentation has been added to the options in this repository, and it can be extended to include other types of augmentations.
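A minimal sketch of such on-the-fly augmentation, assuming a grayscale float image in [0, 1] (numpy for illustration only; the repository applies its augmentations through the dataloader options, and JPEG compression is omitted here since it needs an image codec):

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Toy augmentation pipeline: additive gaussian noise followed by
    a 3x3 box blur, clipped back to the valid [0, 1] range."""
    noisy = img + rng.normal(0.0, 0.05, img.shape)  # gaussian noise
    kernel = np.ones((3, 3)) / 9.0                  # 3x3 box-blur kernel
    padded = np.pad(noisy, 1, mode="edge")          # replicate border pixels
    blurred = np.empty_like(noisy)
    h, w = noisy.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return np.clip(blurred, 0.0, 1.0)

rng = np.random.default_rng(0)
lr = augment(np.full((16, 16), 0.5), rng)
print(lr.shape)  # (16, 16)
```

Applying a fresh random augmentation each time a patch is loaded means the model never sees exactly the same degraded input twice.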

Pretrained models

The most recent community pretrained models can be found in the Wiki.

You can put the downloaded models in the default experiments/pretrained_models folder.

Models that were trained using the same pretrained model, or are derivatives of the same pretrained model, can be interpolated to combine the properties of both. The original author demonstrated this by interpolating the PSNR pretrained model (which is not perceptually good, but produces smooth images) with the resulting ESRGAN models (which have more detail, but sometimes excessively so) to control the balance in the resulting images. This gives much better results than interpolating the output images of the two models.

The authors continued exploring the capabilities of linearly interpolating models in their later work "DNI" (CVPR19): Deep Network Interpolation for Continuous Imagery Effect Transition, with very interesting results and examples. The script for interpolation can be found in the net_interp.py file; a new version with more options will be committed at a later time. This is an alternative way to create new models without additional training, and also to create pretrained models for easier fine-tuning.
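Network interpolation itself is a per-parameter linear blend of two models that share an architecture and pretrained origin. A minimal sketch of the idea (plain dicts of arrays stand in for PyTorch state_dicts; this is not the actual net_interp.py code):

```python
import numpy as np

def interpolate_models(net_a: dict, net_b: dict, alpha: float) -> dict:
    """Deep Network Interpolation: blend every parameter linearly.
    alpha=0 returns net_a, alpha=1 returns net_b, values in between
    trade off the properties of the two models."""
    assert net_a.keys() == net_b.keys(), "architectures must match"
    return {k: (1 - alpha) * net_a[k] + alpha * net_b[k] for k in net_a}

# Illustrative single-layer "models":
psnr_net = {"conv.weight": np.zeros((3, 3))}    # smooth, PSNR-oriented
esrgan_net = {"conv.weight": np.ones((3, 3))}   # detailed, perceptual
blended = interpolate_models(psnr_net, esrgan_net, alpha=0.8)
print(blended["conv.weight"][0, 0])  # 0.8 -> 80% ESRGAN, 20% PSNR
```

Sweeping alpha produces a continuous transition between the smooth and the detailed model, which is exactly the "continuous imagery effect transition" the DNI paper describes.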

More details and explanations of interpolation can be found here in the Wiki.

Following are the original pretrained models that the authors made available for ESRGAN, SPSR, and SFTGAN:

| Name | Models | Short Description | Google Drive | Baidu Drive |
| --- | --- | --- | --- | --- |
| ESRGAN | RRDB_ESRGAN_x4.pth | final ESRGAN model we used in our paper | Google Drive | Baidu Drive |
| | RRDB_PSNR_x4.pth | model with high PSNR performance | | |
| SPSR | spsr.pth | Structure-Preserving model | Google Drive | Baidu Drive (code muw3) |
| SFTGAN | segmentation_OST_bic.pth | segmentation model | Google Drive | Baidu Drive |
| | sft_net_ini.pth | sft_net for initialization | | |
| | sft_net_torch.pth | SFTGAN Torch version (paper) | | |
| | SFTGAN_bicx4_noBN_OST_bg.pth | SFTGAN PyTorch version | | |
| SRGAN\*1 | SRGAN_bicx4_303_505.pth | SRGAN (with modification) | Google Drive | Baidu Drive |
| SRResNet\*2 | SRResNet_bicx4_in3nf64nb16.pth | SRResNet (with modification) | Google Drive | Baidu Drive |

For more details about the original pretrained models, please see experiments/pretrained_models.


Additional Help

If you have any questions, you can ask them in our Discord server, and the Wiki has more information.


Acknowledgement

  • Code architecture is inspired by pytorch-cyclegan.
  • Thanks to Wai Ho Kwok, who contributed to the initial version.

BibTex

@InProceedings{wang2018esrgan,
    author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
    title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
    booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
    month = {September},
    year = {2018}
}
@InProceedings{wang2018sftgan,
    author = {Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change},
    title = {Recovering realistic texture in image super-resolution by deep spatial feature transform},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2018}
}
@InProceedings{ma2020structure,
    author={Ma, Cheng and Rao, Yongming and Cheng, Yean and Chen, Ce and Lu, Jiwen and Zhou, Jie},
    title={Structure-Preserving Super Resolution with Gradient Guidance},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {March},
    year={2020}
}