# StyleGAN V2
## StyleGAN V2 introduction
StyleGAN V2 performs image generation: given a latent vector of a fixed length, it generates the corresponding image. It is an upgraded version of StyleGAN that removes the characteristic artifacts produced by the original model.
StyleGAN V2 can mix style vectors at multiple levels; its core idea is adaptive style decoupling.
Compared with StyleGAN, its main improvements are:
- Significantly better image quality (lower FID, fewer artifacts)
- A new training scheme that replaces progressive growing, producing finer details such as teeth and eyes
- Improved style mixing
- Smoother interpolation
- Faster training
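The key architectural change behind the artifact fix is replacing instance normalization (AdaIN) with weight modulation and demodulation inside the generator's convolutions. The sketch below illustrates that idea with NumPy; the function name and shapes are illustrative and do not mirror PaddleGAN's implementation.
```
import numpy as np

def modulated_conv_weights(weight, style, eps=1e-8):
    """StyleGAN2-style weight modulation + demodulation (illustrative only).

    weight: (out_ch, in_ch, k, k) convolution kernel
    style:  (in_ch,) per-input-channel scale derived from the style vector
    Scaling the kernel by the style and then renormalizing each output
    channel removes the per-feature-map statistics that caused the
    droplet artifacts in the original StyleGAN.
    """
    w = weight * style[None, :, None, None]                     # modulate
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)   # per-output-channel norm
    return w * demod[:, None, None, None]                       # demodulate

kernel = np.random.randn(64, 32, 3, 3).astype("float32")
style = np.random.rand(32).astype("float32") + 0.5
print(modulated_conv_weights(kernel, style).shape)  # (64, 32, 3, 3)
```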
## How to use
### Generate
You can generate different results by changing the seed value or removing the seed. Use the following command to generate images:
```
cd applications/
python -u tools/styleganv2.py \
--output_path \
--weight_path \
--model_type ffhq-config-f \
--seed 233 \
--size 1024 \
--style_dim 512 \
--n_mlp 8 \
--channel_multiplier 2 \
--n_row 3 \
--n_col 5 \
--cpu
```
**params:**
- output_path: the directory where the generated images are stored
- weight_path: pretrained model path
- model_type: built-in model type in PaddleGAN. If you use an existing model type, `weight_path` will have no effect.
Currently available: `ffhq-config-f`, `animeface-512`
- seed: random number seed
- size: model parameter, the output image resolution
- style_dim: model parameter, the dimension of the style vector z
- n_mlp: model parameter, the number of layers in the multi-layer perceptron that maps z to the style space
- channel_multiplier: model parameter, channel multiplier that affects the model size and the quality of the generated images
- n_row: the number of rows in the grid of sampled images
- n_col: the number of columns in the grid of sampled images
- cpu: use CPU for inference; remove this flag to run on GPU
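Conceptually, the script draws `n_row * n_col` latent vectors of length `style_dim` from a seeded Gaussian and decodes each one into an image of the grid. The snippet below sketches just that sampling step to show what `seed`, `style_dim`, `n_row` and `n_col` control; it is not the exact code inside `tools/styleganv2.py`.
```
import numpy as np

def sample_latents(seed, n_row, n_col, style_dim=512):
    """Draw a reproducible batch of latent vectors z ~ N(0, I).

    Fixing `seed` makes the grid reproducible; changing or omitting it
    produces different images. Only a sketch of what --seed, --n_row,
    --n_col and --style_dim control.
    """
    rng = np.random.RandomState(seed)
    return rng.randn(n_row * n_col, style_dim).astype("float32")

z = sample_latents(seed=233, n_row=3, n_col=5)
print(z.shape)  # (15, 512) -> one 512-d latent per image in the 3x5 grid
```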
### Train
#### Prepare datasets
You can get the FFHQ dataset from [here](https://drive.google.com/drive/folders/1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP).
For convenience, we provide [images256x256.tar](https://paddlegan.bj.bcebos.com/datasets/images256x256.tar).
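If you prefer a scripted download, something along these lines works with the standard library; the assumption that the archive unpacks directly into an `images256x256` folder under `data/ffhq` should be checked against the tree below.
```
import tarfile
import urllib.request
from pathlib import Path

URL = "https://paddlegan.bj.bcebos.com/datasets/images256x256.tar"
dest = Path("data/ffhq")
dest.mkdir(parents=True, exist_ok=True)

archive = dest / "images256x256.tar"
urllib.request.urlretrieve(URL, archive)   # download the provided tarball
with tarfile.open(archive) as tar:
    tar.extractall(dest)                   # expected result: data/ffhq/images256x256/*.png
```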
The expected directory structure for the StyleGAN data is as follows:
```
PaddleGAN
├── data
    ├── ffhq
        ├── images1024x1024
            ├── 00000.png
            ├── 00001.png
            ├── 00002.png
            ├── 00003.png
            ├── 00004.png
        ├── images256x256
            ├── 00000.png
            ├── 00001.png
            ├── 00002.png
            ├── 00003.png
            ├── 00004.png
        ├── custom_data
            ├── img0.png
            ├── img1.png
            ├── img2.png
            ├── img3.png
            ├── img4.png
            ...
```
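Before starting a run it can help to confirm that the images are where the config expects them. A small sanity check along these lines (the path matches the tree above; the 256x256 resolution requirement is an assumption based on the config name):
```
from pathlib import Path
from PIL import Image

root = Path("data/ffhq/images256x256")
files = sorted(root.glob("*.png"))
print(f"{len(files)} images found in {root}")
if files:
    w, h = Image.open(files[0]).size
    assert (w, h) == (256, 256), f"unexpected resolution {w}x{h}"
```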
#### Train the model
```
python tools/main.py -c configs/stylegan_v2_256_ffhq.yaml
```
### Inference
When you finish training, you need to use ``tools/extract_weight.py`` to extract the corresponding weights.
```
python tools/extract_weight.py output_dir/YOUR_TRAINED_WEIGHT.pdparams --net-name gen_ema --output YOUR_WEIGHT_PATH.pdparams
```
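Conceptually, the extraction step just pulls the EMA generator weights out of the full training checkpoint and saves them on their own, which is the format `styleganv2.py` expects. Roughly (a sketch, assuming the checkpoint stores the generator under a `gen_ema` entry, which is what `--net-name gen_ema` selects):
```
import paddle

ckpt = paddle.load("output_dir/YOUR_TRAINED_WEIGHT.pdparams")  # full training checkpoint
gen_ema = ckpt["gen_ema"]                                      # EMA generator weights only (assumed key)
paddle.save(gen_ema, "YOUR_WEIGHT_PATH.pdparams")              # standalone weights for styleganv2.py
```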
Then use ``applications/tools/styleganv2.py`` to generate results:
```
python tools/styleganv2.py --output_path stylegan01 --weight_path YOUR_WEIGHT_PATH.pdparams --size 256
```
Note: ``--size`` should match the output resolution in your config file.
## Results
Random Samples:
![Samples](../../imgs/stylegan2-sample.png)
Random Style Mixing:
![Random Style Mixing](../../imgs/stylegan2-sample-mixing-0.png)
## Reference
```
@inproceedings{Karras2019stylegan2,
  title     = {Analyzing and Improving the Image Quality of {StyleGAN}},
  author    = {Tero Karras and Samuli Laine and Miika Aittala and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  booktitle = {Proc. CVPR},
  year      = {2020}
}
```