
GANSpace: Discovering Interpretable GAN Controls

Python 3.7 | TensorFlow 1.15

(Figure: sample GANSpace edits on StyleGAN2 Cars)

This repository is a reproduction of GANSpace: Discovering Interpretable GAN Controls in TensorFlow 1.x, created as part of the ML Reproducibility Challenge 2021 (Spring Edition). The original implementation of the paper uses PyTorch. The accompanying reproducibility report and a summary of the results can be found in the wiki.

Requirements

The code requires a Windows or Linux machine with one NVIDIA GPU, compatible NVIDIA drivers, the CUDA 10.0 toolkit, and cuDNN 7.5. For installation instructions, please refer to the official NVIDIA guides:

  1. CUDA: Windows, Linux
  2. cuDNN

We use the official NVlabs implementation of the StyleGAN/StyleGAN2 models, which compiles custom CUDA kernels on the fly. Please ensure that NVCC is installed and on the system PATH.
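
A quick way to sanity-check the toolchain before running anything heavy is a short script along these lines (a minimal sketch, not part of the repository; it only checks that nvcc is discoverable and that TensorFlow 1.15 can see a CUDA GPU):

```python
# sanity_check.py -- minimal environment check (illustrative sketch)
import shutil

import tensorflow as tf  # TensorFlow 1.15

# NVCC must be on PATH so the StyleGAN/StyleGAN2 custom ops can compile.
nvcc = shutil.which("nvcc")
print("nvcc:", nvcc or "NOT FOUND -- add the CUDA 10.0 bin directory to PATH")

# TensorFlow should report at least one CUDA GPU for the models to run.
print("GPU available:", tf.test.is_gpu_available(cuda_only=True))
```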

All the code in this repository was successfully tested on two machines running Windows 10 and Ubuntu 18.04 LTS.

Setup

  1. (Optional but recommended) Create a virtual environment with conda: conda create -n re-ganspace python=3.7
  2. (Optional but recommended) Activate the virtual environment: conda activate re-ganspace
  3. Clone the repository: git clone https://dagshub.com/midsterx/Re-GANSpace
  4. Install the required packages using pip: pip install -r requirements.txt
  5. (Optional but recommended) The code automatically downloads the required StyleGAN/StyleGAN2 models and caches them locally. Alternatively, you can fetch them ahead of time with DVC: dvc pull -r origin (a sketch of the caching pattern follows this list)
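
The caching behaviour mentioned in step 5 follows a common download-once pattern. The helper below is a hypothetical sketch of that pattern; the cache directory and function names are placeholders, not the repository's actual values:

```python
# Hypothetical sketch of a download-and-cache pattern for model pickles.
# CACHE_DIR and fetch_model are placeholders, not the repo's actual names.
import os
import urllib.request

CACHE_DIR = os.path.abspath(os.path.expanduser("~/.re-ganspace/cache"))

def fetch_model(name, url):
    """Download a pretrained model pickle once and reuse it afterwards."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):  # hit the network only on a cache miss
        urllib.request.urlretrieve(url, path)
    return path
```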

Usage

Reproducibility

Open In Colab

We successfully reproduce and verify all the StyleGAN/StyleGAN2 experiments from the original paper. The figures folder contains 5 Python scripts corresponding to the StyleGAN/StyleGAN2 figures in the original paper.

To generate the figures, run the following Python scripts:

  • Figure 1 (figure_1.py): Demonstration of applying a sequence of hand-tuned edits discovered using GANSpace on StyleGAN2 trained on the FFHQ and Cars datasets.
  • Figure 3 (figure_3.py): Illustration of the effect of variations along the principal components in the intermediate latent space of StyleGAN2 FFHQ.
  • Figure 4 (figure_4.py): Illustration of the significance of the principal components as compared to random directions in the intermediate latent space of StyleGAN2 Cats.
  • Figure 5 (figure_5.py): Illustration of the efficacy of GANSpace as compared to other supervised learning techniques in identifying edit directions.
  • Figure 7 (figure_7.py): Selection of interpretable edits discovered by selective application of latent edits across the layers of several pretrained GAN models.

The generated figures can be found in the results folder.
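
To regenerate all five figures in one pass, a small driver along these lines should work (a sketch, assuming the scripts live in the figures folder and are invoked from the repository root):

```python
# run_all_figures.py -- sketch of a batch driver for the five figure scripts
import pathlib
import subprocess
import sys

for script in sorted(pathlib.Path("figures").glob("figure_*.py")):
    print("Running", script)
    subprocess.run([sys.executable, str(script)], check=True)
```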

Alternatively, you can run the experiments on Google Colab by opening the Python notebook linked in this section. Please ensure that you are using the GPU runtime.

Additional Experiments and Playground

Open In Colab

In addition to reproducing the authors' results, we ran our own experiments and identified interesting results. We briefly summarize them here:

  • New edits: We identify a new edit on the StyleGAN2 Beetles dataset: it adds a pattern to the beetle's shell, and the generated pattern varies with the seed used.
  • Truncation psi on StyleGAN: The original authors apply the "truncation trick" to images generated with StyleGAN2 to improve their quality, but do not enable it for StyleGAN images. In our experiments, we found that enabling truncation while applying edits to StyleGAN images improved their quality as well (see the sketch after this list).
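
For reference, the truncation trick linearly interpolates each latent toward the model's average latent. A minimal NumPy sketch of the standard formulation (not the repository's exact code):

```python
import numpy as np

def truncate(w, w_avg, psi=0.7):
    """Truncation trick: pull latents toward the mean latent.

    psi=1.0 leaves w unchanged; smaller psi trades diversity for quality.
    """
    return w_avg + psi * (w - w_avg)

# Example: truncate a batch of 512-dim intermediate latents.
w = np.random.randn(8, 512).astype(np.float32)
w_avg = np.zeros(512, dtype=np.float32)  # placeholder; use the model's dlatent average
w_trunc = truncate(w, w_avg, psi=0.7)
```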

The results from our experiments can be found in the results/custom folder.

We also provide a playground with an interactive UI where you can explore various edits using GANSpace on pretrained StyleGAN/StyleGAN2 models.
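
At its core, a GANSpace edit is simple: fit PCA to a large sample of intermediate latents, then move a latent along one of the resulting principal directions. The sketch below illustrates the idea with scikit-learn; mapper is a hypothetical stand-in for the generator's mapping network, and none of the names are taken from this repository:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the mapping network z -> w; replace with the real model's mapper.
def mapper(z):
    return z  # placeholder: identity instead of StyleGAN's MLP mapping network

# 1. Sample many latents and map them to the intermediate latent space.
z = np.random.randn(10000, 512).astype(np.float32)
w = mapper(z)

# 2. PCA yields interpretable directions v_1..v_k and the mean latent.
pca = PCA(n_components=64).fit(w)
components = pca.components_

# 3. An edit moves a latent along direction k by sigma standard deviations.
def edit(w_single, k, sigma):
    return w_single + sigma * np.sqrt(pca.explained_variance_[k]) * components[k]

w_edited = edit(w[0], k=0, sigma=2.0)
```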

To run our custom experiments and use the playground, open the Python notebook linked in this section on Google Colab. Please ensure that you are using the GPU runtime.

Note

  1. If you encounter OSError: Google Drive quota exceeded errors while running the experiments, please download the pretrained models using DVC as described in the Setup section.
  2. BigGAN512-deep results from the original paper were not reproduced.

ML Reproducibility Challenge, Spring 2021

The paper has been accepted to the ReScience journal, and the OpenReview review can be found here. Paper coming soon!

Reference

@inproceedings{härkönen2020ganspace,
  title     = {GANSpace: Discovering Interpretable GAN Controls},
  author    = {Erik Härkönen and Aaron Hertzmann and Jaakko Lehtinen and Sylvain Paris},
  booktitle = {Proc. NeurIPS},
  year      = {2020}
}