CSC321 Lecture 19: Generative Adversarial Networks

Roger Grosse

Generative adversarial networks (GANs) are implicit generative models built from two neural networks, a generator and a discriminator, that compete against each other: the generator tries to produce realistic samples that fool the discriminator, while the discriminator tries to distinguish real samples from generated ones. GANs have achieved impressive results generating high-resolution images, though evaluating their quality remains challenging. Variants such as CycleGAN can perform style transfer without paired training examples.


Overview

In generative modeling, we’d like to train a network that models a distribution, such as a distribution over images.
One way to judge the quality of the model is to sample from it.
This field has seen rapid progress:

[Figure: generated image samples from 2009, 2015, and 2018, illustrating the rapid improvement in sample quality]
Overview

Four modern approaches to generative modeling:

Generative adversarial networks (today)
Reversible architectures (next lecture)
Autoregressive models (Lecture 7, and next lecture)
Variational autoencoders (CSC412)

All four approaches have different pros and cons.



Implicit Generative Models

Implicit generative models implicitly define a probability distribution:

Start by sampling the code vector z from a fixed, simple distribution (e.g. a spherical Gaussian)
The generator network computes a differentiable function G mapping z to an x in data space


Implicit Generative Models
A 1-dimensional example:
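A minimal numerical sketch of such an example (the cubic generator here is an illustrative choice, not from the lecture): pushing Gaussian noise through a fixed function induces a new distribution over x that we can sample from without ever writing down its density.

import numpy as np

# Sample many code values z from the fixed, simple distribution.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)

# A simple differentiable "generator" G; it reshapes the noise distribution.
x = z ** 3 + z

# x now follows an implicitly defined distribution: we can draw samples,
# but we never wrote down its density p(x).
print(x.mean(), x.std())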



Implicit Generative Models

https://blog.openai.com/generative-models/



Implicit Generative Models

This sort of architecture sounded preposterous to many of us, but amazingly, it works.


Generative Adversarial Networks

The advantage of implicit generative models: if you have some criterion for evaluating the quality of samples, then you can compute its gradient with respect to the network parameters, and update the network’s parameters to make the samples a little better.
The idea behind Generative Adversarial Networks (GANs): train two different networks, as sketched below.

The generator network tries to produce realistic-looking samples
The discriminator network tries to figure out whether an image came from the training set or the generator network
The generator network tries to fool the discriminator network
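A minimal PyTorch sketch of the two networks (the layer sizes and MLP architecture are illustrative assumptions, not from the lecture):

import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784   # e.g. flattened 28x28 images; sizes are assumptions

# Generator: maps a code vector z to a point x in data space.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),   # outputs scaled to [-1, 1]
)

# Discriminator: maps x to the predicted probability that x is real data.
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(64, latent_dim)   # spherical Gaussian code vectors
fake = G(z)                       # generated samples
p_real = D(fake)                  # discriminator's probability they are real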

Generative Adversarial Networks
Let D denote the discriminator’s predicted probability of being data.
Discriminator’s cost function: cross-entropy loss for the task of classifying real vs. fake images:

J_D = E_{x ~ D} [-log D(x)] + E_z [-log(1 - D(G(z)))]

One possible cost function for the generator: the opposite of the discriminator’s:

J_G = -J_D
    = const + E_z [log(1 - D(G(z)))]

This is called the minimax formulation, since the generator and discriminator are playing a zero-sum game against each other:

max_G min_D J_D
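In code, both expectations become minibatch averages. A minimal PyTorch sketch of these costs (the stand-in networks, sizes, and minibatch are illustrative assumptions):

import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784                       # illustrative sizes
G = nn.Sequential(nn.Linear(latent_dim, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())

real = torch.randn(64, data_dim)                      # stand-in for a real minibatch
z = torch.randn(64, latent_dim)
fake = G(z)

eps = 1e-7                                            # numerical safety for log
# J_D: cross-entropy for classifying real vs. fake
J_D = (-torch.log(D(real) + eps)).mean() + (-torch.log(1 - D(fake) + eps)).mean()

# Minimax generator cost: -J_D, with the real-data term dropped as a constant
J_G = torch.log(1 - D(fake) + eps).mean()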



Generative Adversarial Networks
Updating the discriminator:



Generative Adversarial Networks
Updating the generator:



Generative Adversarial Networks

Alternating training of the generator and discriminator:
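A minimal PyTorch sketch of the alternating updates on toy data (the optimizer, learning rate, and toy data distribution are illustrative assumptions):

import torch
import torch.nn as nn

latent_dim, data_dim = 100, 2
G = nn.Sequential(nn.Linear(latent_dim, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())
opt_D = torch.optim.SGD(D.parameters(), lr=0.01)
opt_G = torch.optim.SGD(G.parameters(), lr=0.01)
eps = 1e-7

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0        # toy "real" data: shifted Gaussian
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: minimize J_D with G held fixed (hence .detach()).
    J_D = (-torch.log(D(real) + eps)).mean() \
        + (-torch.log(1 - D(fake.detach()) + eps)).mean()
    opt_D.zero_grad(); J_D.backward(); opt_D.step()

    # Generator step: minimize J_G = E_z[log(1 - D(G(z)))] with D held fixed
    # (only opt_G takes a step, so D's parameters are unchanged here).
    J_G = torch.log(1 - D(fake) + eps).mean()
    opt_G.zero_grad(); J_G.backward(); opt_G.step()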



A Better Cost Function

We introduced the minimax cost function for the generator:

J_G = E_z [log(1 - D(G(z)))]

One problem with this is saturation.
Recall from our lecture on classification: when the prediction is really wrong,

“Logistic + squared error” gets a weak gradient signal
“Logistic + cross-entropy” gets a strong gradient signal

Here, if the generated sample is really bad, the discriminator’s prediction is close to 0, and the generator’s cost is flat.



A Better Cost Function

Original minimax cost:

J_G = E_z [log(1 - D(G(z)))]

Modified generator cost:

J_G = E_z [-log D(G(z))]

This fixes the saturation problem: when D(G(z)) is close to 0, -log D(G(z)) has a large gradient, so the generator still gets a strong learning signal on its worst samples.
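A small sketch checking the gradients of the two costs on a really bad sample (the value D(G(z)) = 0.01 is an illustrative choice):

import torch

# Discriminator's output on a really bad generated sample: close to 0.
d = torch.tensor(0.01, requires_grad=True)

# Minimax generator cost: the gradient saturates near d = 0.
minimax = torch.log(1 - d)
minimax.backward()
print(d.grad)            # -1/(1 - d) ~= -1.01: small magnitude

d.grad = None            # reset before the second backward pass

# Modified (non-saturating) cost: the gradient is strong near d = 0.
modified = -torch.log(d)
modified.backward()
print(d.grad)            # -1/d = -100: large magnitude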



Generative Adversarial Networks

Since GANs were introduced in 2014, there have been hundreds of papers introducing various architectures and training methods.
Most modern architectures are based on the Deep Convolutional GAN (DC-GAN), where the generator and discriminator are both conv nets.
GAN Zoo: https://github.com/hindupuravinash/the-gan-zoo
Good source of horrible puns (VEEGAN, Chekhov GAN, etc.)



GAN Samples
Celebrities:

Karras et al., 2017. Progressive growing of GANs for improved quality, stability, and variation



GAN Samples
Bedrooms:

Karras et al., 2017. Progressive growing of GANs for improved quality, stability, and variation



GAN Samples
Objects:

Karras et al., 2017. Progressive growing of GANs for improved quality, stability, and variation



GAN Samples

GANs revolutionized generative modeling by producing crisp, high-resolution images.
The catch: we don’t know how well they’re modeling the distribution.

Can’t measure the log-likelihood they assign to held-out data.
Could they be memorizing training examples? (E.g., maybe they sometimes produce photos of real celebrities?)
We have no way to tell if they are dropping important modes from the distribution.

See Wu et al., “On the quantitative analysis of decoder-based generative models” for partial answers to these questions.



CycleGAN

Style transfer problem: change the style of an image while preserving the content.
Data: two unrelated collections of images, one for each style.
CycleGAN

If we had paired data (same content in both styles), this would be a supervised learning problem. But paired data is hard to find.
The CycleGAN architecture learns to do it from unpaired data.

Train two different generator nets to go from style 1 to style 2, and vice versa.
Make sure the generated samples of style 2 are indistinguishable from real images by a discriminator net.
Make sure the generators are cycle-consistent: mapping from style 1 to style 2 and back again should give you almost the original image, as in the sketch below.
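A minimal sketch of how these terms combine for the style-1-to-style-2 direction (the linear stand-in networks, the L1 cycle penalty, and the weight of 10 are illustrative assumptions):

import torch
import torch.nn as nn

img_dim = 3 * 64 * 64                                      # flattened images; size is an assumption
G_12 = nn.Sequential(nn.Linear(img_dim, img_dim))          # style 1 -> style 2
G_21 = nn.Sequential(nn.Linear(img_dim, img_dim))          # style 2 -> style 1
D_2 = nn.Sequential(nn.Linear(img_dim, 1), nn.Sigmoid())   # real vs. fake in style 2

x1 = torch.randn(8, img_dim)      # stand-in minibatch of style-1 images
eps = 1e-7

# Adversarial term: generated style-2 images should look real to D_2.
fake2 = G_12(x1)
adv_loss = (-torch.log(D_2(fake2) + eps)).mean()

# Cycle-consistency term: style 1 -> style 2 -> style 1 should reconstruct x1.
cycle_loss = (G_21(fake2) - x1).abs().mean()

# The full CycleGAN objective adds the mirror-image terms for style 2 -> style 1.
loss = adv_loss + 10.0 * cycle_loss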





CycleGAN

Style transfer between aerial photos and maps:



CycleGAN

Style transfer between road scenes and semantic segmentations (labels of every pixel in an image by object category):
