Generative Adversarial Networks

discriminator: a traditional classification network, trained to distinguish real images from generated ones;

generator: takes random noise as input and transforms it into images, trying to fool the discriminator into classifying them as real.
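As a concrete sketch (the layer sizes and names here are my own illustration, not from the notes), both networks can be tiny MLPs: the generator maps a noise vector to an image-shaped vector, and the discriminator maps an image vector to a real/fake probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100-dim noise, 784-dim "images" (e.g. flattened 28x28).
NOISE_DIM, IMG_DIM, HIDDEN = 100, 784, 128

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G: noise z -> fake image (tanh keeps pixel values in [-1, 1]).
G_W1 = rng.standard_normal((NOISE_DIM, HIDDEN)) * 0.01
G_W2 = rng.standard_normal((HIDDEN, IMG_DIM)) * 0.01

def G(z):
    h = np.maximum(0.0, z @ G_W1)   # ReLU hidden layer
    return np.tanh(h @ G_W2)        # fake image in [-1, 1]

# Discriminator D: image x -> probability that x is real.
D_W1 = rng.standard_normal((IMG_DIM, HIDDEN)) * 0.01
D_W2 = rng.standard_normal((HIDDEN, 1)) * 0.01

def D(x):
    h = np.maximum(0.0, x @ D_W1)
    return sigmoid(h @ D_W2)        # probability in (0, 1)

z = rng.standard_normal((4, NOISE_DIM))  # batch of 4 noise vectors
fake = G(z)
print(fake.shape, D(fake).shape)         # (4, 784) (4, 1)
```

In practice (e.g. DCGAN) both networks are convolutional, but the input/output contract is the same.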

We can think of this back-and-forth process of the generator (\(G\)) trying to fool the discriminator (\(D\)), and the discriminator trying to correctly classify real vs. fake, as a minimax game:

\[\underset{G}{\operatorname{minimize}} \underset{D}{\operatorname{maximize}} \mathbb{E}_{x \sim p_{\text {data }}}[\log D(x)]+\mathbb{E}_{z \sim p(z)}[\log (1-D(G(z)))]\]
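In practice, the two expectations are estimated with sample means over minibatches. A minimal sketch (the function name is mine):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: D's probabilities on real samples x ~ p_data
    d_fake: D's probabilities on generated samples G(z), z ~ p(z)
    D tries to maximize this value; G tries to minimize it.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the game's equilibrium D outputs 0.5 everywhere (it cannot tell real
# from fake), and the value is log(1/2) + log(1/2) = -2 log 2.
print(gan_value(np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # ≈ -1.386
```

A discriminator that separates the two distributions well (d_real near 1, d_fake near 0) pushes this value up toward 0; the generator's updates push it back down.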

From Goodfellow et al., we alternate the following updates:

  • update the generator to maximize the probability of the discriminator making the incorrect choice on generated data:
\[\underset{G}{\operatorname{maximize}} \mathbb{E}_{z \sim p(z)}[\log D(G(z))]\]
  • update the discriminator to maximize the probability of the discriminator making the correct choice on real and generated data:
\[\underset{D}{\operatorname{maximize}} \mathbb{E}_{x \sim p_{\text {data }}}[\log D(x)]+\mathbb{E}_{z \sim p(z)}[\log (1-D(G(z)))]\]
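Note that the generator update maximizes \(\log D(G(z))\) rather than directly minimizing \(\log(1 - D(G(z)))\) from the minimax objective. The usual motivation is that early in training, when the discriminator confidently rejects fakes, the original generator loss has a nearly flat gradient while the alternative does not. A small numeric check of this point (the comparison below is my own illustration):

```python
import numpy as np

# Early in training the discriminator easily rejects fakes, so p = D(G(z))
# is close to 0. Compare the gradient magnitudes of the two generator
# objectives with respect to p:
#   original minimax loss:  minimize log(1 - p)  ->  d/dp = -1 / (1 - p)
#   alternative (above):    maximize log(p)      ->  d/dp =  1 / p
p = 1e-3  # discriminator is confident the sample is fake

grad_saturating = abs(-1.0 / (1.0 - p))  # ~1: weak learning signal
grad_nonsat = abs(1.0 / p)               # 1000: strong learning signal

print(grad_saturating, grad_nonsat)
```

Both objectives point the generator in the same direction (increase \(D(G(z))\)), but the second gives much larger gradients exactly where the generator is worst.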

Reference

CS231n

literaryno4/cs231n

min-char-rnn.py gist: 112 lines of Python

Lecture 10: Recurrent Neural Networks (slides)

A Useful Video