GANs: How I Taught My Computer to Lie
GANs: Generative Adversarial Networks
Sometimes you might wonder how deepfakes are made or how AI generates those hyper-realistic faces of people who don't exist. The answer is often GANs (Generative Adversarial Networks).

In simple terms, a GAN is like a constant battle between two neural networks: a Forger and a Detective.
The Core Concept
The architecture consists of two main components:
- The Generator (G): The "Forger". It takes random noise as input and tries to generate data (like an image) that looks real.
- The Discriminator (D): The "Detective". It takes an image (either real from the dataset or fake from the Generator) and tries to classify it as real or fake.
The Math Behind It (Don't Panic!)
The whole game is captured in one line:

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]

Okay, that formula looks scary, but let's break it down. It's just a fancy way of describing a Minimax Game.
Think of it like a tug-of-war:
- E_{x ~ p_data(x)}[log D(x)]: The Detective gets points for correctly identifying Real images.
- E_{z ~ p_z(z)}[log(1 - D(G(z)))]: The Detective gets points for correctly spotting Fake images created by the Generator (G(z)).
The Game:
- The Detective (D) wants to MAXIMIZE the score (spot all the fakes!).
- The Forger (G) wants to MINIMIZE the score (fool the detective so badly that the detective can't tell the difference).
When the game ends perfectly (Nash Equilibrium), the Detective is just guessing (a 50% chance), because the Forger is just that good.
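Just to see the numbers (a toy check I'm adding here, not part of the original formula): if the Forger is perfect, the Detective's best move is to output 0.5 for everything, and the value of the game settles at -log 4.

    import math

    # At Nash Equilibrium, D outputs 0.5 for every input:
    d_real = 0.5  # Detective's score on a real image
    d_fake = 0.5  # Detective's score on a fake image
    value = math.log(d_real) + math.log(1 - d_fake)
    print(value)          # -1.3862...
    print(-math.log(4))   # -1.3862... same thing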
Key Factors & Challenges
Training GANs is notoriously difficult. Here are the main factors to consider:
1. Nash Equilibrium
We want the system to reach a state where the Generator produces perfect fakes, and the Discriminator is guessing randomly (50% confidence). This is the Nash Equilibrium.
2. Mode Collapse
Sometimes the Generator finds one image that fools the Discriminator and just keeps producing that same image over and over. (Insert "I'll do it again" Goofy meme)
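A cheap way to catch this during training (a rough heuristic of my own, using the gen and z_dim defined in the code below): measure how much a batch of fakes actually varies. If the per-pixel standard deviation across the batch drops toward zero, the Generator is probably printing the same image over and over.

    import torch

    @torch.no_grad()
    def batch_diversity(gen, z_dim, n=64):
        # Generate n fakes and measure their spread across the batch.
        fakes = gen(torch.randn(n, z_dim))
        return fakes.std(dim=0).mean().item()

    # Near zero => every sample looks alike => likely mode collapse.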
3. Vanishing Gradients
If the Discriminator is too good too early, the Generator gets no useful feedback (gradients vanish), and learning stops.
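A simple way to watch for this (again, my own diagnostic sketch, using the models defined below): log the Generator's total gradient norm after each backward pass. If it sits near zero while the Discriminator's loss is tiny, the Critic has won too early.

    def grad_norm(model):
        # Total L2 norm of all parameter gradients.
        total = 0.0
        for p in model.parameters():
            if p.grad is not None:
                total += p.grad.norm().item() ** 2
        return total ** 0.5

    # Call grad_norm(gen) right after loss_gen.backward() in the training loop.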
Code Example (PyTorch)
Let's build this from scratch using PyTorch. We'll break it down into three parts: The Artist, The Critic, and The Battle.
1. The Generator (The Artist)
The Generator takes random noise and tries to turn it into an image (in this case, a handwritten digit from MNIST).
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, z_dim, img_dim):
            super().__init__()
            self.gen = nn.Sequential(
                # Take in random noise (z_dim)
                nn.Linear(z_dim, 256),
                nn.LeakyReLU(0.1),
                # Output a flat image vector (img_dim)
                nn.Linear(256, img_dim),
                nn.Tanh(),  # Squish values between -1 and 1
            )

        def forward(self, x):
            return self.gen(x)
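A quick shape check (with made-up sizes for MNIST: 64-dim noise, 28 x 28 = 784 pixels):

    z_dim, img_dim = 64, 784           # 784 = a flattened 28x28 MNIST digit
    gen = Generator(z_dim, img_dim)
    noise = torch.randn(32, z_dim)     # a batch of 32 noise vectors
    print(gen(noise).shape)            # torch.Size([32, 784])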
2. The Discriminator (The Critic)
The Discriminator looks at an image and outputs a single number between 0 and 1: "How real is this?"
    class Discriminator(nn.Module):
        def __init__(self, img_dim):
            super().__init__()
            self.disc = nn.Sequential(
                nn.Linear(img_dim, 128),
                nn.LeakyReLU(0.1),
                # Output a single probability
                nn.Linear(128, 1),
                nn.Sigmoid(),  # Squish output between 0 (Fake) and 1 (Real)
            )

        def forward(self, x):
            return self.disc(x)
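Wiring the two together, every fresh fake should get a score between 0 and 1:

    disc = Discriminator(img_dim)
    scores = disc(gen(noise))          # the Critic judges the Artist's fakes
    print(scores.shape)                # torch.Size([32, 1])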
3. The Training Loop (The Battle)
This is where the magic happens. In every step, we train the Discriminator to spot fakes, and then we train the Generator to fool the Discriminator.
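The setup itself is skipped below, so here is one plausible version (the hyperparameters and the MNIST/BCELoss/Adam choices are my assumptions, not the author's):

    import torch.optim as optim
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    z_dim, img_dim, batch_size, num_epochs = 64, 784, 32, 50

    gen = Generator(z_dim, img_dim)
    disc = Discriminator(img_dim)
    opt_gen = optim.Adam(gen.parameters(), lr=3e-4)
    opt_disc = optim.Adam(disc.parameters(), lr=3e-4)
    criterion = nn.BCELoss()

    # Scale pixels to [-1, 1] so real images match the Generator's Tanh range.
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),
    ])
    dataset = datasets.MNIST(root="data", download=True, transform=transform)
    dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)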
    # ... setup code omitted (see the sketch above) ...
    for epoch in range(num_epochs):
        for real_images, _ in dataloader:
            # Flatten [batch, 1, 28, 28] images into [batch, 784] vectors
            real_images = real_images.view(real_images.size(0), -1)

            # --- Train Discriminator (The Critic) ---
            # "Hey Critic, here are some real images. Learn them!"
            disc_real = disc(real_images)
            loss_real = criterion(disc_real, torch.ones_like(disc_real))

            # "Hey Critic, here are some fakes I just made. Spot them!"
            noise = torch.randn(batch_size, z_dim)
            fake_images = gen(noise)
            disc_fake = disc(fake_images)
            loss_fake = criterion(disc_fake, torch.zeros_like(disc_fake))

            # Update the Critic
            loss_disc = (loss_real + loss_fake) / 2
            opt_disc.zero_grad()
            # retain_graph=True keeps the Generator's graph alive,
            # because fake_images is reused in the Generator step below.
            loss_disc.backward(retain_graph=True)
            opt_disc.step()

            # --- Train Generator (The Artist) ---
            # "Hey Artist, try to fool the Critic now!"
            output = disc(fake_images)
            # We want the Critic to think these are REAL (1)
            loss_gen = criterion(output, torch.ones_like(output))

            # Update the Artist
            opt_gen.zero_grad()
            loss_gen.backward()
            opt_gen.step()
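One design choice worth flagging: the loop reuses fake_images for both updates, which is why retain_graph=True is needed. A common alternative is to detach the fakes during the Critic's step, so the Critic's backward pass never touches the Generator's graph:

    # Detach variant of the Critic's fake step (drop-in replacement):
    disc_fake = disc(fake_images.detach())  # cut the graph to the Generator
    loss_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
    loss_disc = (loss_real + loss_fake) / 2
    opt_disc.zero_grad()
    loss_disc.backward()                    # no retain_graph needed
    opt_disc.step()

Also note that training the Artist against torch.ones_like targets is the standard "non-saturating" generator loss, which is exactly the trick that softens the vanishing-gradient problem from earlier.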
Conclusion
GANs are powerful but finicky. When they work, they are magic. When they don't, it's just noise.

Happy coding!