Section 9: Unsupervised Learning with GANs!

Congrats on making it to Week 9! It's been a tough quarter with everything going on, but we're really proud of all the progress and work you've put into this course.

Time for something fun!

StyleGAN

Let's start with a super super cool application of GANs known as StyleGAN. Can you tell which people are real and which people are not?

Hint: you might not be able to tell from the faces alone 👓, 💇🏽‍♀️, 👔

Which Face is Real?

Goal

Given samples $\{x_i\}_{i=1}^n$ drawn i.i.d. from an unknown probability distribution $Q(x)$, train a model that can generate samples from a learned distribution that resembles $Q(x)$.

Generative Adversarial Networks

Generative adversarial networks (GANs) are deep learning models that, as part of the optimization process, learn a generating distribution $G(x)$ that resembles $Q(x)$.
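
To make that concrete, here is a tiny conceptual sketch (hypothetical code, separate from the pipeline we build below): the generator is just a network that maps latent noise $z$ to samples, and training nudges the distribution of $G(z)$ toward $Q(x)$.

import torch
from torch import nn

latent_dim, data_dim = 100, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
z = torch.rand(16, latent_dim) * 2 - 1   # latent noise, z ~ Uniform(-1, 1)
x_fake = G(z)                            # 16 "samples" from the (still untrained) model distribution
print(x_fake.shape)                      # torch.Size([16, 784])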

Libraries

In [0]:
from argparse import Namespace
import matplotlib.pyplot as plt
import time
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch import nn
from torch import optim
from torchvision import datasets
from torchvision import transforms

%matplotlib inline

Hyperparameters

It's good practice to specify project hyperparameters using argparse. Usually we run experiments from the terminal, and the typical code looks a bit more like the following:

import argparse
...
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(...)
    ...
    hparams: argparse.Namespace = parser.parse_args()

if __name__ == '__main__':
    main()
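
Here is a hedged, fuller version of that pattern, using this notebook's hyperparameter names as example flags (the script name train.py and the defaults below are illustrative assumptions, not part of the original code):

import argparse

def main():
    parser = argparse.ArgumentParser(description='GAN on FashionMNIST')
    parser.add_argument('--batch_size', type=int, default=128)
    parser.add_argument('--dis_lr', type=float, default=1e-4)
    parser.add_argument('--gen_lr', type=float, default=1e-4)
    parser.add_argument('--epochs', type=int, default=150)
    parser.add_argument('--latent_dim', type=int, default=100)
    parser.add_argument('--log_interval', type=int, default=150)
    parser.add_argument('--seed', type=int, default=446)
    hparams: argparse.Namespace = parser.parse_args()
    print(hparams)

if __name__ == '__main__':
    main()

From the terminal you'd then run something like python train.py --epochs 50 --gen_lr 2e-4, and every run's configuration lives in one place. In a notebook we don't have a command line, so below we build the same Namespace object by hand.
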
In [0]:
args = {
    'batch_size': 128,
    'dis_lr': 0.0001, 
    'epochs': 150,
    'gen_lr': 0.0001,
    'latent_dim': 100,
    'log_interval': 150,
    'seed': 446
}
hparams = Namespace(**args)

Devices, Datasets, and DataLoaders

Say that three times fast 😄

Running this notebook on a CPU will take too long. Verify that you are able to run on a CUDA device.

Runtime -> Change runtime type -> Hardware Accelerator: GPU

In [0]:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('using device', device)
using device cuda:0

We'll be training our GAN on the FashionMNIST dataset. FashionMNIST is very similar to MNIST. Same number of samples and same dimensionality! That is, $X_{\text{Train}} \in \mathbb{R}^{60,000 \times 28 \times 28}$.

In [0]:
data_train = datasets.FashionMNIST(root='data', 
                               train=True, 
                               transform=transforms.ToTensor(),
                               download=True)

data_test = datasets.FashionMNIST(root='data', 
                              train=False, 
                              transform=transforms.ToTensor())
num_workers = torch.multiprocessing.cpu_count()
train_dataloader = DataLoader(dataset=data_train, 
                          batch_size=hparams.batch_size,
                          num_workers=num_workers,
                          shuffle=True)
# no need for a test loader
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Processing...
Done!
/pytorch/torch/csrc/utils/tensor_numpy.cpp:141: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program.
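
As a quick sanity check on the dimensionality claim above, a (hypothetical) cell like the following confirms the shapes of the splits we just downloaded:

print(data_train.data.shape)  # torch.Size([60000, 28, 28])
print(data_test.data.shape)   # torch.Size([10000, 28, 28])
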
In [0]:
samples = [] # why is writing matplotlib code so gross? :((((
for data, _ in train_dataloader:
    for i in range(16):
        samples.append(data[i].squeeze(dim=0).numpy())  # (1, 28, 28) -> (28, 28)
    break
i, j = 0, 0
_, subplots = plt.subplots(nrows=4, ncols=4)
for sample in samples:
    subplots[j][i].imshow(sample, cmap='binary') # also try 'gray'
    subplots[j][i].set_axis_off()
    i += 1
    if i % 4 == 0:
        j += 1
        i = 0
plt.show()



Defining the Model(s)

Let's play a (zero-sum) game 😈 mwahahaha

  1. We have a generator network whose job is to create forgeries that resemble the data.
  2. We have a discriminator network whose job is to separate forgeries from originals.

Let $G$ be the generator and let $D$ be the discriminator.

  • $G$ is trying to trick $D$.
  • $D$ is trying to not get punked by $G$.

Generator

If the discriminator were fixed, then the objective would be $ \min_G L(G, D)$, where the likelihood function is $$ L(G, D) = \frac{1}{2}\mathbb{E}_{x\sim Q}\left[ \text{log}(D(x)) \right] + \frac{1}{2}\mathbb{E}_{x\sim G} \left[\text{log}(1 - D(x))\right]$$

However, the discriminator is not fixed! Both the generator and the discriminator improve over time. The generator is trying to make the discriminator's likelihood small.
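
One practical note before the code: the training loop below uses the standard "non-saturating" variant of this objective for the generator step. Instead of directly minimizing $\text{log}(1 - D(G(z)))$, which gives tiny gradients when the discriminator confidently rejects forgeries, the generator maximizes $\text{log}(D(G(z)))$ by scoring its forgeries against "real" labels with a binary cross-entropy loss on the discriminator's logits. A minimal sketch, assuming a generator G, a discriminator D that returns raw logits, and a batch of latent codes z:

forgery = G(z)                                   # forgeries from latent noise
logits = D(forgery)                              # raw scores, one per sample
valid = torch.ones_like(logits)                  # label the forgeries as "real"
g_loss = F.binary_cross_entropy_with_logits(logits, valid)
g_loss.backward()                                # gradients flow back through D into G; only G's optimizer steps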

In [0]:
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.generator = nn.Sequential(
            nn.Linear(hparams.latent_dim, 3136, bias=False),
            nn.BatchNorm1d(num_features=3136),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001),
            Reshape(),
            
            nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False),
            nn.BatchNorm2d(num_features=32),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001),
            nn.Dropout2d(p=0.2),
            
            nn.ConvTranspose2d(in_channels=32, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False),
            nn.BatchNorm2d(num_features=16),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001),
            nn.Dropout2d(p=0.2),
            
            nn.ConvTranspose2d(in_channels=16, out_channels=8, kernel_size=(3, 3), stride=(1, 1), padding=0, bias=False),
            nn.BatchNorm2d(num_features=8),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001),
            nn.Dropout2d(p=0.2),
            
            nn.ConvTranspose2d(in_channels=8, out_channels=1, kernel_size=(2, 2), stride=(1, 1), padding=0, bias=False),
            nn.Tanh()
        )
      
    def forward(self, z):
        img = self.generator(z)
        return img

# module utility to reshape latent features into a tensor that can be processed by the transposed convolutions
class Reshape(nn.Module):
    def forward(self, input):
        return input.view(input.size(0), 64, 7, 7)
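
A quick sanity check (hypothetical cell, not in the original notebook): the stack of transposed convolutions should map a batch of latent codes back to single-channel 28×28 images.

g_check = Generator().eval()  # eval() so BatchNorm/Dropout behave deterministically for this check
with torch.no_grad():
    print(g_check(torch.randn(4, hparams.latent_dim)).shape)  # torch.Size([4, 1, 28, 28])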

Discriminator

The discriminator is a binary classifier: fake? or not fake? For a given distribution $G(x)$ and samples from the true underlying distribution $Q(x)$, we can learn $D$!

In [0]:
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.discriminator = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=8, padding=1, kernel_size=(3, 3), stride=(2, 2), bias=False),
            nn.BatchNorm2d(num_features=8),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001), 
            nn.Dropout2d(p=0.2),
            nn.Conv2d(in_channels=8, out_channels=32, padding=1, kernel_size=(3, 3), stride=(2, 2), bias=False),
            nn.BatchNorm2d(num_features=32),
            nn.LeakyReLU(inplace=True, negative_slope=0.0001), 
            nn.Dropout2d(p=0.2),
            Flatten(),
            nn.Linear(7*7*32, 1),
        )
    
    def forward(self, img):
        pred = self.discriminator(img)
        return pred.view(-1)

class Flatten(nn.Module):
    def forward(self, input):
        return input.view(input.size(0), -1)
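
And the mirror-image sanity check for the discriminator (again a hypothetical cell): a batch of 28×28 images should come out as one raw logit per image.

d_check = Discriminator().eval()
with torch.no_grad():
    print(d_check(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4])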

We can now seed the RNG, instantiate both networks, and move them to the GPU.

In [0]:
torch.manual_seed(hparams.seed)

forger = Generator()
forger.to(device)

detective = Discriminator()
detective.to(device)

print(forger)
print(detective)
Generator(
  (generator): Sequential(
    (0): Linear(in_features=100, out_features=3136, bias=False)
    (1): BatchNorm1d(3136, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.0001, inplace=True)
    (3): Reshape()
    (4): ConvTranspose2d(64, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): LeakyReLU(negative_slope=0.0001, inplace=True)
    (7): Dropout2d(p=0.2, inplace=False)
    (8): ConvTranspose2d(32, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (9): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.0001, inplace=True)
    (11): Dropout2d(p=0.2, inplace=False)
    (12): ConvTranspose2d(16, 8, kernel_size=(3, 3), stride=(1, 1), bias=False)
    (13): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (14): LeakyReLU(negative_slope=0.0001, inplace=True)
    (15): Dropout2d(p=0.2, inplace=False)
    (16): ConvTranspose2d(8, 1, kernel_size=(2, 2), stride=(1, 1), bias=False)
    (17): Tanh()
  )
)
Discriminator(
  (discriminator): Sequential(
    (0): Conv2d(1, 8, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): LeakyReLU(negative_slope=0.0001, inplace=True)
    (3): Dropout2d(p=0.2, inplace=False)
    (4): Conv2d(8, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (6): LeakyReLU(negative_slope=0.0001, inplace=True)
    (7): Dropout2d(p=0.2, inplace=False)
    (8): Flatten()
    (9): Linear(in_features=1568, out_features=1, bias=True)
  )
)

"The Name of the Game": Optimization

Recall the likelihood objective. The game is viewed as: $$ \min_G \max_D \frac{1}{2}\mathbb{E}_{x\sim Q}\left[ \text{log}(D(x)) \right] + \frac{1}{2}\mathbb{E}_{x\sim G} \left[\text{log}(1 - D(x))\right]$$

The optimization process for GANs is a bit more subtle than what we've done previously. Instead of simple 'hill-climbing', we have two competing, learning networks that play a sort of minimax game against each other until an equilibrium is (hopefully 🤞🏽) reached.
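
For intuition about that equilibrium (a standard result from the original GAN paper, not derived in this notebook): if the generator is held fixed, the inner maximization over $D$ is solved pointwise by $$ D^*(x) = \frac{Q(x)}{Q(x) + G(x)} $$ so if training ever reaches $G = Q$, the best possible discriminator outputs $D^*(x) = \frac{1}{2}$ everywhere; it can do no better than guessing.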

In [0]:
optimizer_G = torch.optim.Adam(forger.parameters(), lr=hparams.gen_lr)
optimizer_D = torch.optim.Adam(detective.parameters(), lr=hparams.dis_lr)

Learning

In [0]:
tic = time.time()    
losses_D, losses_G = [], []
for epoch in range(hparams.epochs):
    detective.train()
    forger.train()
    print('Epoch: %03d/%03d' % (epoch+1, hparams.epochs))
    for batch_idx, (data, targets) in enumerate(train_dataloader):
        ### data preparation start ###
        data = (data - 0.5)*2.  # scale pixels from [0, 1] to [-1, 1] to match the generator's Tanh output
        data, targets = data.view(-1, 784).to(device), targets.to(device)
        valid = torch.ones(targets.size(0)).float().to(device)
        fake = torch.zeros(targets.size(0)).float().to(device)        
        ### data preparation end ###

        ### update + train step start: generator ###
        optimizer_G.zero_grad()
        # generator creates forgeries
        z = torch.zeros((targets.size(0), hparams.latent_dim)).uniform_(-1.0, 1.0).to(device)
        forgery = forger(z)
        # non-saturating trick: generator drives down BCE against 'valid' labels to fool the discriminator
        prediction = detective(forgery.view(targets.size(0), 1, 28, 28))
        forger_loss = F.binary_cross_entropy_with_logits(prediction, valid)
        forger_loss.backward()
        optimizer_G.step()
        ### update + train step end: generator ###
        
        ### update + train step start: discriminator ###       
        optimizer_D.zero_grad()
        prediction_real = detective(data.view(targets.size(0), 1, 28, 28))
        real_loss = F.binary_cross_entropy_with_logits(prediction_real, valid)
        prediction_forgery = detective(forgery.view(targets.size(0), 1, 28, 28).detach())
        forgery_loss = F.binary_cross_entropy_with_logits(prediction_forgery, fake)
        detective_loss = 0.5 * (real_loss + forgery_loss)
        detective_loss.backward()
        optimizer_D.step()        
        ### update + train step end: discriminator ###

        ### logging start ###
        losses_D.append(detective_loss.item())
        losses_G.append(forger_loss.item())
        # TODO: change default tabstop to 8, show students how to use tensorboard?
        if not batch_idx % hparams.log_interval:
            print('    Batch %03d/%03d | Generator Loss: %.4f | Discriminator Loss: %.4f' 
                   % (batch_idx, len(train_dataloader), forger_loss, detective_loss))
        ### logging end ###
    tictoc = (time.time() - tic) / 60  # ban tiktok?
    print('    Training Time: %.2f min' % (tictoc))
tictoc = (time.time() - tic) / 60
print('Total Training Time: %.2f min' % (tictoc))
Epoch: 001/150
    Batch 000/469 | Generator Loss: 0.7646 | Discriminator Loss: 0.7542
    Batch 150/469 | Generator Loss: 1.3542 | Discriminator Loss: 0.3431
    Batch 300/469 | Generator Loss: 1.6998 | Discriminator Loss: 0.2900
    Batch 450/469 | Generator Loss: 2.1205 | Discriminator Loss: 0.1857
    Training Time: 0.13 min
Epoch: 002/150
    Batch 000/469 | Generator Loss: 1.1217 | Discriminator Loss: 0.4883
    Batch 150/469 | Generator Loss: 1.1503 | Discriminator Loss: 0.4609
    Batch 300/469 | Generator Loss: 1.2254 | Discriminator Loss: 0.4740
    Batch 450/469 | Generator Loss: 1.2029 | Discriminator Loss: 0.4858
    Training Time: 0.26 min
Epoch: 003/150
    Batch 000/469 | Generator Loss: 1.1789 | Discriminator Loss: 0.5232
    Batch 150/469 | Generator Loss: 1.2228 | Discriminator Loss: 0.5106
    Batch 300/469 | Generator Loss: 1.3898 | Discriminator Loss: 0.4439
    Batch 450/469 | Generator Loss: 1.1263 | Discriminator Loss: 0.4886
    Training Time: 0.39 min
Epoch: 004/150
    Batch 000/469 | Generator Loss: 1.1467 | Discriminator Loss: 0.4565
    Batch 150/469 | Generator Loss: 1.2199 | Discriminator Loss: 0.4916
    Batch 300/469 | Generator Loss: 1.1284 | Discriminator Loss: 0.5192
    Batch 450/469 | Generator Loss: 1.1538 | Discriminator Loss: 0.5099
    Training Time: 0.51 min
Epoch: 005/150
    Batch 000/469 | Generator Loss: 1.1705 | Discriminator Loss: 0.4974
    Batch 150/469 | Generator Loss: 1.2048 | Discriminator Loss: 0.5129
    Batch 300/469 | Generator Loss: 1.1675 | Discriminator Loss: 0.4909
    Batch 450/469 | Generator Loss: 1.2646 | Discriminator Loss: 0.5009
    Training Time: 0.64 min
Epoch: 006/150
    Batch 000/469 | Generator Loss: 1.1485 | Discriminator Loss: 0.4864
    Batch 150/469 | Generator Loss: 1.3125 | Discriminator Loss: 0.4651
    Batch 300/469 | Generator Loss: 1.2469 | Discriminator Loss: 0.4340
    Batch 450/469 | Generator Loss: 1.2588 | Discriminator Loss: 0.4567
    Training Time: 0.78 min
Epoch: 007/150
    Batch 000/469 | Generator Loss: 1.0870 | Discriminator Loss: 0.4994
    Batch 150/469 | Generator Loss: 1.1168 | Discriminator Loss: 0.4912
    Batch 300/469 | Generator Loss: 1.1856 | Discriminator Loss: 0.4995
    Batch 450/469 | Generator Loss: 1.2088 | Discriminator Loss: 0.4760
    Training Time: 0.90 min
Epoch: 008/150
    Batch 000/469 | Generator Loss: 1.2065 | Discriminator Loss: 0.5118
    Batch 150/469 | Generator Loss: 1.1960 | Discriminator Loss: 0.4919
    Batch 300/469 | Generator Loss: 1.3078 | Discriminator Loss: 0.4753
    Batch 450/469 | Generator Loss: 1.1252 | Discriminator Loss: 0.5432
    Training Time: 1.03 min
Epoch: 009/150
    Batch 000/469 | Generator Loss: 1.2161 | Discriminator Loss: 0.5603
    Batch 150/469 | Generator Loss: 1.2772 | Discriminator Loss: 0.5016
    Batch 300/469 | Generator Loss: 1.2608 | Discriminator Loss: 0.5140
    Batch 450/469 | Generator Loss: 1.0046 | Discriminator Loss: 0.5268
    Training Time: 1.16 min
Epoch: 010/150
    Batch 000/469 | Generator Loss: 1.3022 | Discriminator Loss: 0.5213
    Batch 150/469 | Generator Loss: 1.1643 | Discriminator Loss: 0.5229
    Batch 300/469 | Generator Loss: 1.0793 | Discriminator Loss: 0.5298
    Batch 450/469 | Generator Loss: 1.2173 | Discriminator Loss: 0.5260
    Training Time: 1.29 min
Epoch: 011/150
    Batch 000/469 | Generator Loss: 1.2651 | Discriminator Loss: 0.5689
    Batch 150/469 | Generator Loss: 1.3085 | Discriminator Loss: 0.5094
    Batch 300/469 | Generator Loss: 1.3086 | Discriminator Loss: 0.5028
    Batch 450/469 | Generator Loss: 1.1306 | Discriminator Loss: 0.4700
    Training Time: 1.41 min
Epoch: 012/150
    Batch 000/469 | Generator Loss: 1.2021 | Discriminator Loss: 0.5077
    Batch 150/469 | Generator Loss: 1.2400 | Discriminator Loss: 0.4446
    Batch 300/469 | Generator Loss: 1.2828 | Discriminator Loss: 0.5171
    Batch 450/469 | Generator Loss: 1.1868 | Discriminator Loss: 0.5055
    Training Time: 1.54 min
Epoch: 013/150
    Batch 000/469 | Generator Loss: 1.0677 | Discriminator Loss: 0.5771
    Batch 150/469 | Generator Loss: 1.2052 | Discriminator Loss: 0.5174
    Batch 300/469 | Generator Loss: 1.3831 | Discriminator Loss: 0.4663
    Batch 450/469 | Generator Loss: 1.1601 | Discriminator Loss: 0.5670
    Training Time: 1.67 min
Epoch: 014/150
    Batch 000/469 | Generator Loss: 1.2083 | Discriminator Loss: 0.5094
    Batch 150/469 | Generator Loss: 1.3141 | Discriminator Loss: 0.5271
    Batch 300/469 | Generator Loss: 1.0918 | Discriminator Loss: 0.5424
    Batch 450/469 | Generator Loss: 1.1110 | Discriminator Loss: 0.5323
    Training Time: 1.80 min
Epoch: 015/150
    Batch 000/469 | Generator Loss: 1.0954 | Discriminator Loss: 0.5165
    Batch 150/469 | Generator Loss: 1.0848 | Discriminator Loss: 0.5203
    Batch 300/469 | Generator Loss: 1.1657 | Discriminator Loss: 0.4837
    Batch 450/469 | Generator Loss: 1.2057 | Discriminator Loss: 0.4998
    Training Time: 1.93 min
Epoch: 016/150
    Batch 000/469 | Generator Loss: 1.2560 | Discriminator Loss: 0.4992
    Batch 150/469 | Generator Loss: 1.3440 | Discriminator Loss: 0.5051
    Batch 300/469 | Generator Loss: 1.2227 | Discriminator Loss: 0.5075
    Batch 450/469 | Generator Loss: 1.3068 | Discriminator Loss: 0.4984
    Training Time: 2.06 min
Epoch: 017/150
    Batch 000/469 | Generator Loss: 1.3531 | Discriminator Loss: 0.4893
    Batch 150/469 | Generator Loss: 1.2200 | Discriminator Loss: 0.5443
    Batch 300/469 | Generator Loss: 1.2002 | Discriminator Loss: 0.4962
    Batch 450/469 | Generator Loss: 1.2318 | Discriminator Loss: 0.5249
    Training Time: 2.18 min
Epoch: 018/150
    Batch 000/469 | Generator Loss: 0.9999 | Discriminator Loss: 0.5418
    Batch 150/469 | Generator Loss: 1.1174 | Discriminator Loss: 0.5341
    Batch 300/469 | Generator Loss: 1.2570 | Discriminator Loss: 0.5651
    Batch 450/469 | Generator Loss: 1.2283 | Discriminator Loss: 0.5656
    Training Time: 2.31 min
Epoch: 019/150
    Batch 000/469 | Generator Loss: 1.1607 | Discriminator Loss: 0.5425
    Batch 150/469 | Generator Loss: 1.2479 | Discriminator Loss: 0.5170
    Batch 300/469 | Generator Loss: 1.1716 | Discriminator Loss: 0.5552
    Batch 450/469 | Generator Loss: 1.0747 | Discriminator Loss: 0.5532
    Training Time: 2.44 min
Epoch: 020/150
    Batch 000/469 | Generator Loss: 1.1600 | Discriminator Loss: 0.5101
    Batch 150/469 | Generator Loss: 1.0414 | Discriminator Loss: 0.5288
    Batch 300/469 | Generator Loss: 1.1104 | Discriminator Loss: 0.5935
    Batch 450/469 | Generator Loss: 1.0301 | Discriminator Loss: 0.5659
    Training Time: 2.57 min
Epoch: 021/150
    Batch 000/469 | Generator Loss: 1.0874 | Discriminator Loss: 0.5388
    Batch 150/469 | Generator Loss: 1.1044 | Discriminator Loss: 0.5477
    Batch 300/469 | Generator Loss: 1.0833 | Discriminator Loss: 0.5711
    Batch 450/469 | Generator Loss: 0.9453 | Discriminator Loss: 0.5576
    Training Time: 2.69 min
Epoch: 022/150
    Batch 000/469 | Generator Loss: 1.1831 | Discriminator Loss: 0.5769
    Batch 150/469 | Generator Loss: 1.0784 | Discriminator Loss: 0.5276
    Batch 300/469 | Generator Loss: 1.1648 | Discriminator Loss: 0.5188
    Batch 450/469 | Generator Loss: 1.0393 | Discriminator Loss: 0.5859
    Training Time: 2.82 min
Epoch: 023/150
    Batch 000/469 | Generator Loss: 1.2030 | Discriminator Loss: 0.5533
    Batch 150/469 | Generator Loss: 1.2238 | Discriminator Loss: 0.5476
    Batch 300/469 | Generator Loss: 0.9440 | Discriminator Loss: 0.5583
    Batch 450/469 | Generator Loss: 1.2142 | Discriminator Loss: 0.5722
    Training Time: 2.95 min
Epoch: 024/150
    Batch 000/469 | Generator Loss: 1.0162 | Discriminator Loss: 0.5824
    Batch 150/469 | Generator Loss: 1.0899 | Discriminator Loss: 0.5293
    Batch 300/469 | Generator Loss: 1.0941 | Discriminator Loss: 0.5412
    Batch 450/469 | Generator Loss: 0.9670 | Discriminator Loss: 0.6074
    Training Time: 3.08 min
Epoch: 025/150
    Batch 000/469 | Generator Loss: 1.0179 | Discriminator Loss: 0.5287
    Batch 150/469 | Generator Loss: 1.1109 | Discriminator Loss: 0.5536
    Batch 300/469 | Generator Loss: 0.9716 | Discriminator Loss: 0.5615
    Batch 450/469 | Generator Loss: 1.0982 | Discriminator Loss: 0.5318
    Training Time: 3.20 min
Epoch: 026/150
    Batch 000/469 | Generator Loss: 1.0526 | Discriminator Loss: 0.5268
    Batch 150/469 | Generator Loss: 1.1076 | Discriminator Loss: 0.5469
    Batch 300/469 | Generator Loss: 1.0323 | Discriminator Loss: 0.5165
    Batch 450/469 | Generator Loss: 1.1467 | Discriminator Loss: 0.5909
    Training Time: 3.33 min
Epoch: 027/150
    Batch 000/469 | Generator Loss: 1.1435 | Discriminator Loss: 0.5274
    Batch 150/469 | Generator Loss: 1.0631 | Discriminator Loss: 0.5580
    Batch 300/469 | Generator Loss: 1.0752 | Discriminator Loss: 0.5545
    Batch 450/469 | Generator Loss: 0.9649 | Discriminator Loss: 0.5989
    Training Time: 3.46 min
Epoch: 028/150
    Batch 000/469 | Generator Loss: 1.0758 | Discriminator Loss: 0.5676
    Batch 150/469 | Generator Loss: 0.9814 | Discriminator Loss: 0.5749
    Batch 300/469 | Generator Loss: 0.9966 | Discriminator Loss: 0.6002
    Batch 450/469 | Generator Loss: 1.0830 | Discriminator Loss: 0.5983
    Training Time: 3.59 min
Epoch: 029/150
    Batch 000/469 | Generator Loss: 1.0813 | Discriminator Loss: 0.5869
    Batch 150/469 | Generator Loss: 1.0076 | Discriminator Loss: 0.5588
    Batch 300/469 | Generator Loss: 1.0296 | Discriminator Loss: 0.5723
    Batch 450/469 | Generator Loss: 1.0940 | Discriminator Loss: 0.5433
    Training Time: 3.72 min
Epoch: 030/150
    Batch 000/469 | Generator Loss: 0.9970 | Discriminator Loss: 0.5764
    Batch 150/469 | Generator Loss: 1.0401 | Discriminator Loss: 0.5250
    Batch 300/469 | Generator Loss: 1.1556 | Discriminator Loss: 0.5457
    Batch 450/469 | Generator Loss: 1.0960 | Discriminator Loss: 0.5863
    Training Time: 3.84 min
Epoch: 031/150
    Batch 000/469 | Generator Loss: 1.0881 | Discriminator Loss: 0.5595
    Batch 150/469 | Generator Loss: 1.0527 | Discriminator Loss: 0.5881
    Batch 300/469 | Generator Loss: 1.0921 | Discriminator Loss: 0.5620
    Batch 450/469 | Generator Loss: 0.9942 | Discriminator Loss: 0.6068
    Training Time: 3.97 min
Epoch: 032/150
    Batch 000/469 | Generator Loss: 1.1014 | Discriminator Loss: 0.5362
    Batch 150/469 | Generator Loss: 1.0418 | Discriminator Loss: 0.6183
    Batch 300/469 | Generator Loss: 1.0222 | Discriminator Loss: 0.6053
    Batch 450/469 | Generator Loss: 1.0609 | Discriminator Loss: 0.5820
    Training Time: 4.10 min
Epoch: 033/150
    Batch 000/469 | Generator Loss: 0.8775 | Discriminator Loss: 0.5780
    Batch 150/469 | Generator Loss: 1.1253 | Discriminator Loss: 0.6008
    Batch 300/469 | Generator Loss: 1.0669 | Discriminator Loss: 0.5436
    Batch 450/469 | Generator Loss: 1.0661 | Discriminator Loss: 0.5689
    Training Time: 4.22 min
Epoch: 034/150
    Batch 000/469 | Generator Loss: 1.1670 | Discriminator Loss: 0.5532
    Batch 150/469 | Generator Loss: 1.0157 | Discriminator Loss: 0.5508
    Batch 300/469 | Generator Loss: 0.9950 | Discriminator Loss: 0.5814
    Batch 450/469 | Generator Loss: 1.0764 | Discriminator Loss: 0.5867
    Training Time: 4.35 min
Epoch: 035/150
    Batch 000/469 | Generator Loss: 1.0449 | Discriminator Loss: 0.5817
    Batch 150/469 | Generator Loss: 1.0254 | Discriminator Loss: 0.5728
    Batch 300/469 | Generator Loss: 1.0233 | Discriminator Loss: 0.5866
    Batch 450/469 | Generator Loss: 1.0579 | Discriminator Loss: 0.6058
    Training Time: 4.48 min
Epoch: 036/150
    Batch 000/469 | Generator Loss: 1.0930 | Discriminator Loss: 0.5538
    Batch 150/469 | Generator Loss: 1.0606 | Discriminator Loss: 0.5692
    Batch 300/469 | Generator Loss: 1.0321 | Discriminator Loss: 0.6016
    Batch 450/469 | Generator Loss: 1.0552 | Discriminator Loss: 0.5584
    Training Time: 4.61 min
Epoch: 037/150
    Batch 000/469 | Generator Loss: 1.1172 | Discriminator Loss: 0.6155
    Batch 150/469 | Generator Loss: 0.9938 | Discriminator Loss: 0.5576
    Batch 300/469 | Generator Loss: 1.0874 | Discriminator Loss: 0.5452
    Batch 450/469 | Generator Loss: 1.0057 | Discriminator Loss: 0.6251
    Training Time: 4.74 min
Epoch: 038/150
    Batch 000/469 | Generator Loss: 1.0448 | Discriminator Loss: 0.5659
    Batch 150/469 | Generator Loss: 1.0417 | Discriminator Loss: 0.5430
    Batch 300/469 | Generator Loss: 0.9855 | Discriminator Loss: 0.5323
    Batch 450/469 | Generator Loss: 1.1449 | Discriminator Loss: 0.5673
    Training Time: 4.87 min
Epoch: 039/150
    Batch 000/469 | Generator Loss: 1.0379 | Discriminator Loss: 0.6022
    Batch 150/469 | Generator Loss: 1.0496 | Discriminator Loss: 0.5957
    Batch 300/469 | Generator Loss: 1.0402 | Discriminator Loss: 0.5202
    Batch 450/469 | Generator Loss: 1.0402 | Discriminator Loss: 0.5595
    Training Time: 5.00 min
Epoch: 040/150
    Batch 000/469 | Generator Loss: 1.0023 | Discriminator Loss: 0.5767
    Batch 150/469 | Generator Loss: 1.0839 | Discriminator Loss: 0.5538
    Batch 300/469 | Generator Loss: 1.0961 | Discriminator Loss: 0.5795
    Batch 450/469 | Generator Loss: 1.0919 | Discriminator Loss: 0.5655
    Training Time: 5.13 min
Epoch: 041/150
    Batch 000/469 | Generator Loss: 0.9564 | Discriminator Loss: 0.6457
    Batch 150/469 | Generator Loss: 0.9900 | Discriminator Loss: 0.6377
    Batch 300/469 | Generator Loss: 0.9877 | Discriminator Loss: 0.5701
    Batch 450/469 | Generator Loss: 1.0018 | Discriminator Loss: 0.6318
    Training Time: 5.26 min
Epoch: 042/150
    Batch 000/469 | Generator Loss: 1.0473 | Discriminator Loss: 0.5742
    Batch 150/469 | Generator Loss: 0.8654 | Discriminator Loss: 0.6120
    Batch 300/469 | Generator Loss: 1.0802 | Discriminator Loss: 0.5517
    Batch 450/469 | Generator Loss: 1.0619 | Discriminator Loss: 0.6179
    Training Time: 5.38 min
Epoch: 043/150
    Batch 000/469 | Generator Loss: 0.9256 | Discriminator Loss: 0.6098
    Batch 150/469 | Generator Loss: 0.9298 | Discriminator Loss: 0.5994
    Batch 300/469 | Generator Loss: 0.9351 | Discriminator Loss: 0.5954
    Batch 450/469 | Generator Loss: 0.9636 | Discriminator Loss: 0.6426
    Training Time: 5.51 min
Epoch: 044/150
    Batch 000/469 | Generator Loss: 0.9580 | Discriminator Loss: 0.5917
    Batch 150/469 | Generator Loss: 0.9880 | Discriminator Loss: 0.5773
    Batch 300/469 | Generator Loss: 0.9201 | Discriminator Loss: 0.6187
    Batch 450/469 | Generator Loss: 0.9499 | Discriminator Loss: 0.6153
    Training Time: 5.64 min
Epoch: 045/150
    Batch 000/469 | Generator Loss: 0.9359 | Discriminator Loss: 0.6095
    Batch 150/469 | Generator Loss: 0.9063 | Discriminator Loss: 0.5915
    Batch 300/469 | Generator Loss: 0.8766 | Discriminator Loss: 0.5891
    Batch 450/469 | Generator Loss: 0.9868 | Discriminator Loss: 0.5589
    Training Time: 5.77 min
Epoch: 046/150
    Batch 000/469 | Generator Loss: 0.9923 | Discriminator Loss: 0.6180
    Batch 150/469 | Generator Loss: 0.9415 | Discriminator Loss: 0.6540
    Batch 300/469 | Generator Loss: 1.0207 | Discriminator Loss: 0.6041
    Batch 450/469 | Generator Loss: 0.9341 | Discriminator Loss: 0.6129
    Training Time: 5.89 min
Epoch: 047/150
    Batch 000/469 | Generator Loss: 0.9324 | Discriminator Loss: 0.6477
    Batch 150/469 | Generator Loss: 0.9526 | Discriminator Loss: 0.6001
    Batch 300/469 | Generator Loss: 0.9894 | Discriminator Loss: 0.5963
    Batch 450/469 | Generator Loss: 1.0328 | Discriminator Loss: 0.5975
    Training Time: 6.02 min
Epoch: 048/150
    Batch 000/469 | Generator Loss: 0.9159 | Discriminator Loss: 0.6116
    Batch 150/469 | Generator Loss: 0.9007 | Discriminator Loss: 0.6198
    Batch 300/469 | Generator Loss: 0.9850 | Discriminator Loss: 0.6183
    Batch 450/469 | Generator Loss: 0.9033 | Discriminator Loss: 0.6293
    Training Time: 6.15 min
Epoch: 049/150
    Batch 000/469 | Generator Loss: 0.8491 | Discriminator Loss: 0.5947
    Batch 150/469 | Generator Loss: 0.9223 | Discriminator Loss: 0.6241
    Batch 300/469 | Generator Loss: 0.9235 | Discriminator Loss: 0.6470
    Batch 450/469 | Generator Loss: 0.8362 | Discriminator Loss: 0.6368
    Training Time: 6.28 min
Epoch: 050/150
    Batch 000/469 | Generator Loss: 0.9781 | Discriminator Loss: 0.6460
    Batch 150/469 | Generator Loss: 0.9688 | Discriminator Loss: 0.5982
    Batch 300/469 | Generator Loss: 1.0482 | Discriminator Loss: 0.6009
    Batch 450/469 | Generator Loss: 0.9755 | Discriminator Loss: 0.5754
    Training Time: 6.40 min
Epoch: 051/150
    Batch 000/469 | Generator Loss: 0.8765 | Discriminator Loss: 0.5927
    Batch 150/469 | Generator Loss: 0.9841 | Discriminator Loss: 0.6056
    Batch 300/469 | Generator Loss: 0.9703 | Discriminator Loss: 0.5762
    Batch 450/469 | Generator Loss: 0.8988 | Discriminator Loss: 0.5806
    Training Time: 6.53 min
Epoch: 052/150
    Batch 000/469 | Generator Loss: 1.0108 | Discriminator Loss: 0.6499
    Batch 150/469 | Generator Loss: 0.9095 | Discriminator Loss: 0.6422
    Batch 300/469 | Generator Loss: 0.9236 | Discriminator Loss: 0.5957
    Batch 450/469 | Generator Loss: 1.0673 | Discriminator Loss: 0.5895
    Training Time: 6.66 min
Epoch: 053/150
    Batch 000/469 | Generator Loss: 0.9371 | Discriminator Loss: 0.5997
    Batch 150/469 | Generator Loss: 0.9041 | Discriminator Loss: 0.5888
    Batch 300/469 | Generator Loss: 0.9198 | Discriminator Loss: 0.6287
    Batch 450/469 | Generator Loss: 1.0477 | Discriminator Loss: 0.5714
    Training Time: 6.79 min
Epoch: 054/150
    Batch 000/469 | Generator Loss: 0.9119 | Discriminator Loss: 0.6192
    Batch 150/469 | Generator Loss: 0.8544 | Discriminator Loss: 0.6352
    Batch 300/469 | Generator Loss: 0.9422 | Discriminator Loss: 0.6124
    Batch 450/469 | Generator Loss: 0.9194 | Discriminator Loss: 0.6244
    Training Time: 6.91 min
Epoch: 055/150
    Batch 000/469 | Generator Loss: 0.9499 | Discriminator Loss: 0.6018
    Batch 150/469 | Generator Loss: 0.9204 | Discriminator Loss: 0.6669
    Batch 300/469 | Generator Loss: 0.8802 | Discriminator Loss: 0.6603
    Batch 450/469 | Generator Loss: 0.8986 | Discriminator Loss: 0.6309
    Training Time: 7.04 min
Epoch: 056/150
    Batch 000/469 | Generator Loss: 0.9302 | Discriminator Loss: 0.6020
    Batch 150/469 | Generator Loss: 1.0448 | Discriminator Loss: 0.6013
    Batch 300/469 | Generator Loss: 0.9759 | Discriminator Loss: 0.6168
    Batch 450/469 | Generator Loss: 0.8682 | Discriminator Loss: 0.6583
    Training Time: 7.17 min
Epoch: 057/150
    Batch 000/469 | Generator Loss: 0.8680 | Discriminator Loss: 0.6138
    Batch 150/469 | Generator Loss: 0.9751 | Discriminator Loss: 0.6312
    Batch 300/469 | Generator Loss: 0.8755 | Discriminator Loss: 0.6345
    Batch 450/469 | Generator Loss: 0.9404 | Discriminator Loss: 0.5883
    Training Time: 7.30 min
Epoch: 058/150
    Batch 000/469 | Generator Loss: 0.8904 | Discriminator Loss: 0.5805
    Batch 150/469 | Generator Loss: 1.0353 | Discriminator Loss: 0.6165
    Batch 300/469 | Generator Loss: 0.8924 | Discriminator Loss: 0.6620
    Batch 450/469 | Generator Loss: 0.9869 | Discriminator Loss: 0.6201
    Training Time: 7.42 min
Epoch: 059/150
    Batch 000/469 | Generator Loss: 0.8990 | Discriminator Loss: 0.6128
    Batch 150/469 | Generator Loss: 0.8631 | Discriminator Loss: 0.5675
    Batch 300/469 | Generator Loss: 0.9857 | Discriminator Loss: 0.6193
    Batch 450/469 | Generator Loss: 0.9626 | Discriminator Loss: 0.6229
    Training Time: 7.55 min
Epoch: 060/150
    Batch 000/469 | Generator Loss: 0.8861 | Discriminator Loss: 0.6483
    Batch 150/469 | Generator Loss: 0.8775 | Discriminator Loss: 0.6784
    Batch 300/469 | Generator Loss: 0.8956 | Discriminator Loss: 0.6669
    Batch 450/469 | Generator Loss: 0.9414 | Discriminator Loss: 0.6068
    Training Time: 7.68 min
Epoch: 061/150
    Batch 000/469 | Generator Loss: 0.9246 | Discriminator Loss: 0.6244
    Batch 150/469 | Generator Loss: 0.9069 | Discriminator Loss: 0.5753
    Batch 300/469 | Generator Loss: 0.8193 | Discriminator Loss: 0.6139
    Batch 450/469 | Generator Loss: 0.8738 | Discriminator Loss: 0.6271
    Training Time: 7.81 min
Epoch: 062/150
    Batch 000/469 | Generator Loss: 0.8993 | Discriminator Loss: 0.6283
    Batch 150/469 | Generator Loss: 1.0043 | Discriminator Loss: 0.5977
    Batch 300/469 | Generator Loss: 0.9033 | Discriminator Loss: 0.6525
    Batch 450/469 | Generator Loss: 0.9515 | Discriminator Loss: 0.6187
    Training Time: 7.93 min
Epoch: 063/150
    Batch 000/469 | Generator Loss: 0.8763 | Discriminator Loss: 0.6627
    Batch 150/469 | Generator Loss: 0.8911 | Discriminator Loss: 0.6370
    Batch 300/469 | Generator Loss: 0.8805 | Discriminator Loss: 0.6780
    Batch 450/469 | Generator Loss: 0.8663 | Discriminator Loss: 0.6474
    Training Time: 8.06 min
Epoch: 064/150
    Batch 000/469 | Generator Loss: 0.8817 | Discriminator Loss: 0.6347
    Batch 150/469 | Generator Loss: 0.8320 | Discriminator Loss: 0.6310
    Batch 300/469 | Generator Loss: 0.8588 | Discriminator Loss: 0.6243
    Batch 450/469 | Generator Loss: 0.8220 | Discriminator Loss: 0.6589
    Training Time: 8.19 min
Epoch: 065/150
    Batch 000/469 | Generator Loss: 0.9648 | Discriminator Loss: 0.6200
    Batch 150/469 | Generator Loss: 0.8998 | Discriminator Loss: 0.6325
    Batch 300/469 | Generator Loss: 0.9167 | Discriminator Loss: 0.6834
    Batch 450/469 | Generator Loss: 0.9202 | Discriminator Loss: 0.6558
    Training Time: 8.32 min
Epoch: 066/150
    Batch 000/469 | Generator Loss: 0.9392 | Discriminator Loss: 0.6066
    Batch 150/469 | Generator Loss: 0.8963 | Discriminator Loss: 0.6349
    Batch 300/469 | Generator Loss: 0.9346 | Discriminator Loss: 0.6363
    Batch 450/469 | Generator Loss: 0.9625 | Discriminator Loss: 0.6337
    Training Time: 8.44 min
Epoch: 067/150
    Batch 000/469 | Generator Loss: 0.9025 | Discriminator Loss: 0.6093
    Batch 150/469 | Generator Loss: 0.9457 | Discriminator Loss: 0.6207
    Batch 300/469 | Generator Loss: 0.8434 | Discriminator Loss: 0.6563
    Batch 450/469 | Generator Loss: 0.9275 | Discriminator Loss: 0.6119
    Training Time: 8.57 min
Epoch: 068/150
    Batch 000/469 | Generator Loss: 0.9107 | Discriminator Loss: 0.6145
    Batch 150/469 | Generator Loss: 0.9286 | Discriminator Loss: 0.5974
    Batch 300/469 | Generator Loss: 0.8665 | Discriminator Loss: 0.6332
    Batch 450/469 | Generator Loss: 0.9118 | Discriminator Loss: 0.6692
    Training Time: 8.70 min
Epoch: 069/150
    Batch 000/469 | Generator Loss: 0.9072 | Discriminator Loss: 0.6615
    Batch 150/469 | Generator Loss: 0.8440 | Discriminator Loss: 0.6357
    Batch 300/469 | Generator Loss: 0.9652 | Discriminator Loss: 0.6415
    Batch 450/469 | Generator Loss: 0.8283 | Discriminator Loss: 0.6798
    Training Time: 8.83 min
Epoch: 070/150
    Batch 000/469 | Generator Loss: 0.9225 | Discriminator Loss: 0.6741
    Batch 150/469 | Generator Loss: 0.8656 | Discriminator Loss: 0.6180
    Batch 300/469 | Generator Loss: 0.8149 | Discriminator Loss: 0.6585
    Batch 450/469 | Generator Loss: 0.7994 | Discriminator Loss: 0.6608
    Training Time: 8.96 min
Epoch: 071/150
    Batch 000/469 | Generator Loss: 0.8956 | Discriminator Loss: 0.6420
    Batch 150/469 | Generator Loss: 0.8529 | Discriminator Loss: 0.5966
    Batch 300/469 | Generator Loss: 0.8337 | Discriminator Loss: 0.6356
    Batch 450/469 | Generator Loss: 0.8332 | Discriminator Loss: 0.6250
    Training Time: 9.08 min
Epoch: 072/150
    Batch 000/469 | Generator Loss: 0.8464 | Discriminator Loss: 0.6495
    Batch 150/469 | Generator Loss: 0.8890 | Discriminator Loss: 0.7069
    Batch 300/469 | Generator Loss: 0.8861 | Discriminator Loss: 0.5988
    Batch 450/469 | Generator Loss: 0.8924 | Discriminator Loss: 0.6120
    Training Time: 9.21 min
Epoch: 073/150
    Batch 000/469 | Generator Loss: 0.8900 | Discriminator Loss: 0.6126
    Batch 150/469 | Generator Loss: 0.9277 | Discriminator Loss: 0.6478
    Batch 300/469 | Generator Loss: 0.8397 | Discriminator Loss: 0.6596
    Batch 450/469 | Generator Loss: 0.8660 | Discriminator Loss: 0.5968
    Training Time: 9.34 min
Epoch: 074/150
    Batch 000/469 | Generator Loss: 0.9154 | Discriminator Loss: 0.5786
    Batch 150/469 | Generator Loss: 0.8822 | Discriminator Loss: 0.6255
    Batch 300/469 | Generator Loss: 0.8511 | Discriminator Loss: 0.6386
    Batch 450/469 | Generator Loss: 0.9231 | Discriminator Loss: 0.6424
    Training Time: 9.47 min
Epoch: 075/150
    Batch 000/469 | Generator Loss: 0.8857 | Discriminator Loss: 0.6318
    Batch 150/469 | Generator Loss: 0.9411 | Discriminator Loss: 0.6766
    Batch 300/469 | Generator Loss: 0.8100 | Discriminator Loss: 0.6871
    Batch 450/469 | Generator Loss: 0.8419 | Discriminator Loss: 0.6232
    Training Time: 9.60 min
Epoch: 076/150
    Batch 000/469 | Generator Loss: 0.8467 | Discriminator Loss: 0.5945
    Batch 150/469 | Generator Loss: 0.8792 | Discriminator Loss: 0.6338
    Batch 300/469 | Generator Loss: 0.8762 | Discriminator Loss: 0.6247
    Batch 450/469 | Generator Loss: 0.8609 | Discriminator Loss: 0.6354
    Training Time: 9.72 min
Epoch: 077/150
    Batch 000/469 | Generator Loss: 0.8429 | Discriminator Loss: 0.6268
    Batch 150/469 | Generator Loss: 0.8650 | Discriminator Loss: 0.6513
    Batch 300/469 | Generator Loss: 0.9051 | Discriminator Loss: 0.6651
    Batch 450/469 | Generator Loss: 0.9347 | Discriminator Loss: 0.6309
    Training Time: 9.85 min
Epoch: 078/150
    Batch 000/469 | Generator Loss: 0.9228 | Discriminator Loss: 0.6185
    Batch 150/469 | Generator Loss: 0.8711 | Discriminator Loss: 0.6429
    Batch 300/469 | Generator Loss: 0.8817 | Discriminator Loss: 0.6157
    Batch 450/469 | Generator Loss: 0.7888 | Discriminator Loss: 0.6555
    Training Time: 9.98 min
Epoch: 079/150
    Batch 000/469 | Generator Loss: 0.8526 | Discriminator Loss: 0.6110
    Batch 150/469 | Generator Loss: 0.9301 | Discriminator Loss: 0.6673
    Batch 300/469 | Generator Loss: 0.8378 | Discriminator Loss: 0.6372
    Batch 450/469 | Generator Loss: 0.8623 | Discriminator Loss: 0.6255
    Training Time: 10.11 min
Epoch: 080/150
    Batch 000/469 | Generator Loss: 0.8273 | Discriminator Loss: 0.6587
    Batch 150/469 | Generator Loss: 0.9030 | Discriminator Loss: 0.6302
    Batch 300/469 | Generator Loss: 0.9217 | Discriminator Loss: 0.6148
    Batch 450/469 | Generator Loss: 0.8475 | Discriminator Loss: 0.6202
    Training Time: 10.24 min
Epoch: 081/150
    Batch 000/469 | Generator Loss: 0.8246 | Discriminator Loss: 0.6474
    Batch 150/469 | Generator Loss: 0.8004 | Discriminator Loss: 0.6469
    Batch 300/469 | Generator Loss: 0.9437 | Discriminator Loss: 0.6231
    Batch 450/469 | Generator Loss: 0.8395 | Discriminator Loss: 0.6055
    Training Time: 10.36 min
Epoch: 082/150
    Batch 000/469 | Generator Loss: 0.8900 | Discriminator Loss: 0.6147
    Batch 150/469 | Generator Loss: 0.9312 | Discriminator Loss: 0.6470
    Batch 300/469 | Generator Loss: 0.7987 | Discriminator Loss: 0.6614
    Batch 450/469 | Generator Loss: 0.9051 | Discriminator Loss: 0.6388
    Training Time: 10.49 min
Epoch: 083/150
    Batch 000/469 | Generator Loss: 0.8932 | Discriminator Loss: 0.6307
    Batch 150/469 | Generator Loss: 0.8686 | Discriminator Loss: 0.6327
    Batch 300/469 | Generator Loss: 0.8601 | Discriminator Loss: 0.6600
    Batch 450/469 | Generator Loss: 0.9179 | Discriminator Loss: 0.6437
    Training Time: 10.62 min
Epoch: 084/150
    Batch 000/469 | Generator Loss: 0.9175 | Discriminator Loss: 0.5985
    Batch 150/469 | Generator Loss: 0.8821 | Discriminator Loss: 0.6854
    Batch 300/469 | Generator Loss: 0.8356 | Discriminator Loss: 0.6272
    Batch 450/469 | Generator Loss: 0.8438 | Discriminator Loss: 0.6331
    Training Time: 10.74 min
Epoch: 085/150
    Batch 000/469 | Generator Loss: 0.8230 | Discriminator Loss: 0.6271
    Batch 150/469 | Generator Loss: 0.8446 | Discriminator Loss: 0.5949
    Batch 300/469 | Generator Loss: 0.8513 | Discriminator Loss: 0.6674
    Batch 450/469 | Generator Loss: 0.7784 | Discriminator Loss: 0.6507
    Training Time: 10.87 min
Epoch: 086/150
    Batch 000/469 | Generator Loss: 0.8435 | Discriminator Loss: 0.6960
    Batch 150/469 | Generator Loss: 0.8247 | Discriminator Loss: 0.6456
    Batch 300/469 | Generator Loss: 0.8969 | Discriminator Loss: 0.6136
    Batch 450/469 | Generator Loss: 0.8361 | Discriminator Loss: 0.6544
    Training Time: 11.00 min
Epoch: 087/150
    Batch 000/469 | Generator Loss: 0.8692 | Discriminator Loss: 0.6383
    Batch 150/469 | Generator Loss: 0.7944 | Discriminator Loss: 0.6355
    Batch 300/469 | Generator Loss: 0.7870 | Discriminator Loss: 0.6249
    Batch 450/469 | Generator Loss: 0.7808 | Discriminator Loss: 0.6015
    Training Time: 11.13 min
Epoch: 088/150
    Batch 000/469 | Generator Loss: 0.8778 | Discriminator Loss: 0.6125
    Batch 150/469 | Generator Loss: 0.8406 | Discriminator Loss: 0.6411
    Batch 300/469 | Generator Loss: 0.8804 | Discriminator Loss: 0.6579
    Batch 450/469 | Generator Loss: 0.7803 | Discriminator Loss: 0.6490
    Training Time: 11.26 min
Epoch: 089/150
    Batch 000/469 | Generator Loss: 0.8671 | Discriminator Loss: 0.6213
    Batch 150/469 | Generator Loss: 0.8514 | Discriminator Loss: 0.6010
    Batch 300/469 | Generator Loss: 0.8721 | Discriminator Loss: 0.6556
    Batch 450/469 | Generator Loss: 0.8359 | Discriminator Loss: 0.6731
    Training Time: 11.39 min
Epoch: 090/150
    Batch 000/469 | Generator Loss: 0.8207 | Discriminator Loss: 0.6735
    Batch 150/469 | Generator Loss: 0.9175 | Discriminator Loss: 0.6093
    Batch 300/469 | Generator Loss: 0.8742 | Discriminator Loss: 0.6589
    Batch 450/469 | Generator Loss: 0.8346 | Discriminator Loss: 0.6536
    Training Time: 11.52 min
Epoch: 091/150
    Batch 000/469 | Generator Loss: 0.8048 | Discriminator Loss: 0.6435
    Batch 150/469 | Generator Loss: 0.8121 | Discriminator Loss: 0.6634
    Batch 300/469 | Generator Loss: 0.8508 | Discriminator Loss: 0.6305
    Batch 450/469 | Generator Loss: 0.8251 | Discriminator Loss: 0.6631
    Training Time: 11.65 min
Epoch: 092/150
    Batch 000/469 | Generator Loss: 0.8312 | Discriminator Loss: 0.6542
    Batch 150/469 | Generator Loss: 0.8260 | Discriminator Loss: 0.6255
    Batch 300/469 | Generator Loss: 0.8134 | Discriminator Loss: 0.6156
    Batch 450/469 | Generator Loss: 0.8169 | Discriminator Loss: 0.6379
    Training Time: 11.78 min
Epoch: 093/150
    Batch 000/469 | Generator Loss: 0.8537 | Discriminator Loss: 0.6497
    Batch 150/469 | Generator Loss: 0.8532 | Discriminator Loss: 0.6330
    Batch 300/469 | Generator Loss: 0.8338 | Discriminator Loss: 0.6473
    Batch 450/469 | Generator Loss: 0.8228 | Discriminator Loss: 0.6376
    Training Time: 11.90 min
Epoch: 094/150
    Batch 000/469 | Generator Loss: 0.7510 | Discriminator Loss: 0.6396
    Batch 150/469 | Generator Loss: 0.8495 | Discriminator Loss: 0.6598
    Batch 300/469 | Generator Loss: 0.7866 | Discriminator Loss: 0.5948
    Batch 450/469 | Generator Loss: 0.8262 | Discriminator Loss: 0.6511
    Training Time: 12.03 min
Epoch: 095/150
    Batch 000/469 | Generator Loss: 0.7986 | Discriminator Loss: 0.6192
    Batch 150/469 | Generator Loss: 0.8568 | Discriminator Loss: 0.6246
    Batch 300/469 | Generator Loss: 0.8135 | Discriminator Loss: 0.6675
    Batch 450/469 | Generator Loss: 0.8670 | Discriminator Loss: 0.6124
    Training Time: 12.16 min
Epoch: 096/150
    Batch 000/469 | Generator Loss: 0.8617 | Discriminator Loss: 0.6714
    Batch 150/469 | Generator Loss: 0.8778 | Discriminator Loss: 0.6356
    Batch 300/469 | Generator Loss: 0.8313 | Discriminator Loss: 0.6222
    Batch 450/469 | Generator Loss: 0.8796 | Discriminator Loss: 0.6459
    Training Time: 12.29 min
Epoch: 097/150
    Batch 000/469 | Generator Loss: 0.8382 | Discriminator Loss: 0.6495
    Batch 150/469 | Generator Loss: 0.8516 | Discriminator Loss: 0.6495
    Batch 300/469 | Generator Loss: 0.8782 | Discriminator Loss: 0.6347
    Batch 450/469 | Generator Loss: 0.8096 | Discriminator Loss: 0.6319
    Training Time: 12.42 min
Epoch: 098/150
    Batch 000/469 | Generator Loss: 0.8811 | Discriminator Loss: 0.6089
    Batch 150/469 | Generator Loss: 0.7940 | Discriminator Loss: 0.6222
    Batch 300/469 | Generator Loss: 0.8546 | Discriminator Loss: 0.6587
    Batch 450/469 | Generator Loss: 0.8799 | Discriminator Loss: 0.5807
    Training Time: 12.54 min
Epoch: 099/150
    Batch 000/469 | Generator Loss: 0.8693 | Discriminator Loss: 0.6071
    Batch 150/469 | Generator Loss: 0.8774 | Discriminator Loss: 0.5956
    Batch 300/469 | Generator Loss: 0.8544 | Discriminator Loss: 0.6524
    Batch 450/469 | Generator Loss: 0.8555 | Discriminator Loss: 0.6457
    Training Time: 12.67 min
Epoch: 100/150
    Batch 000/469 | Generator Loss: 0.8591 | Discriminator Loss: 0.6598
    Batch 150/469 | Generator Loss: 0.8101 | Discriminator Loss: 0.6621
    Batch 300/469 | Generator Loss: 0.7914 | Discriminator Loss: 0.6234
    Batch 450/469 | Generator Loss: 0.8510 | Discriminator Loss: 0.6485
    Training Time: 12.80 min
Epoch: 101/150
    Batch 000/469 | Generator Loss: 0.8430 | Discriminator Loss: 0.6432
    Batch 150/469 | Generator Loss: 0.7700 | Discriminator Loss: 0.6545
    Batch 300/469 | Generator Loss: 0.8714 | Discriminator Loss: 0.6239
    Batch 450/469 | Generator Loss: 0.7692 | Discriminator Loss: 0.6562
    Training Time: 12.93 min
Epoch: 102/150
    Batch 000/469 | Generator Loss: 0.8234 | Discriminator Loss: 0.6691
    Batch 150/469 | Generator Loss: 0.8155 | Discriminator Loss: 0.6747
    Batch 300/469 | Generator Loss: 0.9051 | Discriminator Loss: 0.6029
    Batch 450/469 | Generator Loss: 0.8341 | Discriminator Loss: 0.6225
    Training Time: 13.06 min
Epoch: 103/150
    Batch 000/469 | Generator Loss: 0.8603 | Discriminator Loss: 0.6310
    Batch 150/469 | Generator Loss: 0.7609 | Discriminator Loss: 0.6578
    Batch 300/469 | Generator Loss: 0.7706 | Discriminator Loss: 0.6740
    Batch 450/469 | Generator Loss: 0.8281 | Discriminator Loss: 0.6503
    Training Time: 13.19 min
Epoch: 104/150
    Batch 000/469 | Generator Loss: 0.8480 | Discriminator Loss: 0.6606
    Batch 150/469 | Generator Loss: 0.8060 | Discriminator Loss: 0.6683
    Batch 300/469 | Generator Loss: 0.7845 | Discriminator Loss: 0.6518
    Batch 450/469 | Generator Loss: 0.8850 | Discriminator Loss: 0.6415
    Training Time: 13.32 min
Epoch: 105/150
    Batch 000/469 | Generator Loss: 0.8138 | Discriminator Loss: 0.6508
    Batch 150/469 | Generator Loss: 0.8390 | Discriminator Loss: 0.6653
    Batch 300/469 | Generator Loss: 0.8759 | Discriminator Loss: 0.6300
    Batch 450/469 | Generator Loss: 0.7696 | Discriminator Loss: 0.6671
    Training Time: 13.44 min
Epoch: 106/150
    Batch 000/469 | Generator Loss: 0.8155 | Discriminator Loss: 0.6788
    Batch 150/469 | Generator Loss: 0.8538 | Discriminator Loss: 0.6526
    Batch 300/469 | Generator Loss: 0.8264 | Discriminator Loss: 0.6462
    Batch 450/469 | Generator Loss: 0.8849 | Discriminator Loss: 0.6240
    Training Time: 13.57 min
Epoch: 107/150
    Batch 000/469 | Generator Loss: 0.8392 | Discriminator Loss: 0.6757
    Batch 150/469 | Generator Loss: 0.8998 | Discriminator Loss: 0.6334
    Batch 300/469 | Generator Loss: 0.8865 | Discriminator Loss: 0.6107
    Batch 450/469 | Generator Loss: 0.9068 | Discriminator Loss: 0.6392
    Training Time: 13.70 min
Epoch: 108/150
    Batch 000/469 | Generator Loss: 0.8250 | Discriminator Loss: 0.6648
    Batch 150/469 | Generator Loss: 0.8375 | Discriminator Loss: 0.6643
    Batch 300/469 | Generator Loss: 0.8139 | Discriminator Loss: 0.6398
    Batch 450/469 | Generator Loss: 0.8655 | Discriminator Loss: 0.6430
    Training Time: 13.83 min
Epoch: 109/150
    Batch 000/469 | Generator Loss: 0.8671 | Discriminator Loss: 0.6377
    Batch 150/469 | Generator Loss: 0.8783 | Discriminator Loss: 0.6485
    Batch 300/469 | Generator Loss: 0.8002 | Discriminator Loss: 0.6715
    Batch 450/469 | Generator Loss: 0.7579 | Discriminator Loss: 0.6868
    Training Time: 13.96 min
Epoch: 110/150
    Batch 000/469 | Generator Loss: 0.8409 | Discriminator Loss: 0.6110
    Batch 150/469 | Generator Loss: 0.8462 | Discriminator Loss: 0.6559
    Batch 300/469 | Generator Loss: 0.8314 | Discriminator Loss: 0.6238
    Batch 450/469 | Generator Loss: 0.8328 | Discriminator Loss: 0.6296
    Training Time: 14.09 min
Epoch: 111/150
    Batch 000/469 | Generator Loss: 0.8086 | Discriminator Loss: 0.6749
    Batch 150/469 | Generator Loss: 0.8811 | Discriminator Loss: 0.6579
    Batch 300/469 | Generator Loss: 0.8459 | Discriminator Loss: 0.6742
    Batch 450/469 | Generator Loss: 0.8393 | Discriminator Loss: 0.6626
    Training Time: 14.22 min
Epoch: 112/150
    Batch 000/469 | Generator Loss: 0.8234 | Discriminator Loss: 0.6317
    Batch 150/469 | Generator Loss: 0.7496 | Discriminator Loss: 0.6613
    Batch 300/469 | Generator Loss: 0.8724 | Discriminator Loss: 0.6308
    Batch 450/469 | Generator Loss: 0.8426 | Discriminator Loss: 0.6526
    Training Time: 14.34 min
Epoch: 113/150
    Batch 000/469 | Generator Loss: 0.8266 | Discriminator Loss: 0.6098
    Batch 150/469 | Generator Loss: 0.8888 | Discriminator Loss: 0.6046
    Batch 300/469 | Generator Loss: 0.8782 | Discriminator Loss: 0.6630
    Batch 450/469 | Generator Loss: 0.8312 | Discriminator Loss: 0.6591
    Training Time: 14.47 min
Epoch: 114/150
    Batch 000/469 | Generator Loss: 0.8114 | Discriminator Loss: 0.6386
    Batch 150/469 | Generator Loss: 0.8207 | Discriminator Loss: 0.6246
    Batch 300/469 | Generator Loss: 0.7949 | Discriminator Loss: 0.6075
    Batch 450/469 | Generator Loss: 0.8522 | Discriminator Loss: 0.6462
    Training Time: 14.60 min
Epoch: 115/150
    Batch 000/469 | Generator Loss: 0.7520 | Discriminator Loss: 0.6725
    Batch 150/469 | Generator Loss: 0.8291 | Discriminator Loss: 0.6311
    Batch 300/469 | Generator Loss: 0.8405 | Discriminator Loss: 0.6559
    Batch 450/469 | Generator Loss: 0.8292 | Discriminator Loss: 0.6614
    Training Time: 14.73 min
Epoch: 116/150
    Batch 000/469 | Generator Loss: 0.7829 | Discriminator Loss: 0.6509
    Batch 150/469 | Generator Loss: 0.8341 | Discriminator Loss: 0.6551
    Batch 300/469 | Generator Loss: 0.8701 | Discriminator Loss: 0.6420
    Batch 450/469 | Generator Loss: 0.8472 | Discriminator Loss: 0.6363
    Training Time: 14.86 min
Epoch: 117/150
    Batch 000/469 | Generator Loss: 0.8094 | Discriminator Loss: 0.6378
    Batch 150/469 | Generator Loss: 0.7599 | Discriminator Loss: 0.6201
    Batch 300/469 | Generator Loss: 0.8458 | Discriminator Loss: 0.6410
    Batch 450/469 | Generator Loss: 0.8291 | Discriminator Loss: 0.6709
    Training Time: 14.98 min
Epoch: 118/150
    Batch 000/469 | Generator Loss: 0.7696 | Discriminator Loss: 0.6682
    Batch 150/469 | Generator Loss: 0.7743 | Discriminator Loss: 0.6246
    Batch 300/469 | Generator Loss: 0.8372 | Discriminator Loss: 0.6358
    Batch 450/469 | Generator Loss: 0.8276 | Discriminator Loss: 0.6450
    Training Time: 15.11 min
Epoch: 119/150
    Batch 000/469 | Generator Loss: 0.7894 | Discriminator Loss: 0.6735
    Batch 150/469 | Generator Loss: 0.8944 | Discriminator Loss: 0.6294
    Batch 300/469 | Generator Loss: 0.8051 | Discriminator Loss: 0.6895
    Batch 450/469 | Generator Loss: 0.8212 | Discriminator Loss: 0.6373
    Training Time: 15.24 min
Epoch: 120/150
    Batch 000/469 | Generator Loss: 0.7708 | Discriminator Loss: 0.6677
    Batch 150/469 | Generator Loss: 0.7324 | Discriminator Loss: 0.6491
    Batch 300/469 | Generator Loss: 0.8533 | Discriminator Loss: 0.6147
    Batch 450/469 | Generator Loss: 0.7962 | Discriminator Loss: 0.6723
    Training Time: 15.37 min
Epoch: 121/150
    Batch 000/469 | Generator Loss: 0.7964 | Discriminator Loss: 0.6237
    Batch 150/469 | Generator Loss: 0.8608 | Discriminator Loss: 0.6773
    Batch 300/469 | Generator Loss: 0.8331 | Discriminator Loss: 0.6856
    Batch 450/469 | Generator Loss: 0.8038 | Discriminator Loss: 0.6683
    Training Time: 15.50 min
Epoch: 122/150
    Batch 000/469 | Generator Loss: 0.8185 | Discriminator Loss: 0.6522
    Batch 150/469 | Generator Loss: 0.8308 | Discriminator Loss: 0.6272
    Batch 300/469 | Generator Loss: 0.8163 | Discriminator Loss: 0.6426
    Batch 450/469 | Generator Loss: 0.7870 | Discriminator Loss: 0.6151
    Training Time: 15.63 min
Epoch: 123/150
    Batch 000/469 | Generator Loss: 0.8498 | Discriminator Loss: 0.6719
    Batch 150/469 | Generator Loss: 0.8112 | Discriminator Loss: 0.6567
    Batch 300/469 | Generator Loss: 0.8372 | Discriminator Loss: 0.6372
    Batch 450/469 | Generator Loss: 0.8414 | Discriminator Loss: 0.6785
    Training Time: 15.76 min
Epoch: 124/150
    Batch 000/469 | Generator Loss: 0.7867 | Discriminator Loss: 0.6643
    Batch 150/469 | Generator Loss: 0.7859 | Discriminator Loss: 0.6457
    Batch 300/469 | Generator Loss: 0.7613 | Discriminator Loss: 0.6526
    Batch 450/469 | Generator Loss: 0.7633 | Discriminator Loss: 0.6904
    Training Time: 15.89 min
Epoch: 125/150
    Batch 000/469 | Generator Loss: 0.9176 | Discriminator Loss: 0.6366
    Batch 150/469 | Generator Loss: 0.8261 | Discriminator Loss: 0.6441
    Batch 300/469 | Generator Loss: 0.8638 | Discriminator Loss: 0.6346
    Batch 450/469 | Generator Loss: 0.8232 | Discriminator Loss: 0.6396
    Training Time: 16.01 min
Epoch: 126/150
    Batch 000/469 | Generator Loss: 0.8731 | Discriminator Loss: 0.6350
    Batch 150/469 | Generator Loss: 0.8372 | Discriminator Loss: 0.6802
    Batch 300/469 | Generator Loss: 0.8434 | Discriminator Loss: 0.6719
    Batch 450/469 | Generator Loss: 0.8743 | Discriminator Loss: 0.6522
    Training Time: 16.14 min
Epoch: 127/150
    Batch 000/469 | Generator Loss: 0.7997 | Discriminator Loss: 0.6685
    Batch 150/469 | Generator Loss: 0.8063 | Discriminator Loss: 0.6713
    Batch 300/469 | Generator Loss: 0.8332 | Discriminator Loss: 0.6320
    Batch 450/469 | Generator Loss: 0.8030 | Discriminator Loss: 0.6493
    Training Time: 16.28 min
Epoch: 128/150
    Batch 000/469 | Generator Loss: 0.8842 | Discriminator Loss: 0.6280
    Batch 150/469 | Generator Loss: 0.8777 | Discriminator Loss: 0.6392
    Batch 300/469 | Generator Loss: 0.8483 | Discriminator Loss: 0.6096
    Batch 450/469 | Generator Loss: 0.7690 | Discriminator Loss: 0.6739
    Training Time: 16.41 min
Epoch: 129/150
    Batch 000/469 | Generator Loss: 0.8138 | Discriminator Loss: 0.6185
    Batch 150/469 | Generator Loss: 0.8081 | Discriminator Loss: 0.6806
    Batch 300/469 | Generator Loss: 0.8273 | Discriminator Loss: 0.6527
    Batch 450/469 | Generator Loss: 0.8472 | Discriminator Loss: 0.6472
    Training Time: 16.54 min
Epoch: 130/150
    Batch 000/469 | Generator Loss: 0.8235 | Discriminator Loss: 0.6444
    Batch 150/469 | Generator Loss: 0.8593 | Discriminator Loss: 0.6290
    Batch 300/469 | Generator Loss: 0.7831 | Discriminator Loss: 0.6297
    Batch 450/469 | Generator Loss: 0.8034 | Discriminator Loss: 0.6252
    Training Time: 16.66 min
Epoch: 131/150
    Batch 000/469 | Generator Loss: 0.8533 | Discriminator Loss: 0.6455
    Batch 150/469 | Generator Loss: 0.8528 | Discriminator Loss: 0.6761
    Batch 300/469 | Generator Loss: 0.8067 | Discriminator Loss: 0.6435
    Batch 450/469 | Generator Loss: 0.7884 | Discriminator Loss: 0.6422
    Training Time: 16.79 min
Epoch: 132/150
    Batch 000/469 | Generator Loss: 0.8304 | Discriminator Loss: 0.6375
    Batch 150/469 | Generator Loss: 0.7028 | Discriminator Loss: 0.6668
    Batch 300/469 | Generator Loss: 0.8078 | Discriminator Loss: 0.6147
    Batch 450/469 | Generator Loss: 0.8233 | Discriminator Loss: 0.6623
    Training Time: 16.92 min
Epoch: 133/150
    Batch 000/469 | Generator Loss: 0.8526 | Discriminator Loss: 0.6381
    Batch 150/469 | Generator Loss: 0.7723 | Discriminator Loss: 0.6534
    Batch 300/469 | Generator Loss: 0.7947 | Discriminator Loss: 0.6436
    Batch 450/469 | Generator Loss: 0.8172 | Discriminator Loss: 0.6370
    Training Time: 17.05 min
Epoch: 134/150
    Batch 000/469 | Generator Loss: 0.8543 | Discriminator Loss: 0.6513
    Batch 150/469 | Generator Loss: 0.8244 | Discriminator Loss: 0.5896
    Batch 300/469 | Generator Loss: 0.7836 | Discriminator Loss: 0.6609
    Batch 450/469 | Generator Loss: 0.8362 | Discriminator Loss: 0.6313
    Training Time: 17.18 min
Epoch: 135/150
    Batch 000/469 | Generator Loss: 0.8084 | Discriminator Loss: 0.6398
    Batch 150/469 | Generator Loss: 0.8049 | Discriminator Loss: 0.6757
    Batch 300/469 | Generator Loss: 0.8130 | Discriminator Loss: 0.6696
    Batch 450/469 | Generator Loss: 0.8444 | Discriminator Loss: 0.6541
    Training Time: 17.31 min
Epoch: 136/150
    Batch 000/469 | Generator Loss: 0.8236 | Discriminator Loss: 0.6172
    Batch 150/469 | Generator Loss: 0.8272 | Discriminator Loss: 0.6618
    Batch 300/469 | Generator Loss: 0.8964 | Discriminator Loss: 0.6471
    Batch 450/469 | Generator Loss: 0.8344 | Discriminator Loss: 0.6329
    Training Time: 17.43 min
Epoch: 137/150
    Batch 000/469 | Generator Loss: 0.7465 | Discriminator Loss: 0.6396
    Batch 150/469 | Generator Loss: 0.7846 | Discriminator Loss: 0.6943
    Batch 300/469 | Generator Loss: 0.8284 | Discriminator Loss: 0.6603
    Batch 450/469 | Generator Loss: 0.7869 | Discriminator Loss: 0.6554
    Training Time: 17.56 min
Epoch: 138/150
    Batch 000/469 | Generator Loss: 0.8867 | Discriminator Loss: 0.6528
    Batch 150/469 | Generator Loss: 0.8185 | Discriminator Loss: 0.6520
    Batch 300/469 | Generator Loss: 0.8382 | Discriminator Loss: 0.6616
    Batch 450/469 | Generator Loss: 0.7656 | Discriminator Loss: 0.6444
    Training Time: 17.69 min
Epoch: 139/150
    Batch 000/469 | Generator Loss: 0.8456 | Discriminator Loss: 0.6509
    Batch 150/469 | Generator Loss: 0.8368 | Discriminator Loss: 0.6573
    Batch 300/469 | Generator Loss: 0.8106 | Discriminator Loss: 0.6812
    Batch 450/469 | Generator Loss: 0.7628 | Discriminator Loss: 0.6870
    Training Time: 17.82 min
Epoch: 140/150
    Batch 000/469 | Generator Loss: 0.7869 | Discriminator Loss: 0.6866
    Batch 150/469 | Generator Loss: 0.7753 | Discriminator Loss: 0.6819
    Batch 300/469 | Generator Loss: 0.8336 | Discriminator Loss: 0.6512
    Batch 450/469 | Generator Loss: 0.8283 | Discriminator Loss: 0.6378
    Training Time: 17.95 min
Epoch: 141/150
    Batch 000/469 | Generator Loss: 0.8624 | Discriminator Loss: 0.6121
    Batch 150/469 | Generator Loss: 0.7317 | Discriminator Loss: 0.6200
    Batch 300/469 | Generator Loss: 0.7954 | Discriminator Loss: 0.6601
    Batch 450/469 | Generator Loss: 0.8048 | Discriminator Loss: 0.6600
    Training Time: 18.08 min
Epoch: 142/150
    Batch 000/469 | Generator Loss: 0.8424 | Discriminator Loss: 0.6564
    Batch 150/469 | Generator Loss: 0.8368 | Discriminator Loss: 0.6704
    Batch 300/469 | Generator Loss: 0.7832 | Discriminator Loss: 0.6425
    Batch 450/469 | Generator Loss: 0.8888 | Discriminator Loss: 0.6604
    Training Time: 18.21 min
Epoch: 143/150
    Batch 000/469 | Generator Loss: 0.7629 | Discriminator Loss: 0.6492
    Batch 150/469 | Generator Loss: 0.8988 | Discriminator Loss: 0.6544
    Batch 300/469 | Generator Loss: 0.8707 | Discriminator Loss: 0.6431
    Batch 450/469 | Generator Loss: 0.8253 | Discriminator Loss: 0.6481
    Training Time: 18.33 min
Epoch: 144/150
    Batch 000/469 | Generator Loss: 0.8279 | Discriminator Loss: 0.6358
    Batch 150/469 | Generator Loss: 0.7755 | Discriminator Loss: 0.6465
    Batch 300/469 | Generator Loss: 0.8062 | Discriminator Loss: 0.6743
    Batch 450/469 | Generator Loss: 0.8922 | Discriminator Loss: 0.6369
    Training Time: 18.46 min
Epoch: 145/150
    Batch 000/469 | Generator Loss: 0.7578 | Discriminator Loss: 0.6828
    Batch 150/469 | Generator Loss: 0.8724 | Discriminator Loss: 0.6083
    Batch 300/469 | Generator Loss: 0.7520 | Discriminator Loss: 0.6598
    Batch 450/469 | Generator Loss: 0.8057 | Discriminator Loss: 0.6704
    Training Time: 18.59 min
Epoch: 146/150
    Batch 000/469 | Generator Loss: 0.7814 | Discriminator Loss: 0.6534
    Batch 150/469 | Generator Loss: 0.8082 | Discriminator Loss: 0.6540
    Batch 300/469 | Generator Loss: 0.8041 | Discriminator Loss: 0.6559
    Batch 450/469 | Generator Loss: 0.8315 | Discriminator Loss: 0.6314
    Training Time: 18.72 min
Epoch: 147/150
    Batch 000/469 | Generator Loss: 0.7430 | Discriminator Loss: 0.6363
    Batch 150/469 | Generator Loss: 0.8291 | Discriminator Loss: 0.6358
    Batch 300/469 | Generator Loss: 0.7620 | Discriminator Loss: 0.6649
    Batch 450/469 | Generator Loss: 0.7764 | Discriminator Loss: 0.6702
    Training Time: 18.84 min
Epoch: 148/150
    Batch 000/469 | Generator Loss: 0.8565 | Discriminator Loss: 0.6862
    Batch 150/469 | Generator Loss: 0.8545 | Discriminator Loss: 0.6301
    Batch 300/469 | Generator Loss: 0.8527 | Discriminator Loss: 0.6686
    Batch 450/469 | Generator Loss: 0.7784 | Discriminator Loss: 0.6803
    Training Time: 18.97 min
Epoch: 149/150
    Batch 000/469 | Generator Loss: 0.8201 | Discriminator Loss: 0.6733
    Batch 150/469 | Generator Loss: 0.7637 | Discriminator Loss: 0.6669
    Batch 300/469 | Generator Loss: 0.8376 | Discriminator Loss: 0.6759
    Batch 450/469 | Generator Loss: 0.7737 | Discriminator Loss: 0.6584
    Training Time: 19.10 min
Epoch: 150/150
    Batch 000/469 | Generator Loss: 0.8208 | Discriminator Loss: 0.6831
    Batch 150/469 | Generator Loss: 0.8342 | Discriminator Loss: 0.6517
    Batch 300/469 | Generator Loss: 0.7557 | Discriminator Loss: 0.6694
    Batch 450/469 | Generator Loss: 0.7800 | Discriminator Loss: 0.6680
    Training Time: 19.23 min
Total Training Time: 19.23 min

Evaluation

Let's take a look at how our losses change over time! Do the losses converge? Keep in mind that GAN losses aren't expected to shrink to zero: at the theoretical equilibrium the discriminator outputs $\frac{1}{2}$ everywhere, so (with a binary cross-entropy loss averaged over real and fake samples) both losses would sit near $\ln 2 \approx 0.693$.

In [0]:
ax1 = plt.subplot(1, 1, 1)
ax1.plot(range(len(losses_G)), losses_G, label='Loss/Generator')
ax1.plot(range(len(losses_D)), losses_D, label='Loss/Discriminator')
ax1.set_xlabel('Iterations')
ax1.set_ylabel('Loss')
ax1.legend()

# Add a second x-axis labeled in epochs below the iteration axis
ax2 = ax1.twiny()
newlabel = list(range(hparams.epochs + 1))
iter_per_epoch = len(train_dataloader)
newpos = [e * iter_per_epoch for e in newlabel]

# Mark every 10th epoch (set tick positions before tick labels)
ax2.set_xticks(newpos[::10])
ax2.set_xticklabels(newlabel[::10])

ax2.xaxis.set_ticks_position('bottom')
ax2.xaxis.set_label_position('bottom')
ax2.spines['bottom'].set_position(('outward', 45))
ax2.set_xlabel('Epochs')
ax2.set_xlim(ax1.get_xlim())
plt.show()

Once training is complete, the discriminator has served its purpose. We can simply throw it away and keep the generator to create 'fake' samples that resemble the original data!

In [0]:
forger.eval()
z = torch.zeros((16, hparams.latent_dim)).uniform_(-1.0, 1.0).to(device)
generated_images = forger(z).view(-1, 28, 28)

# Move the generated samples back to the CPU for plotting
samples = []
for i in range(16):
    samples.append(generated_images[i].detach().cpu().numpy())

# Plot the 16 samples in a 4x4 grid
# (replace the 4's with a number of your choice to change the number of samples)
i, j = 0, 0
_, subplots = plt.subplots(nrows=4, ncols=4)
for sample in samples:
    subplots[j][i].imshow(sample, cmap='binary')
    subplots[j][i].set_axis_off()
    i += 1
    if i % 4 == 0:
        j += 1
        i = 0
plt.show()
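
If you want to keep the generator around for later sampling, you can also save its weights to disk. Here's a minimal sketch, assuming `forger` is the generator module used above ('forger.pt' is just an example filename):

# Save the trained generator's weights
torch.save(forger.state_dict(), 'forger.pt')

# ...and later, reload them into the same architecture before sampling again
forger.load_state_dict(torch.load('forger.pt', map_location=device))
forger.eval()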

Challenges

  • GANs are notoriously finicky to train. They suffer from mode collapse, a phenomenon in which the generated samples are far less diverse than the training samples.

  • No, seriously! They can be a complete nightmare to train. Over the years the community has discovered some neat tricks to make training easier (two of them are sketched in the snippet right after this list).

    • Schedule the generator and discriminator updates. Updating both at the same frequency can make the minimax optimization unstable; a common fix is to update the discriminator several times per generator update (or vice versa).
    • Use soft labels (e.g., a target of 0.9 instead of 1.0 for real samples) and/or add noise to the class labels.
    • Use a non-saturating loss for the generator. This means the game is no longer zero-sum, but results tend to improve because the generator still receives useful gradients when the discriminator confidently rejects its samples.
    • Track better evaluation metrics than the raw losses (e.g., Fréchet Inception Distance).
    • See references for more...
  • Attempts at using GANs for NLP have not been very successful. Instead, transformer architectures have set SOTA benchmarks across most NLP tasks in the last few years.
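
Below is a minimal sketch of two of those tricks (soft labels and the non-saturating generator loss) for a single training step. This is not the training loop used above: `G`, `D`, and `features` are assumed names for a generator, a discriminator whose output is a probability, and a batch of real images in whatever shape `D` expects; `hparams`, `device`, and `F` are the ones defined earlier.

# A hedged sketch, not the notebook's actual training loop.
# Assumes: G maps latent vectors to images, D maps images to a probability of
# shape (batch_size, 1), and `features` is a batch of real images in the shape D expects.
batch_size = features.size(0)

# Soft labels: target 0.9 for "real" instead of 1.0 so the discriminator
# doesn't become overconfident.
real_target = torch.full((batch_size, 1), 0.9, device=device)
fake_target = torch.zeros((batch_size, 1), device=device)

z = torch.zeros((batch_size, hparams.latent_dim)).uniform_(-1.0, 1.0).to(device)
generated = G(z)

# Discriminator loss: real vs. generated (generated samples are detached so this
# loss only updates D).
d_loss = (F.binary_cross_entropy(D(features), real_target) +
          F.binary_cross_entropy(D(generated.detach()), fake_target))

# Non-saturating generator loss: maximize log D(G(z)) rather than minimizing
# log(1 - D(G(z))), which keeps gradients useful when D easily spots the fakes.
g_loss = F.binary_cross_entropy(D(generated), torch.ones_like(real_target))

For the update-scheduling trick, one simple option is to take the discriminator step every iteration and the generator step only every k-th iteration (or the reverse), treating k as another hyperparameter to tune.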

Cool Applications

  1. Imaginary fashion models and designs
  2. Generating video game assets (from characters to trees)
  3. Text-to-image synthesis (I give 'monkey', and the GAN returns a picture 🤯)
  4. Upscaling the resolution of images (super-resolution)
  5. Dataset generation

Many more!

Ethics

As discussed in lecture, GANs can be used to create 'deepfakes'. While plenty of the deepfake material out there is harmless, this technology also has far more malicious applications 😕 (fake news, revenge pornography).

It's our responsibility to use deep learning wisely and to make sure the technology benefits people instead of hurting them!