GANs for the Face Aging Problem — What will your face look like in the next few years?

Learn how to apply GANs to see what a face will look like as it ages, or under different conditions.

Introduction

This blog will introduce you to the core components of GANs: how each component works, and the important concepts and techniques behind them. It also gives a brief overview of the benefits and drawbacks of GANs and a look at some real-world applications. After covering the GAN architecture, we will see how GANs are applied to the face aging problem.

Generative Adversarial Networks

What is a GAN?

A Generative Adversarial Network (GAN) is a pair of neural networks, a generator and a discriminator, trained against each other. Its goal is to generate new data points that are statistically similar to the data points in the training set.

Today, GANs are used to generate many kinds of data: realistic images, 3D models, videos, and a lot more.

Generating faces using DCGAN

Firstly, let’s take a look at the general GAN model.

Generative Adversarial Networks architecture

What is a generator network?

The generator network takes a random noise vector as input and transforms it into a synthetic data point (for example, a face image) that is meant to look like a sample from the training set.

What is a discriminator network?

The discriminator network takes a data point, real or generated, as input and outputs the probability that it is a real sample from the training set.

Generator and discriminator network in GANs.

Training through adversarial play in GANs

  • The first network, the generator, has never seen the real artwork but tries to create artwork that looks like the real thing.
  • The second network, the discriminator, tries to identify whether an artwork is real or fake.

The generator, in turn, tries to fool the discriminator into thinking that its fakes are the real deal by creating more realistic artwork over multiple iterations.

The discriminator tries to outwit the generator by continuing to refine its own criteria for determining a fake.

Each network guides the other: every successful change one of them makes in an iteration is feedback that the other must adapt to in the next.

Ultimately, the discriminator trains the generator to the point at which it can no longer determine which artwork is real and which is fake.
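The adversarial loop described above can be sketched end to end. The toy below is not a face GAN: it is a one-dimensional GAN in plain NumPy in which the "artwork" is just a number drawn from N(4, 0.5), the generator is a linear map, and the discriminator is logistic regression. The alternating updates, however, mirror the real training procedure.

```python
import numpy as np

# Toy 1-D GAN: the "artwork" is a number drawn from N(4, 0.5).
# Generator: G(z) = a*z + b.  Discriminator: D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, batch = 0.02, 64

for step in range(3000):
    # --- discriminator step: tell real samples from fakes ---
    x_real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake))
    gw = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc
    # --- generator step: fool the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # gradients of the non-saturating loss -log D(fake)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

print(b)  # the generator's offset; it should end up near the real mean of 4.0
```

Note the two phases inside each iteration: the discriminator is updated on a mix of real and fake samples, then the generator is updated using the discriminator's current judgment, exactly the adversarial play described above.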

How to implement GANs for the face aging problem

All code was run with TensorFlow 1.12 and CUDA 9.0. We recommend running it in a dedicated Python environment.

Install CUDA 9.0 (this can take a few minutes):

wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb
dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb
apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
apt-get update
apt-get install cuda=9.0.176-1

To install TensorFlow, run the command below in a terminal:

pip install --upgrade tensorflow-gpu==1.12.2

Clone this repo:

git clone https://github.com/pbaylies/stylegan-encoder
cd stylegan-encoder

Set up the folder structure for our images:

rm -rf aligned_images raw_images
mkdir aligned_images raw_images

Prepare images for training

Put the images you want to transform into the raw_images folder; the data structure should look like this:

├── ./raw_images
│ ├── [your images should be here]
│ ├── [your images should be here]

Auto-Align faces

Run the script:

python align_images.py raw_images/ aligned_images/ --output_size=1024

This script will:

  1. Look for faces in the images
  2. Crop out the faces from the images
  3. Align the faces
  4. Rescale the resulting images and save them in the aligned_images folder
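The real script relies on dlib facial landmarks for detection and alignment; purely to illustrate the crop-and-rescale steps above, here is a minimal NumPy sketch in which a hypothetical, hand-supplied face box stands in for the detector.

```python
import numpy as np

def crop_and_rescale(image, box, output_size=1024):
    # Crop a detected face box (top, left, bottom, right) out of the image,
    # then rescale it to output_size x output_size with nearest-neighbor
    # sampling. The real align_images.py uses dlib landmarks to find and
    # align the face; here the box is supplied by hand.
    top, left, bottom, right = box
    face = image[top:bottom, left:right]
    h, w = face.shape[:2]
    rows = np.arange(output_size) * h // output_size
    cols = np.arange(output_size) * w // output_size
    return face[rows][:, cols]

# A fake 512x512 "photo" whose middle 256x256 square is the "face".
photo = np.zeros((512, 512, 3), dtype=np.uint8)
photo[128:384, 128:384] = 255
aligned = crop_and_rescale(photo, (128, 128, 384, 384))
print(aligned.shape)  # (1024, 1024, 3)
```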

Encoding faces into StyleGAN latent space

gdown https://drive.google.com/uc?id=1aT59NFy9-bNyXjDuZOTMl0qX0jmZc6Zb
mkdir data
mv finetuned_resnet.h5 data
rm -rf generated_images latent_representations

Train the latent encoding

python encode_images.py --optimizer=adam --lr=0.002 --decay_rate=0.95 \
  --decay_steps=6 --use_l1_penalty=0.3 --face_mask=True --iterations=500 \
  --early_stopping=False --early_stopping_threshold=0.05 --average_best_loss=0.5 \
  --use_lpips_loss=0 --use_discriminator_loss=0 --output_video=True \
  aligned_images/ generated_images/ latent_representations/
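To see what those flags are doing, here is a hypothetical, drastically simplified version of the optimization loop: a "latent" vector is fitted with Adam, the exponential learning-rate schedule driven by decay_rate and decay_steps, and the L1 penalty from the flags above, with a random linear map standing in for the frozen StyleGAN generator. Since the LPIPS and discriminator losses are disabled in the command, the toy uses a plain L2 reconstruction loss.

```python
import numpy as np

# Simplified stand-in for encode_images.py: fit a "latent" vector so a
# frozen "generator" (a random linear map here) reproduces a target image.
rng = np.random.default_rng(1)
G = rng.normal(size=(8, 8))           # stand-in for the frozen StyleGAN generator
target = G @ rng.normal(size=8)       # the "photo" to reconstruct

latent = np.zeros(8)
m, v = np.zeros(8), np.zeros(8)
lr0, decay_rate, decay_steps, l1 = 0.002, 0.95, 6, 0.3
beta1, beta2, eps = 0.9, 0.999, 1e-8

for t in range(1, 501):                # --iterations=500
    residual = G @ latent - target
    grad = G.T @ residual + l1 * np.sign(latent)   # L2 reconstruction + L1 penalty
    m = beta1 * m + (1 - beta1) * grad             # Adam moment estimates
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    lr = lr0 * decay_rate ** (t / decay_steps)     # exponential lr decay
    latent -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(np.linalg.norm(G @ latent - target))  # reconstruction error after 500 steps
```

In the real script the generator is StyleGAN, the loss is computed on rendered images, and a fine-tuned ResNet provides the initial latent guess; the optimizer mechanics, however, follow this pattern.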

Go to this Google Drive folder https://drive.google.com/drive/u/1/folders/1exoCSLE-CRmfr9yqW3Yv4M9YI7VAw1LZ and download the pre-trained files:

Put these files in the same folder.

Save output_vectors.npy into latent_representations with the script:

python save_latent.py

Edit save_latent.py and set the out_file parameter to the destination path of the latent file.
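We do not reproduce save_latent.py here, but a minimal sketch of what such a script plausibly does, stacking the optimized latents and writing them to out_file with NumPy, might look like this. The (18, 512) shape is StyleGAN's per-layer W+ latent layout, and a temporary directory is used in place of latent_representations/ so the snippet runs anywhere.

```python
import os
import tempfile

import numpy as np

# Hypothetical sketch of save_latent.py: write the optimized latent
# vectors to the path given by out_file. In the real setup out_file
# would point at latent_representations/output_vectors.npy.
out_file = os.path.join(tempfile.mkdtemp(), "output_vectors.npy")

latents = np.zeros((1, 18, 512), dtype=np.float32)  # one encoded face in W+ space
np.save(out_file, latents)

print(np.load(out_file).shape)  # (1, 18, 512)
```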

Implement the face-aging process

In the root folder, execute:

git clone https://github.com/tr1pzz/InterFaceGAN.git
cd InterFaceGAN/
gdown https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ
mv karras2019stylegan-ffhq-1024x1024.pkl models/pretrain/karras2019stylegan-ffhq-1024x1024.pkl

Testing

Run this command to generate images from the final_w_vectors:

python test_age.py
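Conceptually, InterFaceGAN edits an attribute by shifting the encoded latent along a learned direction: the normal of a linear boundary that separates, say, old faces from young ones in latent space. The sketch below uses a random unit vector in place of the real pretrained age boundary, and a single 512-dimensional latent in place of the full W+ stack.

```python
import numpy as np

# Conceptual sketch of the edit test_age.py performs: shift an encoded
# face's latent vector along a learned "age" direction. A random unit
# vector stands in for InterFaceGAN's pretrained age boundary; alpha
# controls how far (and in which direction) the face is aged.
rng = np.random.default_rng(2)

latent = rng.normal(size=512)                   # the encoded face
age_direction = rng.normal(size=512)
age_direction /= np.linalg.norm(age_direction)  # unit normal of the age boundary

def edit_age(w, direction, alpha):
    # alpha > 0 ages the face, alpha < 0 rejuvenates it (by convention)
    return w + alpha * direction

older = edit_age(latent, age_direction, 3.0)
younger = edit_age(latent, age_direction, -3.0)
print(np.linalg.norm(older - latent))  # 3.0 (up to floating-point error)
```

The edited latent is then fed back through the StyleGAN generator to render the aged face; the same mechanism, with different boundary vectors, produces the gender and smiling edits shown below.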

The results are shown below:

Changes in age
Changes in gender
Changes in smiling

Neurond AI is a transformation business. Contact us at:

Website: https://www.neurond.com/
