Machines describing man, as described by man

Generating Art with Machine Learning.

Portrait

Trained with 16,747 images

For 62h 51m

Abstract

Trained with 7,208 images

For 38h 19m

Overview

In preparation for my bachelor’s thesis, I experimented with different GANs to evaluate their ability to learn how to generate artworks. 30,000 images were scraped from wikiart.org, and different GAN implementations and GPU configurations were benchmarked. A custom-made tool detected faces in portrait pieces and cropped each image to a square around the face. No selection by style or origin, special preparation, or other enhancement of the input was done, so that the machine alone is responsible for capturing the human idea of art. The algorithm was trained for multiple days on a DGX-1 cloud VM.

Dataset

30,000 images of human-made art were scraped. Portraits and abstract depictions of all epochs and styles, from 100 BC to 2019, are analyzed by the GAN in order to take as much authorship as possible away from the human supervisor.
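For illustration, a minimal sketch of the download step could look like the following. The URL list file and the output folder are placeholders of my own choosing, not part of the original pipeline, and the actual scraping of wikiart.org is assumed to have produced the list of links beforehand.

```python
import os
import requests

# Hypothetical input: a text file with one image URL per line, collected
# beforehand by whatever scraping step produced the ~30,000 links.
URL_LIST = "wikiart_urls.txt"
OUT_DIR = "dataset/raw"

os.makedirs(OUT_DIR, exist_ok=True)

with open(URL_LIST) as f:
    urls = [line.strip() for line in f if line.strip()]

for i, url in enumerate(urls):
    target = os.path.join(OUT_DIR, f"{i:06d}.jpg")
    if os.path.exists(target):
        continue  # allow the download to be resumed
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        with open(target, "wb") as out:
            out.write(resp.content)
    except requests.RequestException as exc:
        print(f"Skipping {url}: {exc}")
```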

Preparation

The algorithm requires the input images to be cropped to a square. To ensure that a face is still present in the cropped version, a basic face detection tool locates faces in the portraits and crops each image around the detected face, so that only the aspect ratio changes and no other transformation is applied.
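A minimal sketch of this cropping step, assuming OpenCV’s bundled Haar cascade as the “basic face detection tool” (the tool actually used in the project may differ; file paths are placeholders):

```python
import cv2

# Haar cascade shipped with opencv-python; stands in for the project's
# custom face detection tool.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_square_around_face(path):
    """Return the largest square crop of the image, centred on the first
    detected face, or a centre crop if no face is found."""
    img = cv2.imread(path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    h, w = img.shape[:2]
    side = min(h, w)  # largest square that fits inside the image
    if len(faces) > 0:
        x, y, fw, fh = faces[0]
        cx, cy = x + fw // 2, y + fh // 2  # centre the square on the face
    else:
        cx, cy = w // 2, h // 2  # fall back to a centre crop
    left = min(max(cx - side // 2, 0), w - side)
    top = min(max(cy - side // 2, 0), h - side)
    return img[top:top + side, left:left + side]
```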

Training

The GAN is trained for a few days, and the trained model is then used to generate the machine art locally. The results are far from perfect, but longer training runs in the future should improve them.
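A sketch of the local generation step, loosely following the import example that ships with the official progressive_growing_of_gans repository (TensorFlow 1.x). The snapshot file name is a placeholder, and the exact network interface is an assumption about that code base, which must be on the Python path for the pickle to load.

```python
import pickle
import numpy as np
import PIL.Image
import tensorflow as tf  # TensorFlow 1.x, as used by the reference implementation

# Assumed: a training snapshot (placeholder name) copied from the cloud VM.
tf.InteractiveSession()
with open("network-snapshot-portrait.pkl", "rb") as f:
    G, D, Gs = pickle.load(f)  # Gs = smoothed generator used for sampling

# Sample random latent vectors and (unused) labels, then run the generator.
latents = np.random.randn(16, *Gs.input_shapes[0][1:])
labels = np.zeros([latents.shape[0]] + Gs.input_shapes[1][1:])
images = Gs.run(latents, labels)

# Map the generator output from roughly [-1, 1] to 8-bit RGB and save.
images = np.clip((images + 1.0) / 2.0 * 255.0, 0, 255).astype(np.uint8)
images = images.transpose(0, 2, 3, 1)  # NCHW -> NHWC
for i, img in enumerate(images):
    PIL.Image.fromarray(img, "RGB").save(f"generated_{i:02d}.png")
```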

The Algorithm

The Generative Adversarial Network I chose was developed by Tero Karras et al. and implements progressive growing of the networks, which greatly reduces the training time and therefore the overall cost of running the GAN. Find out more about the algorithm in the paper linked below.
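The core idea of progressive growing is that both networks start at a very low resolution and new, higher-resolution blocks are faded in smoothly as training proceeds. A conceptual sketch of that fade-in (not the authors’ implementation):

```python
def fade_in(upscaled_old, new_block_output, alpha):
    # alpha ramps from 0 to 1 while a newly added, higher-resolution block
    # is introduced, so the transition never destabilizes training.
    return alpha * new_block_output + (1.0 - alpha) * upscaled_old
```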

A random series of pictures dreamed up by my computer.

Abstract

Portrait