I’ve been using faces extracted from Renaissance paintings to train the neural networks (GANs) and create “new” faces and portraits. I am collaborating with these neural networks to make portraits of what distant future artificially intelligent machines might remember as their ancestors, gods or creators. Painted portraits were not originally digital images, of course, so they seem like a good place to start when looking for progenitor figures.
Renaissance paintings, particularly those from the early Renaissance, often employ repetitive and stereotyped facial expressions. Even the finest examples seem to pursue some heavenly ideal of beauty in which individuality is not the main point – see, for example, Duccio’s Maestà, a masterpiece from 1311 CE.
Less than ten years later, Giotto was showing much greater individuality and a wider range of facial expressions – see, for example, St. Francis Renounces his Worldly Goods (c. 1320) (link).
For the next 200 years, faces only became more expressive and individual. There are fewer of them, but they are much more interesting than the 50,000 headshots of smiling celebrities often used to train neural networks.
The last faces I made (see blog post) produced some interesting examples, but most of the GAN’s output was unusable, probably because of the small number of training images. Revisiting the dataset, I also noticed that some of the images were quite small, and my data pre-processor had scaled them up.
This time, I took not just early Renaissance faces (which would be contemporary with the gold backgrounds I used in the physical process) but also included some High Renaissance and Mannerist images. After removing all faces smaller than about 64 × 64 pixels, I was left with a higher-quality dataset almost twice the size, at around 4,500 images.
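The size filter is simple to script. Here is a minimal sketch of that step using Pillow; the function name, directory arguments, and 64-pixel default are illustrative, not the author's actual code:

```python
import os
import shutil
from PIL import Image  # pip install Pillow


def filter_small_faces(src_dir, dst_dir, min_size=64):
    """Copy only images whose width AND height are both >= min_size.

    Small crops that a pre-processor would otherwise upscale (and blur)
    are silently dropped. Returns the number of images kept.
    """
    os.makedirs(dst_dir, exist_ok=True)
    kept = 0
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        try:
            with Image.open(path) as im:
                w, h = im.size
        except OSError:
            continue  # skip files Pillow cannot read
        if w >= min_size and h >= min_size:
            shutil.copy2(path, os.path.join(dst_dir, name))
            kept += 1
    return kept
```

Filtering before any resizing matters: once an image has been upscaled to the training resolution, its original (too-small) size is no longer recoverable from the file.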
Training overnight on a rented AWS instance, I got some interesting candidates around epoch 70:
The results are more interesting, more varied, and higher quality than what the GAN produced previously. This process was about 100x faster than the first time I did it. I knew which GAN implementations to use, and I had already written Python programs to get the images, extract the faces, and rename/resize the training data.