As I mentioned in a previous post (“A Big Change in the World of GAN Image-Making”), Kate and I had been discussing compositing creation myth images out of the images made by neural networks, and the images they’re trained on.
I made a couple of images using the output of BigGAN. No one is claiming that the output of that network is art, though the images are certainly striking. Since they are produced without any particular intention in mind, we can talk about their aesthetic qualities, but we probably wouldn't call them art.
This raises an important point about how we see AI in society. Is creativity yet another area where our jobs are being taken over by machines? It’s hard to feel very threatened by today’s rudimentary image-making nets. But we are starting to see that it’s possible to write compelling click-bait headlines using neural nets.
NYU graduate student Ross Goodwin mentioned a conversation he had with his supervisor Allison Parrish, who is an expert in text generation.
Ross uses position data and photos to inspire his machines to write interesting and whimsical prose, which people can then contemplate in the locations that inspired the writing. He speculates that machines can be a tool to extend our writing capabilities and tap into thoughts and impulses we might otherwise struggle to express.
In a similar vein, a machine learning instructor on the popular computer science learning site Udacity.com predicts that machine learning will augment the capabilities of our minds in the same way that physical machinery has augmented the strength of our bodies a thousandfold.
Returning to the question of neural networks like GANs making art, researcher and neural network blogger Janelle Shane says that the images BigGAN makes may not be art, but selecting them for a particular purpose is an artistic act. I agree. Nonetheless, my first images created by combining bits of the output from BigGAN did not impress the few people I discussed them with. A typical exchange went something like:
“Did you make these images then?”
“No. They were produced by a neural network.”
“Did you program it?”
“No, someone else did.”
“I’m going through the thousands of images and trying to make a composition from them that tells a story.”
“OK so you’re copying, basically.”
I sigh deeply and ponder the plight of the misunderstood artist. But then I reflect that they have a point. It seems that people are willing to consider machine-made art, but it helps if the artist programmed the machine themselves.
This is a familiar criticism that comes up regarding photo composites and even studio photography – that it is somehow “fake”. If the person wasn’t really in front of that backdrop, is it honest to cut-and-paste them in front of it? Journalistic and documentary photography has to adhere to a pretty strict set of professional ethics when representing a scene or events as they happened. But most people would accept that a painter can modify or entirely construct a scene to satisfy their aims. I think we struggle more with this when it’s done with photography because we may be used to thinking of photorealistic images as portraying reality.
This may be a challenge for this project because I am using a photorealistic image-making technique to depict imaginary events. This kind of photo compositing seems to work best when either it is done extremely well (as on many movie posters), or is clearly depicting an imaginary scene (see for example the work of Von Wong https://www.vonwong.com or Miss Aniela https://www.missaniela.com).
I am still working through the visual style and visual vocabulary of these images, by making images and soliciting feedback. I don’t expect the images I produce to satisfy everyone, but I do want to make them reasonably interesting and accessible to most people. If I find that most people just think I am trying to fool them with a fake, I will want to adjust my approach.
As I work on the style and composition of my mythological images, I am simultaneously starting to train my own GANs. Unfortunately, this particular technique, while perfect for my project, is very much the deep end of the machine learning pool. I will be looking for ways to get acquainted with some of the technical details without getting completely tangled in the weeds.
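For readers curious about what that deep end actually contains: at the heart of any GAN is a two-player training loop, in which a generator learns to produce samples and a discriminator learns to tell them apart from real data. The toy sketch below is purely illustrative and is not anything from BigGAN or my own project – it shrinks both networks down to single linear units and uses a made-up 1-D Gaussian as the “dataset”, with all numbers chosen arbitrarily – but the alternating update structure is the same idea at miniature scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = wg * z + bg maps noise z ~ N(0,1) to fake samples.
# Discriminator D(x) = sigmoid(wd * x + bd) scores samples as real (1) or fake (0).
# The "real" data is a hypothetical 1-D Gaussian, N(4, 1.25^2).
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr = 0.01

for step in range(2000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    real = 4.0 + 1.25 * rng.standard_normal(32)
    z = rng.standard_normal(32)
    fake = wg * z + bg

    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    # Gradient of -[log D(real) + log(1 - D(fake))] w.r.t. (wd, bd)
    grad_wd = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_bd = np.mean(d_real - 1.0) + np.mean(d_fake)
    wd -= lr * grad_wd
    bd -= lr * grad_bd

    # --- Generator update: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.standard_normal(32)
    fake = wg * z + bg
    d_fake = sigmoid(wd * fake + bd)
    # Gradient of -log D(fake) w.r.t. (wg, bg), chained through D
    grad_wg = np.mean((d_fake - 1.0) * wd * z)
    grad_bg = np.mean((d_fake - 1.0) * wd)
    wg -= lr * grad_wg
    bg -= lr * grad_bg

print(f"generator now samples roughly N({bg:.2f}, {wg ** 2:.2f})")
```

Networks like BigGAN replace these two linear units with deep convolutional networks and millions of parameters, which is where the real difficulty (and the tangle of weeds) comes in.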