Physical Processes, Collaborations and Surprise

Part of this project is to take the output of AI and make it physical. These objects and creations work better, and are less confined to our present day, when they are not experienced on a computer screen. I am also exploring the performative aspects of a machine making images. For now, the intent comes from me, but when watching a robot arm at work, for example, it's hard not to anthropomorphize and perceive that the machine has intention and is a real being. Jordan Wolfson's Colored Sculpture is a great example of this. The title emphasizes that the humanoid figure which seems to undergo violent treatment is really just a coloured sculpture. But viewers tend to have visceral reactions when confronted with a spectacle that engages their sense that a sentient being is involved.

Robotlab's bios features a large industrial robot carefully and precisely copying the Torah onto a long scroll of paper. The viewer has the impression that this diligent machine will take all the time in the world copying this manuscript, and nothing could possibly deflect it from its devoted copying. The hushed sounds of the brush on paper and the machine's movement recall a monastic scriptorium, and the robot's bright colour echoes the monochromatic monastic garb of various traditions.

But there is no surprise in what bios writes. Although the arm is capable of almost limitless motions, it is confined to a one-dimensional trajectory, never deviating by a single character – a feat probably unmatched by even the most diligent human copyists.

I think the capacity to surprise us is the most interesting, and possibly most useful, characteristic of AI, and the key characteristic that differentiates these machines from every other kind of software or hardware machine ever devised. This is valuable because they can complement humanity's strengths – excelling where we are weak, devising solutions we might never have conceived, showing us our blind spots and teaching us new tricks along the way. There remain plenty of areas where they are very far behind us.

In a wonderful summary of the recent matches between Google’s AlphaGo and human champions, The Atlantic wrote:

 “They’re how I imagine games from far in the future,” Shi Yue, a top Go player from China, has told the press. A Go enthusiast named Jonathan Hop who’s been reviewing the games on YouTube calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand.

This new way of learning also seems to be a two-way street. Not only are humans needed to create the AIs in the first place; we can also use our unique human capabilities to steer their general-purpose networks in the right direction, as for example with supervised learning. Artist Mario Klingemann has done a lot of work with GANs on faces and human figures. In a recent project, he helped cue the network by identifying the facial features as particularly important, and we can see it developing better acuity in the face than in the surrounding details. We can point out a path to them, and they will learn it.
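Klingemann's actual pipeline is not public, but one common way to "cue" a network toward a region like the face is to weight the training loss more heavily there, so errors on facial pixels cost more than errors on the background. The function and the toy 1-D "image" below are hypothetical illustrations of that idea, not his method:

```python
def weighted_loss(output, target, mask, face_weight=10.0):
    """Squared error averaged with per-pixel weights: pixels where
    mask is True (the "face" region) count face_weight times as much."""
    num, den = 0.0, 0.0
    for o, t, m in zip(output, target, mask):
        w = face_weight if m else 1.0
        num += w * (o - t) ** 2
        den += w
    return num / den

# Toy 1-D "image": 16 pixels, with pixels 4..7 standing in for the face.
target = [0.0] * 16
mask = [4 <= i <= 7 for i in range(16)]

# An identical 0.1 error costs more when it falls on the face than when
# it falls on the background, so a network trained against this loss is
# pushed to get the face right first.
error_on_face = [0.1 if m else 0.0 for m in mask]
error_off_face = [0.0 if m else 0.1 for m in mask]
loss_on_face = weighted_loss(error_on_face, target, mask)
loss_off_face = weighted_loss(error_off_face, target, mask)
```

The effect matches what the essay describes: acuity develops where the human told the network to look.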

The quality of surprise is both an outcome of neural networks and a capability that can be actively cultivated to help the network play and make discoveries. OpenAI researchers programmed a bot to play the video games Montezuma's Revenge and Super Mario Bros by configuring it to avoid boredom, i.e. states where it could predict what would happen next. By seeking unfamiliar states, it learned to navigate the map, discover hidden levels, and defeat the bosses at the end of each level. No one watching the player would mistake it for a human playing the game. No human thumbs, no matter how young, would see a reason to have the player hop continuously as they traverse the map.
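The core idea behind this "boredom avoidance" is that the agent's intrinsic reward is the prediction error of a learned forward model: transitions the model already predicts are boring, unseen ones are rewarding. This is not OpenAI's code; the chain world, the lookup-table model, and the greedy tie-breaking below are all invented for a minimal sketch of the principle:

```python
# Toy deterministic world: a chain of states 0..9.
# Action a=1 moves right, a=0 moves left (clamped at the ends).
N = 10

def step(s, a):
    return max(0, min(N - 1, s + (1 if a == 1 else -1)))

# Forward model: a lookup table (state, action) -> predicted next state.
model = {}

s = 0
visited = {s}
total_curiosity = 0.0

for _ in range(200):
    # "Avoid boredom": prefer the action whose outcome the forward model
    # cannot yet predict; break remaining ties toward moving right.
    a = max([0, 1], key=lambda act: ((s, act) not in model, act))
    s_next = step(s, a)
    # Intrinsic reward = prediction error: 1.0 for a never-seen
    # transition, 0.0 once the deterministic transition is learned.
    total_curiosity += 0.0 if model.get((s, a)) == s_next else 1.0
    model[(s, a)] = s_next   # learn from the surprise
    visited.add(s_next)
    s = s_next
```

Driven only by surprise, the agent sweeps the whole chain, collecting reward exactly once per novel transition; after every transition is learned, the world is "boring" and the reward dries up.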

Is there a fundamental difference between traditional generative art methods (e.g. procedural bots like The Painting Fool) and GANs? Do we learn more from collaborating with a neural network-based system than with a rule-based system? I hope to explore these questions as this project proceeds.