Credit: Refik Anadol
This exhibition offers a comprehensive look at current forms of artistic work with machine learning and AI. It is supplemented by a tutorial program, by and for artists, that imparts what one needs to know to get started using machine learning in artistic projects.
Making use of current image-recognition software, Deep Learning Kubrick explores the idea of AI trying to make sense of stories and fictions; in this case, snippets from classics by the film director Stanley Kubrick.
Commissioned to work with SALT Research collections, the artist Refik Anadol employed machine-learning algorithms to search and sort relations among 1.7 million documents. Interactions of the multidimensional data found in the archives are, in turn, translated into an immersive media installation.
The “CSIA” is a creative research project that partially replicates an open-source intelligence (OSINT) system, including an interface that allows users to experience how intelligence agents surveil social media posts and two machine-learning classifiers for predictive policing.
They say any two people in the world can be connected through friends of friends, in just a few steps. How about artworks? Using machine-learning techniques that analyze the visual features of artworks, X Degrees of Separation finds pathways between any two artifacts, connecting the two through a chain of artworks.
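The mechanic described above can be sketched as a nearest-neighbour graph over visual feature vectors, with a shortest-path search between two artworks. Everything here is a toy stand-in: the one-dimensional "embeddings", the graph size, and the Euclidean distance are assumptions for illustration, not the project's actual pipeline.

```python
# Toy sketch of the X-Degrees-of-Separation idea: connect artworks whose
# visual features are close, then find a chain of small visual leaps.
import heapq
import numpy as np

def knn_graph(features, k=3):
    """Connect each artwork to its k visually nearest neighbours."""
    n = len(features)
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:  # skip self at position 0
            graph[i].append((int(j), float(dists[i, j])))
    return graph

def visual_path(graph, start, goal):
    """Dijkstra search: a chain of artworks, each step a small visual leap."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

# Toy "embeddings": five artworks along a visual gradient.
feats = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
print(visual_path(knn_graph(feats), 0, 4))  # a chain from artwork 0 to 4
```

In the real installation the feature vectors would come from a convolutional network, but the graph-plus-shortest-path structure is the same.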
White Collar Crime Risk Zones uses machine learning to predict where financial crimes will happen across the US. The system was trained on incidents of financial malfeasance from 1964 to the present day, collected from the Financial Industry Regulatory Authority (FINRA), a non-governmental organization that regulates financial companies.
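As a rough illustration of how such a predictive model can be built, here is a toy risk classifier over map cells, trained with plain logistic regression. The features, labels, and weights are invented for the sketch and bear no relation to the FINRA incident data the project actually uses.

```python
# Toy sketch of a predictive-policing-style model: score map cells for risk
# based on per-cell features, using logistic regression on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Each map cell: [number of financial firms, past incident count] (made up).
cells = rng.poisson(lam=(5, 2), size=(200, 2)).astype(float)
# Synthetic labels: incidents more likely where firms and history cluster.
labels = (cells @ np.array([0.4, 0.9]) + rng.normal(0, 1, 200) > 3.5).astype(float)

w, b = np.zeros(2), 0.0
for _ in range(500):                       # gradient descent on log loss
    p = 1 / (1 + np.exp(-(cells @ w + b)))
    grad = cells.T @ (p - labels) / len(cells)
    w -= 0.1 * grad
    b -= 0.1 * np.mean(p - labels)

risk = 1 / (1 + np.exp(-(cells @ w + b)))  # predicted risk score per cell
print("cells flagged high-risk:", int((risk > 0.5).sum()))
```

The sketch also shows why such systems are contested: the model simply reproduces whatever pattern is in its training labels, mapped onto geography.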
Given audio of President Barack Obama, the scientists synthesize a high-quality video of him speaking with accurate lip sync, composited into a target video clip.
Over the past few years, machine-learning research has rapidly overtaken the field of computer vision with advanced techniques for real-time image processing, enabling many promising new applications.
Can a machine make us look at art through the lens of today’s world? Inspired by the paradoxes of bringing AI into a museum, applying rational and objective thinking to a subjective field like art, Recognition uses artificial-intelligence algorithms to compare photographs of current events, as they unfold, with works of art.
The artist and designer Daniel de Bruin is driven by the desire to become part of the things he creates. Neurotransmitter 3000 is such a thing: a seven-meter-high attraction in which he lets himself swing around.
Fall of the House of Usher, based on the short story by Edgar Allan Poe, is a twelve-minute animation in which every frame is generated by artificial intelligence, using a neural network (pix2pix) trained on the artist’s ink drawings of stills from the 1929 film version.
Based on AI models currently used, among other things, in content moderation and surveillance, the artworks explore the “latent space” of the AI as it processes and imagines the world for itself, dreaming in the areas between and beyond what it has learnt from us.
Blade Runner—Autoencoded is a film made by training an autoencoder—a type of generative neural network—to recreate frames from the 1982 film Blade Runner. The autoencoder learns to model all frames by copying them through a very narrow information bottleneck, optimized to produce images that are as similar as possible to the originals.
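The copy-through-a-bottleneck principle can be illustrated with a minimal linear autoencoder trained by gradient descent on random toy "frames". The actual work uses a deep learned model on real film frames; this is only a sketch of the idea that reconstruction through a narrow code forces the network to model the data.

```python
# Minimal linear autoencoder: squeeze 16-value "frames" through a 2-value
# bottleneck and train the weights to reconstruct the input.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((64, 16))        # 64 toy frames, 16 "pixels" each
W_enc = rng.normal(0, 0.1, (16, 2))  # encoder: 16 pixels -> 2-value code
W_dec = rng.normal(0, 0.1, (2, 16))  # decoder: 2-value code -> 16 pixels

def reconstruct(x):
    """Copy a frame through the narrow bottleneck."""
    return (x @ W_enc) @ W_dec

lr = 0.05
for step in range(2000):
    code = frames @ W_enc
    recon = code @ W_dec
    err = recon - frames                   # per-pixel reconstruction error
    grad_dec = code.T @ err / len(frames)  # gradients of mean squared error
    grad_enc = frames.T @ (err @ W_dec.T) / len(frames)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(np.mean((reconstruct(frames) - frames) ** 2))  # falls during training
```

Because the bottleneck cannot hold every detail, the reconstructions are blurry summaries of the originals; in the film, that lossiness is the aesthetic.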
This is a story of discovery and technology, one that calls into question the humanity of creativity. The film tells the story of the technologist Ross Goodwin and his literary, artificially intelligent robot as they set out to write the longest novel in the English language.
The pet robot AIBO was born at Sony in 1999 as the world’s first home-entertainment robot. About 150,000 robots were built but production and sales ended in 2006 and technical support was also discontinued in 2014. Today former Sony engineers provide unofficial maintenance services for owners who remain firmly attached to their AIBO robots.
Robot, Doing Nothing accuses our modern society of being incessantly busy, even beyond the confines of everyday working life. Emanuel Gollob has created a fictitious scenario in which studies demonstrate that doing nothing enhances the efficiency of our society.
Landmarks consists of a series of 3D lenticular images. Corporeal constructions in the form of meshes of lines and flat shapes seem to rise from the image’s surface and float in space in front of the actual image plane. These fluctuating structures take shape on the basis of virtual wire-mesh models.
A deep neural network opening its eyes for the first time, and trying to understand what it sees. Learning to See is an ongoing series of works that use state-of-the-art machine-learning algorithms as a means of reflecting on ourselves and how we make sense of the world.