Category Archives: AI – Artificial Intelligence

Does art have any relevance “in the Age of AI”?

Christie’s recently sold a rather amusing portrait created by AI for $432,000. Last summer, (human) participants deemed the artworks created by a computer system more communicative and inspiring than human-made ones. A few years ago, an artist convincingly automated the kind of texts written by art critics. I could multiply the attention-grabbing stories but i’m sure that you’ve also been following the debates around the impact that AI is having on art and on the specificity of human creativity. But does art have a voice when it comes to understanding and shaping AI?


Blinking Turing by Vuk Cosic


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020

A couple of weeks ago i was in Rijeka, Croatia, to participate in E-relevance of Culture in the Age of AI, a seminar that aimed to offer food for thought to the Council of Europe’s reflection on the role that culture can have in the field of artificial intelligence. The sun was shining, i was wearing my favourite jumpsuit and the company was smart: Felix Stalder (media and cultural theorist and professor for Digital Culture and Network Theory at the Zürich University of the Arts), Vladan Joler (artist, founder of the SHARE Foundation and professor at the University of Novi Sad), Gerfried Stocker (artistic director at Ars Electronica), Matteo Pasquinelli (professor in Media Philosophy at the University of the Arts and Design, Karlsruhe), etc. Everything was orchestrated by Vuk Cosic, a “cosmopolitan retired artist” and a classic of net.art.

I didn’t take many notes during the festival as i was engrossed in the debates so instead of a proper report, i’m just going to freewheel my way through a few bits and bobs i learnt over these two days in Rijeka. And i’ll focus ONLY on the art parts because you can’t really trust me with anything else.

E-relevance of Culture in the Age of AI. The cheekiness of the title isn’t obvious until you read it out loud. It sounds like the “irrelevance of culture in the age of AI.” It’s true that it is often difficult to explain the invaluable role that art and culture can play in the evolution of forces that are going to shape society in ways we might not always fully comprehend.


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020

And yet, even if it is not immediately obvious, art (and culture in general) does have a role in stimulating a culture of reflection and healthy skepticism, in shaping new models and narratives, in articulating all the social dimensions of a technology like AI, and in seeping into discussions and eventually into reality.

Science-fiction is a powerful example of the role art can have on the perception and even the development of a technology. Much of the public’s imagination of what AI looks like and the kind of interaction we have with it is still shaped by Stanley Kubrick’s film 2001: A Space Odyssey. That film is 50 years old, which tells us a lot about the role that culture can play in the debate around AI. The clean lines of Alexa and the voice of Siri, for example, probably owe a lot to the haunting image of AI that the film created.

As for the smoothness of technological ‘personal assistants’, it masks the complexity of the power relationships that are built into these machines.

Vuk Cosic made that hidden complexity of relationships more tangible when he brought to the discussion a series of anecdotes about the way folk culture is mocking AI, revealing how small accidents uncover the hold the technology has over our lives. And how we can sabotage it, albeit in very modest ways.

He started with stories of accidental orders i had never heard of, such as the one in which Amazon’s Alexa started ordering dollhouses for people upon hearing a news presenter on TV declare: “I love the little girl, saying ‘Alexa ordered me a dollhouse’.”


The Burger King ad debacle. Photo from phandroid

A few months later, Burger King perhaps thought it would be a genius idea to piggyback on the dollhouse episode and exploit it for a TV spot. “You’re watching a 15-second Burger King ad, which is unfortunately not enough time to explain all of the fresh ingredients in the Whopper sandwich. But I’ve got an idea,” the narrator said, standing behind the counter at the burger chain. “OK Google, what is the Whopper burger?”

The trick was supposed to prompt voice-activated smart speakers into describing its burgers, just like Alexa had been tricked by a voice on the television into buying a dollhouse. The problem, however, is that Google gets its explanation of the Whopper from Wikipedia, an encyclopedia everyone is free to edit.

Within hours of the ad’s release, users had made humorous modifications to the Whopper Wikipedia page. Soon after, Google appeared to make changes that stopped the commercial from activating the devices.

An interesting issue worth mentioning here is that Wikipedia is free and written collaboratively by volunteers. And yet, this unpaid, crowdsourced source of valuable information is plundered by multi-billion-dollar corporations to make even more money.

At that moment in the conversation, Felix Stalder asked me: “Do you know of !Mediengruppe Bitnik’s work with Alexa?” No, i didn’t. And yes, it’s a great project. We wouldn’t expect anything less from these guys.


!Mediengruppe Bitnik (music by Low Jack, graphics by Knoth & Renner), Alexiety, 2018

Together with musician Low Jack, !Mediengruppe Bitnik have created an EP music record titled ‘Alexiety’. The album is made to be streamed on the radio “for the enjoyment of smart homes everywhere.”

In ‘Alexiety’, a set of three songs attempts to capture the feelings we develop toward Intelligent Personal Assistants: the carefree love that embraces Alexa before data privacy and surveillance issues outweigh the benefits; the alienation and decoupling / uncoupling from the allure of remote control and instant gratification; the anxiety and discomfort around Alexa and other Intelligent Personal Assistants that is Alexiety.

The work explores the unbalanced power relationship between Intelligent Personal Assistants that are taking more and more control over our lives and us, poor flesh and bones creatures who know so little about their algorithms, rule-sets and even real machinic presence.

Hardcore Anal Hydrogen, Jean-Pierre, 2018

Speaking of music, in his statement Gerfried Stocker presented us with many fascinating artistic works that use AI. The one that really struck me might not be the most thought-provoking nor the most valuable in terms of critique of the technology though. Click and see above.


Kate Crawford and Vladan Joler, Anatomy of an AI System, 2018


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020

The event was also the opportunity to see Anatomy of an AI System in all its printed majesty. The map, created by Vladan Joler and Kate Crawford, elegantly dissects the whole genesis, life and death of an individual networked device based on a centralised artificial intelligence system. Printed on a gigantic sticker, the work was covering one of the walls of the seminar room.


Sterling Crispin, N.A.N.O. , B.I.O. , I.N.F.O. , C.O.G.N.O., 2015

My own contribution to the discussions in Rijeka consisted in reminding the audience that technology is not made of just algorithms and big data. I briefly explained the cost that the sometimes invisible materiality of AI, its infrastructures and the devices we use, is having on the environment and on the lives of workers who often live far away from us. I’m sure you already follow this kind of discussion so i’ll spare you the details. Among the artistic projects i used to illustrate the issue, i’ll only mention Sterling Crispin’s N.A.N.O. , B.I.O. , I.N.F.O. , C.O.G.N.O. because of the way it illustrates the tension between the grand vision and promises of Silicon Valley and the fragility of a world that is increasingly shaken by contingencies such as the depletion of natural resources (energy, minerals, etc.) and climate change.


Michael Mandiberg, Postmodern Times, 2018

I also talked about Michael Mandiberg’s Postmodern Times. The artist commissioned freelancers on the crowdsourcing labor platform Fiverr.com to recreate small clips of Charlie Chaplin’s Modern Times. Mandiberg then assembled all the small clips made by the hidden human cogs in the powerful digital machine and recreated the famous 1936 comedy, drawing a bittersweet portrait of the digital factory and its ruthless reliance on precarity.


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020

In conclusion, i’m not afraid for artists. I trust them to unfold all the expressive forms of AI technology, to use, abuse, hack, sabotage AI just like they do with any new medium. And as for us, the public, i suspect we’ll start treasuring human fallibility just like we are amused by the glitches in the machines nowadays.

With that said, i AM worried about the shrinking space that is left to art and culture today. Europe needs to create an even more nurturing environment for artists through education, commissions, residency programs and by facilitating collaboration with research centers. If Europe doesn’t make them feel valued, some of these bright and critical minds who have been educated with public money in Europe might just move to Silicon Valley (or to any of its European outposts) and dedicate their creativity to the sole glory of the GAFAM.


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020


E-relevance of Culture in the Age of AI at RiHub in Rijeka. Photo credit: Tanja Kanazir / ECOC Rijeka 2020

The seminar took place at RiHub in Rijeka, Croatia. RiHub, in case you were wondering, is a “nursery for innovative and creative work”. I find the term utterly ridiculous but the space is welcoming and amazingly well designed.

Global control, macho technology and the Krampus. Notes from the RIXC Open Fields conference

The RIXC Open Fields conference took place a couple of weeks ago in Riga, Latvia. As it does every year, the event spurred conversations addressing the current and upcoming challenges of a society that is increasingly shaped by technology and science. This year’s edition specifically looked at ubiquitous surveillance and data privacy.


One of Dan Perjovschi’s drawings which is part of the Global Control and Censorship exhibition

The festival conference, titled GLOBAL CONTROL, investigated these issues from three main perspectives. The first, “hybrid war”, explored the rise of “post-truth” propaganda in media and its consequences on global politics and on individual nations. The second perspective dealt with the scale of surveillance and its potential “depth” due to the development of immersive technologies. The third concerned “the next big privacy” issue and zoomed in on social media, the safety and future of the data we publish and the need to re-establish some kind of trust in social media.

As is often the case with conferences that invite multiple perspectives and speakers with backgrounds as different as architecture, choreography, computing, photography and feminism (to name just a few), the discussions often showed the impact that the main topic under study can have on areas that might seem unrelated: telepathy, feminism, public transport, memory loss, real estate, mattresses that outsmart you, etc.

It’s been a fun and intellectually stimulating conference. I came back with a notebook full of quotes, references to artworks and comments scribbled during the talks. Here’s a short selection of the ones i found most interesting:


Dani Ploeger, frontline, 2016-17. Still from 360 video, edited by William J. Bates


Dani Ploeger, Patrol, 2017. Photo by Alexia Manzano via Furtherfield

Dani Ploeger presented a body of works that investigates the coexistence of digital consumer culture and firearms in everyday life. fronterlebnis (“front experience”, a literary genre which romanticized the war experience and the camaraderie of being ‘brothers-in-arms’) emerged from two journeys through Ukraine, during which he spent some time with soldiers on the frontline in the war in Donbass, and explored shopping malls, weapon stores, monuments and flea markets.

In 2017, the artist got himself a press card and travelled to the so-called ‘ATO zone’ (Anti-Terrorist Operation zone) to document Ukrainian army and volunteer forces on the frontline.

Dani Ploeger, Patrol, 2017

For Patrol, one of the works in the series, Ploeger recorded a firefight on the frontline in East Ukraine with his smartphone. In his short film, soldiers are handling technologies from two different centuries. On the one hand, they use Kalashnikovs and other mid-20th century firearms. On the other, they use their state-of-the-art digital devices to record and share the documentation of their exploits on the frontline. Ploeger’s video footage was later transferred to 16mm film, a medium that echoes the era of the weapon technologies represented.

Dani Ploeger, frontline, 2016-17

His frontline installation is set in a white space filled with a loud war soundscape produced in a movie studio. In the middle of the room, a VR headset shows uneventful video documentation of a frontline position in East Ukraine where a group of (slightly out of shape) soldiers sits around waiting for something to happen, reminding would-be Rambos that the reality of war might be less action-packed and far more frustrating than they imagine.

Ploeger’s work points to the complicity between two types of technologies that are the object of much fetishization: communication devices and firearms. It also highlights wider issues around society’s continued masculinised and fetishised relationship to war.


One of Sterling’s slides shows an American troll as seen by Russians

Multicolor Revolutions, the title of the keynote given by Bruce Sterling on the opening night, also evoked war and digital media in Ukraine after the Euromaidan demonstrations. The science-fiction author talked about the extravagant palace that Viktor Yanukovych built in secret in the middle of a forest outside of Kiev, American trolls pictured by Russians, cyberwarfare and much more. One of the most fascinating comments he made was that, from what appeared on forums and other digital media, people who live far away from the place of a conflict tend to be far more excited about the escalation of violence than people living in close proximity to it. Sterling said he was particularly worried about the rich guys who live far away from the scene of war. They might never have touched a weapon but they have enough money to pay an army of people who can wage a very damaging war electronically. However, he concluded, the one thing he’s most concerned about is climate change. Wise words!

As a parenthesis, i was very interested in a comment made after Sterling’s keynote by Rasa Smite who was moderating the evening. She too is concerned about the rich guys, the ones who see themselves as the new Medici and who throw big money and their own idea of ‘good art’ at major art events. Sometimes they do it with taste, sometimes not. What is certain is that the budget of the events they bankroll dwarfs that of publicly funded festivals like RIXC Open Fields.

Sound artist Jasmine Guffond contributed to the conference with a performance/presentation of Listening Back, a research-based project that sonifies online data surveillance as one browses online. Focused on tracking cookies, the plug-in for Chrome and Firefox translates data generated from cookies into (rather unpleasant) sound, providing sonic evidence of otherwise invisible monitoring and data gathering infrastructures.

I couldn’t find any trace of the plug-in online but i still thought it was worth mentioning because i believe sonification can play an important role in the understanding of the extent of data collection (and exploitation).
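The basic idea behind such a sonification, giving each observed tracking cookie its own distinct tone, is easy to sketch in a few lines. The mapping below is purely hypothetical and has nothing to do with Guffond’s actual sound design; it only illustrates how a cookie’s domain could deterministically become an audible frequency.

```python
# Toy sketch (NOT the actual Listening Back mapping): assign each
# third-party cookie domain an audible frequency so that every cookie
# set during browsing could trigger its own tone.

def cookie_to_frequency(domain: str, low_hz: float = 200.0, high_hz: float = 2000.0) -> float:
    """Deterministically map a cookie's domain to a frequency in [low_hz, high_hz]."""
    # Stable hash: position-weighted sum of character codes,
    # independent of Python's randomized hash seed.
    h = sum(ord(c) * (i + 1) for i, c in enumerate(domain))
    return low_hz + (h % 1000) / 1000.0 * (high_hz - low_hz)

# Hypothetical tracker domains, for illustration only:
cookies = ["doubleclick.net", "facebook.com", "google-analytics.com"]
tones = {d: round(cookie_to_frequency(d), 1) for d in cookies}
# A browser extension would play these tones as the cookies are set,
# making the otherwise invisible tracking audible.
```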

Karen Palmer, The Future of Immersive Filmmaking

Dr. Ellen Pearlman is the director of the ThoughtWorks Arts Residency, a program in New York that supports artists exploring new lines of inquiry intersecting technology and society. In her keynote, she introduced us to some of the artists who developed their work with them. I was particularly interested in RIOT by digital filmmaker Karen Palmer.

Inspired by unrest in Ferguson, Missouri, following the police killing of Michael Brown in 2014, RIOT is an emotionally responsive, live-action hybrid of film and game which uses facial recognition and A.I. technology to respond to your emotional state and alter the video story journey in real time. The objective is to get through a dangerous riot alive.

Palmer hopes that a new type of storytelling might shift people’s perspectives on social issues and raise more empathy towards multiple points of view.

Daniela Mitterberger and Tiziano Derme, The Savage Mind


Bad photo i took of one of the slides of Daniela Mitterberger and Tiziano Derme that showed The Krampus

I’ve always had a soft spot for the Krampus so i was delighted to see its cheerful little face appear in The Savage Mind, a project by Daniela Mitterberger and Tiziano Derme, co-founders and directors at FutureRetrospectiveNarrative.

The Savage Mind uses digital architecture, data capturing technologies and VR to explore the relation between intangible cultural heritage, technology and the production of a speculative architecture. More specifically it focuses on the traditional Klaubauf ritual performed each winter in alpine villages in eastern Tyrol in Austria.

The project looks at how technology can translate the emotional data of a pagan ritual into new realms. It also explores how machines can help us reconsider the world of nature and play a role in the valorization of immaterial cultural heritage.

Timo Toots, Memopol-2

In their joint presentation Privacy Experiments in Public and Artistic Spaces, Raivo Kelomees and Stacey Koosel explored the parallels between Timo Toots’ installation Memopol and the national ID card and public transport card currently used in Tallinn, Estonia.

Estonia is notoriously well ahead of other nations in terms of the digitalization of its services. Memopol shows the drawbacks of this governmental policy. Visitors are invited to insert a national ID card or passport into the Memopol machine, which then starts collecting information about the visitor from (inter)national databases and the Internet. The data is then visualized on a large-scale display. In some cases, the amount of data gathered reaches disturbing dimensions: by logging into the government portal, citizens can see information ranging from prescription drugs to high school exams, from tax reports to driving licenses. All recorded for an unlimited time. This intrusion into private life doesn’t concern only Estonian citizens but all of us who use social network sites, public transport cards, loyalty shopping cards, etc.

One of the things that surprised me in Kelomees and Koosel’s presentation is that some of the inhabitants of Tallinn protested AGAINST the city’s plan to offer free public transport to its citizens. Most people would think the protesters were crazy but their discontent only showed that some people are well aware that privacy is the price to pay for free services nowadays.

An interesting paper mentioned during the conference was Heroic versus Collaborative AI for the Arts (PDF of the paper), by Mark d’Inverno and Jon McCormack. The text looks at the nature of the relationship between AI and art and introduces two opposing concepts: that of “Heroic AI”, to describe the situation where the software takes on the role of the lone creative hero, and “Collaborative AI”, where the system supports, challenges and provokes the creative activity of humans. The authors then set out what they believe are the main challenges for AI research in understanding its potential relationship to art and art practice.

I’m going to end with a very disturbing business mentioned by Jens Hauser in his keynote “Ungreening Greenness”:

Not everyone complains about global warming in California. The drought has proved a great business opportunity for a grass-painting company called Green Canary. Its employees will be happy to come and paint your lawn whenever the grass is too pale. You can let that grass die and still pretend that you’re rich enough not to be bothered by climate change.

The RIXC Open Fields conference, organized by RIXC the center for new media culture, is over but if you’re in Riga, don’t miss the accompanying exhibition: Global Control and Censorship. It’s at the National Library of Latvia until 21 October 2018.

Secrets of Trade. Goldin+Senneby on magic, finance and art market predictions


Goldin+Senneby, Zero Magic, 2016. Exhibition view at Nome Gallery

Goldin+Senneby use strategies and tools inspired by the financial sector to dissect the late-capitalist system, interrogate its mythologies and expose its connections with areas as diverse as virtual identities, precarious labour in the art sector or even alchemy. Goldin+Senneby is “a framework for collaboration”; its projects often use the skills and knowledge of experts in the fields they are investigating. These collaborators might be computer engineers, magicians, economists, anthropologists or playwrights.

The NOME gallery in Berlin has recently opened a solo show of the artist duo. Titled Secrets of Trade, the exhibition presents key works from Goldin+Senneby’s recent interrogations of financial trading, the art market, and artificial intelligence.

Here’s a quick overview of some of the works in the show:


Goldin+Senneby, Art Aligns With Young Readers, 2017. From the series Force Directed Predictions


Goldin+Senneby, Force Directed Predictions. Exhibition view at Nome Gallery

Force Directed Predictions consists of correlation maps generated by algorithms trained on art market data. The maps visualize art price fluctuations as they relate to macro political and economic factors such as employment rates and literacy. One shows only positive correlations and the other only negative correlations.
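As a purely hypothetical sketch of what lies behind such a map: compute how an art-price series co-moves with a set of macro indicators, then sort the correlations by sign. All the data and indicator names below are invented for illustration; the artists’ actual system works on far larger real datasets (10k+ correlating factors).

```python
# Toy version of a positive/negative correlation split.
# Every number and indicator name here is made up.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

art_prices = [100, 104, 110, 108, 115, 121]           # invented price index
indicators = {
    "literacy_rate": [94, 95, 95, 96, 96, 97],        # invented
    "unemployment":  [7.1, 6.8, 6.2, 6.5, 5.9, 5.4],  # invented
}

corr = {name: pearson(art_prices, series) for name, series in indicators.items()}
positive = {k: v for k, v in corr.items() if v > 0}   # one map per sign,
negative = {k: v for k, v in corr.items() if v < 0}   # as in the exhibition
```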


Goldin+Senneby, Zero Magic, 2016. Exhibition view at Nome Gallery

For Zero Magic, the artist duo infiltrated a secretive hedge fund in the US, reverse engineered its methods and recreated its short selling practices. Short selling consists in selling shares that one does not own in order to buy them back once they have fallen in price, netting a profit in the process. Which sounds pretty baffling for someone like me. Indeed there’s something akin to magic here. It’s about ‘adjusting’ people’s perception of reality, making them see things that do not exist. It doesn’t take place on a stage though but in the more secretive context of the financial markets.
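For the equally baffled, the arithmetic of a short sale fits in a few lines. All the numbers below are invented for illustration.

```python
# Minimal arithmetic of a short sale: borrow shares, sell them now,
# buy them back later. The profit is the price drop times the number
# of shares, minus borrowing fees.

def short_sale_profit(sell_price: float, buyback_price: float,
                      shares: int, fees: float = 0.0) -> float:
    return (sell_price - buyback_price) * shares - fees

# Sell 100 borrowed shares at $50, buy them back at $40, pay $25 in fees:
profit = short_sale_profit(50.0, 40.0, 100, fees=25.0)  # 975.0
# If the price rises instead of falling, the 'profit' turns negative,
# and the potential loss is in principle unbounded.
```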

In collaboration with the magician Malin Nilsson and finance sociologist Théo Bourgeron, Goldin+Senneby developed and patented a magic trick for the financial markets that has the capacity to undermine the perceived value of a publicly traded company and to profit from this. The magic gimmick consists in a computer program that helps non-experts identify suitable short selling targets, and a step-by-step guide to undermining their perceived value and thus executing a successful short sale. Goldin+Senneby put the Zero Magic computer software inside a magic box that also contains a US Patent Application for Computer Assisted Magic Trick Executed in the Financial Markets and four historical examples of magic tricks played out offstage, in real life.

Nilsson will be performing a magic demonstration on the last night of the exhibition (this way to RSVP!)


Goldin+Senneby, Momentum Trading Strategy. Exhibition view at Nome Gallery


Goldin+Senneby, Momentum Trading Strategy. Exhibition view at Nome Gallery

Goldin+Senneby acquired a series of confidential trading strategies in exchange for artworks. These ‘tricks of the trade’ are bound in files and sealed in glass boxes. The contents remain a mystery to the viewer; only the cover illustrations by designer Johan Hjerpe might give us a clue since they visually interpret the main dynamics of the strategies.


Goldin+Senneby, Banca Rotta (Central Europe, Late Baroque, oak), 2012/2017. Exhibition view at Nome Gallery

There’s also an antique money changers’ table broken in two. The piece of furniture is a visual representation of the etymology of “bankruptcy”, which derives from banca rotta, the Italian word for broken bench, the bench that moneylenders worked from and that had to be broken when they were no longer in business.

I’ve written time and time again about Goldin+Senneby‘s work. But i’ve never met them. Nor have i ever had the chance to fire a few questions at them. Until now:

Hi Goldin+Senneby! To be honest with you, I’m a bit worried about this interview. While preparing it, i read a story in Rhizome that says: “In previous interviews the artists have responded to questions about the project exclusively in the form of quotes from its various parts. For the interview below, however, they produced some new statements, perhaps mindful of the opportunity to recycle them in future incarnations of Headless.” Is that a strategy you have kept on using since that 2009 Rhizome interview?

No. This was one of our strategies used in the Headless project – an eight-year-long performance (2007-2015) staging an “act of withdrawal”.

Goldin+Senneby works with people who sometimes have rather surprising profiles: a magician, an investment banker, an academic social scientist, a patent attorney, an anthropologist, etc. How do you work with them? How much say do they have in the process that leads to a final work?

We try to produce situations in which our (willing and unwilling) collaborators can “act as themselves”. We think of our practice as a distribution of agency within authored frameworks. The clearer the frame we are able to provide, the more agency can be handed over.

Attaching a slide from a “progress report” produced by management consultant Aliceson Robinson in 2011, where she interviewed 12 individuals who had played key roles in our project The Nordenskiöld Model (2010-2017), but notably not ourselves.


Progress Report

The show at NOME Gallery will feature the magic demonstration Acid Money. Could you tell us what will happen? Will it feature the magic trick for the financial markets that you patented?

Yes, the magic demonstration will feature our trick for the financial markets (patent pending) and how we appropriated the methods for this trick from a secretive hedge fund. It will also offer an opportunity to bring some magic home with you!

One of the works you will be showing at the Berlin gallery is Force Directed Predictions. From what i gathered online, the series is based on a system that uses big data and AI to predict art prices. Do algorithms really play such a key role when it comes to predicting art prices? How did we get there?

In the context of a gallery show we were interested in the possibility of offering art market predictions as artworks. So on one level the idea is straightforward: to offer meta-data about collecting to collectors.

And because the learning process of the AI we are working with looks at art price fluctuations in relation to a wide range of macro political and economic indicators (10k+ correlating factors) it also produces portraits of the kind of society in which the art market thrives or declines respectively.

These works are the beginning of a longer process. We are collaborating with XLabs.ai and one of their artificial intelligences that has been surprisingly good at predicting “black swan” events in other areas (such as unexpected civil unrest, large jumps in commodity prices, etc). When we got into contact with XLabs, they were just about to discontinue the use of this AI, since they were disillusioned with their customer base – the only customers that were able and willing to pay for these kinds of predictions were either hedge funds or authoritarian states, and they were not interested in selling their services to either of these categories.

So you figured out how to predict art prices, how to use magic and patents to perform financial manipulations…. Why do you feel that you need to bring this knowledge into the art world? That’s very generous of course but if i were you, i’d use all those tricks and know-how to get ultra rich and ultra idle.

In this sense we are not sure that we are bringing anything to the art world that isn’t already there. Clearly, much of the art sector is bound up with the “ultra rich and ultra idle”, as you put it. For us an important question is how to deal with this position of implication.


Goldin+Senneby. Exhibition view at Nome Gallery


Goldin+Senneby, VWAP Mean Reversion Strategy with Professor Donald MacKenzie and Philip Grant, 2013. Exhibition view at Nome Gallery

Your practice mixes objects with creative forms such as theater, magic and literary fiction. In general, i find that many of your works are quite ‘brainy’. They are fascinating and easy to get drawn into but they require time and attention from the audience to fully engage with them. Is that part of a plan to request effort from the audience? Or is it because the complexity of topics such as financial operations or the art market requires that we observe/reflect upon them with care?

In times of financialization, speed and acceleration have been distinct features. But we are slow. We work for years on the same project. And this slowness produces certain contradictions that we value.

One of our long-term collaborators, playwright Pamela Carter, drew our attention to how, in physical comedy, it’s a rule that you slow the action down … just a little … just enough to give the audience time to see the joke fully … and then laugh.

Thanks Goldin+Senneby!

Goldin+Senneby‘s solo show, Secrets of Trade, remains open until 9 June 2018 at the NOME gallery in Berlin. The magic demonstration Acid Money will take place as part of the exhibition on 9 June 2018 at 7.30 pm.

Previous stories featuring Goldin+Senneby: Artefact festival: Magic and politics, Art Turning Left: How Values Changed Making 1789-2013, Artissima 2013 – From Philosopher’s stone to tomato crops, Feedforward. The Angel of History. Part 2: Globalization and agency.

DocLab exhibition asks “Are robots imitating us or are we imitating robots?”


DocLab Expo: Uncharted Rituals. Exhibition view in de Brakke Grond, part of the International Documentary Film festival Amsterdam. Photo Nichon Glerum


Jonathan Harris exhibition in de Brakke Grond, part of the International Documentary Film festival Amsterdam. Photo Nichon Glerum

The 11th edition of IDFA DocLab closed on Sunday at De Brakke Grond in Amsterdam. An integral part of the International Documentary Film Festival Amsterdam (IDFA), DocLab looks at how contemporary artists, designers, filmmakers and other creators use technology to devise and pioneer new forms of documentary storytelling. It’s a space for debates, conversations, VR experiences, interactive experiments and workshops.

For some reason, i thought that this year’s programme was even more intense than in previous years and i’m going to need 3 blog posts to cover all the ideas and projects i found particularly interesting. There will be one story summing up the notes i took during the DocLab: Interactive Conference. Another post will briefly comment on some of the interactive documentaries i saw in Amsterdam and back home. And today, i’d like to look at a couple of installations that explore the main theme of the festival: Uncharted Rituals or how we have to constantly, subtly and often unknowingly adjust our behaviour and mindset to technology. Instead of the other way round.

Robots and computers are acting more and more like people. They’re driving around in cars, hooking us up with new lovers and talking to us out of the blue. But is the opposite also true: are people acting more and more like robots?

The computers may think so: addicted to our phones, caught in virtual filter bubbles and dependent on just a handful of tech companies, people are acting more and more predictably. The breakthrough of artificial intelligence and immersive media doesn’t just pose the question of what technology does to us, but also of what we do with this technology.

I have only 3 works to show you today but each of them makes a valuable comment about the way we might one day have to dance with and around technology in order to coexist with it:


Max Pinckers and Dries Depoorter, Trophy Camera v0.9, 2017


Max Pinckers and Dries Depoorter, Trophy Camera v0.9, 2017


Max Pinckers and Dries Depoorter, Trophy Camera v0.9 at DocLab Expo: Uncharted Rituals. Exhibition view in de Brakke Grond, part of the International Documentary Film festival Amsterdam. Photo Nichon Glerum

A photographic image is never objective. It is always framed by human aesthetic choices, agendas and conscious or unconscious biases. The Trophy Camera v0.9 distils this element of human subjectivity into a photo camera that can only make award-winning pictures.

The AI-powered camera, developed by photographer Max Pinckers and media artist (and DocLab Academy alumnus) Dries Depoorter, has been trained on all the photos that have won an award at the World Press Photo competition, from 1955 to the present.

Based on the identification of labeled patterns, the experimental device is programmed to identify, shoot and save only images that it predicts have at least a 90% chance of winning the competition. These photos are then automatically uploaded to a dedicated website: trophy.camera. I tried several times but my photos were never deemed award-worthy by the camera.
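The gating logic described above can be sketched in a few lines. This is my own illustration, not the artists’ actual code, and `predict_award_probability` is a stand-in for their model trained on World Press Photo winners:

```python
# Hypothetical sketch of the Trophy Camera's gating logic (not the artists' code).
# A trained classifier scores each frame; only frames predicted to have at least
# a 90% chance of winning are kept and queued for upload to trophy.camera.

AWARD_THRESHOLD = 0.9

def predict_award_probability(frame):
    # Stand-in for the model trained on World Press Photo winners (1955-present).
    # Here we simply read a fake score attached to the frame for illustration.
    return frame.get("score", 0.0)

def capture(frames):
    """Keep only frames the model deems award-worthy."""
    kept = []
    for frame in frames:
        if predict_award_probability(frame) >= AWARD_THRESHOLD:
            kept.append(frame)  # in the installation, these would be uploaded
    return kept

shots = [{"id": 1, "score": 0.42}, {"id": 2, "score": 0.93}, {"id": 3, "score": 0.89}]
print([f["id"] for f in capture(shots)])  # → [2]: only one frame clears the threshold
```

Note that a frame scoring 0.89 is discarded just as ruthlessly as one scoring 0.42; the camera has no notion of a near-miss.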


Burhan Ozbilici, WPP of the year 2017


and its trophy.camera version?

The work reminded me of the World Press Photo awards of 2011 when Michael Wolf won an honorary mention in Contemporary Issues with a photo he made by placing a camera on a tripod in front of a computer screen running Google Street View. The award raised a heated debate among photographers. For some of them, Wolf didn’t take the pictures, the cameras on the Google Street View car automatically did it. This is therefore not photojournalism. And yet, who would have paid attention to these scenes if Wolf hadn’t recognized and framed them?

Trophy Camera v0.9 is tongue-in-cheek and irreverent but it points to a future when algorithms will win prizes that have traditionally recognized human creativity and vision.


Sander Veenhof, Patent Alert

Sander Veenhof, Patent Alert

Google, Microsoft and other tech companies are fighting over patents for the smart glasses that scan the environment and layer information over it.

One company owns the rights to scanning common hand gestures, while another holds a patent on helping you to cross the road. Patent Alert exposes the patenting obstacles that will intrude on our experiences with augmented reality headsets once the technology becomes mainstream.

Sander Veenhof created a HoloLens app that uses a cloud-based Computer Vision library to analyse your surroundings and warn you about gestures and behaviours that are not allowed because they are covered by a patent that’s not owned by the supplier of the device you are wearing.
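The app’s core logic amounts to a lookup: which company holds the patent on the gesture you just performed? The sketch below is my own playful reconstruction, not Veenhof’s code, and the registry entries are entirely hypothetical:

```python
# A playful sketch (my own, not Veenhof's app) of Patent Alert's logic:
# detected gestures are looked up in a registry of patent holders, and a
# warning fires when the patent belongs to a company other than the
# headset's maker. All registry entries below are invented for illustration.

DEVICE_VENDOR = "Microsoft"  # the HoloLens supplier in this scenario

PATENT_REGISTRY = {          # hypothetical gesture-to-patent-holder mapping
    "pinch": "Microsoft",
    "air_tap": "CompanyA",
    "crossing_the_road": "CompanyB",
}

def check_gesture(gesture):
    holder = PATENT_REGISTRY.get(gesture)
    if holder and holder != DEVICE_VENDOR:
        return f"Warning: '{gesture}' is patented by {holder}"
    return "OK"

print(check_gesture("pinch"))    # OK: the patent is owned by the device vendor
print(check_gesture("air_tap"))  # warning: covered by a competitor's patent
```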

Memo Akten, Learning to see: Hello World! [WIP R&D 3]


Memo Akten, Learning to See: Hello World! at DocLab Expo: Uncharted Rituals. Exhibition view in de Brakke Grond, part of the International Documentary Film festival Amsterdam. Photo Nichon Glerum

Memo Akten‘s Learning to See series of works uses Machine Learning algorithms to reflect on how we make sense of the world and consequently distort it, influenced by our expectations.

One of the investigations in the series, Hello, World!, explores the process of learning and understanding developed by a deep neural network “opening its eyes for the first time.”

The neural network starts off completely blank. It will learn by looking for patterns in what it’s seeing. Over time, the system will build up a database of similarities and connections and use it to make predictions about the future.

Interestingly, Akten’s description of the learning process holds a mirror back to us: But the network is training in realtime, it’s constantly learning, and updating its ‘filters’ and ‘weights’, to try and improve its compressor, to find more optimal and compact internal representations, to build a more ‘universal world-view’ upon which it can hope to reconstruct future experiences. Unfortunately though, the network also ‘forgets’. When too much new information comes in, and it doesn’t re-encounter past experiences, it slowly loses those filters and representations required to reconstruct those past experiences.
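The dynamic Akten describes, representations strengthening with repetition and fading when they are not re-encountered, can be caricatured in a few lines. This toy model is mine, not Akten’s code, and the decay constant is arbitrary:

```python
# A toy illustration (my own, not Akten's code) of online learning with
# forgetting: every pattern's strength decays a little at each step, so
# representations that aren't re-encountered slowly fade away.

DECAY = 0.9  # how quickly unused 'filters' weaken (arbitrary constant)
LEARN = 1.0  # reinforcement when a pattern is seen again

def train_step(memory, observed_pattern):
    # Every existing representation decays a little...
    for pattern in memory:
        memory[pattern] *= DECAY
    # ...while the pattern currently being seen is reinforced.
    memory[observed_pattern] = memory.get(observed_pattern, 0.0) + LEARN
    return memory

memory = {}
for pattern in ["sky", "sky", "sea", "sea", "sea", "sea"]:
    train_step(memory, pattern)

# 'sky', not re-encountered for four steps, has faded relative to 'sea'.
print(memory["sky"] < memory["sea"])  # → True
```

A real network forgets in a far messier way (this is the “catastrophic forgetting” problem in machine learning), but the shape of the trade-off is the same: new experience overwrites old representations unless the old ones keep being revisited.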

How far can we go when we draw parallels between the way a computer trains itself and the way we learn? Are humans the only ones capable of turning learning into understanding? Or will computers beat us at that too one day? But perhaps more crucially, can computers help us see and oppose our own cognitive biases?

Play Station: Bread and Circus for the new jobless society


Lawrence Lek, Play Station, 2017


Lawrence Lek, The Nøtel (with Steve Goodman/Kode9), 2015

In Ancient Rome, politicians used to court the approval of the masses with circus games and cheap food. The satisfaction of citizens’ immediate needs distracted them from any concern about the management of the state and made them more likely to vote for the politicians who spent most lavishly. Satirical poet Juvenal found the political strategy disgraceful and talked about panem et circenses.

What will be the 21st century’s bread and circus when the unavoidable impact of job automation puts many of us out of work? Where are we going to find satisfaction and self-worth in the coming years when, as experts predict, automated systems replace 50 percent of all jobs? Will our countries have to face waves of unrest as citizens flood the streets asking for employment, dignity and a reason to get up in the morning? If a universal basic income provides us with bread, what will be our circus?

Artist Lawrence Lek’s latest utopian fiction VR game imagines that in the near future tech companies might throw us a bone:

Set in 2037, Play Station takes place in a futuristic version of the White Chapel Building, the London headquarters of a mysterious technology start-up known as Farsight. A world leader in digital automation, Farsight trains employees to outsource their jobs as much as possible, rewarding top performers with access to exclusive entertainment and e-holidays.

Play Station is ‘a useless-job simulator’. Farsight has no need for human workers, because it relies on automation to ensure profit and growth. The VR simulation is only there to give people a sense of fulfillment. Because Lek trained and worked as an architect, most of his works are site-specific. Play Station, for example, will be installed in the atrium of the recently re-invented White Chapel Building in London where it will stand as a critical comment on the changing boundaries between workplace and playground.

I had a quick email conversation with Lek ahead of the launch of the work for Art Night 2017 on July 1:

Hi Lawrence! Should we rejoice at the idea that playing video games might one day become the new form of work? Or is there something more sinister behind the idea?

In the training and promotional video for Play Station, the guide explains, ‘It’s work! It’s Play! No, it’s Playwork™!’

Play Station is a VR simulation set in 2037 London, where the player is a new employee in a warehouse distribution training centre for Farsight Corporation, a company that specialises in AI automation technology. Here, all work is disguised as play.

The project continues my hybrid site-specific/science-fiction world of Sinofuturism, exploring scenarios where advanced technology, driven by Asian research and investment, poses an existential problem for humanity’s heroic vision of itself. In the Nøtel (made in collaboration with Steve Goodman/Kode9), a fully-automated luxury hotel has its staff replaced completely by drones; In Geomancer, a Singaporean satellite AI comes to earth, hoping to become an artist. With Play Station, I asked – if mechanical automation and AI have kept on replacing the human workforce, could this be seen as an unexpected form of utopia?

I think it would lead to some kind of crisis about work because so much human self-worth is defined in relation to an individual’s value as a labour-provider. It’s a universal syndrome. Whether these beliefs stem from the Protestant Christian or Chinese work ethic, an individual’s relevance to society has extremely deep-set roots in the basis of civilization in agricultural societies, where labour was necessary for survival and (hopefully) prosperity.

Modern work culture has its roots in the transition from an agricultural to the Victorian mechanised workforce; jobs that used to be performed by human labour have repeatedly been augmented and replaced by technology. But what if the ultimate conclusion of the Marxist liberation from drudgery was actually a life of leisure? What would people do if they had universal basic income and they no longer had to work in order to enjoy a sustainable living?

One idealistic possibility is that everybody will be an artist, free to express themselves and explore the highest forms of human creativity (with lots of government grants and charitable funding of course). More realistically, people would spend time playing computer games, hanging out, and indulging in some kind of play. And at its most extreme, there will be a crisis when the justification for our place in society is no longer predicated on our ability to work.

Lawrence Lek, The Nøtel (with Steve Goodman/Kode9), 2015

Why did you choose a Virtual Reality game to explore post-work society?

Play Station is essentially a useless-job simulator. In a way, it’s a future version of medieval re-enactment cosplay scenarios, where people dress up as knights and gather for banquets, tournaments and archery.

In the game, you’re being trained to perform a job that isn’t actually a financial necessity for Farsight corporation. They’ve made billions through AI automation projects. Play Station is one of their charitable goodwill projects. In the future, maybe ‘corporate social responsibility’ goes beyond sponsoring charities. The VR simulation is to give people the illusion that they are productive members of society!


Lawrence Lek, Play Station, 2017

Should we be worried that, soon, all we will have left to spend our time on is going to be games and VR?

Virtual reality is just the latest in a long line of entertainment mediums that seek to be more immersive. From theatre, to cinema, television, and video games, I think these forms of mass media are designed to envelop the viewer in ever-increasing forms of immersion. That’s why there’s been such a big push in investment, from Facebook acquiring Oculus, to Samsung and Sony developing their own forms of VR. It’s compelling from a multinational business perspective, because the medium can be distributed and domesticated into individual households. There’s a huge potential market for the devices.

So in a post-work society, if everybody has 100% leisure time then VR might be the new opiate of the masses.

Geomancer (Trailer), 2017

Your visions of the future tend to be quite dystopian. But is Play Station anchored in actual examples of trends, news stories and practices? How much of this piece and how much of your work in general is tied to reality and how much of it is the result of your own imagination?

In Geomancer, set in Singapore in 2065, the curator AI says, ‘Utopia VR is big business these days.’

Although it’s often set in the future, my own work is very much tied to reality and what I see in everyday life, from promotional stands at Westfield shopping centre to the hyperactive ads that pop up before Youtube videos. Play Station and Farsight are fictional entities based on how tech companies continuously attempt to improve their public persona through architecture and branding. As part of the installation, I’m creating a marketing video based on promotional videos for hi-tech companies seeking investors and customers. Many of these companies’ founders have genuine utopian dreams about the potential of technology to create a thriving company and to benefit humanity. Naturally, those two things don’t always work together. But in the fictional world of the promo trailer or the VR playground, they do.

I don’t make these works as judgemental criticisms, they are simply more of a reflection of the symbiosis of society, culture, technology, and corporate growth. Whether that’s dystopian or not, I don’t know. But it’s what I see around me every day.


Lawrence Lek, Play Station, 2017


Lawrence Lek, Play Station, 2017

Is there anything about The White Chapel Building that calls for this type of post-work/game scenario?

I’m very interested in the interdependent relationship of property economics and architectural aesthetics. The White Chapel Building itself is a newly-renovated former centre for the Royal Bank of Scotland. It’s now leased out to digitally-driven companies and agencies. The new interior reflects trends in workplace design; the 1980s anonymity of big-business architecture (stone cladding, vast central atrium, muted colours) has given way to the post-Millennial workplace (the atrium has a cafe and is open to the public, and you can see the open-plan offices, colourful furniture, and contemporary artwork all around).

We know the ‘playground’ aesthetic of Google workplaces, and Play Station is an imagined continuation of this kind of primary-colours-and-bean-bags aesthetic. But while the interior design of the future workplace will look ever more playful, the underlying economic prerogatives won’t change.

Could you describe the interaction? How do people explore the game and participate?

Play Station is set up as a mandala-like pentagon in the atrium of the White Chapel Building, with each of the five points housing a ‘promo’ station with an Oculus headset, PC, and TV screens playing the instructional video for Farsight Corporation’s ‘new brand of automated workplaces’. The video is for training new employees how to become more efficient workers. Once they put on the VR headset, players engage in a variety of tasks for fulfilment services (goods distribution). Lucky employees even get to go on Farsight’s rollercoaster ride…

Just like Amazon’s distribution warehouses combine robot and human workforces, there’s a certain kind of automated performance that the player has to learn in order to progress in the game. I’m interested in how video games use ‘fun’ and interactivity to make the player forget the actual physical work and repetitive motion required to play the game.

I actually really dislike putting on those ugly, unhygienic VR goggles. And i’ve had to wear them A LOT over the past few years. Sometimes it was worth it though. What do you find compelling and relevant in VR technology? What makes you want to work with this technology?

I’m most interested in how the player becomes a performer to other members of the audience, who are also waiting for their turn to become a performer themselves.

There’s a huge difference between ‘ideal’ VR where the virtual world is indistinguishable from the physical one, and the sheer clumsiness of the technology itself. VR headsets add a comedic element to interaction in a public space. At its most basic level, putting on goggles is being blindfolded to your immediate surroundings. When you’re playing, you become the object of attention for other viewers to look at, but you remain happily complicit in this relationship because you’re in another world. This results in a strange kind of reverse voyeurism, where the player’s mind is in another world, but their body stays in the public space of the exhibition.

I find these invisible relationships and social connections very interesting. While exploring, people express subconscious parts of their personality in how they interact with virtual worlds. Some want to win the game by exhausting all possible routes; others want to walk off the edge of the planet. All of these approaches express an attempt to make sense of the world, to master it, to explore the joy or sadness within it; except that it’s literally through the lens of this absurd VR technology that we see as somehow ‘advanced’.

Lawrence Lek, Sinofuturism (1839-2046 AD), 2016 


Lawrence Lek, Sinofuturism (1839-2046 AD), 2016


Lawrence Lek, Geomancer, Commissioned for the Jerwood/FVU Awards 2017


Lawrence Lek, Geomancer, Commissioned for the Jerwood/FVU Awards 2017

Is the future of work something that concerns you personally? Because i suspect that one day AI will take an even more ‘active’ role in the field of creativity as well.

I think AI will increasingly learn to perform ever more complex and creative tasks. I’m interested in what this means for my own role as an artist. Can every job be replaced? Is being a writer and artist any different in essence from being a warehouse worker or stockbroker? We all have to make decisions based on certain rules that govern our task. Of course, there’s the romantic ideal of an artist making genius masterpieces. But these are also the result of a very large series of decisions, tastes and preferences as well as the mastery of a range of skills.

My last film, Geomancer, addresses this problem specifically. While seeking independence from the Singapore government, the satellite AI decides that the most illogical (and therefore most compelling) thing for them to do is to become an artist. What kind of art work would a consciousness create if they had the whole store of human knowledge, of every human and machine language, the entire archive of the internet from 1969 to 2065? And also the capacity to use machine vision on an unimaginable scale, perceiving and recording the movement of every wave and living creature within the ocean? The places where this posthuman idea of creativity will lead are terrifying and beautiful, and maybe even sublime. I think that’s where technology and art are heading.

Thanks Lawrence!

You can experience Play Station at The White Chapel Building for Art Night 2017 on July 1. Lawrence Lek will also be joining Art Night curator, Fatos Üstek at Whitechapel Gallery on Thursday 6 July to discuss his new project.

Play Station by Lawrence Lek for Art Night 2017 is a co-commission by Outset Young Patron Circle and Art Night, supported by Derwent London.

Medusa FPS, the game that perverts the logic and goals of the FPS genre


Karolina Sobecka, Medusa FPS (caption from the video game)

The military is increasingly using smart robotic weapon systems that distribute agency between a team of men, an algorithm and a machine. This type of weapon means that any sense of responsibility and accountability is shared and thus diluted. If that were not disturbing enough, personal weapons used by civilians are now being fitted with similar ‘smart’ technology. The TrackingPoint rifle, for example, won’t allow you to pull the trigger until it has been pointed in exactly the right place. It is an extremely precise and sophisticated piece of machinery that takes into account dozens of variables, including wind, shake and distance to the target. The weapon also comes with a wifi transmitter to stream live video and audio to a nearby iPad. Every shot is recorded so it can be posted to YouTube or Facebook should you wish to.

Karolina Sobecka‘s Medusa FPS is directly inspired by these semi-autonomous and autonomous weapons. In her First Person Shooter game, the player uses an AI-assisted gun that guides his or her hand to aim more effectively and fires when a ‘target’ enters its field of view. Which of course seems to wipe out much of the thrill of playing an FPS game. Medusa FPS, however, reverses the usual logic and goals of FPS games: the challenge for the player here is to fight against his or her own in-game character and prevent it from shooting anyone. They cannot drop the weapon nor stop it from firing, but they can obstruct their own (and the gun’s) vision.

Medusa FPS hinges on the conflict created between the player and her in-game character. Virtual environments have allowed us to create and play out multiple personas, and thus potentially to set up an internal dialogue between them. The POV perspective used by the game convention helps to set up such a confrontation here. The vision is shared between the player and the character, and becomes the site where their agencies contend.

Medusa FPS is part of Monsters of the Machine, a show that explores the ‘unintended and dramatic consequences’ that technology can have for the world. The exhibition was curated by Furtherfield.org co-director Marc Garrett and it features a few of my favourite artists. One of them is Karolina Sobecka and i thought i’d take the excuse of the Laboral show to get in touch with her and have her talk about the game:


Exhibition view at Laboral. Photo by Marcos Morilla

Hi Karolina! i must admit that i was totally shocked and horrified when i read the full description of the conceptual and technological background of the game. I had no idea that personal weapons could become so dangerously sophisticated. The possibility of sharing the shootings on social platforms is particularly chilling. How do you ensure that people who come into the gallery and play the game actually engage with the issues rather than just enjoy it as a new gaming challenge?

I was pretty surprised when I learned about the ‘precision-guided’ personal weapons too. The TrackingPoint rifle really just sounds like a grotesque exaggeration of the trends in ‘smart’ things and in the need for experience to be dramatized by social media. And yes, the live streaming of the ‘shot view’ is probably its most disturbing feature, partly because you can see the profit logic in this design. This also very quickly becomes a question of the responsibility of online media that benefit from this kind of material. Incidentally, we are already discussing how the presence of social media might encourage people to create a certain reality on the ground, in the wake of the recent murders that were streamed or posted on Facebook.

Fortunately, it looks like the TrackingPoint startup is not doing great. Apparently the guns, besides being very expensive, ‘take the fun out of hunting.’ And also, the rifle has already been hacked (to remotely change the target), so at least it serves to illuminate some of the vulnerabilities of privately owned networked weapons.


A view through the scope of the Tracking Point TP750. Photo: Greg Kahn for WIRED

My project was actually inspired not by TrackingPoint but by reading Mark Dorrian’s essay ‘Drone Semiosis’ about the autonomous weapons systems (such as Gorgon Stare), which, as he writes ‘conflate the act of seeing and killing.’ I think it demonstrates the violence of surveillance in general, that violence is implicit in the act of targeting. The other really complicated issue Dorrian writes about is the distribution of the responsibility for killing between several human and non-human actors. That’s really interesting and ideally my project can compel people to think about it.

The game is only presented in art galleries, so I think the context encourages the viewer to reflect on it as critical material. The game itself is quite simple, and the metaphor – which is the formal device – is encountered right away. The design is centered around this dissonance between what you expect and what you experience when you first start playing. I hope that just the initial moment of having to re-calibrate – to stop and think about one’s actions – is enough of an interruption of the habit to cause some reflection.


Karolina Sobecka, Medusa FPS (caption from the video game)

And where do you think that these ‘monsters in the machine’ are going to lead society? How far can we push the limits of what is ethically and socially acceptable?

On one hand this monster could be the traces of the humans that made it, their biases and agendas buried in the design of the system, while the actual human with his critical reflection, re-negotiation and re-evaluation has been de-coupled from it. This might be the hidden monstrosity, but what I think actually produces a sense of threat is the uncertainty, the apprehension towards the unknown. The gamble is bigger in the context of a global networked society, which means that risks associated with technologies can have a more far-reaching impact.

AI making decisions that are moral in nature, and acting on them, is a complicated issue. I think this is something we will have to grapple with for some time – it takes time for people exposed to this reality to think about what consequences it might have, to see examples of how it unfolds, and to develop narratives of what’s acceptable and what isn’t. I think the drones are a good example to think through the ethics of integrating decision-making software into social systems, because when harm is done the stakes are so high – someone is killed. It might be even more difficult to analyze systems when the effects, and who benefits or is harmed, might be more subtle or just invisible. Weapons are ostensibly violent but perhaps a more insidious kind of violence can be done as a result of entrenching a technology that exploits people under a banner of freedom or economic independence.

But I tend to agree with the standard answer to this question – that the technology develops first and then our ethics have to try to keep up, so it is important that we analyze, reframe and hack technology from the very beginning of its development, to constantly apply the critical lens to it, and to keep in check the potential of the harmful consequences.

And have you noticed that people engage and react differently to Medusa FPS depending on whether they are avid players of FPS games or gallery visitors who are interested in the concept but less used to the mechanics and logic of FPS games?

I don’t really have an answer to this. I haven’t gotten a chance to see how people interact with this in a gallery. And honestly (and this might sound antithetical to game designers) I am more interested in the concept than the game playing myself – I love designing interactive systems, but I’m a really bad player.


Karolina Sobecka, Medusa FPS (caption from the video game)

I don’t know if this is relevant but looking at screenshots from the game and reading your text, there seems to be a strong feminine presence in the game. In the role of the potential victim of a bullet but also in the way you describe the work, using ‘she’ where one might have expected a ‘they’ or even a ‘he’. For example: “The player cannot drop the weapon or stop it from firing, but she can obstruct her (and the gun’s) vision.” Is this something you’d like to comment on? Is this a feminist statement?

I used an equal number of male and female characters but all of them are just plain people models rather than soldiers or fighters, which might contribute to this impression of ‘feminine’ presence. The masculinity of the men is not exaggerated as it would be in standard FPS characters. It wasn’t meant as an overt feminist statement. It’s just shifting what the standard of FPS is in terms of genre: where the enemies are predefined and unambiguous, including how they look and act.

Could you describe how the interaction unfolds? What does the player have to do to obstruct the gun’s vision and shoot as few people as possible?

The player has to hide within the building structures or stay away from everybody else. The people in the scene simply walk around the world. When they are shot at, they start running away or try to attend to the ones who were shot. They are actually controlled by a modified ‘AI’ script, so their behavior is the result of several simple rules. They run away to a safe distance and then resume their wandering behavior. They are curious (so if the player is in their field of vision, they’ll walk over to approach him), and sometimes attracted to being in a group. The term ‘AI’ in games, connoting the autonomous behavior of an agent (usually an enemy), has been around for a long time, but it usually amounts to just a very simple behavior based on a few rules. Now that Artificial Intelligence is starting to control devices and agents in our daily lives, it might be interesting to look at those really simple standard behaviors.
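The handful of rules Sobecka describes could be sketched like this. This is my own reconstruction from her description, not the game’s actual script:

```python
# A minimal sketch (assumed, not Sobecka's script) of the rule-based NPC
# behaviour described above: flee when fired upon, approach the player out
# of curiosity, otherwise wander.

def npc_action(shot_fired_nearby, player_in_view, safe_distance_reached):
    if shot_fired_nearby and not safe_distance_reached:
        return "flee"      # run away to a safe distance
    if player_in_view:
        return "approach"  # curiosity: walk toward the player
    return "wander"        # default: resume wandering around the world

print(npc_action(True, True, False))    # → flee (danger overrides curiosity)
print(npc_action(False, True, False))   # → approach
print(npc_action(False, False, False))  # → wander
```

The point of the caricature is how little it takes: three if-statements already produce behaviour that, in play, reads as fear, curiosity and indifference.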

The player doesn’t get much chance to watch the pattern of their behavior. There’s an AI script on the weapon as well, which exaggerates ‘guided aiming.’ I have heard that in some games weapons can in fact be scripted in a similar but more subtle way, making the player a more effective shooter. In my game, if there’s a person within the field of view, the weapon will rotate to get them in the crosshairs – and when they are in the crosshairs, it will fire. The player’s control will override any movement by the weapon, but the moment he or she stops exerting active control, the weapon will take over and do its automatic guiding. In practice this looks like jerking the gun back and forth. It’s a strange interaction because it doesn’t correspond to anything in physical reality. The immersive illusion of the game rests mostly in the POV camera. But this wrestling of control happens with the keyboard.
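The tug-of-war over the weapon, as described, boils down to a simple per-frame update. The sketch below is my reading of that description, not the game’s code; angles and step sizes are invented:

```python
# A rough sketch (my reading of the description, not the game's code) of the
# guided-aiming tug-of-war: the player's input always overrides the weapon,
# but the moment they stop, the gun rotates toward any visible target and
# fires once the target is centred in the crosshairs.

def weapon_update(player_input, target_angle, crosshair_angle):
    """Return (new_crosshair_angle, fired). Angles in degrees.
    target_angle is None when no one is in the field of view."""
    if player_input is not None:
        return crosshair_angle + player_input, False  # player wrestles control back
    if target_angle is None:
        return crosshair_angle, False                 # nothing to aim at
    if abs(target_angle - crosshair_angle) < 1.0:
        return crosshair_angle, True                  # target in crosshairs: fire
    # auto-guiding: rotate one step toward the target
    step = 5.0 if target_angle > crosshair_angle else -5.0
    return crosshair_angle + step, False

angle, fired = weapon_update(None, 20.0, 0.0)   # no input: gun turns toward target
print(angle, fired)   # → 5.0 False
angle, fired = weapon_update(None, 5.0, 5.0)    # target centred: gun fires on its own
print(fired)          # → True
angle, fired = weapon_update(-10.0, 20.0, 5.0)  # player yanks the gun away
print(angle, fired)   # → -5.0 False
```

Run every frame, the gun keeps pulling toward the target while the player keeps pulling away, which is exactly the back-and-forth jerking Sobecka describes.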

Could you also explain the choices you made while designing the visual universe of the game? Why is it so cold and clean?

I’ve been using this palette and character design for some time; it’s in a way just a pragmatic choice – since I’m interested in the rules of the game rather than in building an immersive world, the visual quality should support that. It’s a model rather than an illustration or a narrative, and this design is a bare-bones model, where everything that I’m not trying to point to is at a default value.

Thanks Karolina!

Medusa FPS is part of the exhibition Monsters of the Machine, curated by Marc Garrett, co-director of Furtherfield.org. The show remains open until 31st August 2017 at Laboral Centro de Arte y Creación Industrial in Gijón.

Interview with James Bridle about human deskilling and machine understanding


James Bridle, Untitled (Autonomous Trap 001)

Tesla customers who want to take advantage of their cars’ Autopilot mode are required to agree that the system is in a “public beta phase”. They are also expected to keep their hands on the wheel and “maintain control and responsibility for the vehicle.”

Almost a year ago, Joshua Brown was driving on the highway in Florida when he decided to put his Tesla car into self-driving mode. It was a bright Spring day and the vehicle’s sensors failed to distinguish a white tractor-trailer crossing the highway against a bright sky. The car didn’t brake and Brown was the first person to die in a self-driving car accident.

Autonomous cars have since been associated with a growing number of errors, accidents, glitches and other malfunctions. Interestingly, human trust in these technologies doesn’t seem to falter: we assume that the technology ‘knows’ what it is doing and are lulled into a false sense of safety. Tech companies are only too happy to confirm that bias and usually blame the humans for any crash or flaw.

James Bridle, Autonomous Trap 001 (Salt Ritual, Mount Parnassus, Work In Progress), 2017


James Bridle, Installation view of Failing to Distinguish Between a Tractor Trailer and the Bright White Sky at Nome Gallery, Berlin, 2017. Photo: Gianmarco Bresao

James Bridle‘s solo show Failing to Distinguish Between a Tractor Trailer and the Bright White Sky, which recently opened at NOME project in Berlin, explores the arrival of technologies of prediction and automation into our everyday lives.

The most discussed work in the show is a video showing a driverless car entrapped inside a double circle of road markings made with salt. The vehicle, seemingly unable to make sense of the conflicting information, barely moves back and forth as if under the spell of a mysterious force.

The work demonstrates admirably the limitations of machine perception, the pitfalls of a technology whose inner workings and logic are completely opaque to us, and the difference between human and machine comprehension, between accuracy and reliability.

I sometimes wonder how aware most of us really are of the impact that self-driving vehicles will have on our lives: soon we might not be able to read maps, not just because GPS has made that skill superfluous but because these maps will be unintelligible to us; we might even be seen as too unreliable behind a wheel and be forbidden to drive cars (we'll have sex instead, apparently.)

Taking as their central subject the self-driving car, the works in the exhibition test the limits of human knowing and machine perception, strategize modes of resistance to algorithmic regimes, and devise new myths and poetic possibilities for an age of computation.

It feels strangely ominous to write about autonomous machines on the 1st of May, a day celebrated as International Workers’ Day. After all, these smart systems are going to ‘put us out of job‘. And truck drivers, taxi drivers, delivery drivers are among the professions which will be hit first.


James Bridle, Untitled (Activated Cloud), 2017

I asked the artist, theorist and writer to tell us more about the exhibition:

Hi James! I had a look at the video and not a lot is happening once the car is inside the circle. Which is exactly what you wanted to show of course. But for all i know, the machine could have stopped working just because it never worked as an autonomous vehicle in the first place and you could be hiding inside making it move a bit. Could you explain what the machine sees and what causes the car to stall?

The car in the video is not autonomous. My main inspiration for the project was in understanding machine learning, and the system I developed – based on the research and work of many others – was entirely in software. I kitted out a regular car with cameras and sensors – some off the shelf, some I developed myself – and drove it around for days on end. This data is then fed into a neural network, a kind of software modelled originally on the brain itself, which learns to make associations between the datapoints: knowing the kind of speed, or steering angle, which should be associated with certain road conditions, it learns to reproduce them.
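The setup Bridle describes – recorded sensor data paired with driving decisions, fed to a network that learns to reproduce them – is often called behavioural cloning. Here is a minimal, hypothetical sketch of the idea, with a linear model standing in for the neural network and random vectors standing in for camera frames; none of this is Bridle's actual code:

```python
import numpy as np

# Toy behavioural cloning: learn a mapping from road "features" to steering
# angles using recorded examples. Real systems use camera frames and deep
# networks; a linear model on random vectors keeps this self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))      # 500 recorded frames, 8 features each
true_w = rng.normal(size=8)        # the hidden rule relating road to steering
y = X @ true_w                     # the steering angles the driver produced

w = np.zeros(8)                    # model weights, learned from scratch
for _ in range(200):               # gradient descent on the squared error
    grad = X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

print(np.allclose(w, true_w, atol=1e-3))  # → True: it has "learned to steer"
```

The same principle scales up: swap the random vectors for video frames and the linear model for a convolutional network, and you have the skeleton of the lane-keeping behaviour described above.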

I’m really interested in this kind of AI which instead of attempting to describe all the rules of the world from the outset, develops them as a result of direct experience. The result of this form of training is both very powerful, and sometimes very unexpected and strange, as we’re becoming aware of through so many stories about AI “mistakes” and biases. As these systems become more and more embedded in the world, i think it’s really important to understand them better, and also participate in their creation.

My software is developed to the point where it can read the road ahead, keep to its lane, react to other vehicles and turnings – but in a very limited way. I certainly would not put my life in its hands, but it does give me a window into the way in which such systems function. In the Activations series of prints in the exhibition, which show the way in which the machine translates incoming video data into information, you can see the things highlighted as most significant: the edges of the road, and the white lines which direct it. Any machine trained to obey the rules of the road would and should obey the “rules” of the autonomous trap because it’s simply a no entry sign – but whether such rules are included in the training data of the new generation of “intelligent” vehicles is an open question.


James Bridle, Untitled (Activation 002), 2017


James Bridle, Untitled (Activation 004), 2017

It is a bit daunting to realise that a technology as sophisticated as a driverless car can be fooled by a couple of kilos of salt. In a sense, your work fulfills the same role as that of hackers who enter a system to point to its flaws and gaps and thus help developers and corporations fix the problem. Have you had any feedback from people in the car industry after the work was published in various magazines?

The autonomous trap is indeed a potential white hat or black hat op. In machine learning, this might be called an “adversarial example” – that is, a situation deliberately engineered to trick the system, so it can learn from and defend against such tricks in the future. It might be useful to some researcher, I don’t really know. But as I’m interested in the ways in which machine intelligence differs from human intelligence, I’ve been following closely many techniques for generating adversarial examples – research papers which show, for example, the ways in which image classifiers can be fooled either with entirely bizarre random-looking images, or with images that, to a human, are indistinguishable. What I like about the trap is that it’s an adversarial example that sits in the middle – that is recognisable to both machine and human senses. As a result, it’s both offensive and communicative – it’s really trying to find a middle or common ground, a space of potential cooperation rather than competition.
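The research Bridle mentions – classifiers fooled by inputs that look unremarkable to humans – rests on a simple mechanism: nudge every input dimension by a tiny amount in whichever direction most changes the classifier's score. A toy sketch with an invented linear classifier (not any real vision model):

```python
import numpy as np

# A toy adversarial example: a perturbation that is barely visible in any
# single dimension, but aligned with the classifier's weights, flips the
# decision. The classifier here is an invented linear model.
rng = np.random.default_rng(1)
w = rng.normal(size=100)            # weights of a linear classifier
x = -0.5 * w / (w @ w)              # an input the classifier scores negative

eps = 0.01                          # tiny per-dimension perturbation budget
x_adv = x + eps * np.sign(w)        # step every dimension "uphill" at once

print(w @ x < 0, w @ x_adv > 0)     # → True True: the decision flipped
```

This is the logic behind the "fast gradient sign" family of attacks; the salt circle plays a similar adversarial role, except that it is legible to humans and machines alike.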

You placed the car inside a salt circle on a road leading to Mount Parnassus (instead of in a car park or any other urban location, as most artists dealing with tech would do!). The experiment with the autonomous car is thus surrounded by mythology, Dionysian mysteries and magic. Why do you embed this sophisticated technology into myths and enigmatic forces?

The mythological aspects of the project weren’t planned from the beginning, but they have been becoming more pronounced in my work for some time now. While working on the Cloud Index project last year I spent a lot of time with medieval mystical texts, and particularly The Cloud of Unknowing, as a way of thinking through other meanings of “the cloud”, as both computer network and way of knowing.

In particular, I’m interested in a language that admits doubt and uncertainty, that acknowledges that there are things we cannot know yet must take into account, in a way that contemporary technological discourse does not. This seems like a crucial form of discourse for an interconnected yet increasingly complex and fragmented world.

In the autonomous car project, the association with Mount Parnassus and its mythology came about quite simply because I was driving around Attica in order to train the car, and it’s pretty much impossible to drive around Greece without encountering sites from ancient mythology. And this mythology is a continuous thread, not just something from the history books. As I was driving around, I was listening to Robert Graves’ Greek Myths, which connects Greek mythology to pre-Classical animism and ritual cults, as well as to the birth of Christianity and other monotheistic religions. There’s a cave on the side of Mount Parnassus which was sacred, like all rustic caves, to Pan, but has also been written about as a hiding place for the infant Zeus, and various nymphs. The same cave was used by Greek partisans hiding from the Ottoman armies in the nineteenth century and the Nazi occupiers in the twentieth, and no doubt on many other occasions throughout history – there’s a reason those stories were written about that place, and the writing of those stories allowed for that place to retain its power and use. Mythology and magic have always been forms of encoded and active story-telling, and this is what I believe and want technology to be: an agential and inherently political activity, understood as something participatory, illuminating, and potentially emancipatory.


James Bridle, Installation view of Failing to Distinguish Between a Tractor Trailer and the Bright White Sky at Nome Gallery, Berlin, 2017. Photo: Gianmarco Bresao


James Bridle, Installation view of Failing to Distinguish Between a Tractor Trailer and the Bright White Sky at Nome Gallery, Berlin, 2017. Photo: Gianmarco Bresao

Your practice as an artist and thinker is widely recognised so i suspect that you could have knocked on the door of Tesla or Volkswagen and gotten an autonomous car to play with. Why did you find it so important to build your own self-driving car?

I think it’s incredibly important to understand the medium you’re working with, which in my case was machine vision and machine intelligence as applied to a self-driving car – something that makes its own way in the world. By understanding the materiality of the medium, you really get a sense of a much wider range of possibilities for it – something you will never do with someone else’s machine. I’m not really interested in what Tesla or VW want to do with a self-driving car – although I have a fairly good idea – rather, I’m interested in thinking through and with this technology, and proposing alternative pathways for it – such as getting lost and therefore generating new and unexpected experiences, rather than ones pre-programmed by the manufacturer. Moreover, I’m interested in the very fact that it’s possible for me to do this, and for showing that it’s possible, which is itself today a radical act.

I believe there’s a concrete and causal relationship between the complexity of the systems we encounter every day, the opacity with which most of those systems are constructed or described, and fundamental, global issues of inequality, violence, populism and fundamentalism. Only through self-education, self-organisation, and new forms of systemic literacy can we counter these currents: programming is one form of systemic literacy, demonstrating the accessibility and comprehensibility of these technologies is another.

The salt circle is associated with protection. Do you think our society should be protected from autonomous vehicles?

In certain ways, absolutely. There are many potential benefits to autonomous vehicles, in terms of road safety and ecology, but like all of our technologies there’s also great risk, particularly when control of these vehicles is entirely privatised and corporatised. The best model for an autonomous vehicle future is basically good public transport – so why aren’t we building that? At the moment, the biggest players in autonomous vehicles are the traditional vehicle manufacturers – hardly beacons of social or environmental responsibility – and Silicon Valley zaibatsus such as Google and Uber, whose primary motivation is financialising virtual labour until they develop AI which can cut humans out of the loop entirely. For me, the autonomous vehicle stands in most particularly for the deskilling and automation of all forms of labour (including, in Google’s case, cognitive labour), and as such is a tool for degrading individual and collective agency. This will happen first to truck and taxi drivers, but will slowly extend to most of the workforce which, despite accelerationist dreams, is currently shredding rather than building a social framework which might support a low-work future. So, looked at that way, the corporate-controlled autonomous vehicle and automation in general is absolutely something that should be resisted, while it fails to serve the interest of most of the people it affects.

In all things, technological determinism – the idea that a particular outcome is inevitable because the technology for it exists – must be opposed. Knowing where the off switch is remains a vital and necessary complement to the kind of democratic involvement in the design process described above.

The artist statement in the catalogue of the show says that you worked with software and geography. I understand the necessity of the software but geography? What was the role and importance of geography in the project? How did you work with it?

The question which I kept returning to while working on the project, alongside “what does it mean for me to make an autonomous car?” is “what does it mean to make it here?” – that is, not on a test track in Bavaria or a former military base in Silicon Valley, but in Greece, a place with a very different material history and social present. How does a machine see the world when its experience is of fields, mountains, and winding tracks, rather than Californian highways and German autobahns? What is the role of automation in a place already suffering under austerity and unemployment – but which also has always produced its own, characteristic responses to instability? One of the things I find fascinating about the so-called autonomous vehicle is that, in comparison to the traditional car, it’s really as far from autonomous as you can get. It must constantly return to the network, constantly update itself, constantly observe and learn from the world, in order to be able to operate. In this way, it also seems to embody some potentially more connected and more community-minded world – more akin to some of the social movements so active in Greece today than the atomised, alienated passengers of late capitalism.


James Bridle, Gradient Ascent, 2016


James Bridle, Gradient Ascent, 2016

In the video and catalogue text entitled “Gradient Ascent”, Mount Parnassus and the journeys around it become an allegory both for general curiosity and for specific problem-solving: the random walk is one of the classic techniques in computer science for exploring and maximising a complex function. Re-instituting geography within the domain of the machine becomes one of the ways of humanising it.
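The title's allegory has a literal computational counterpart: gradient ascent climbs a function by repeatedly stepping uphill along its slope. A small illustrative sketch (the "mountain" function below is invented):

```python
# Gradient ascent made literal: climb a smooth, single-peaked "mountain" f
# by repeatedly stepping uphill along the numerically estimated gradient.
def f(x, y):  # a hill with its summit at (2, -1)
    return -((x - 2) ** 2 + (y + 1) ** 2)

def grad(x, y, h=1e-6):  # finite-difference estimate of the slope
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x, y = 0.0, 0.0          # start at the foot of the mountain
for _ in range(100):     # each step moves a little uphill
    gx, gy = grad(x, y)
    x, y = x + 0.1 * gx, y + 0.1 * gy

print(round(x, 3), round(y, 3))  # → 2.0 -1.0: the summit
```

A random walk, by contrast, would wander the same landscape without a slope to follow, which is closer to the aimless journeys around Attica the text describes.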

I was reading on Creators that this is just the beginning of a series of experiments for the car. Do you already know where you will go next with the technology?

I’m still quite resistant to the idea of asking a manufacturer for an actual vehicle, and for now my resources are pretty limited, but it might be possible to move onto the mechanical part of the project in other ways – I’ve had some interest from academic and research groups. I think there’s lots more to be done in exploring other uses for the autonomous vehicle – as well as questions of agency and liability. What might autonomous vehicles do to borders, for example, when their driverless nature makes them more akin to packets on a borderless digital network? What new forms of community, as hinted above, might they engender? On the other hand, I never set out to build a fully functioning car, but to understand and think through the processes of developing it, and to learn from the journey itself. I think I’m more interested in the future of machine intelligence and machinic thinking than I am in the specifics of autonomous vehicles, but I hope it won’t be the last time I get to collaborate with a system like this.

Thanks James!

James Bridle’s solo show Failing to Distinguish Between a Tractor Trailer and the Bright White Sky is at NOME project in Berlin until July 29, 2017

Dataghost 2. The kabbalistic computational machine


RYBN, Dataghost 2, 2016. Installation view at STUK in Leuven for the Artefact festival. Photo © Kristof Vrancken

As early as the 1st century, Jews believed that the Torah and other key religious texts contained encoded truths and hidden meanings. They used a system called Gematria to uncover them. According to this numerological system, each Hebrew letter also corresponds to a number (for example: Aleph is 1, Bet is 2, Gimel is 3, Daleth is 4, etc.) Kabbalists extended the method to other texts and, by converting letters to numbers, looked for a hidden meaning in each word. Other hermeneutic techniques used in Kabbalah are Temurah, which rearranges words and sentences to deduce deeper spiritual meanings, and Notarikon, which creates words from letters taken from the beginning, middle or end of other words.
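The letter-to-number scheme is mechanical enough to express in a few lines. A sketch using a partial mapping (only the first letters of the alphabet; the full system assigns values up to 400):

```python
# Gematria: each Hebrew letter has a numerical value; a word's value is
# the sum of its letters. Partial mapping, for illustration only.
GEMATRIA = {
    "א": 1,  "ב": 2,  "ג": 3,  "ד": 4,  "ה": 5,
    "ו": 6,  "ז": 7,  "ח": 8,  "ט": 9,  "י": 10,
    "כ": 20, "ל": 30, "מ": 40, "נ": 50,
}

def gematria(word: str) -> int:
    """Sum the numerical values of a word's letters."""
    return sum(GEMATRIA.get(letter, 0) for letter in word)

# חי ("chai", life) = 8 + 10 = 18, famously a lucky number.
print(gematria("חי"))  # → 18
```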


RYBN, Dataghost 2, 2016. Installation view at STUK in Leuven for the Artefact festival. Photo © Kristof Vrancken


RYBN, Dataghost 2, 2016. Installation view at STUK in Leuven for the Artefact festival. Photo © Kristof Vrancken

The French collective RYBN.org has applied this numerological system of transformations, associations and substitutions to computing. Their Dataghost 2 installation is a kabbalistic computational machine that seeks to reveal the hidden messages buried within the data traffic.

A daemon, installed on a server, catches all incoming and outgoing digital communications and dumps their content using network interception tools. All the encapsulated data are then submitted to several deciphering algorithms that reproduce the hermeneutic techniques of Kabbalah. The raw data are decomposed and recomposed according to the substitution principles that govern Kabbalah, in order to unveil the mysticism of network communications.

By following the kabbalistic alpha-numerical system, the fragments of code generate millions of shell commands in the process, most of them incoherent or nonfunctional. From time to time, however, a command will ‘make sense’ to the computer. The machine will interpret it as a task that needs to be executed. At this precise moment, the machine achieves the invocation ritual of a digital Golem.
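The mechanism can be sketched safely, without executing anything: recombine fragments into candidate strings and keep the rare ones that would "make sense" as commands. The fragments and the little command table below are invented for illustration; RYBN's daemon works on live network captures and a real shell:

```python
import itertools

# Invented "fragments" standing in for intercepted network bytes, and a
# tiny fixed dictionary standing in for the shell's command table.
fragments = ["l", "s", "r", "m", "c", "p"]
KNOWN_COMMANDS = {"ls", "rm", "cp", "mv", "cat"}

# Temurah-style recombination: permute fragments into candidate strings,
# then keep only those the machine would recognise as commands.
candidates = ("".join(p) for p in itertools.permutations(fragments, 2))
valid = sorted(set(candidates) & KNOWN_COMMANDS)
print(valid)  # → ['cp', 'ls', 'rm']
```

Most permutations are gibberish; the few that land in the command table are exactly the moments when, in the installation, the Golem stirs.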

However, there is no way to predict where the ritual might lead the machine: the executed commands might saturate its memory, provoke a definitive stop within the software layer, push past critical limits and overheat certain electronic components, or destroy parts of its physical layers. Over the course of its life, the system constantly publishes its self-destructive activity in the form of a print-out of all the different commands.

I discovered the work two days ago at the Artefact festival at STUK in Leuven (a mere 15 minutes away from Brussels, so take the train now if you're in Belgium because the show is as enchanting as its theme suggests) and Dataghost 2 was dead. The demise came quite early. The email exchange in which the artists and STUK tried to understand what had happened was printed out and added to the exhibition space. The emails reveal that the system probably erased a critical file, which brought the whole process to an end.

The installation is currently running in dead mode. Both the printing machine and the screen remain frozen. 



RYBN, Dataghost 2, 2016. Installation view at STUK in Leuven for the Artefact festival. Photo © Kristof Vrancken


RYBN, Dataghost 2, 2016. Installation view at STUK in Leuven for the Artefact festival. Photo © Kristof Vrancken

I found the work brilliant. On the one hand, it is super complex and perplexing, just like most esoteric practices. On the other hand, it demonstrates with great efficiency and simplicity that algorithms (and by extension any technology system) are only as rational (or irrational) as the humans who program them.

Dataghost 2 is exhibited at the Artefact festival in Leuven, Belgium. The exhibition, curated by Karen Verschooren from STUK and Ils Huygens from Z33, continues until 9 March 2017

If you find yourself in Paris, the RYBN collective will be discussing Dataghost 2 tomorrow 3 March at the Ecole Nationale Supérieure d’Arts de Paris Cergy.

Related stories: The Occult, Witchcraft & Magic. An Illustrated History.

GAMERZ 2016 – Our daily computer-programmed reality

For a full intro to the festival, check out my previous story: GAMERZ: Digital tech ‘degenerated’ by craft and kludge.


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, installation at GAMERZ 2016. Photo: Luce Moreau

This year, a section of the exhibition of the GAMERZ festival was dedicated to the omnipresence of algorithms into our life. It was curated by artist, writer and otherwise brilliant cultural agitator Ewen Chardronnet.

By anchoring his curatorial text in the year 1972, Chardronnet reminds us that back then, the future of technology was not paved with malignant machines and other existential risks. Instead, it was brimming with hopes, ideals and thrilling speculations. In 1972, Nixon ordered the development of the Space Shuttle program, Pioneer 10 – the first man-made object destined to leave the solar system – was launched, a man walked on the moon for the last time and his crew photographed one of the most reproduced images in human history: The Blue Marble portrait of the Earth.

1972 is also the year the cybernetic system Cybersyn and NASDAQ, the world’s first electronic stock market, were starting to show their potential. NASDAQ was only one year old but would rapidly become the fastest growing stock market. As for the socialism-imbued promises held by Cybersyn, they were cut short by the coup d’état led by Augusto Pinochet (and backed by the U.S.) in 1973.

Philip K. Dick declares that we live in “a computer-programmed reality”

As the curator also recalls, a few years later Philip K. Dick would explain with conviction (and a palpable sense that his audience was not ready to believe him) that we are living in a computer-programmed reality. And indeed, nowadays, some scientists would argue that the science fiction writer's declaration should not be taken lightly and that a being whose intelligence is far greater than our own might very well have created us for its own entertainment. In other words, chances are that we are indeed living in a computerized simulation.

What is sure is that artificial manipulations and decisions of all kinds have very physical and real impacts on our culture.

“Nowadays, algorithms are everywhere,” writes Chardronnet. “They organize the planning and optimal use of resources, picture rendering, bio-computing, cryptography, stock exchanges, electronic surveillance, target marketing, our behavior on social media… But algorithms are as old as Babylon. If procedurally generated video game universes are truly infinite, are there still any enchanted gardens full of immaterial mathematical relics to be found? Or will it be time to encompass the possibility of an end?”

The title for Chardronnet’s exhibition is thus, very fittingly, Simulated Universe. The show featured artists whose works filter through the hype and anxiety surrounding a world controlled by artificial and often invisible intelligence.


Konrad Becker and Felix Stalder, Painted by Numbers. A Discursive Installation on Algorithmic Regime. Installation view at GAMERZ. Photo by Luce Moreau


Konrad Becker and Felix Stalder, Painted by Numbers. A Discursive Installation on Algorithmic Regime. Installation view at GAMERZ. Photo by Luce Moreau

Painted by Numbers provides an excellent introduction to the theme of the exhibition. Konrad Becker and Felix Stalder interviewed artists, scientists, activists and technology experts about their perspectives on the power of algorithmic realities. The interviews were then segmented and rebuilt into short thematic videos, each exploring a particular issue (politics, culture, agency, etc.) from different but complementary points of view. The people interviewed talk about the perceived rationality of algorithms, weigh in on the possibility of building algorithms that would better reflect our values, and discuss their lack of transparency, the subtle ways in which they are already shaping our cognitive processes, and the often secret scoring of members of society.

The videos are also available to watch online. I would highly recommend that you have a look at them if you have an hour (or 6 times 10 minutes) to spend on short films that efficiently open up all sorts of questions and provocations around a world built on data.
Extra bonus points to the authors of the videos for including women's perspectives (still not something that we can take for granted, alas!)


RYBN, ADM XI at GAMERZ 2016. Photo: Luce Moreau


RYBN, ADM XI at GAMERZ 2016. Photo: Luce Moreau


RYBN, ADM XI at GAMERZ 2016. Photo: Luce Moreau


RYBN, ADM XI at GAMERZ 2016. Photo: Luce Moreau

Important decisions are more and more devolved to machines and programs, making it difficult to determine who (or what) is actually in control. The trend is particularly noticeable in finance, where an increasingly high number of stock trades is now driven by algorithms.


“Algorithmic Trading. Percentage of Market Volume,” data from Morton Glantz, Robert Kissell. Multi-Asset Risk Modeling: Techniques for a Global Economy in an Electronic and Algorithmic Trading Era. Academic Press, Dec 3, 2013, p. 258.

Many types of algorithmic or automated trading activity can be described as high-frequency trading (HFT), a type of algorithmic trading characterized by speeds, turnover rates and order-to-trade ratios so high that no human trader could ever dream of competing with it. Algorithmic trading has been the subject of much debate since it contributed to the 2010 Flash Crash, which saw nearly $1 trillion of value erased from U.S. stocks and the Dow Jones index lose almost 9% of its value in a matter of minutes. The market rapidly regained its composure and eventually closed 3% lower.

RYBN.ORG is a group of French artists who have been studying algorithmic finance for a number of years but who also created their own trading robot. Using an artificial intelligence algorithm, the autonomous program has been investing and speculating on financial markets since 2011. More recently, the group have invited other artists to join their research platform ADM XI and experiment with counter-intuitive strategies of investment and speculation.

The trading algorithms hosted on the platform follow their own non-mercantile and obsessive logic: some attempt to produce a total and irreversible chaos, others try to influence market prices to make them trace a given geometrical shape, while others try to saturate the market with non-human affects.

Within this context, profits are no longer driven by prices and other economic instruments, but rather by living organisms – soil, plants, bacteria; by supraterrestrial rules – environmental, astronomical, astrological; or by non-scientific knowledge – esoterism, magic, geomancy, etc.

Suzanne Treister‘s Quantum V algorithm is guided by data from human brains under the influence of psychoactive plants and by planetary networks. Horia Cosmin Samoila‘s work submits selected stocks and financial products to an algorithm governed by the Global Consciousness Project, a Princeton University parapsychology experiment that looks for interactions between “global consciousness” and physical systems. Marc Swynghedauw’s HeidiX buys or sells stock according to how likely her actions are to help her (she’s a lady bot!) reach the summit of famous mountains. Nicolas Montgermont‘s HADES trading algorithm uses its knowledge of astronomy, astrology and mythology to sell or buy gold. You can find more algorithms on the platform; the logic behind each of them is frankly quite baffling, but some of them seem to perform rather well.


Regina de Miguel, Una historia nunca contada desde abajo, 2016. Photo by Luce Moreau

Una historia nunca contada desde abajo (A Story Never Told from Below) is inspired by the Cybersyn or Synco project. The project kicked off in mid-1971, when cybernetic visionary Stafford Beer was approached by a high-ranking member of the newly elected socialist government of Salvador Allende. The scientist was asked if he could apply his cybernetic theories to the management of the public sector of the Chilean economy. The objective of Cybersyn was to use a system of networked telex machines and computers to transmit data from factories to the government, allowing for economic planning in real time. The project was abandoned after Pinochet’s 1973 coup.

Regina de Miguel’s film lasts roughly 2 hours and i didn’t get a chance to watch it properly, alas! From what i’ve seen and also gathered from various readings and discussions, the work looks back at times (which might now be regarded as ‘utopian’) when scientists and politicians embraced technology with enthusiasm, in the hope that it would genuinely help them govern and improve humanity. However, utopias, even the most revolutionary ones, tend to be betrayed by the systemic failures of the times in which they were conceived.


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, installation at GAMERZ 2016. Photo: Luce Moreau


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, installation at GAMERZ 2016. Photo: Luce Moreau


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, installation at GAMERZ 2016. Photo: Luce Moreau


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, installation at GAMERZ 2016. Photo: Luce Moreau


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, performance at GAMERZ 2016. Photo: Luce Moreau


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, performance at GAMERZ 2016. Photo: Luce Moreau

Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov believe that the research in arts and humanities should be recognized as development practices complementary to space science and technology. The installation and performance they presented at GAMERZ are part of a broader call to integrate art forms into space programmes.

Živadinov was a co-founder of the avant-garde art collective Neue Slowenische Kunst and the director of the first complete theatre production in zero gravity conditions. In 1995 Živadinov embarked on Noordung 1995-2045, a 50-year theatrical process named after the famous Slovene rocket engineer and pioneer of cosmonautics.

The Noordung 1995-2045 theatre piece is to be repeated on the same day, every ten years, until 2045. Should any of the actors die during this 50-year period (as has already happened with actress Milena Grm), their role will be symbolized on stage by a remote-controlled sign that the individual had previously selected. As for their text, it will be replaced by a melody for women and a rhythm for men. Since it is highly likely that all the actors will have passed away by 2045, all that will remain on stage for the last performance will be their technological substitutes. Each of these devices will then be sent into Earth’s orbit, from where they will transmit signals back to Earth and into deep space.


Špela Petrič, Miha Turšič, Dunja Zupančič and Dragan Živadinov, Agents non-humains, performance at GAMERZ 2016. Photo: Luce Moreau

Previously: GAMERZ: Digital tech ‘degenerated’ by craft and kludge.

Predictive Art Bot. A call for artworks that interpret AI-generated concepts


There’s a weird account on Twitter. Its author sounds like someone who’s desperately looking for a cyberpunk scenario, trying to impress their media art tutors with a ‘subversive’ idea for a graduation project, or maybe coming up with the worst possible critical design concept. It’s actually neither of those. Or rather, it could be all of that and more.

Behind @predarbot‘s tweets is a program that combines the headlines of articles recently published on websites specializing in digital art and hacktivism and randomly turns them into titles for potential artworks that ‘critically explore the intersection between art and technology’ (not the bot’s words but the canonical description used by pretty much every single media art event i’ve ever attended.)

The bot effortlessly predicts an infinite number of likely, possible, almost-existent or even nonsensical proposals, all of which are in line with the jargon currently in use. In this absurd manner, the Predictive Art Bot works towards the generation of an exhaustive series of all the possible automatic artistic responses and trends that may arise in the future. The algorithm’s constant production of “reflex art” in the form of simple utterances means that it, in turn, continually induces mental projections generated by this stimulus.
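Stripped down, the bot's recombination can be sketched in a few lines. The fragment lists below are invented for illustration; the real bot harvests fragments from live headlines:

```python
import random

# Toy concept generator in the spirit of the bot: splice fragments of
# (invented) media-art jargon into new "artistic concepts".
subjects = ["A blockchain", "A drone swarm", "A neural network", "A seed bank"]
actions = ["that performs", "reenacting", "livestreaming", "composting"]
objects = ["border surveillance", "the art market", "its own obsolescence",
           "a Dionysian ritual"]

random.seed(7)  # fixed seed so the sketch is reproducible
concept = " ".join(random.choice(part) for part in (subjects, actions, objects))
print(concept)
```

Swap the hand-written lists for scraped headlines, chopped into fragments, and the output starts sounding uncannily like a festival open call.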

The Twitter page is part of Predictive Art Bot, a project launched a few weeks ago by Maria Roszkowska and Nicolas Maigret. With this work, the duo is inviting other artists to collaborate with the bot, interpret some of the most puzzling/exciting/provocative tweets and turn them into real prototypes, drafts for impossible projects, live performances, failed experiments, etc. They have even managed to get grants from well-known media art festivals to finance and exhibit some of these experiments, whatever shape they might eventually take.


The idea of involving machines in creativity might not be as wild and wacky as it sounds. Among 2017’s biggest trend forecasts is “AI as artist”: composing music, directing movies, creating video games, etc. How long, then, till robots win a Golden Nica at Ars Electronica or even the Turner Prize?

Sunspring, a sci-fi short film starring Thomas Middleditch


I got Maria Roszkowska and Nicolas Maigret to tell me where the project comes from and where they hope it might lead them, the art world and the field of AI.

Hi Maria and Nicolas! How does the artbot work exactly? Does the bot just take its inspiration from the websites you follow and then shake them up into a tweet?

Predictive Art Bot is an algorithm for predicting artistic concepts. It is designed to be both a simplified archetype of the process of exploring artistic ideas and a potentially inspiring, radical and exotic semi-intelligent entity.
This Art Bot is programmed to generate and publish on Twitter new expressions of hypothetical artistic concepts, based on emerging trends in different fields. The areas covered evolve over time. They can engage with fields such as art, activism, economics, medicine, ecology, transhumanism, occultism, and so on.

Based on a collection of keywords from these fields, a precarious algorithm is triggered that publishes on Twitter the proto-concept for an artwork. This concept can become a prophecy… or even a self-fulfilling prophecy, if an artist ends up realizing it. Hence the calls for projects that the bot automatically generates, inviting artists to interpret its nonhuman concepts in exchange for a fee.
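The mechanism the artists describe, drawing random keywords from topical word lists and assembling them into an artwork proto-concept, can be sketched in a few lines. The word lists and the title template below are purely hypothetical illustrations; the actual bot harvests its vocabulary from headlines on art and hacktivism websites.

```python
import random

# Hypothetical topical word lists -- stand-ins for the keywords the
# real bot collects from its source fields (art, activism, ecology...).
FIELDS = {
    "tech": ["blockchain", "neural network", "drone", "biosensor"],
    "medium": ["performance", "installation", "sound sculpture", "bot"],
    "theme": ["surveillance", "ecology", "transhumanism", "occultism"],
}

def proto_concept(rng=random):
    """Assemble a proto-concept for an artwork from random keywords."""
    tech = rng.choice(FIELDS["tech"])
    medium = rng.choice(FIELDS["medium"])
    theme = rng.choice(FIELDS["theme"])
    return f"A {tech}-driven {medium} critically exploring {theme}"

if __name__ == "__main__":
    print(proto_concept())
```

With only a handful of keywords per field, the combinations already number in the dozens, which hints at how an "exhaustive series" of reflex-art concepts becomes trivially cheap to produce.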

(@predartbot)


If i understand correctly, the tweets that will be turned into ‘real’ projects by artists are the ones that are most popular on the bot’s Twitter page. Don’t you expect Twitter followers to want to pervert the system and support only the most offensive or impossible ones? Although the level of interaction is much lower here, you could potentially end up with a monster like Microsoft’s AI Twitter chatbot…

It would be brilliant if people engaged with this project so much that they’d pervert, sabotage and hijack it. Actually, this Bot totally embraces risk as a core quality of the performative process!

And it is precisely on this subject that the famous Microsoft chatbot TAY was a valuable moment: a moment when the biases and mechanics of a programmed system, usually rather inconspicuous, suddenly became visible.

There’s something a bit disturbing about the Predictive Art Bot. It seems to suggest that robots are going to take the one thing that humans are proud of: creativity! They are not only going to ‘steal’ our jobs in the future (including the ‘intellectual’ ones) and do everything far better and faster than us but they might end up being better at making films, artworks, music that will amaze us. Is that something you’d like to comment on?

“Individuals who operate effectively in our culture have already been considerably augmented.” — Doug Engelbart, Augmenting Human Intellect, 1962

In art, the production of concepts remains at the very top of the value chain. By operating on this field of thought and inspiration, Predictive Art Bot proposes to push the ideologies of delegation to the machine and of technological augmentation to their limits.

But it is also an attempt to escape the solutionist ideology promoted by the charlatans of AI, who became experts at throwing around science fiction clichés in order to secure R&D budgets.

We need to deconstruct the mythology that surrounds AI because it sometimes verges on the religious, mystical or magical. The reality is quite different: the algorithms have already taken control and they are biased and stupid.

If we look at the services of alleged AIs, we find, for example, content written by turkers, collages of answers given by other users, or clusters of possible feedback put together by writers.

This Gizmodo article is great in this context because it shows humans teaching algorithms and, even further, humans doing the micro-tasks that the AI application pretends to do but actually can’t. In a series of articles published around the same time, we could also discover the universe of poets, psychologists and other writers who design responses for alleged AI services.

Eliza chatterbot (image)

Predictive Art Bot is a very precarious system, and this is a deliberate and conscious choice. Besides, the first chatbot, Eliza by Joseph Weizenbaum (1966), was a great influence, because of its effectiveness in evoking an intelligence, while there was in fact only a simple program. It was already at the time a strong criticism of the research and ideology that surround artificial intelligence.
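Eliza’s power to evoke intelligence from “only a simple program” rests on nothing more than pattern matching: a handful of rules that reflect the user’s statement back as a question. A minimal, hypothetical sketch of that principle (these few rules are illustrative, not Weizenbaum’s original DOCTOR script):

```python
import re

# Eliza-style rules: (regex pattern, response template).
# Each rule captures part of the user's statement and mirrors it back.
RULES = [
    (r"\bi need (.+)", "Why do you need {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
]

def reply(utterance):
    """Reflect the user's statement back as a question, Eliza-style."""
    text = utterance.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."  # default deflection when nothing matches

print(reply("I feel replaced by algorithms"))  # -> Why do you feel replaced by algorithms?
```

The program has no model of meaning at all, yet the mirrored question reads as attentive listening, which is exactly the gap between evoked and actual intelligence that Weizenbaum wanted to expose.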

“In the distant past, warlords of the Dark Ages ransacked and impoverished what remained of civilization; nowadays, new Datalords behind padded doors are out to rob our past and resell it as our future.”
— Konrad Becker, “Operation Systems”, in Strategic Reality Dictionary (2009)

On the other hand, Predictive Art Bot takes its cue from, and hijacks, research on the prediction and identification of trends, which is currently booming both in engineering and in marketing, with a profusion of derivative services (e.g. IBM Watson). This bot aims to ape these methods in a very improvised way. It also uses an application that is both experimental and verging on the absurd to render these challenges more perceptible.

The concepts of the bot are essentially designed to stimulate the imagination and to trigger desires and interpretation processes. It is precisely in these human interpretations that lie the endless choices that determine the relevance of a realized project. And it is this same process of interpretation that will distinguish a successful human / non-human augmentation :)


What made you start this research?

Predictive Art Bot is based on the premise that the recipes for the production of artistic ideas (in a given cultural field) tend towards normalization and convergence. These recipes then come extremely close to a program, or to primary and predictable organisms. Gilles Deleuze (here quoting Uexküll) efficiently illustrates what the “world” perceived by a simple organism (an entity ready to react to certain types of hyper-limited stimuli, according to a set of automated reflexes) would be like:

Deleuze/Parnet, A is for Animal

Predictive Art Bot proposes not only to reproduce this precarious mechanism of concept production, but also, and especially, to make it progressively more deviant.


Are you sometimes surprised by what the bot comes up with? i actually thought that some of the tweets look like the most ‘normal’ media art works, the ones that get awards because they echo what is considered edgy and ‘subversive’ at a given moment.

The bot proposes fairly unusual angles of approach. But in the background, it attempts to exhaust ad absurdum the current rhetorical combinations operating in the artistic field examined, which can be regarded as a caricatural simplification of the brainstorming process.

Beyond that, the bot also presents a potential for prediction and fiction. Every now and then, you get concepts that are almost impossible, and it is precisely this ‘almost’, these overflows, that we’re pursuing: when concepts become unrealistic or impracticable, they trigger a mental exercise akin to conceptual art.
It is somehow a contemporary extension of protocols based on chance and interpretation, made popular by the historical avant-gardes: Dada poems, exquisite corpses, magic realism, the instructions for Fluxus events, and even protocols such as…

Programming for Artists