Category Archives: artificial intelligence

Exploitation Forensics. Interview with Vladan Joler


Vladan Joler and Kate Crawford, Anatomy of an AI system (detail)

If you find yourself in Ljubljana this week, don’t miss SHARE Lab. Exploitation Forensics at Aksioma.

The exhibition presents maps and documents that SHARE Lab, a research and data investigation lab based in Serbia, has created over the last few years in order to prise open, analyze and make sense of the black boxes that hide behind our most used platforms and devices.

The research presented at Aksioma focuses on two technologies that modern life increasingly relies on: Facebook and Artificial Intelligence.

The map dissecting the most famous social media ‘service’ might be sober and elegant but the reality it uncovers is anything but pretty. A close look at the elaborate graph reveals the exploitation of material and immaterial labour and the generation of enormous amounts of wealth with very little redistribution (to say the least.) As for the map exploring the deep materiality of AI, it dissects the whole supply chain behind the technology deployed by Alexa and any other ‘smart’ device, from mining to transport, with further exploitation of data, labour and resources along the way.

Should you not find yourself in Ljubljana, then you can still discover the impulses, findings and challenges behind the maps in this video recording of the talk that the leader of the SHARE Lab, Prof. Vladan Joler, gave at Aksioma two weeks ago:

Talk by Vladan Joler at the Aksioma Project Space in Ljubljana on 29 November 2017

In the presentation, Joler talks about the superpowers of social media and the invisible infrastructures of AI, but he also makes fascinating forays into the quantification of nature, the language of neural networks, accelerating inequality gaps, troll-hunting and issues of surveillance capitalism.

I also took the Aksioma exhibition as an excuse to ask Vladan Joler a few questions:


SHARE Lab (Vladan Joler and Andrej Petrovski), Facebook Algorithmic Factory (detail)

Hi Vladan! The Facebook maps and the information that accompanies them on the SHARE Lab website are wonderful but also a bit overwhelming. Is there anything we can do to resist the way our data are used? Is there any way we can still use Facebook while maintaining a bit of privacy, and without being too exploited or targeted by potentially unethical methods? Would you advise us to just cancel our Facebook account? Or is there a kind of middle way?

I have my personal opinion on that, but the issue is that in order to make such a decision each user should be able to understand what happens to their private data, the data generated by their activity and behaviour, and the many other types of data that are being collected by such platforms. However, the main problem, and the core reasoning behind our investigations, is that what happens within Facebook, for example, i.e. the way it works, is something that we can call a black box. The darkness of those boxes is shaped by many different layers of opacity. From different forms of invisible infrastructures, through the ecosystems of algorithms, to many forms of hidden exploitation of human labour, all those dark places are not meant to be seen by us. The only thing that we are allowed to see are the minimalist interfaces and shiny offices where play and leisure meet work. Our investigations are exercises in testing our own capacities as independent researchers to sneak in and shed some light on those hidden processes. So the idea is to try and give the users of those platforms more facts so that they are able to decide whether the price they are paying might be too high in the end. After all, this is a decision that each person should make individually.

Another issue is that the deeper we went into those black boxes, the more conscious we became of the fact that our capacities to understand and investigate those systems are extremely limited. Back to your question: personally I don’t believe that there is a middle way, but unfortunately I also don’t believe that there is a simple way out of this. We should probably try to think about alternative business models and platforms that are not based on surveillance capitalism. We keep repeating this mantra about open source, decentralised, community-run platforms, to no real effect.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

The other depressing thing is that for many people, Facebook IS the internet. They just don’t care about privacy, privacy belongs in the past and being targeted is great because it means that Facebook is extra fun and useful. Do you think that fighting for privacy is a futile battle? That we should just go with the flow and adapt to this ‘new normal’?

It is interesting to think that privacy already belongs to the past since, historically speaking, privacy as we understand it today is not an old concept. It is questionable whether there was ever a moment in time when we had properly defined our right to privacy and were able to defend it. So, from my point of view, it is more a process of exploration and an urge to define, at each moment, what privacy means in the present. We should accept a decentralised view of the term privacy and accept that this word has a different meaning for different cultures, and not just impose, for example, the European view of privacy. Currently, with such fast development of technology, with the lack of transparency-related tools and methodologies, outdated laws and ineffective bureaucracies, we are left behind in understanding what is really going on behind the walls of the leading corporations whose business models are based on surveillance capitalism. Without understanding what is going on behind the walls of the five biggest technology firms (Alphabet, Amazon, Apple, Facebook and Microsoft) we cannot rethink and define what privacy in fact is nowadays.

The dynamics of power on the Web have dramatically changed, and Google and Facebook now have a direct influence over 70% of internet traffic. Our previous investigations showed that 90% of the websites we investigated had some Google cookies embedded. So they are the Internet today, and even more: their power is spilling out of the web into many other segments of our lives, from our bedrooms, cars and cities to our bodies.
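To give a sense of how a finding like the 90% figure could be checked, here is a toy sketch of my own (the domain list, sample sites and function names are my invention, not SHARE Lab’s actual method): given the third-party hosts a page loads, test whether any of them belong to a Google-owned domain.

```python
# Illustrative only: a minimal check for Google-owned third-party hosts.
GOOGLE_DOMAINS = ("google.com", "googleapis.com", "doubleclick.net",
                  "google-analytics.com", "gstatic.com")

def loads_google(third_party_hosts):
    """Return True if any third-party host belongs to a Google-owned domain."""
    return any(host == d or host.endswith("." + d)
               for host in third_party_hosts
               for d in GOOGLE_DOMAINS)

# Invented sample data standing in for a crawl of real sites.
sites = {
    "example-news.com": ["cdn.example.net", "www.google-analytics.com"],
    "example-shop.com": ["static.example-shop.com"],
}
share = sum(loads_google(hosts) for hosts in sites.values()) / len(sites)
print(f"{share:.0%} of sampled sites load a Google-owned host")
```

A real survey would of course need a headless browser to record the requests each page actually makes; the sketch only shows the classification step.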


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

Could you explain to us the title of the show, “Exploitation Forensics”?

The Oxford dictionary gives two main uses of the word exploitation: (1) the action or fact of treating someone unfairly in order to benefit from their work and (2) the action of making use of and benefiting from resources. Both senses are essentially related to the two maps featured in the exhibition. We can read our maps as visualisations of a process of exploitation, regardless of whether we speak about the exploitation of our immaterial labour (content production and user behaviour as labour) or go deeper and say that we are not even workers in that algorithmic factory, but pure, raw material, i.e. a resource (user behavioural data as a resource). Each day, the users of Facebook provide 300,000,000 hours of unpaid immaterial labour, and this labour is transformed into 18 billion US dollars of revenue each year. We can argue about whether that is something we can call exploitation, for the simple reason that users use those platforms voluntarily, but for me the key question is: do we really have the option to stay out of those systems anymore? For example, our Facebook accounts are checked during visa applications, and the fact that you maybe don’t have a profile can be treated as an anomaly, as a suspicious fact.

Not having a profile will place you in a different basket, and maybe a different pricing model, if you want to get life insurance, and not having a LinkedIn account when you are applying for a job will certainly lower your chances of getting the job you want. Our options for staying out are more and more limited each day, and the social price we pay to stay out is higher and higher.

If our Facebook map somehow tries to visualise one form of exploitation, the other map, which had the unofficial title “networks of metal, sweat and neurons”, visualises three crucial forms of exploitation during the birth, life and death of our networked devices. Here we are drawing the shapes of exploitation related to different forms of human labour, the exploitation of natural resources and the exploitation of personal data, quantified nature and human-made products.

The word forensics usually refers to scientific tests or techniques used in connection with the detection of crime, and we used many different forensic methods in our investigations, since my colleague Andrej Petrovski has a degree in cyber forensics. But in this case the use of the word can also be treated as a metaphor. I like to think of black boxes such as Facebook, or of complex supply chains and hidden exploitations, as crime scenes. Crime scenes where different sorts of crimes against personal privacy, against nature or, let’s say in some broad sense, against humanity happen.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma

The maps are incredibly sophisticated and detailed. Surely you can’t have assimilated and processed all this data without the help of data crunching algorithms? Could you briefly describe your methodology? How did you manage to absorb all this information and turn it into a visualisation that is both clear and visually appealing?

In our previous investigations (e.g. Metadata Investigation: Inside Hacking Team or Mapping and quantifying political information warfare) we relied mostly on processes of data collection and data analysis, trying to apply methods of metadata analysis similar to the ones that organisations such as the NSA or Facebook probably use to analyse our personal data. For that we used different data collection methods and publicly available tools for data analysis (e.g. Gephi, Tableau, RAW Graphs). However, the two maps featured in the exhibition are mostly the product of a long process of diving and digging into publicly available documentation, such as the 8,000 publicly available patents registered by Facebook, their terms of service documentation and some available reports from regulatory bodies. At the beginning, we wanted to use some data analysis methods, but we very quickly realised that the complexity of Facebook’s data collection operations and the number of data points they use are so big that any kind of quantitative analysis would be almost impossible. This says a lot about our limited capacity to investigate such complex systems. By reading and watching hundreds of patents we were able to find some pieces of the vast mosaic of the data exploitation map we were making.
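To give a flavour of the kind of metadata analysis mentioned above, here is a toy sketch of my own (the names, fields and messages are invented, not SHARE Lab’s actual pipeline): even without any message content, communication metadata already yields a weighted social graph.

```python
from collections import Counter

# Invented message metadata: (sender, recipient, timestamp) tuples.
# No message content is needed to see who talks to whom, and how often.
metadata = [
    ("alice", "bob", "2015-07-01T09:12"),
    ("alice", "bob", "2015-07-01T17:40"),
    ("bob", "carol", "2015-07-02T08:05"),
    ("alice", "carol", "2015-07-03T11:30"),
]

# Count how often each pair communicates: these counts become the weighted
# edges of a contact graph that a tool like Gephi can lay out and analyse.
edges = Counter((sender, recipient) for sender, recipient, _ in metadata)

for (sender, recipient), weight in edges.most_common():
    print(f"{sender} -> {recipient}: {weight} messages")
```

The same counting step, scaled up to thousands of records and exported as an edge list, is roughly what turns a pile of logs into a network diagram.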

So those maps, even though they look somehow generative, made by algorithms, are basically drawn almost entirely by hand. Sometimes it takes months to draw such a complex map, but I have to say that I really appreciate the slowness of the process. Doing it manually gives you the time to think about each detail. They are cognitive maps based on collected information more than data visualisations.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma

In a BBC article, you are quoted as saying “If Facebook were a country, it would be bigger than China.” Which reminded me of several news stories that claim that the way the Chinese use the internet is ‘a guide to the future’ (cf. How China is changing the internet) Would you agree with that? Do you think that Facebook might eventually eat up so many apps that we’ll find ourselves in a similar situation, with our lives almost entirely mediated by Facebook?

The unprecedented concentration of wealth within the top five technology companies allows them to treat the whole world of innovation as their outsourced research and development. The anarcho-capitalist ecosystem of startups is based on the dream that one day one of those top five mega-companies will acquire them for millions of dollars.

If you just take a look at one of the graphs from our research on “The human fabric of the Facebook Pyramid”, mapping the connections within Facebook’s top management, you will probably realise that through their board of directors they have their feet in the most important segments of technological development, in combination with political power circles. This new hyper-aristocracy has the power to eat up any new innovation, any attempt that could potentially endanger their monopoly.

The other work in the Aksioma show is Anatomy of an AI system, a map that guides “visitors through the birth, life and death of one networked device, based on a centralized artificial intelligence system, Amazon Alexa, exposing its extended anatomy and various forms of exploitation.” Could you tell us a few words about this map? Does it continue the Facebook research or is it investigating different issues?

The Barcelona-based artist Joana Moll infected me with this obsession with the materiality of technology. For years we had been investigating networks and data flows, trying to visualise and explore different processes within those invisible infrastructures. But after working with Joana I realised that each of the individual devices we were investigating has, let’s say, another dimension of existence, one related to the process of its birth, life and death.

We started to investigate what Jussi Parikka described as the geology of media. In parallel, our previous investigations had a lot to do with issues of digital labour, beautifully explained in the works of Christian Fuchs and other authors, and this brought us to investigate complex supply chains and the labour exploitation within them.

Finally, together with Kate Crawford from the AI Now Institute, we started to develop a map that combines all those aspects in one story. The result is a map of the extended anatomy of one AI-based device, in this case the Amazon Echo. This anatomy goes really deep: from the exploitation of the metals embedded in those devices, through the different layers of the production process, hidden labour, fractal supply chains, internet infrastructures, the black boxes of neural networks and the process of data exploitation, to the death of those devices. The map basically combines and visualises three forms of exploitation: the exploitation of human labour, the exploitation of material resources and the exploitation of quantified nature, or we could say the exploitation of data sets. The map is still in beta and is a first step towards something that we are calling, at this moment, the AI Atlas, which should be developed together with the AI Now Institute during the next year.

Do you plan to build up an atlas with more maps over time? By looking at other social media giants? Do you have new targets in view? Other tech companies you’d like to dissect in the way you did Facebook?

The idea of an Atlas as a form has been there from the beginning of our investigations, when we explored different forms of networks and invisible infrastructures. The problem is that the deeper our investigations went, the more complex those maps became and the more they grew in size. For example, the maps exhibited at Aksioma are 4×3 m in size and there are still parts of them that are on the edge of readability. The complexity, scale and materiality of those maps became somehow a burden in themselves. For the moment there are two main forms of materialisation of our research: the main form is the stories on our website, and recently those big printed maps have started to have a life of their own in different gallery spaces. It is only recently that our work has been exhibited in an art context and I have to say that I kind of enjoy that new turn.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

How are you exhibiting the maps in the gallery? Do you accompany the prints with information texts or videos that contextualize the maps?

Yes. As you mentioned before, those maps are somewhat overwhelming, complex and not so easy to understand. On the website we have stories, narratives that guide readers through the complexities of those black boxes. But at the exhibitions we need to use different methods to help viewers navigate and understand those complex issues. Katarzyna Szymielewicz from the Panoptykon Foundation created a video narrative that accompanies our Facebook map, and we usually exhibit a pile of printed Facebook patents, so visitors can explore them by themselves.

Thanks Vladan!

SHARE Lab. Exploitation Forensics is at Aksioma | Project Space in Ljubljana until 15 December 2017.

Previously: Critical investigation into the politics of the interface. An interview with Joana Moll and Uberworked and Underpaid: How Workers Are Disrupting the Digital Economy.

AI, global warming, black holes and other impending global catastrophes! Videos for your weekend

I’m just back from a short trip to Dublin where i visited Design and Violence at the Science Gallery. I’ve LOTS to tell you about the exhibition. It’s dense, brilliant and sometimes also a bit disturbing. It challenges everything you think you know about what is good and what is bad, about design’s role in discriminating, torturing and drafting new forms of insidious brutality.

While i was in town, i had the chance to attend one of the Science Gallery’s evenings that explore impending global catastrophes. Called The End is Nigh, the series is not as dark and gloomy as the title suggests. Well, yes it is but there was also a lot of humour, irony and messages of hope in the discussions. The panel i attended, Automatic Disqualification: Will AI mean the end of work, or the end of humans?, explored the possible threats posed by artificial intelligence in the fields of employment, social inequalities and even the survival of the human race.

Video of THE END IS NIGH #2 – Automatic Disqualification: Will AI mean the end of work, or the end of humans?

The panelists were:
– Barry O’Sullivan, the deputy president of the European Association for Artificial Intelligence, who summed up the key concepts of AI, the extent of its presence in our daily life and the main threats that humanity might have to face in the near future,
– Niall Shanahan, a communications officer for IMPACT, Ireland’s largest public service trade union, focused on how/where/why AI is going to replace us in the work place,
– Mary Aiken, a forensic cyberpsychologist (probably the coolest title/job in the whole universe) whose work specializes in the impact of technology on human behaviour, pretty much dominated the evening. She talked about Google losing control of its search engine, lessons learnt and quickly forgotten in the area of AI, technology distracting us from the desire to be 21st century Luddites, moving from natural selection to algorithm selection, sexbots making human physical encounters IRL dispensable, etc.
– CJ Cullen, the Deputy Director of Communications and Information Services at the Irish Defence Forces, talked about (autonomous) killing machines.
The discussion was moderated by Anton Savage of Today FM.

Another panel looked at how we should deal with climate change: should we mitigate climate change now? Or should we wait for future technologies to solve our problems?

Video of THE END IS NIGH #3 – In Hot Water: Is Climate Change humanity’s Greatest Threat?

The panelists were: Cara Augustenborg, environmental scientist and lecturer at University College Dublin, Hugh Fitzpatrick, student in MSc Environmental Science TCD, and Barry McMullin, Associate Professor at DCU faculty of engineering and computing. The event was chaired by Constantine Boussalis, Assistant Professor in Political Science at Trinity College Dublin.

I missed that one unfortunately but i’m going to watch it tonight.

And i’m going to keep the first episode of the series, The End is Nigh: Asteroids, Comets, and Rogue Black Holes: Can Earth dodge a cosmic bullet?, for the weekend! This one looked at humanity’s best options to ensure survival in the event of planetary catastrophe.

Video of The End is Nigh 1: Asteroids, Comets, and Rogue Black Holes: Can Earth dodge a cosmic bullet?

The panelists were Mary Bourke, Assistant Professor of Geography at Trinity College Dublin, David McKeown, Assistant Professor of Design Innovation in the TCD School of Engineering and Niamh Shaw, engineer, scientist and artist.
The event was hosted by Joseph Roche, Assistant Professor of Science Education at TCD.

Photo on the homepage via Caribbean 360.

Book review: Artificial Intelligence. What Everyone Needs to Know

Artificial Intelligence. What Everyone Needs to Know, by computer scientist, researcher and futurist Jerry Kaplan

On amazon USA and UK.

Publisher Oxford University Press writes: The emergence of systems capable of independent reasoning and action raises serious questions about just whose interests they are permitted to serve, and what limits our society should place on their creation and use. Deep ethical questions that have bedeviled philosophers for ages will suddenly arrive on the steps of our courthouses. Can a machine be held accountable for its actions? Should intelligent systems enjoy independent rights and responsibilities, or are they simple property? Who should be held responsible when a self-driving car kills a pedestrian? Can your personal robot hold your place in line, or be compelled to testify against you? If it turns out to be possible to upload your mind into a machine, is that still you?

Sometimes i realize that i need a new perspective on technology. My main sources of information about science or technology are art exhibitions, social media channels run by activists and books by social scientists or philosophers. I decided to expand my horizons and check out what an engineer has to say about technology. In particular artificial intelligence.

I thought a book like Artificial Intelligence. What Everyone Needs to Know wouldn’t overwhelm me with nerdiness. The volume is part of an Oxford University Press series that aims to offer compact and balanced monographs on complex issues in a Q&A format.

In his intro to the book, computer scientist and futurist Kaplan promises to give nontech readers an overview of the key issues and arguments about the main social, ethical, legal and economic issues raised by Artificial Intelligence.

The experience didn’t start too well for me… The first part is remarkably techy for a book that promises not to scare off the amateur. It’s not difficult to follow at all but i was there for the ethics, the critiques and the possible pitfalls of AI! I soldiered on nonetheless and read about the intellectual history of AI, the history of machine learning, the various types of AI (actually that part was very interesting, it gives grounding and clarity to the whole field), etc.

JPL’s RoboSimian exits its vehicle following a brief drive through a slalom course at the DARPA Robotics Challenge Finals. Photo: J. Krohn/ JPL-Caltech

Things picked up for me at chapter 4, the one that studies the philosophy of AI and how it poses a series of challenges to philosophical and religious doctrines, which often orbit around human uniqueness and our place in the universe. Whereas the first few chapters explained terms such as computer vision, speech recognition and natural language processing, chapter 4 invites readers to reconsider and refine their understanding of intelligence, free will, consciousness and what it means to be ‘alive.’ Automated methods are slowly nibbling at the list of abilities previously considered the sole province of humans. Think of chess, for example. Pre-Deep Blue, being a master of chess was regarded as the epitome of being intelligent and human. Then in 1997, Garry Kasparov was defeated by a computer and we had to find new benchmarks to define human intelligence.

The following chapters kept getting more and more relevant to my interests as they explored the impact that AI will have – or already has – on law, on human labor and on social equity (although the disruptive effects of AI are not inevitable, it is quite likely that income inequality will get worse), and the book ends by looking at the possible future impacts of artificial intelligence.

The questions Kaplan explores are fascinating. I sometimes wished he had added more details and depth to several of the issues he presents, but i guess the particular format of the book made it difficult for him to be too lengthy. Here are some of the questions he answers (and sometimes admits that we don’t quite have the framework yet to answer with certainty):

Should people bear full responsibility for their intelligent agents (if your autonomous car hits someone)? Should an AI system be permitted to own property? Could an AI system commit a crime (answer is yes) and can it be held accountable for it? Can a computer ‘feel’? Which professions are under threat of being automated in the near future? Will i be able to upload myself into a computer? How can we minimise future risks posed by the machines? What will be the impact of AI on social equity? What are the benefits and the risks of making computers that act like people? Who’s going to benefit from this tech revolution? Are there alternatives to our current labor-based economy?

Artificial Intelligence. What Everyone Needs to Know is not a book i would normally pick up but i’m glad i did. There is much hype and fear around robots and artificial intelligence and it’s difficult to get a clear view of what lies ahead of us. Much of the public perception of AI is shaped by Hollywood, sensationalist headlines, and videos of robots interacting flawlessly with a trained demonstrator. The reality, as Kaplan demonstrates in this book, is a bit more complicated:

IEEE Spectrum, A compilation of robots falling down on Day 1 of the DARPA Robotics Challenge Finals, 2015


Persona. Or how objects become human

Wang Zi Won, Mechanical Avalokitesvara, 2015

Ghost Hunter suitcase and alphabet for ouija, 1926-1940, Surnatéum, Brussels. Photo Claude Germain

Kenji Yanobe, Sweet Harmonizer II , 1995

The Musée du Quai Branly in Paris is probably one of the few places in the world where you can see post-apocalyptic outfits, ghost hunter instruments, divination robots, Nigerian monoliths bearing minimal human features, Mezcala anthropomorphic figurines, the egg of a titanosaurus, Japanese Bunraku puppets and other historical or contemporary artifacts in the same exhibition.

Persona. Strangely Human lines up over 200 objects and videos to probe how ancient and contemporary cultures infuse life and persona into things.

Many objects have a status more similar to that of a person or a creature than that of a simple object. Works of art – Western or non-Western, popular or contemporary –, or high-tech products – robots, machines, etc. – are regularly endowed, in their use, with unexpected capacities for action, which render them almost people. Like a child devoted to its cuddly toy or someone who curses their computer or mobile accusing it of being incompetent or stubborn. Like the shaman who calls on the spirits through a statuette resembling the gods.

The backdrop of the exhibition is of course the ongoing debates regarding transhumanism, artificial intelligence and the increasingly blurry borders that separate humans from machines. But what makes the exhibition of the Musée du Quai Branly original and different from the shows i usually cover is that its approach is mostly anthropological. The curators are anthropologist Emmanuel Grimaud, ethnologist Anne-Christine Taylor-Descola, anthropologist Denis Vidal and art historian Thierry Dufrêne. Together they gather artifacts from all over the world to explore questions such as: How does the inanimate become animate? How do people establish an unusual or intimate relationship with objects?


Persona, Étrangement humain (trailer)

The exhibition investigates the human in the non-human through four different paths.

The first one looks at ‘unidentified presences’, the ones that we think we can detect in a vague shape, or an unexpected sound. It seems that, as humans, we are ‘wired’ to anthropomorphise, to identify life where there is objectively only a bunch of abstract shapes.

In 1944, psychologists Fritz Heider and Marianne Simmel showed subjects a short animation of independently moving geometric shapes. They found that most people couldn’t help attributing intentional movements, personalities and goal-directed interactions to the shapes, even in the absence of common social cues like body language, facial expressions or speech. The experiment shows how spontaneously humans attribute feelings and thoughts to barely anthropomorphic shapes.


Fritz Heider & Marianne Simmel, Experimental study of apparent behavior, 1944

In 2008, the BBC re-created a controversial sensory deprivation experiment. Six people were taken to a nuclear bunker and left alone for 48 hours. Three subjects were left alone in dark, sound-proofed rooms, while the other three were given goggles and foam cuffs and had white noise piped into their ears. The volunteers suffered anxiety, extreme emotions, paranoia and a significant deterioration in their mental functioning. They also hallucinated, seeing or hearing thousands of empty oyster shells, a snake, zebras, tiny cars, the room taking off, mosquitoes and fighter planes buzzing around.


BBC, 48 Hours of Total Isolation (The volunteers begin to hallucinate)

Meanwhile in Thailand, people adopt Kuman Thong, or “Gold Baby.” The little household effigy contains the spirit of a mythical child. Its owner has to care for it as if it were a real child, show it affection and talk to it every day. A bit like you would do with a tamagotchi.

Kuman Thong

A second section of the show explores the persons you might want to ‘detect’ and communicate with: ghosts, spirits, apparitions, etc.

I wasn’t expecting to find Thomas Edison there. At the end of his life, the famous inventor was said to have been working on a device for communicating with the dead. The “spirit phone” or telephone to the Dead would have enabled paranormal researchers to work ‘in a strictly scientific way.’

The idea for the device came through a correspondence between Edison and Sir William Crookes, the British scientist who claimed to have captured images of spirits in photographs. These images allegedly encouraged Edison. The machine never saw the light of day, hence the skepticism that surrounds the story.

Image via unreal facts

0crookeskatieking1
William Crookes, Photos with Katie King

The divination apparatus below appears to have been developed in response to sudden changes in Pende culture, in particular the arrival of colonialists in the region. These changes in society fueled demands for new tools that might afford insight into unfamiliar experiences.

During consultation, the diviner would lay the instrument on his knees with the head facing up while names of individuals suspected of crimes were recited. The galukoji‘s head would spring upward when the culprit’s name was uttered.

Galukoji, Divinatory instrument, Pende region, Congo, 1920 – 1950. Photo Claude Germain

Divination statue (Kafigeledio), Ivory Coast, XIXth-early XXth century. These oracle effigies were manipulated by members of secret societies to detect who was lying

Spirit hand Martinka and Memento mori ring, late XIX and XVIIth century

Used during the cohoba ritual, the spatula below helped participants vomit before the ceremony and thus purify their bodies. The participant would then inhale a potent hallucinogen, entering a trance that facilitated contact with supernatural beings.

Vomit-inducing spatula, Martinique, circa 1200 – 1492. Photo Patrick Gries

The third chapter in the exhibition studies what robotics professor Masahiro Mori called the Uncanny Valley, the threshold crossed by things that look so nearly human that they end up repelling us. Rather than trying to replicate human appearance exactly, Mori suggested that designers explore zoomorphism or draw inspiration from other art forms (Bunraku theatre, religious statuary, etc.) to produce effects of empathy, attachment and even hypnosis.

This section features Vanuatu marionettes, prostheses and mummies that all evoke the human form and seem to both attract and repel the viewer.

Human skull covered with human hair, animal teeth and tinted animal skin. Death here evokes a feeling of “uncanny strangeness.”

Anthropomorphic crest, Cross River (Africa.) Photo Thierry Olivier, Michel Urtado

Mummy, undated parched head of Mundurucu Indian, Brazil

Jean Dupuy’s dust sculpture comes to life as soon as it is connected to the heartbeats of the visitors. The dust is actually an extremely low-density red pigment called Lithol Rubine that has the ability to remain suspended in the air for long periods.

Jean Dupuy, Cone Pyramid (Heart beats dust), 1968 (photo)

Performance of the piece at the exhibition Für Augen und Ohren, Akademie der Künste, Berlin, 1980 (photo)

Automata of the gods are still displayed during religious feasts in India today. The figures are used to capture attention, tell myths or accompany rituals. Their slow and hypnotic gestures put people in a state that prepares them for devotion.

Matsya automaton, avatar of the god Vishnu. Concept by Ankush Bhaikar for “Persona. Strangely Human.” Photo Emmanuel Grimaud

Vanuatu marionette. Photo Gautier Deblonde © musée du quai Branly

The final part of the exhibition, “Show Home”, invites you to enter a dwelling and meet the interfaces, devices and robots that might one day be part of our family. How shall we cohabit with them?

Some of the pieces on show are the ones you would expect to see there: robots and life-like love dolls. But you will also discover a collection of phallic amulets and anthropomorphic spoons.

Stan Wannet‘s electro-mechanical installation features a pair of baboons playing a classic gambling trick. The work is a direct reference to both Wolfgang von Kempelen’s Chess Playing Turk and Hieronymus Bosch’s painting The Conjurer ‘in an attempt to blur the artificial borders between our rational, polite and slightly ambitious selves on the one hand and the more primal, greedy and curious us on the other.’


Stan Wannet, Civilized Aspirations in Art, Monkeys and small time Entrepreneurs

Divinatory robots such as the one below were popular in Mumbai in the 1990s. They were made using discarded Japanese toys. Named after the Sanskrit bhavishya (“destiny, future”), the robot is an interface to divination: it predicts the future in three languages in exchange for a few coins.

Bhaishyavani, divination robot, end of the XXth century. Photo Claude Germain

The little sculptures below are made using kitchen tools. They are designed as “real incarnations of gods.” They assist users in their everyday lives, but they can also turn against them.

Two Haitian sculptures from the nineteenth century representing the Ogou loa. Photo Claude Germain

Danny Van Ryswyk, Strange Days Have Found Us

Danny Van Ryswyk, Return of the Venusian, 2015

Some of the images i took during my visit of the exhibition are on flickr.

Persona. Strangely Human remains open at the Musée du Quai Branly in Paris until 13 November 2016.

MENACE 2, an artificial intelligence made of wooden drawers and coloured beads

Julien Prévieux, MENACE 2 (Machine Educable Noughts and Crosses Engine), 2010. Image Jousse Entreprise

In 1961, Donald Michie, a British WWII code breaker and a researcher in artificial intelligence, developed MENACE (the Machine Educable Noughts And Crosses Engine), one of the first programs capable of learning to play and win a game of Noughts and Crosses (or Tic-Tac-Toe if you’re American.) The work emerged from his wartime discussions with Alan Turing about whether or not computers could be programmed to learn from experience.

Since he had no computer at his disposal at the time, he created a device built out of matchboxes and glass beads to simulate a learning algorithm.

A few years ago, Julien Prévieux (who’s imho one of the most interesting artists of the moment) recreated the machine in the form of a beautiful wooden piece of furniture. MENACE 2 (Machine Educable Noughts and Crosses Engine) can be played right now at Kunsthalle Wien where it is part of The Promise of Total Automation, an exhibition that explores machines and their potential to elevate or enslave us (i reviewed it last week.)

Julien Prévieux, MENACE 2 (Machine Educable Noughts and Crosses Engine), 2010. Image Jousse Entreprise

Here’s how MENACE works:

There are 304 little wooden drawers (matchboxes in the original version created by Michie.) Each of them represents a unique board position that the player can encounter during a game. Each drawer is filled with coloured beads, each colour representing a different move in that board state. The quantity of a colour indicates the “certainty” that playing the corresponding move will lead to a win.

Menace “learns” to win the game by playing repeatedly against the human player, honing its strategy until its opponent is only able to draw or lose against it. The trial and error learning process involves being “punished” for losing and “rewarded” for drawing or winning. This type of machine learning is called reinforcement learning.

Menace always plays first.

1st move, Menace’s turn: The operator opens the top left drawer, the one that displays the empty board, and takes out a random bead. The colour of that bead determines the space that Menace will mark with a cross or a nought. The player marks the move on the grid and puts the bead in a specially-designed little container added in front of the drawer to remember Menace’s move.

2nd move, the human player places his or her counter on the grid.

3rd move, Menace’s turn: the player identifies the drawer that displays the current board layout, opens it and takes out a random bead. Once again, the colour of that bead determines the space that Menace will mark with a cross or a nought. The process is identical to the 1st move.

4th move, player’s turn: The human player places the counter in the spot that will prevent Menace from getting three in a row.

The operator/player keeps on playing until the end.

When Menace loses, the beads that were played are put aside in a bag. This decreases the probability of Menace making those moves in those states again. If the game was a draw, each bead is returned to its drawer along with an extra bead of the same colour. And if Menace wins, each bead is returned with three extra beads of the same colour, making it more likely that Menace will make those moves again next time.
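The drawer-and-bead mechanics above translate almost directly into code. Below is a minimal Python sketch of a MENACE-style learner, assuming a board encoded as a 9-character string with ' ' for empty squares; the class and method names are my own, not Michie’s or Prévieux’s, and the bead arithmetic follows the punish/reward scheme described above:

```python
import random

class Menace:
    """A minimal sketch of Michie's matchbox learner."""

    def __init__(self, initial_beads=3):
        self.boxes = {}    # board state -> {move index: bead count}
        self.history = []  # (state, move) pairs played in the current game
        self.initial_beads = initial_beads

    def choose(self, state):
        """Draw a weighted-random bead from the drawer for this board state."""
        if state not in self.boxes:
            # Fill a fresh drawer with beads for every legal move.
            legal = [i for i, c in enumerate(state) if c == ' ']
            self.boxes[state] = {m: self.initial_beads for m in legal}
        box = self.boxes[state]
        move = random.choices(list(box), weights=list(box.values()))[0]
        self.history.append((state, move))
        return move

    def learn(self, result):
        """Reinforce the game just played: remove a bead from each drawer
        used on a loss, add one bead on a draw, three on a win."""
        delta = {'loss': -1, 'draw': 1, 'win': 3}[result]
        for state, move in self.history:
            # Design choice in this sketch: never empty a drawer completely,
            # so every legal move keeps a nonzero probability.
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history.clear()
```

Playing many games against an opponent and calling `learn()` after each one gradually shifts the bead counts toward winning moves, which is the whole of the "training" process.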

Julien Prévieux, MENACE 2 (Machine Educable Noughts and Crosses Engine), 2010. Image Jousse Entreprise

This wooden version of artificial intelligence is crafty, unassuming and stylish, but its low-tech appearance might make us forget the fears and doubts that artificial intelligence can inspire. AI is no longer a topic of science fiction novels; it is a field of research that’s evolving rapidly and is seen by some as threatening to take over our jobs and govern our daily lives.

The Promise of Total Automation was curated by Anne Faucheret. The exhibition is open until 29 May at Kunsthalle Wien in Vienna. Don’t miss it if you’re in the area.

Also in the exhibition: Prototype II (after US patent no 6545444 B2) or the quest for free energy. For my review of the show, press play.