
Exploitation Forensics. Interview with Vladan Joler


Vladan Joler and Kate Crawford, Anatomy of an AI system (detail)

If you find yourself in Ljubljana this week, don’t miss SHARE Lab. Exploitation Forensics at Aksioma.

The exhibition presents maps and documents that SHARE Lab, a research and data investigation lab based in Serbia, has created over the last few years in order to prize open, analyze and make sense of the black boxes that hide behind our most used platforms and devices.

The research presented at Aksioma focuses on two technologies that modern life increasingly relies on: Facebook and Artificial Intelligence.

The map dissecting the most famous social media ‘service’ might be sober and elegant, but the reality it uncovers is anything but pretty. A close look at the elaborate graph reveals the exploitation of material and immaterial labour and the generation of enormous amounts of wealth with very little redistribution along the way (to say the least). As for the map exploring the deep materiality of AI, it dissects the whole supply chain behind the technology deployed by Alexa and any other ‘smart’ device, from mining to transport, with further exploitation of data, labour and resources in the process.

Should you not find yourself in Ljubljana, then you can still discover the impulses, findings and challenges behind the maps in this video recording of the talk that the leader of the SHARE Lab, Prof. Vladan Joler, gave at Aksioma two weeks ago:

Talk by Vladan Joler at the Aksioma Project Space in Ljubljana on 29 November 2017

In the presentation, Joler talks about the superpowers of social media and the invisible infrastructures of AI, but he also makes fascinating forays into the quantification of nature, the language of neural networks, accelerating inequality gaps, troll-hunting and issues of surveillance capitalism.

I also took the Aksioma exhibition as an excuse to ask Vladan Joler a few questions:


SHARE Lab (Vladan Joler and Andrej Petrovski), Facebook Algorithmic Factory (detail)

Hi Vladan! The Facebook maps and the information that accompanies them on the SHARE Lab website are wonderful but also a bit overwhelming. Is there anything we can do to resist the way our data are used? Is there any way we can still use Facebook while maintaining a bit of privacy, and without being too exploited or targeted by potentially unethical methods? Would you advise us to just cancel our Facebook account? Or is there a kind of middle way?

I have my personal opinion on that, but the issue is that in order to make such a decision each user should be able to understand what happens to their private data, the data generated by their activity and behaviour, and the many other types of data that are being collected by such platforms. However, the main problem, and the core reasoning behind our investigations, is that what happens within Facebook, for example, the way it works, is something we can call a black box. The darkness of those boxes is shaped by many different layers of opacity. From different forms of invisible infrastructure, through ecosystems of algorithms, to many forms of hidden exploitation of human labour, all those dark places are not meant to be seen by us. The only things we are allowed to see are the minimalist interfaces and the shiny offices where play and leisure meet work. Our investigations are exercises in testing our own capacities as independent researchers to sneak in and shed some light on those hidden processes. So the idea is to give the users of those platforms more facts, so that they are able to decide whether the price they are paying might be too high in the end. After all, this is a decision that each person should make individually.

Another issue is that the deeper we went into those black boxes, the more conscious we became of the fact that our capacities to understand and investigate those systems are extremely limited. Back to your question: personally I don’t believe there is a middle way, but unfortunately I also don’t believe there is a simple way out of this. We should probably try to think about alternative business models and platforms that are not based on surveillance capitalism. We keep repeating this mantra about open source, decentralised, community-run platforms, to no real effect.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

The other depressing thing is that for many people, Facebook IS the internet. They just don’t care about privacy: privacy belongs to the past, and being targeted is great because it makes Facebook extra fun and useful. Do you think that fighting for privacy is a futile battle? That we should just go with the flow and adapt to this ‘new normal’?

It is interesting to think that privacy already belongs to the past, since, historically speaking, privacy as we understand it today is not an old concept. It is questionable whether we ever had a moment in time when we had properly defined our right to privacy and were able to defend it. So, from my point of view, it is more a process of exploration, an urge to define at each moment what privacy means in the present time. We should accept a decentralised view of the term privacy, accept that for different cultures this word has a different meaning, and not just assume, for example, the European view of privacy. Currently, with such fast development of technology, with the lack of transparency-related tools and methodologies, outdated laws and ineffective bureaucracies, we are left behind in understanding what is really going on behind the walls of the leading corporations whose business models are based on surveillance capitalism. Without understanding what is going on behind the walls of the five biggest technology firms (Alphabet, Amazon, Apple, Facebook and Microsoft), we cannot rethink and define what privacy in fact is nowadays.

The dynamics of power on the Web have dramatically changed, and Google and Facebook now have a direct influence over 70% of internet traffic. Our previous investigations found that 90% of the websites we examined had some Google cookies embedded. So they are the Internet today, and more than that: their power is spilling out of the web into many other segments of our lives, from our bedrooms, cars and cities to our bodies.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

Could you explain the title of the show, “Exploitation Forensics”?

The Oxford dictionary gives two main senses of the word exploitation: (1) the action or fact of treating someone unfairly in order to benefit from their work, and (2) the action of making use of and benefiting from resources. Both senses are essentially related to the two maps featured in the exhibition. We can read our maps as visualisations of the exploitation process, whether we speak about the exploitation of our immaterial labour (content production and user behaviour as labour) or go deeper and say that we are not even workers in that algorithmic factory, but pure raw material, i.e. a resource (user behavioural data as a resource). Each day, users of Facebook provide 300,000,000 hours of unpaid immaterial labour, and this labour is transformed into some 18 billion US dollars of revenue each year. We can argue whether that is something we can call exploitation, for the simple reason that users use those platforms voluntarily, but for me the key question is: do we really have the option to stay out of those systems anymore? For example, our Facebook accounts are checked during visa applications, and the fact that you maybe don’t have a profile can be treated as an anomaly, as a suspicious fact.
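A quick back-of-envelope calculation, using only the two figures Joler cites above (his approximations, not audited numbers), puts that ratio of labour to revenue in perspective:

```typescript
// Rough back-of-envelope using the figures cited above; the inputs are approximations.
const hoursPerDay = 300_000_000;          // unpaid "immaterial labour" per day
const hoursPerYear = hoursPerDay * 365;   // ≈ 109.5 billion hours per year
const annualRevenueUSD = 18_000_000_000;  // cited yearly revenue

// Implied revenue generated per hour of user activity:
const revenuePerHour = annualRevenueUSD / hoursPerYear;
console.log(revenuePerHour.toFixed(2));   // ≈ 0.16 USD per hour of user "labour"
```

In other words, on these figures each hour of collective user activity translates into roughly sixteen cents of revenue, which is one way of reading the asymmetry the map draws.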

Not having a profile may place you in a different basket, and perhaps a different pricing model, if you want to get life insurance; and not having a LinkedIn account when you are applying for a job will certainly lower your chances of getting the job you want. Our options for staying out are more and more limited each day, and the social price we pay to stay out is higher and higher.

If our Facebook map is somehow trying to visualise one form of exploitation, the other map, which had the unofficial title “networks of metal, sweat and neurons”, visualises three crucial forms of exploitation during the birth, life and death of our networked devices. Here we are drawing the shapes of exploitation related to different forms of human labour, the exploitation of natural resources, and the exploitation of personal data, quantified nature and human-made products.

The word forensics usually refers to scientific tests or techniques used in connection with the detection of crime, and we used many different forensic methods in our investigations, since my colleague Andrej Petrovski has a degree in cyber forensics. But in this case the use of the word can also be treated as a metaphor. I like to think of black boxes such as Facebook, or of complex supply chains and hidden exploitation, as crime scenes. Crime scenes where different sorts of crimes happen: against personal privacy, against nature, or, in some broad sense, against humanity.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma

The maps are incredibly sophisticated and detailed. Surely you can’t have assimilated and processed all this data without the help of data-crunching algorithms? Could you briefly describe your methodology? How did you manage to absorb all this information and turn it into a visualisation that is both clear and visually appealing?

In our previous investigations (e.g. Metadata Investigation: Inside Hacking Team or Mapping and quantifying political information warfare) we relied mostly on processes of data collection and data analysis, trying to apply methods of metadata analysis similar to the ones that organisations such as the NSA or Facebook probably use to analyse our personal data. For that we used different data collection methods and publicly available tools for data analysis (e.g. Gephi, Tableau, Raw Graphs). However, the two maps featured in the exhibition are mostly the product of a long process of diving and digging into publicly available documentation, such as the 8,000 publicly available patents registered by Facebook, their terms of service documentation and some available reports from regulatory bodies. At the beginning, we wanted to use some data analysis methods, but we very quickly realised that the complexity of Facebook’s data collection operations and the number of data points they use are so big that any kind of quantitative analysis would be almost impossible. This says a lot about our limited capacity to investigate such complex systems. By reading and watching hundreds of patents we were able to find some pieces of the vast mosaic of data exploitation we were mapping.
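As a concrete illustration of the more quantitative end of that workflow: hand-collected relations (say, which patents touch which data-collection concepts) can be exported as a simple edge list that a tool like Gephi will import. This is only a sketch under assumptions, not SHARE Lab’s actual pipeline; the patent labels and file name below are hypothetical placeholders.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical hand-collected relations: each entry links a patent to a concept it describes.
type Edge = { source: string; target: string };

const edges: Edge[] = [
  { source: "Patent A", target: "emotion detection" },
  { source: "Patent A", target: "targeted advertising" },
  { source: "Patent B", target: "targeted advertising" },
];

// Gephi accepts a plain CSV edge list (Source,Target columns) through its spreadsheet import.
const csv = ["Source,Target", ...edges.map((e) => `"${e.source}","${e.target}"`)].join("\n");
writeFileSync("patent-concept-edges.csv", csv);
```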

So those maps, even though they look in some way generative, made by algorithms, are basically drawn almost by hand. Sometimes it takes months to draw such a complex map, but I have to say that I really appreciate the slowness of this process. Doing it manually gives you the time to think about each detail. These are cognitive maps based on collected information rather than data visualisations.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Jure Goršič / Aksioma

In a BBC article, you are quoted as saying “If Facebook were a country, it would be bigger than China.” Which reminded me of several news stories claiming that the way the Chinese use the internet is ‘a guide to the future’ (cf. How China is changing the internet). Would you agree with that? Do you think that Facebook might eventually eat up so many apps that we’ll find ourselves in a similar situation, with our lives almost entirely mediated by Facebook?

The unprecedented concentration of wealth within the top five technology companies allows them to treat the whole world of innovation as their outsourced research and development. The anarcho-capitalist ecosystem of startups is based on the dream that at some point one of those top five mega-companies will acquire them for millions of dollars.

If you just take a look at one of the graphs from our research on “The human fabric of the Facebook Pyramid”, mapping the connections within Facebook’s top management, you will probably realise that through their board of directors they have their feet in the most important segments of technological development, in combination with political power circles. This new hyper-aristocracy has the power to eat up any new innovation, any attempt that could potentially endanger their monopoly.

The other work in the Aksioma show is Anatomy of an AI system, a map that guides “visitors through the birth, life and death of one networked device, based on a centralized artificial intelligence system, Amazon Alexa, exposing its extended anatomy and various forms of exploitation.” Could you tell us a few words about this map? Does it continue the Facebook research or is it investigating different issues?

Barcelona-based artist Joana Moll infected me with this obsession with the materiality of technology. For years we had been investigating networks and data flows, trying to visualise and explore different processes within those invisible infrastructures. But after working with Joana I realised that each of the individual devices we were investigating has, let’s say, another dimension of existence, one related to the process of its birth, life and death.

We started to investigate what Jussi Parikka described as the geology of media. In parallel with that, our previous investigations had a lot to do with issues of digital labour, beautifully explained in the works of Christian Fuchs and other authors, and this brought us to investigate the complex supply chains and the labour exploitation within them.

Finally, together with Kate Crawford from the AI Now Institute, we started to develop a map that combines all those aspects in one story. The result is a map of the extended anatomy of one AI-based device, in this case the Amazon Echo. This anatomy goes really deep: from the exploitation of the metals embedded in those devices, through the different layers of the production process, hidden labour, fractal supply chains, internet infrastructures, the black boxes of neural networks and the process of data exploitation, to the death of those devices. The map basically combines and visualises three forms of exploitation: the exploitation of human labour, the exploitation of material resources and the exploitation of quantified nature, or, we can say, the exploitation of data sets. This map is still a beta version, and it is a first step towards something we are currently calling the AI Atlas, which should be developed together with the AI Now Institute over the next year.

Do you plan to build up an atlas with more maps over time? By looking at other social media giants? Do you have new targets in view? Other tech companies you’d like to dissect in the way you did Facebook?

The idea of an atlas as a form has been there from the beginning of our investigations, when we explored different forms of networks and invisible infrastructures. The problem is that the deeper our investigations went, the more complex those maps became and the more they grew in size. For example, the maps exhibited at Aksioma are 4×3 m in size, and there are still parts of them that are on the edge of readability. The complexity, scale and materiality of those maps became something of a burden in themselves. For the moment there are two main materialisations of our research: the main form is the stories on our website, and recently those big printed maps have started to have a life of their own in different gallery spaces. It is only recently that our work has been exhibited in an art context, and I have to say that I rather enjoy that new turn.


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša


SHARE Lab. Exploitation Forensics at Aksioma Project Space. Photo: Janez Janša

How are you exhibiting the maps in the gallery? Do you accompany the prints with information texts or videos that contextualize the maps?

Yes. As you mentioned before, those maps are somewhat overwhelming, complex and not so easy to understand. On the website we have stories, narratives that guide readers through the complexities of those black boxes. But at the exhibitions we need to use different methods to help viewers navigate and understand those complex issues. Katarzyna Szymielewicz from the Panoptykon Foundation created a video narrative that accompanies our Facebook map, and we usually exhibit a pile of printed Facebook patents, so that visitors can explore them by themselves.

Thanks Vladan!

SHARE Lab. Exploitation Forensics is at Aksioma | Project Space in Ljubljana until 15 December 2017.

Previously: Critical investigation into the politics of the interface. An interview with Joana Moll and Uberworked and Underpaid: How Workers Are Disrupting the Digital Economy.

Obfuscate your feelings on Facebook and defeat its algorithms in the process


Ben Grosser, Go Rando, 2017

At a time when truth is treated as a polymorphous and malleable commodity, when half of your country seems to live in another dimension, when polls are repeatedly proved wrong, when we feel more betrayed and crushed than ever by the result of a referendum or a political election, it is easy to feel disoriented and suspicious about what people around us are really thinking.

Blinding Pleasures, a group show curated by Filippo Lorenzin at arebyte Gallery in London, invites us to ponder our cognitive biases and the mechanisms behind the False Consensus effect and the so-called Filter Bubble. The artworks in the exhibition explore how we can subvert, comprehend and become more mindful of the many biases, subtle manipulations and mechanisms that govern the way we relate to news and, ultimately, to our fellow human beings.

Ben Grosser, Go Rando (Demonstration video), 2017

One of the pieces premiering at arebyte is Go Rando, a brand new browser extension by Ben Grosser that allows you to muddle your feelings every time you “like” a photo, link or status on Facebook. Go Rando randomly picks one of the six Facebook “reactions”, thus leaving your feelings, and the way you are perceived by your contacts, at the mercy of a seemingly absurd plug-in.

The impetus behind the work is far more astute and pertinent than it might seem, though. Every “like”, every sad or laughing icon is seen by your friends, but also processed by algorithms used for surveillance, government profiling, targeted advertising and content suggestion. By obfuscating the limited set of emotions Facebook offers you, the plug-in allows you to fool the platform’s algorithms, perturb its data collection practices and appear as someone whose feelings are emotionally “balanced”.
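For readers who wonder what this looks like mechanically, here is a minimal content-script sketch of the idea. It is not Grosser’s code, only an illustration under assumptions: the selector and the logging below stand in for whatever the real extension hooks into in Facebook’s interface.

```typescript
// Minimal illustration of the Go Rando idea; not the actual extension code.
// The six Facebook "reactions" at the time of writing:
const REACTIONS = ["Like", "Love", "Haha", "Wow", "Sad", "Angry"] as const;
type Reaction = (typeof REACTIONS)[number];

// Pick a reaction uniformly at random, so that no single feeling dominates over time.
function goRando(): Reaction {
  return REACTIONS[Math.floor(Math.random() * REACTIONS.length)];
}

// Hypothetical hook: a real extension would intercept the Like button in the page's DOM
// and trigger the chosen reaction instead of the user's habitual "Like".
document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement;
  if (target.closest("[data-like-button]")) { // hypothetical selector
    event.preventDefault();
    console.log(`Go Rando would send: ${goRando()}`);
  }
});
```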

If you want to have a go, installing Go Rando on your browser is a fairly straightforward task. And don’t worry, the extension also allows you to choose a specific reaction if you want to.

Grosser has been critically exploring, dissecting and perverting Facebook’s mechanisms for a number of years now. His witty artworks become strangely more relevant with each passing year, as Facebook keeps gaining in user numbers, influence and importance.

I caught up with Ben right after the opening of the Blinding Pleasures show:

Hi Ben! Your latest work, Go Rando, randomly assigns Facebook users an ‘emotion’ when you click “Like”. I hate to admit it but I’m not brave enough to use Go Rando. I’d feel too vulnerable and at the mercy of an algorithm. Also, I’d be too worried about the way my contacts would judge the assigned reactions. “Would I offend or shock anyone?” Are you expecting that many people will be as cowardly as I am? And more generally, what are you expecting people to discover or reflect upon with this work?

As users of Facebook I’d say we are always—as you put it—“at the mercy of an algorithm.” With Go Rando I aim to give users some agency over which algorithm they’re at the mercy of. Are they fully subject to the designs Facebook made available, or are they free to deviate from such a prescribed path? Because Go Rando’s purpose is to obfuscate one’s feelings on Facebook by randomly choosing “reactions” for them, I do expect some (many?) will share your concerns about using it.

However, whether one uses Go Rando or not, my intention for this work is to provoke individual consideration of the methods and effects of emotional surveillance. How is our Facebook activity being “read,” not only by our friends, but also by systems? Where does this data go? Whom does it benefit? Who is made most vulnerable?

With this work and others, I’m focused on the cultural, social, and political effects of software. In the case of Facebook, how are its designs changing what we say, how we say it, and to whom we speak? With “reactions” in particular, I hope Go Rando gets people thinking about how the way they feel is being used to alter their view of the world. It changes what they see on the News Feed. Their “reactions” enable more targeted advertising and emotional manipulation. And, as we’ve seen with recent events in Britain and America, our social media data can be used to further the agendas of corporate political machines intent on steering public opinion to their own ends.

Go Rando also gives users the option to select a specific reaction when they want to. That’s quite magnanimous. Why not be more radical and prevent users from intervening in the choice of emotion/reaction?

I would argue that allowing the user occasional choice is the more radical path. A Go Rando with no flexibility would be more pure, but would have fewer users. And an intervention with fewer users would be less effective, especially given the scale of Facebook’s 1.7 billion member user base. Instead, I aim for the sweet spot that is still disruptive yet broadly used, thus creating the strongest overall effect. This means I need to keep in mind a user like you, someone who is afraid to offend or shock in a tricky situation. The fact is that there are going to be moments when going rando just isn’t appropriate (e.g. when Go Rando blindly selects “Haha” for a sad announcement). But as long as the user makes specific choices irregularly, then those “real” reactions will get lost in the rando noise. And once a user adopts Go Rando, one of its benefits is that they can safely get into a flow of going rando first and asking questions later. They can let the system make “reaction” decisions for them, only self-correcting when necessary. This encourages mindfulness of their own feelings on/about/within Facebook while simultaneously reminding them of the emotional surveillance going on behind the scenes.


Opening of Blinding Pleasures. Photo: arebyte gallery


Opening of Blinding Pleasures. Photo: arebyte gallery

Go Rando is part of arebyte Gallery’s new show Blinding Pleasures. Could you tell us your own view on the theme of the exhibition, the False Consensus effect? How does Go Rando engage with it?

With the recent Brexit vote and the US Presidential election, I think we’ve just seen the most consequential impacts one could imagine of the false consensus effect. And even though I’m someone who was fully aware of the role of algorithms in social media feeds, I (and nearly everyone else I know) was still stunned this past November. In other words, we thought we knew what the country was thinking. We presumed that what we saw on our feeds was an accurate enough reflection of the world that our traditional modes of prediction would continue to hold.

We were wrong. So why? In hindsight, some of it was undoubtedly wishful thinking, hoping that my fellow citizens wouldn’t elect a racist, sexist, reality television star as President. But some of it was also the result of trusting the mechanisms of democracy (e.g. information access and visibility) to algorithms designed primarily to keep us engaged rather than informed. Facebook’s motivation is profit, not democracy.

It’s easy to think that what we see on Facebook is an accurate reflection of the world, but it’s really just an accurate reflection of what the News Feed algorithm thinks we want the world to look like. If I “Love” anti-Trump articles and “Angry” pro-Trump articles, then Facebook gleans that I want a world without Trump and gives me the appearance of a world where that sentiment is the dominant feeling.

Go Rando is focused on these feelings. By producing (often) inaccurate emotional reactions, Go Rando changes what the user sees on their feed and thus disrupts some of the filter bubble effects produced by Facebook. The work also resists other corporate attempts to analyze our “needs and fears,” like those practiced by the Trump campaign. They used such analyses to divide citizens into 32 personality types and then crafted custom messages for each one. Go Rando could help thwart this kind of manipulation in the future.

The idea of a False Consensus effect is overwhelming and it makes me feel a bit powerless. Are there ways that artists and citizens could acknowledge and address the impact it has on politics and society?

It is not unreasonable to feel powerless given the situation. So much infrastructure has been built to give us what (they think) we want, that it’s hard to push back against. Some advocate complete disengagement from social media and other technological systems. For most that’s not an option, even if it was desirable. Others develop non-corporate distributed alternatives such as Diaspora or Ello. This is important work, but it’s unlikely to replace global behemoths like Facebook anytime soon.

So, given the importance of imagining alternative social media designs, what might we do? I’ve come to refer to my process for this as “software recomposition,” treating sites like Facebook, Google, and others not as fixed spaces of consumption and interaction but as fluid spaces of manipulation and experimentation. In doing so I’m drawing on a lineage of net artists and hacktivists who use existing systems as their primary material. In my case, such recomposition manifests as functional code-based artworks that allow people to see and use their digital tools in new ways. But anyone—artist or citizen—can engage in this critical practice. All it takes is some imagination, a white board, and perhaps some writing to develop ideas about how the sites we use every day are working now, and how small or big changes might alter the balance of power between system and user in the future.


Ben Grosser, Go Rando, 2017


Ben Grosser, Go Rando, 2017

I befriended you on Facebook today not just because you look like a friendly guy but also because I was curious to see how someone whose work engages so critically and so often with Facebook was using the platform. You seem to be rather quiet on fb. Very much unlike some of my other contacts who carry their professional and private business very openly on their page. Can you tell us about your own relationship to Facebook? How do you use it? How does it feed your artistic practice? And vice versa, how have some of the projects you developed had an impact on the way you use Facebook and other social platforms?

When it comes to Facebook I’d say I’m about half Facebook user and half artist using Facebook.

I start with the user part, as many of my ideas come from this role. I use Facebook to keep up with friends, meet new people, follow issues—pretty normal stuff. But I also try to stay hyper-aware of how I’m affected by the site when using it. Why do I care how many “likes” I got? What causes some people to respond but others to (seemingly) ignore my latest post?

When these roles intersect, Facebook becomes a site of experimentation for me. I’m constantly watching for changes in the interface, and when I find them I try to imagine how they came to be. What is the “problem” someone thought this change was solving? I also often post about these changes, and/or craft tests to see how others might be perceiving them.

A favorite example of mine is a post I made last year:

“Please help me make this status a future Facebook memory.”

Nothing else beyond that, no explanation, no instruction. What followed was an onslaught of comments that included quotes such as: “I knew you could do it!!” or “great news!” or “Awesome! Congrats!” or “You will always remember where you were when this happened.” In other words, without discussion, people had an instinct about what kinds of content might trigger Facebook in the future to recall this post from this past. These kinds of experiments not only help me think about what those signals might be, but also illustrate how many of us are (unconsciously) building internal models about how Facebook works at the algorithmic level. Because of this, much of my work has a collaborative nature to it, even if those collaborations aren’t formal ones.

Ben Grosser, Facebook Demetricator (demonstration video), 2012-present

Do you know if some people have started using or at least perceiving Facebook and social media practices in general differently after having encountered one of your works?

Yes, definitely. Because some of my works—like Facebook Demetricator, which removes all quantifications from the Facebook interface—have been in use for years by thousands of people, I regularly get direct feedback from them. They tell me stories about what they’ve learned of their own interactions with Facebook as a result, and, in many cases, how my works have changed their relationship with the site.

Some of the common themes with Demetricator are that its removal of the numbers on Facebook blunts feelings of competition (e.g. between themselves and their friends), or removes compulsive behaviors (e.g. stops them from constantly checking to see how many new notifications they’ve received). But perhaps most interestingly, Demetricator has also helped users to realize that they craft rules for themselves about how to act (and not act) within Facebook based on what the numbers say.

For example, multiple people have shared with me that it turns out they have a rule for themselves about not liking a post if it has too many likes already (say, 25 or 50). But they weren’t aware of this rule until Demetricator removed the like counts. All of a sudden, they felt frozen, unable to react to a post without toggling Demetricator off to check! If your readers are interested in more stories like this, I have detailed many of them in a paper about Demetricator and Facebook metrics called “What Do Metrics Want: How Quantification Prescribes Social Interaction on Facebook,” published in the journal Computational Culture.
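To give a rough sense of what “removing the numbers” can mean in practice, here is a minimal sketch of demetrication. It is not Grosser’s implementation; the selector and the pattern below are hypothetical stand-ins for whatever the real Demetricator targets in Facebook’s markup.

```typescript
// Minimal sketch of demetrication: blank out numeric counts in a social feed's DOM.
// Not the actual Demetricator code; ".metric-count" is a hypothetical selector.
function demetricate(root: ParentNode = document): void {
  root.querySelectorAll(".metric-count").forEach((el) => {
    // Turn counts like "1.2K likes" or "87 comments" into the bare word.
    el.textContent = (el.textContent ?? "").replace(/[\d.,]+\s*[KkMm]?\s*/g, "");
  });
}

// The feed loads content dynamically, so the transformation is re-applied as nodes are added.
demetricate();
new MutationObserver(() => demetricate()).observe(document.body, {
  childList: true,
  subtree: true,
});
```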


Ben Grosser, Facebook Demetricator, 2012-present

Ok, sorry but I have another question regarding Facebook. I actually dislike that platform and tend to avoid thinking about it. But since you’re someone who’s been exploring it for years, it would be foolish of me to dismiss your wise opinion! A work like Facebook Demetricator was launched in 2012. 5 years is a long time on the internet. How do you feel about the way this project has aged? Do you think that the way Facebook uses data and the way users experience data has evolved over time?

I have mixed feelings about spending much of my last five years with Demetricator. I’m certainly fortunate to have a work that continues to attract attention the way this one does. But there have been times—usually when Facebook makes some major code change—when I’ve wished I could put it away for good! Because Facebook is constantly changing, I have to regularly revise Demetricator in response or it will stop functioning. In this way, I’ve come to think of Demetricator as a long-term coding performance project.

Perhaps the best indicator of how well Demetricator has aged is that it keeps resurfacing for new audiences. Someone who had never heard about it before will find the work and write about its relationship to current events, and this will create a surge of new users and attention. The latest example is Demetricator getting discussed as a way of dealing with post-election social media anxiety in the age of Trump.

In terms of Facebook’s uses of and user experiences with data over time, there’s no question this has evolved. People have a lot more awareness about the implications of big data and overall surveillance post-Snowden. The recent Brexit and US election results have helped expand popular understandings of concepts like the filter bubble. And I would say that, while perhaps not that many people are aware of it, many more now understand that what we see on Facebook is the result of an algorithm making content decisions for us. At the same time, Facebook continues to roll out new methods of quantifying its users (e.g. “reactions”), and these are not always discussed critically on arrival, so there’s plenty of room for growth.

I find the way you explore and engage with algorithms and data infrastructures fascinating, smart but also easy to approach for anyone who might not be very familiar with issues related to algorithms, data gathering, surveillance, etc. There always seem to be several layers to your works. One that is easy to understand with a couple of sentences and one that goes deeper and really questions our relationship to technology. How do you ensure that people will see past the amusing side of your work and immerse themselves in the more critical aspect of each new project (if that’s something that concerns you)?

It’s important to me that my work has these different layers of accessibility. My intention is to entice people to dig deeper into the questions I’m thinking about. But as long as some people go deep, I’m not worried when others don’t.

In fact, one of the reasons I often use humor as a strategic starting point is to encourage different types of uses for each work I make. This is because I not only enjoy the varied lives my projects live but also learn something new from each of them. As you might expect, sometimes it’s a user’s in-depth consideration that reveals new insights. But other times it’s the casual interaction that helps me better understand what I’ve made and how people think about my topic of interest.

Ultimately, in a world where so many of us engage with software all day long, I want us to think critically about what software is. How is it made? What does it do? Who does it serve? In other words, what are software’s cultural, social, and political effects? Because software is a layer of the world that is hard to see, I hope my work brings a bit more of it into focus for us all.


Ben Grosser, Music Obfuscator


Ben Grosser, Tracing You (screenshot), 2015

Any upcoming work, field of research or events coming up after the exhibition at arebyte Gallery?

I have several works and papers in various stages of research or completion. I’ll mention three. Music Obfuscator is a web service that manipulates the signal of uploaded music tracks so that they can evade content identification algorithms on sites like YouTube and Vimeo. With this piece, I’m interested in the mechanisms and limits of computational listening. I have a lot done on this (I showed a preview at Piksel in Norway), but hope to finally release it to the public this spring or summer at the latest. I’m in the middle of research for a new robotics project called Autonomous Video Artist. This will be a self-propelled video capture robot that seeks out, records, edits, and uploads its own video art to the web as a way of understanding how our own ways of seeing are culturally developed. Finally, I have an article soon to be published in the journal Big Data & Society that will discuss reactions to my computational surveillance work Tracing You, illustrating how transparent surveillance reveals a desire for increased visibility online.

Thanks Ben!


Image by Filippo Lorenzin

Go Rando is part of Blinding Pleasures, a group show that explores the dangers and potentials of a conscious use of the mechanisms behind the False Consensus effect and its marketing-driven son, the so-called “Filter Bubble”. The show was curated by Filippo Lorenzin and is at arebyte Gallery in London until 18th March 2017.