PRAVDA – AI Accountability Dystopia

PRAVDA

“James, you must press the button now.” CRUX’s raspy voice comes through his earpiece.

In front of him, his index finger rests on top of the button. James doesn’t really know what to do now. Part of him wants to trust the command he has received. But it just doesn’t feel right.

Feelings are not allowed. He learned that on his first day of training. It was then that his instructor explained to him that there was no reason to have feelings in his position. They would just stand in his way of making concrete, rational choices. James vividly remembers the crisp sound of the word rational when the instructor said it, because of his slow pronunciation.

CRUX says it again. He is supposed to press the button, he knows it. Time seems to move slowly, dragging on as his mind races.

In training, he always tried to do his best. Not that it was very hard; everyone seemed to do really well, because the job is incredibly easy (and plain boring most of the time, he ponders). But he quickly learned to scan through all the symbols, images, mappings and text on his visor screen. In his earpiece, CRUX’s monotone voice would point him to the most crucial aspects of the analysis and walk him through to a conclusion.

He always felt they were working together. CRUX was always able to find patterns, inconsistencies, and directions in the data. He was the human, though; he bore the consequences of the decisions they made. That’s why, although his position was really boring most of the time, he felt useful. He was the safeguard, the human component.

“Press the button now.” It’s horrifying, he thinks. He can’t believe that he’s about to do it. Why? What for? He knows it can’t be undone once he presses it. But CRUX’s harsh-sounding voice insists. “James, you must press the button now.”

CRUX’s decisions, he learned during training, are made through evidence-based, mathematically-sourced, statistically-efficient, JQuiL-approved and seal-stamped accountability, ethics and transparency. He himself had to spend many days understanding the basics of how the system works: complex formulas, democratic societal input, non-Evolutionary Key Bindings. It was complex, layered, and hard to grasp.

Now, his visor indicates the situation is critical. Written in bold letters on his screen it says: PRESS THE BUTTON. James knows it’s all on him now. How could this be the right thing to do? No matter what he saw or heard, he knew it was impossible. But the system ordered him to. He knows the system is not only precise, but absolutely trustworthy. It has been tested and reviewed by the brightest, most inquisitive minds.

But deep down he knows it can’t be right. It just doesn’t make sense, no matter how he looks at it. Why should he, the safeguard, bow to something he doesn’t agree with? But CRUX has never been wrong; he knows that too. It was created to be as close as possible to perfection. He can’t say the same about himself. Could it be that he has gone mad? Could it be that he has simply missed something, that he was not updated on new policies? Could it be he just doesn’t get it?

He knows the consequences are devastating if he doesn’t press the button. Is he willing to trust his defective self that much? Right now, he doesn’t know. Words flash on his visor, and CRUX’s voice insists: “You must do it. Accept the truth.” His finger gets heavier and heavier, and he can feel the cold button under his fingertip. He doesn’t know the truth. He must press the button.

And so he does.

 

 

Reflection and Consequence Mitigation Strategy 

In the story, I wanted to reflect on AI making a decision that we disagree with. James is put against CRUX (the AI). He is there as a safeguard: he is liable for the consequences and so must make sure the decisions are well thought out and correct. But, as we get to see, he is not permitted to disagree because he is a human, and his “knowledge” is considered inferior to that of the machine (epistemic injustice). I was also considering what happens when accountability becomes a stamp that can be put on products: CRUX is reliable and trustworthy, so it cannot make mistakes or errors. But that completely breaks down when James (correctly or not) disagrees with the “objective” analysis of the machine.

This exercise of thinking about a dystopia where automated decision making defines very important things, and humans are used just as impotent safeguards, makes me consider how the framing of AI accountability could be hijacked to create things that are not accountable at all. The idea that we need more ethical machines can be taken at face value without considering the systemic construction of the machine, our intrinsic knowledges as humans, and how it is ultimately impossible to create something that is “the truth”.

There is also the question of whether transparency really means very much in this context. Although James knows the basics of how the machine works, and the machine even walks him through the decision, he can’t really go through it the same way the machine can. The machine’s transparency also becomes one of the reasons why he trusts it.

I think what this story tells me is that I’m afraid AI accountability, ethics, transparency, all of that, can be co-opted to privilege the machine itself and its decisions, instead of making us more critical of and conscious about it. I think one of the main mitigation strategies for this is really understanding that we also need to question “the truth” of the machine and its epistemological basis, and not give power to the idea that we can just “fix” the bias we spot in machines and be done with it. There is no seal of approval for accountability.

Reverse-engineer the PredPol ‘black box’: An unpredictable hackathon


Our Values:

1. We open the black box

Our objective is to open up and reverse engineer closed systems. This may involve technical expertise to examine closed algorithms by reverse engineering their decisions, but also sociological analysis based on the lived experience of the communities affected by these algorithms. What matters most is taking the inscrutable black box of PredPol and picking it apart, in all sorts of ways.

2. We look at all the parts

If we are to analyze how a complex system such as PredPol works, we need to focus on its different aspects. We need teams to look into: code (data, algorithms, automated decision making), norms (how do officers create data, how is the data used, what do citizens/affected communities know and think of it?), market (how can we track who is financing it, how they get money, how governments spend it?) and law (how can we think of better public policy, how could it be infringing current policy, what would be good actions in the judicial sphere?).

3. We don’t miss the big picture

Although we position ourselves critically towards predictive policing and all forms of algorithmic bias, we cannot miss the big picture of reducing violent crime and societal problems. Therefore, as we reverse-engineer the black box of PredPol, we not only work to deconstruct and problematize its design, but also ask: how could we do it better? How could we fix predictive policing and think about a safer, community-based form of public safety?

4. We care about the small details

We value the small data too: looking at how individual people and communities are affected. Their lives, bodies and stories matter, and need to be taken seriously. Here, there is a focus not only on ‘Big Data’, but also on small data of all kinds.

5. We value the ethics of data

We value ethics, and adopt an ethics of care approach to personal data and all that arises from our interactions with data. That way, data protection is considered in its context, with a special focus on responsibility and on working alongside the affected people.

Where?

The hackathon would be organized in a city in the USA where PredPol has been implemented and the community has been impacted by it. Cities like Los Angeles would be ideal in this sense.

Who is there?

The hackathon invites people from different cities in the USA where PredPol is present (at least 4 different cities, plus the home city). Diversity of backgrounds is welcome and encouraged: a lot of effort will be put into inviting people from different parts of society: designers, hackers, engineers; social scientists, anthropologists, economists; policymakers, lawyers, politicians; community leaders, citizens, activists; etc.

Gender performativity // Queer cultures

– Something I’ve worked on:

I worked on a project to discuss gender performativity and queer cultures with communities in Brazil, which generated a music video from their experiences. The music video is composed in queer language/slang, using stories co-written with the performers.

– What’s the full scope of the problem?

Gender and sexuality are still very taboo and problematic topics of discussion in Brazil, although there are several policy, cultural, artistic and governmental efforts. Brazil has a very high rate of violence toward queer people (especially trans/travesti people; see: http://www.refworld.org/docid/58736b5f4.html). Several projects have attempted to shift these norms, including the Museum of Sexual Diversity in Sao Paulo and several other smaller campaigns. Bills have been passed that turn homophobia into a crime, for example, but these are still not enforced by the police and courts as they should be.

– Who’s affected?

The queer community is very diverse, serving as an umbrella for various identifications. In the community we worked with, there were drag queens, drag kings, transgender people, travestis (which means something different in Portuguese than the English ‘transvestite’), lesbians, gays, non-binary people, etc. Queer communities that suffer from social exclusion/economic inequality are even more affected.

– Who’s best positioned to address the problem?

We considered that it was important to bridge our queer community in the university environment (more privileged economically) with a peripheral queer community of drag queens, who were already producing their own drag show. Their drag show was very successful in their neighborhood, but was not able to connect to other communities, e.g. in neighboring cities. We decided to create a music video/documentary, using our expertise in audiovisual communication, while also looking at their/our experiences as queer people.

– What are predictable consequences of the proposed solution?

We expected that the video, when publicized, would draw attention, empower queer communities in the region/country and, by exposing some of the problems that affect these communities, raise consciousness about them. Unlike many other similar productions, we were focusing on a peripheral community of drag queens, and not on the middle-class gay communities of the big cities. This, we believed, would bring more diversity and voices to the discussion. One of the more direct outcomes we predicted was that the drag queen performers would be able to use the video in their show, and also as a way to get more performances.

– What were unpredictable consequences of the proposed solution?

In our production, we were naïve and did not account for some of the problems of not having as diverse a team as we could have had. For example, although all the stories told in the music video were co-written with the drag queens/trans/travestis, the video production team was composed only of cisgendered people who studied at the same university. This became problematic when the music video was presented at video festivals, where it was questioned for not being as diverse as it could have been, especially considering its theme.

Critical Data Literacy // “Predictive Policing” // Recommender systems

1. Critical Data Literacy

The broad idea that everything we do becomes data, is analyzed by algorithms, and that the results shape the way we live in the world has become a major topic of public discourse. The dawn of GDPR in Europe (and consequently the world) and the Cambridge Analytica scandal, among other stories, punctuate a moment of acknowledgement and, in many places, worry about personal data. This worry is justified, as many reports have been pointing to major problems of privacy, surveillance, bias, and discrimination. We need to educate citizens with a critical lens towards digitality and datafication. There is a lot of talk about computational thinking, but not as much about how to increase critical consciousness of datafication.

NORMS: We are already starting to see people becoming more wary of sharing their data online. I think a lever of change that could be activated, in this sense, is raising people’s consciousness of how their data is utilized by these corporations. Furthermore, these norms could be used to exert pressure on the MARKET, as companies would have to adapt and deal with a less naïve public.

It would be interesting to see more projects that seek to make CODE readable and understandable, not only in its core sense (i.e. how it works computationally), but also in how it works more deeply in society. If Scratch allows kids to understand and play around with code as if it were Lego, what would it look like if kids could see the deeper layers of Amazon’s exploitation of data sets, Mechanical Turkers and the earth’s minerals (see Anatomy of an AI System)? On the other hand, companies could be forced to un-blackbox their systems and thus become more transparent through changes in LAW.

2. “Predictive Policing”

Systems of so-called ‘predictive policing’ are becoming more and more common. They have been linked, though, to increasing inequality, racism and other highly problematic outcomes. As CCTV, bodycams, social media, etc. are used without any transparency by companies like Palantir, we face dire social consequences from these technological changes.

MARKET: Would it be possible to work with the activist community inside the open source movement to impede the appropriation of open source for violent/racist software? What if CC/BY had a condition under which certain companies could not use your open CODE unless they paid hefty licensing charges? But how would we all agree on which companies get to be part of this?

LAW: The AI Now 2017 report contains 10 recommendations, including a call for the development of “standards to track the provenance, development, and use of training datasets throughout their life cycle”. These standards could be constructed through law, and thus define harsher limitations for policing using algorithms.

NORMS: I think there is currently a strong ‘norm’ in the technological development community that tech companies working with the military are ostracized and receive criticism from their employees (although that does not really stop them from existing, e.g. Clarifai). Could we adopt that ethos also in relation to predictive policing and work with developers to be more critical toward that work?

3. Ethics of recommender systems

As recommender systems powered by machine learning algorithms become pervasive, we need to think about how to deal with their ethical quandaries, especially since we don’t completely understand or cannot interpret some of the decisions of these systems. As our reflection on these systems often moves slower than their adoption in society (e.g. YouTube), we need to discuss their ethical elements in a broader societal discourse.

LAW: The conception of accountability is still very poorly defined. Companies argue that they are accountable and follow society’s requests, but we don’t really feel like we are part of the conversation. Terms of Service are rarely read, and they don’t really define how the system works. What would happen if, for example, companies like Facebook and YouTube/Google were understood as media companies and not as tech companies?

Interventions on CODE could work to render understandable how these algorithms work, which could potentially lead publics to change or rethink their habits (NORMS). We in part already do this in our day-to-day life as we get angry with what is recommended to us (why does Facebook think I’m so interested in pre-packaged meals?). But being able to see that more clearly, and perhaps even respond to it, even collectively, could be game-changing. Finally, thinking about the MARKET could mean creating a business model for recommendation that does not use and sell personal data as much, or that uses users’ interests in a way that is not so exploitative, while also giving access to unexpected information.

Critical studies of technology // Designing critical technology

First, let me introduce myself. My name is Gabriel Pereira, and I am from Brazil. I am currently a visiting PhD student here at Comparative Media Studies – MIT, and a PhD fellow at Aarhus University (Denmark). My main research interests are critical studies of data infrastructures. I am particularly interested in understanding how these pervasive data infrastructures constrain and enable how we think and experience memory and archives today.

My research methods are experimental and collaborative, involving both arts-based inquiry and ethnographic work. Most recently, I have been developing the Museum of Random Memory with a large network of researchers, activists and artists. It is a series of arts-based public interventions and experiments designed to spark reflection about the underlying complexities of everyday digital media usage. We explore questions such as: How can we retake control of the Big Data we produce in our everyday lives, especially now that digital platforms continually track us and create memories on our behalf?

Other work I’ve done focuses, for example, on APIs as elements of infrastructures that enable and shape the networking of urban data; rhetorical analysis of young people’s experiences of their social media usage; and more artistic approaches to Artificial Intelligence in the archives of museums. In my research so far, I have been drawing on critical infrastructure studies (from STS), new materialism, platform studies, critical code/algorithm studies and other related feminist/critical perspectives on technology.

A lot of my research projects stem from different collaborations. To mention a few: the Digital Living Research Commons at Aarhus University, which is home to diverse projects on datafication and digitalisation (where Annette Markham, my supervisor, is co-director); and the Center for Arts, Design, and Social Research, which is a global network of collaboration and experimentation in arts and research.

Now: What am I doing here? I became interested in this class because it relates directly to my research practice in critical studies of technology. That being said, it offers new perspectives to me, especially related to values-driven/community-driven design. I have often used research and arts perspectives without necessarily thinking in design terms, so design would certainly be a useful tool. Also, I feel that, especially today, as our world becomes saturated with digital tech, we need to approach it from many different perspectives and with many different tools. Learning how to think about technology from a social change perspective, and about the design elements of this, is part not only of studying technology, but of thinking about how to create better technology and research for the world.