PRAVDA – AI Accountability Dystopia


“James, you must press the button now,” CRUX’s raspy voice says in his earpiece.

In front of him, his index finger rests on top of the button. James doesn’t really know what to do. Part of him says to trust the command he has received. But it just doesn’t feel right.

Feelings are not allowed. He learned that on his first day of training. It was then that his instructor explained that there was no reason to have feelings in his position; they would only stand in the way of making concrete, rational choices. James vividly remembers the crisp sound of the word rational, because of how slowly the instructor pronounced it.

CRUX says it again. He is supposed to press the button, he knows it. Time seems to move slowly, dragging on as his mind races.

In training, he always tried to do his best. Not that it was very hard; everyone seemed to do really well, because the job was incredibly easy (and plain boring most of the time, he muses). He quickly learned to scan through all the symbols, images, mappings and text on his visor screen. In his earpiece, CRUX’s monotone voice would point him to the most crucial aspects of the analysis and walk him through to a conclusion.

He always felt they were working together. CRUX could always find patterns, inconsistencies, and directions in the data. But he was the human; he bore the consequences of the decisions they made. That’s why, although his position was boring most of the time, he felt useful. He was the safeguard, the human component.

“Press the button now.” It’s horrifying, he thinks. He can’t believe that he’s about to do it. Why? What for? He knows it can’t be undone once he presses it. But CRUX’s harsh-sounding voice insists. “James, you must press the button now.”

CRUX’s decisions, he learned during training, are made through evidence-based, mathematically-sourced, statistically-efficient, JQuiL-approved and seal-stamped accountability, ethics and transparency. He himself had to spend many days just to understand the basics of how the system works: complex formulas, democratic societal input, non-Evolutionary Key Bindings. It was intricate, layered, and hard to grasp.

Now, his visor indicates the situation is critical. Written in bold letters on his screen: PRESS THE BUTTON. James knows it’s all on him now. How could this possibly be the right thing to do? No matter what he has seen or heard, he knows it’s impossible. But the system has ordered him to. He knows the system is not only precise but absolutely trustworthy. It has been tested and reviewed by the brightest, most inquisitive minds.

But deep down he knows it can’t be right. It just doesn’t make sense, no matter how he looks at it. Why should he, the safeguard, bow to something he doesn’t agree with? But CRUX has never been wrong; he knows that too. It was created to be as close to perfection as possible. He can’t say the same about himself. Could it be that he has gone mad? Could it be that he has simply missed something, that he wasn’t updated on new policies? Could it be he just doesn’t get it?

He knows the consequences will be devastating if he doesn’t press the button. Is he willing to trust his defective self that much? Right now, he doesn’t know. Words flash on his visor, and CRUX’s voice insists: “You must do it. Accept the truth.” His finger grows heavier and heavier, and he can feel the cold of the button beneath it. He doesn’t know the truth. He must press the button.

And so he does.


Reflection and Consequence Mitigation Strategy 

In the story, I wanted to reflect on an AI making a decision that we disagree with. James is pitted against CRUX (the AI). He is there as a safeguard: he is liable for the consequences, and so must make sure the decisions are well thought out and correct. But, as we come to see, he is not permitted to disagree because he is human, and his “knowledge” is considered inferior to that of the machine (epistemic injustice). I was also considering what happens when accountability becomes a stamp that can be put on products: CRUX is certified reliable and trustworthy, so it cannot make mistakes or errors. But that completely breaks down when James (correctly or not) disagrees with the machine’s “objective” analysis.

Thinking through this dystopia, where automated decision-making determines very important things and humans serve only as impotent safeguards, makes me consider how the framing of AI accountability could be hijacked to create systems that are not accountable at all. The idea that we need more ethical machines can be taken at face value without considering the systemic construction of the machine, our intrinsic knowledges as humans, and how it is ultimately impossible to create something that is “the truth”.

There is also the question of how transparency may not mean very much in this context. Although James knows the basics of how the machine works, and the machine even walks him through the decision, he cannot reason through it the way the machine can. The machine’s transparency even becomes one of the reasons he trusts it.

What this story tells me is that I’m afraid AI accountability, ethics, and transparency can all be co-opted to privilege the machine itself and its decisions, instead of making us more critical and conscious of them. One of the main mitigation strategies for this is understanding that we also need to question the machine’s “truth” and its epistemological basis, and not give power to the idea that we can simply “fix” whatever bias we spot in a machine and then be done. There is no seal of approval for accountability.
