Critical Data Literacy // “Predictive Policing” // Recommender systems

1. Critical Data Literacy

The broad idea that everything we do becomes data, that this data is analyzed by algorithms, and that the results shape the way we live in the world has become a major topic of public discourse. The dawn of the GDPR in Europe (and, consequently, the world) and the Cambridge Analytica scandal, among other stories, punctuate a moment of acknowledgement and, in many places, worry about personal data. This worry is justified: many reports have pointed to major problems of privacy, surveillance, bias, and discrimination. We need to educate citizens with a critical lens toward digitality and datafication. There is a lot of talk about computational thinking, but not as much about how to increase critical consciousness of datafication.

NORMS: We are already starting to see people become more wary of sharing their data online. A lever of change that could be activated here is raising people’s consciousness of how their data is used by these corporations. These norms could then be used to exert pressure on the MARKET, as companies would have to adapt to and deal with a less naïve public.

It would be interesting to see more projects that seek to make CODE readable and understandable, not only in its core sense (i.e., how it computes), but also in how it works more deeply in society. If Scratch allows kids to understand and play around with code as if it were Lego, what would it look like if kids could see the deeper layers of Amazon’s exploitation of data sets, Mechanical Turkers, and the earth’s minerals (see Anatomy of an AI System)? On the other hand, companies could be forced to unblackbox their systems, and thus become more transparent, through changes in LAW.

2. “Predictive Policing”

Systems of so-called ‘predictive policing’ are becoming more and more common. They have been linked, though, to increased inequality, racism, and other highly problematic outcomes. As CCTV footage, bodycams, social media, and other sources are used without any transparency by companies like Palantir, we face dire social consequences from these technological changes.

MARKET: Would it be possible to work with the activist community inside the open source movement to prevent the appropriation of open source for violent/racist software? What if CC BY had a condition that certain companies cannot use your open CODE unless they pay hefty licensing charges? But how would we all agree on which companies get to be part of this?

LAW: The AI Now 2017 report contains ten recommendations, among them a call for the development of “standards to track the provenance, development, and use of training datasets throughout their life cycle”. These standards could be constructed through law, and could thus set harsher limits on policing with algorithms.
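To make the idea of provenance tracking more concrete, here is a minimal sketch of what a machine-readable provenance record for a training dataset might look like. The DatasetProvenance class and all of its fields are my own hypothetical illustration of the AI Now recommendation, not part of any existing standard.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration: these fields are not from any real
# standard, only one way a legally mandated provenance record might look.
@dataclass
class DatasetProvenance:
    name: str
    collected_by: str          # institution responsible for collection
    collection_method: str     # e.g. "arrest records, 2010-2016"
    known_biases: List[str]    # documented sampling/selection biases
    downstream_uses: List[str] = field(default_factory=list)

    def log_use(self, system: str) -> None:
        """Record each system trained on this dataset over its life cycle."""
        self.downstream_uses.append(system)


# A regulator could then audit, for example, whether a dataset built from
# historically biased arrest data was ever fed into a policing system.
record = DatasetProvenance(
    name="city_arrests_2010_2016",
    collected_by="Example City PD",
    collection_method="arrest records",
    known_biases=["over-policing of specific neighborhoods"],
)
record.log_use("hotspot-prediction-model-v2")
print(record.downstream_uses)
```

The point of such a record is that the provenance travels with the data: a biased collection method documented at creation time remains visible at the moment a policing system is trained on it.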

NORMS: I think there is currently a strong ‘norm’ in the technology development community that companies working with the military are ostracized and criticized by their own employees (although that does not really stop them from existing, e.g. Clarifai). Could we adopt that ethos in relation to predictive policing as well, and work with developers to be more critical toward that work?

3. Ethics of recommender systems

As recommender systems powered by machine learning algorithms become pervasive, we need to think about how to deal with their ethical quandaries, especially when they are widely used but we cannot fully understand or interpret some of the decisions these systems make. Since our reflection on these systems often moves more slowly than their adoption in society (e.g. YouTube), we need to discuss their ethical dimensions in a broader societal discourse.

LAW: the conception of accountability is still very poorly defined. Companies argue that they are accountable and responsive to society’s requests, but we don’t really feel like we are part of the conversation. Terms of Service are rarely read, and they don’t really explain how the system works. What would happen if, for example, companies like Facebook and YouTube/Google were treated as media companies rather than as tech companies?

Interventions on CODE could work to render understandable how these algorithms work, which could potentially lead the public to rethink their habits (NORMS). We already do this, in part, in our day-to-day lives when we get angry at what is recommended to us (why does Facebook think I’m so interested in pre-packaged meals?). But being able to see that more clearly, and perhaps even respond to it collectively, could be game-changing. Finally, thinking about the MARKET could mean creating a business model for recommendation that does not use and sell personal data as much, or that draws on users’ interests in a less exploitative way while still giving access to unexpected information.
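To illustrate what such a CODE intervention might look like, here is a minimal, purely hypothetical sketch of a recommender that reports which of the user’s own signals produced a recommendation, so that people could inspect and contest the inference. The toy data and function names are my own assumptions; nothing here reflects how Facebook’s or YouTube’s actual systems work.

```python
# Minimal sketch of an "explainable" recommender: recommendations come
# with the user signals that produced them, so people can inspect and
# contest the inference. Toy data; not how any real platform works.

user_history = {"cooking": 5, "travel": 2, "fitness": 1}  # tag -> clicks

catalog = {
    "Pre-packaged meal plans": ["cooking", "fitness"],
    "Budget flights newsletter": ["travel"],
    "Cast-iron skillet review": ["cooking"],
}

def recommend_with_reason(history, items):
    """Score items by overlap with the user's tag history and say why."""
    best_item, best_score, best_reason = None, 0, []
    for item, tags in items.items():
        matched = [t for t in tags if t in history]
        score = sum(history[t] for t in matched)
        if score > best_score:
            best_item, best_score, best_reason = item, score, matched
    return best_item, best_reason

item, reason = recommend_with_reason(user_history, catalog)
print(f"Recommended: {item!r} because you engaged with {reason}")
# -> Recommended: 'Pre-packaged meal plans' because you engaged with
#    ['cooking', 'fitness']
```

Even something this simple shifts the power relation slightly: once the “because” is visible, the user can correct or delete the underlying signal instead of just being puzzled (or angered) by the output.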
