hello from the valley

Remember how on the first day of class Ethan was all like, “we’re here to walk you into the valley of depression about the consequences of technology, and then back out of it so you don’t feel the need to apply to that public policy program”?

Well, hi. I’m Blakeley, a second-year, possibly-going-to-graduate-in-June master’s student here at the Media Lab. I’ve got about 48 Google Chrome tabs open right now, and half of them are filled with queries like “harvard kennedy school admission how” and the other half are “STS phd program jobs after graduation.” So… yeah. Maybe I could use some introspection as to how I got here.

this is my valley! look at all the anxiety over technology! just look at it all! 

I came to the Media Lab last year with a background in math and computer science. Upon reading Cathy O’Neil’s Weapons of Math Destruction I was STOKED to combat algorithmic bias, injustice, and as per usual, the patriarchy of tech. The goal of my main project last year, Turing Box, was relatively simple: let’s build a two-sided platform where on one side algorithm developers will upload their AI systems and on the other side examiners (maybe social scientists, maybe other computer scientists) will evaluate those algorithms for potential bias or harm. Algorithms with good evaluations will receive seals of approval, certifying that they are, well, at a minimum, the least worst algorithm there is for a particular task.
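(If a picture of the mechanics helps: here’s a tiny, purely hypothetical sketch of the two-sided idea in Python. None of these names — Submission, award_seal, the scoring scheme — come from the actual Turing Box code; it’s just the shape of the thing.)

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Submission:
    # one developer-uploaded algorithm for a given task (e.g. "resume screening")
    developer: str
    task: str
    name: str
    evaluations: list = field(default_factory=list)  # (examiner, score) pairs

    def add_evaluation(self, examiner: str, score: float) -> None:
        # examiners on the other side of the platform score the algorithm
        # for potential bias or harm (say, higher = less harmful)
        self.evaluations.append((examiner, score))

    def average_score(self) -> float:
        return mean(s for _, s in self.evaluations) if self.evaluations else float("-inf")

def award_seal(submissions: list, task: str):
    # the "seal of approval" goes to the least worst algorithm for a task:
    # the evaluated submission with the highest average examiner score
    candidates = [s for s in submissions if s.task == task and s.evaluations]
    return max(candidates, key=lambda s: s.average_score(), default=None)
```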

Turing Box garnered a lot of praise upon its announcement. Companies wanted to use it internally, social scientists were excited about accessibility, and as a tool to explain algorithmic bias, it excelled. On the other hand, it also endured a lot of fair criticism, mostly about the framing of algorithms as agents with behavior to be studied and the lack of credit to those in STS who laid the groundwork.

But it wasn’t the criticism that worried me. What worried me was that Turing Box, if placed in the very market it was creating, wouldn’t hold up against other versions of the platform. I could imagine a hundred scenarios in which the platform could fail as a societally beneficial tool. What if big tech companies flooded the market in an adversarial way? What if the majority of evaluators came from a particular company and were taught to recognize the behavior of their own algorithms so that they could give them higher evaluations? How do you recommend algorithms for examiners to evaluate without, essentially, setting the scientific agenda in the same way that platforms like Twitter determine our political discourse? How do we ensure that our platform doesn’t just offer measurement as a solution to real societal problems?

but what if our site gets popular and everyone adversarially uses it and it makes it seem like we’ve solved algorithmic bias but all we’ve actually done is measured a bunch of really useless stuff and then we have no way of knowing until real harm actually occurs

I’ve since, uh, pivoted. The goal of my master’s thesis is to construct an “AI and Ethics” curriculum for middle schoolers (or as I’m trying to advertise it, “just a better AI curriculum that includes ethical considerations, because we should have been doing that in the first place” — it’s not really catching on yet…).

So, why am I here? There are a few reasons. First, I’d love to walk back out of the valley. Second, I don’t want to build a curriculum where I walk my students into the valley and leave them stranded. I’m looking forward to learning about value-sensitive design and participatory design because I’d like to integrate these techniques into my own curriculum. Third, I really, really want to graduate on time. My parents already took vacation days.

“Why do you write like you’re running out of time?” Um, because I WANT TO GRADUATE IN JUNE. 

If you’re interested in discussing what an AI+Ethics curriculum might look like for middle schoolers, I’d love to chat!
