Questions, questions

1. Where am I? How far is my (unintended) reach?

 

2. I spent most of my queer childhood in a homophobic, misogynistic country. As I lingered on the periphery of online queer communities for years, I witnessed nameless, countless friends get outed, quit school (often a “recommended” leave), run away from home, survive on part-time jobs, quit because of abuse, take other part-time jobs, repeat, and, eventually, disappear. Fifteen-year-old Hane thought, if I became an Educated Adult, maybe I would gain enough respect in my field to not get immediately fired if I were discovered.

2.1. I am academically the 99th percentile outcome of the entire community, if not an outlier. There will be socioeconomic consequences.

2.1.1. It was obvious by high school. I wanted to become a public role model and improve the reputation of queer people.

2.1.2. I want to give myself the responsibility of improving queer lives in that country.

2.1.2.1. Can this be done with technology?

2.2. What is the responsibility that comes with academic privilege? Is there any?

2.2.1. Is this elitist?

2.3. It is not without effort that I live here as a small, colored, assigned-female-at-birth, queer person, but life has dramatically improved since I moved to Boston. I may be a token minority (minus religion), but at least I am a model one?

2.3.1. Repeat 2.1.2.

 

3. Engineering is my profession. It also happened that I went to a pretty good engineering school.

3.1. I am a master’s student in the Media Lab, Opera of the Future. Before that, I was an electrical engineering undergrad at MIT.

3.2. I inevitably embed my biases in the systems I develop, and my work may receive more credit than it warrants on its own.

3.2.1. This is privilege.

3.2.2. What is the responsibility that comes along with technological-academic privilege? Explanations? Clarity? Dedication? Infallibility? Good Morals?

3.2.3. Am I elitist?

3.3. The current development of machine learning (and so-called AI) algorithms bothers me. They make biased decisions while being trusted as fair science, and they often disadvantage the already disadvantaged. Worse, the algorithms themselves are more often than not unexplainable and uninterpretable.

3.3.1. What can be done?

Learning to Step Forward

As long as I’ve been alive, I’ve been afraid of the poor outcomes of even my most banal decisions. I used to be that cripplingly shy kid in most social situations, ruining any idea that I’d be the confident, loud, and overbearing Nigerian auntie I was supposed to be. Better a weird kid than a difficult one, I guess.

Though I’ve worked hard to be a 9-to-5 extrovert, it’s still difficult for me to step out and make my opinions known in most situations. That shy kid is still in me, manifesting in the worst ways. To this day, I’m fighting the urge to sit quietly when I have an idea and encouraging myself to take up some space. The effort to remain agreeable is taxing — wouldn’t recommend.

That’s not to say there hasn’t been stuff brewing, albeit always showing up more quietly than I intended. In my former life I was a policy researcher, hiding whatever fiery thoughts, critiques, and realities I had about the inequality of the real world in a very agreeable working paper that only a few people would read. It was a comfortable space for a younger me – non-confrontational and polite, but I could also have been convinced it was action. We would pat ourselves on the back for moving the needle (any needle), no matter how incrementally. If someone made a bad policy decision out of the work we’d done, that was their prerogative; we were as neutral and fact-based as we could have been. Given our current political situation and my former institution’s only marginal bend towards positive action, I am not entirely proud of this position.

At MIT, I’m learning to be stymied by my own laziness rather than by fear, and to step into a place where I can stand more firmly on my integrity than on my ability to be diplomatic. Admittedly, it’s a scary position for an urban planner, a field that’s been as celebrated for its bold wins as it has been marred by its bold mistakes. As history shows, balancing everyone’s idea of integrity is a difficult task. However, as a planner, my personal focus is on how technology is translated to, driven by, and affects marginalized communities, and I do think the field could use some new voices. In this class I’m hoping to get some legs to stand on, to be able to evaluate what being a good navigator, steward, translator, and participant in this space looks like. But mostly, I’m aiming to kick the quiet in me, turning it instead into thoughtful action that is both democratic and illustrative of the good values that I hope can drive us into the future. If research taught me anything, it’s that at some point it’s worthwhile to take action; it’s just best that the action is a good one.

I could have been working on that

I’m a grad student in Computer Science, studying how people use intelligent systems like predictive typing, and how using those systems shapes the people who use them. But as I approach the end of my PhD, something has been weighing on me: I still don’t know how to use all I’ve been learning to have a positive impact on the world.

I often felt a certain discontent when I reflected on my research. Somehow, despite finding an advisor who seemed to care about doing good in the world, my work was solving first-world problems at best (in the rare moments my work was successful). I would pray something like, “God, why can’t I be doing work that actually helps people?!” — with a mix of frustration at my poor choice of projects, envy of those who were doing that kind of work, and hope that maybe, somehow, I might find a way to do something “good”…

I tried various things. I helped organize a hackathon for Christians in tech to support the work of various nonprofits. The hackathon got a lot of interest, which I tried to leverage in order to build a community of Boston-area technologists and nonprofits that could solve problems together. I’ve befriended several homeless people, and tried to support the various organizations that try to help them. I tried to network with other tech-for-good and international development people in the area. But nothing went anywhere.

Earlier this year, as I read the ancient prophets like Isaiah, Amos, and Malachi calling out injustice and oppression in their days, I was struck by how apt their words still are today. At the same time, I was also reading Virginia Eubanks’ book Automating Inequality. And as she denounced three different technological systems that oppressed the poor, I realized that I could see myself working on any of them while waking up each morning thinking that I was working on something that helped people. Perhaps projects that sell themselves as being for social good have an especially high risk of actually doing harm.

Looking back, perhaps my prayers to find work with social impact weren’t getting answered because before I could work on using tech for good, I needed to grapple with how it could be used for harm, especially unintentionally. So even though many wise people would say I shouldn’t be taking another class at this point in my career, that’s why I’m here.

A brief history of journalism’s mistakes and how I hope to avoid repeating them

The year I became a newspaper reporter, an internet-connected computer — one — was installed in the newsroom, changing the course of my career and my industry. I’m up for a bar debate over this, but I doubt any industry has been transformed by the internet as much as the news business.

Of course the internet has reshaped all of society, which is why I’m in this class. I’m looking to get a better idea of how technology shapes our lives and how we can make smarter decisions about how we introduce technology into our lives and use it.

As long as I’ve been a journalist, the news business has been behind technologically. We’ve been at the mercy of companies that knew what they were doing and had a much better idea of how things would end up than we did.

So news companies installed crappy content-management systems that limited what they could do and cost a fortune. They allowed anonymous, hateful comments to be posted on their sites without thinking through how that affected discourse. They created intrusive advertising to get between their journalism and their audience, and they installed ad tech to track their audience as they roamed the web. In so many ways, the news business has implemented technology in a way that demeans our journalism and disrespects our audience, eroding the trust that should be at the core of this relationship.

That’s how legacy news (mostly newspapers) has done it. Others have figured out how to use technology to strengthen their relationship with their audience. They’re working on ways to bring people into their reporting. They use data and news applications to tell stories that couldn’t be told with words. Many of these initiatives show promise. But they require doing things in a different way, and change is hard, especially when you need to create a product every day with fewer people and less revenue.

hello from the valley

Remember how on the first day of class Ethan was all like, “we’re here to walk you into the valley of depression about the consequences of technology, and then back out of it so you don’t feel the need to apply to that public policy program”?

Well, hi. I’m Blakeley, a second-year, possibly-going-to-graduate-in-June master’s student here at the Media Lab. I’ve got about 48 Google Chrome tabs open right now, and half of them are filled with queries like “harvard kennedy school admission how” while the other half are “STS phd program jobs after graduation.” So… yeah. Maybe I could use some introspection as to how I got here.

this is my valley! look at all the anxiety over technology! just look at it all! 

I came to the Media Lab last year with a background in math and computer science. Upon reading Cathy O’Neil’s Weapons of Math Destruction I was STOKED to combat algorithmic bias, injustice, and as per usual, the patriarchy of tech. The goal of my main project last year, Turing Box, was relatively simple: let’s build a two-sided platform where on one side algorithm developers will upload their AI systems and on the other side examiners (maybe social scientists, maybe other computer scientists) will evaluate those algorithms for potential bias or harm. Algorithms with good evaluations will receive seals of approval, certifying that they are, well, at a minimum, the least worst algorithm there is for a particular task.

Turing Box garnered a lot of praise upon its announcement. Companies wanted to use it internally, social scientists were excited about accessibility, and as a tool to explain algorithmic bias, it excelled. On the other hand, it also endured a lot of fair criticism, mostly about the framing of algorithms as agents with behavior to be studied and the lack of credit to those in STS who laid the groundwork.

But it wasn’t the criticism that worried me. What worried me was that Turing Box, if placed in the very market it was creating, wouldn’t hold up against other versions of the platform. I could imagine a hundred scenarios in which the platform could fail as a societally beneficial tool. What if big tech companies flooded the markets in an adversarial way? What if the majority of evaluators only came from a particular company and were taught to recognize the behavior of their own algorithms so that they could give them higher evaluations? How do you recommend algorithms to examiners to evaluate without, essentially, setting the scientific agenda in the same way that platforms like Twitter determine our political discourse? How do we ensure that our platform doesn’t just offer measurement as a solution to real societal problems?

but what if our site gets popular and everyone adversarially uses it and it makes it seems like we’ve solved algorithmic bias but all we’ve actually done is measured a bunch of really useless stuff and then we have no way of knowing until real harm actually occurs

I’ve since, uh, pivoted. The goal of my master’s thesis is to construct an “AI and Ethics” curriculum for middle schoolers (or, as I’m trying to advertise it, “just a better AI curriculum that includes ethical considerations, because we should have been doing that in the first place”; it’s not really catching on yet…).

So, why am I here? There are a few reasons. First, I’d love to walk back out of the valley. Second, I don’t want to build a curriculum where I walk my students into the valley and leave them stranded. I’m looking forward to learning about value-sensitive design and participatory design because I’d like to integrate these techniques into my own curriculum. Third, I really, really want to graduate on time. My parents already took vacation days.

“Why do you write like you’re running out of time?” Um, because I WANT TO GRADUATE IN JUNE. 

If you’re interested in discussing what an AI+Ethics curriculum might look like for middle schoolers, I’d love to chat!

Technology, complexity, and the path to social change

I am naturally introverted. Although I love interacting with people, I like to spend slightly more than half of my time working in a solitary way. I love programming, especially when I get to develop elegant programming solutions to complex problems.

Fortunately, the world is filled with complex problems, so this gives me an unlimited supply of interesting ways to spend my time. But there’s a catch. Like anyone who voluntarily signs up for a “Technology and Social Change” class, I have had to face an inconvenient truth: technology does not solve complex problems. Apply technology to a problem, and that problem becomes more complex. I want to believe that I can spend my time solving interesting technical problems, while simultaneously making the world a better place. In reality, it takes engaging with the world and the people in the world to make a positive impact.

I’m not the first introvert to figure this out. For many years, Zuckerberg was convinced that a more “connected” world would also be a better world. From the outside, it looks like he is beginning to understand that Facebook has made the world more complicated, and has probably not made it a better place to live. We all want to believe that solving complex programming challenges can map to solving societal challenges.

A wealth of research in anthropology and psychology suggests that people are not equipped to handle relationships at the scale of the internet. Consider the work of Robin Dunbar, who compared brains and social patterns across primates and extrapolated that humans are cognitively limited to maintaining approximately 150 meaningful relationships. Or the writings of economist E.F. Schumacher, which convincingly illustrate how large-scale institutions negatively impact our quality of life while also ruining the environment.

It would appear that to make the world a better place we have to look up from our computer screens and engage with the world around us. Can I please just go back to writing code now?

My Story

I’m a third-year PhD student in the Opera of the Future group here at the MIT Media Lab. I’m interested in making art and music that takes advantage of the unique capabilities of the internet. The democratization and diversification of media was one of the original utopian promises of the internet. We don’t have to look back very far to see the enthusiasm of Wired magazine in the 90s and the libertarian internet ideals of John Perry Barlow. The reputation of the internet has soured since then. In reality, taste in popular music continues to narrow, and more and more music is distributed through fewer and fewer channels. Is this an inevitability? Is the future of music one where all content comes from Spotify, and success is measured by market share? One where the biggest platform with the most data mined from its users dominates the competition via the “network effect” popularized with the advent of Web 2.0? Surely if I just code the right solution, we can push things toward a diverse world of dynamic music and media? Can I please just go back to writing code now? This is a technical problem that requires a technical solution, right?

 

 

I thought data could save the planet; I was wrong.

Hey folks. I’m Rachael, a research assistant and MS candidate in the Media Lab’s Space Enabled group. Before MIT, I helped launch and grow Global Forest Watch, an initiative that monitors deforestation in real time from space, and served as its deputy director. Before Global Forest Watch, I spent a year living in remote indigenous communities researching the use of technology to protect local rights, lands, and culture. With a background in environmental policy, anthropology, and earth observation, I use tools of ethnography and policy analysis to imagine how global environmental data can enable local conservation action.

Or so it says on my Media Lab profile. The reality is that I’m a jaded environmentalist deeply skeptical of the power of technology to solve wicked collective action problems like climate change and resource management. I am taking this class to indulge my nihilism, perhaps inspire a dash of hope, and to develop analytical frameworks to understand enabling conditions and barriers of technology for social change.

But let’s take a step back.

Satellite data provides a synoptic view of the world’s forests, oceans, freshwater, and cities. My professional career has centered on using earth observation technologies to produce timely, accurate data on the environment. I helped create reports, blogs, guidance, apps, and interactive webmaps to deliver this data to decision-makers, assuming that if they saw the scale of the problem, they’d act. After all, “we can’t manage what we don’t measure.”

The reality, I have learned, is that data is necessary but not sufficient to spur environmental action.

Park rangers in Kibale National Park locate deforestation using Forest Watcher. (Credit: World Resources Institute and Jane Goodall Institute)
Members of the NGO HAkA review Forest Watcher data in their Aceh office. (Credit: World Resources Institute)

An example: I spent several years leading a mobile app project called Forest Watcher, developed together with the Jane Goodall Institute and local communities and rangers in Uganda, Peru and Indonesia. The app receives satellite alerts of deforestation in the user’s area. Users navigate to alerts and collect photos, text and GPS points to document the change. Local communities use the resulting data to inform conservation plans, report illegal activities to authorities, and prioritize resources for at-risk areas. The ultimate goal? Reduce deforestation rates in areas with active app use.

The app works; the concept doesn’t.

This is because users face myriad individual, systemic, and institutional challenges to reducing deforestation. In Uganda, park rangers didn’t have motorcycles to visit remote deforestation sites. In Peru, cutting down trees to cultivate coca proved more lucrative than conservation. In Indonesia, corrupt officials advance infrastructure projects in protected areas, in direct violation of the law.

The app is sexy; the problems are not. Lines of code can’t overcome failures of governance, corruption, misaligned incentives, and resource limitations.

I thought data could save the planet, but it can’t. Someone please prove me wrong.

 

Technology and the public sphere

Hey! My name is Joachim and I am a PhD Fellow in Political Philosophy at the University of Copenhagen, Denmark. I’m very excited about visiting Civic Media for the term. In my research, I’m trying to figure out what the public sphere means and what it can or should do. Obviously, the public sphere has always been transformed by technology, from printing to digital platforms, and technology has been of core concern to this field at least since philosophers of the Enlightenment in the 1770s tried to capture the fascinating essence of the public sphere: the structures of publication.

In public sphere studies, technology drives social change as well as creating new problems (and opportunities). What worries me about technology is the problems that it may be used to solve: the fact that a solution is possible does not automatically make it good or desirable. (For example, “fall detection floors” for the elderly that decrease the need to help or check on them, and thus reduce the amount of social interaction elderly people may have during the day.) Sometimes, solutions may just wipe out near-invisible but important aspects of life.

During our first class, I was reminded of an architect – Lars Lerup – who wrote a book called Building the Unfinished. Architecture, Lerup’s point goes, is always unfinished, because human actions make other interactions possible than those the architect intended. My guess is that technology has this same quality of unfinishedness, and one of the things I am looking forward to in this course is exploring such aspects of technology in terms of driving social change.

Journalists and developers

It still happens once in a while that my girlfriend and I look at each other and almost in unison exclaim:

“I can’t believe that we get to do this!”

We both got journalism fellowships in Cambridge this year: she the Nieman Fellowship at Harvard, and I the Knight Science Journalism Fellowship at the Massachusetts Institute of Technology.

Her focus is managing how a 269-year-old newspaper moves from broadsheet paper to infinitely smaller LED screens. Mine is the relationship between developers and journalists in the newsroom.

I left several such relationships behind when I took a leave of absence and got on the plane from Denmark. Back home, I was the only journalist on our editorial development team. I think of it as a producing innovation lab: apart from me, it consists of three programmers and two graphic artists. Together we develop new ways of doing and presenting journalism.

Before that, I tried just about every position in digital journalism in Denmark. I have reported from the field, edited the front page, and helped build our social media desk. I have covered everything from terrorist attacks to missing animals. I have been in charge of our digital election coverage. And I have increasingly done all of this in collaboration with people with technical know-how.

That’s because, in my view, technology offers one of the brightest beams of light for the current state of the news media. Digital journalism used to be just words under an image, but not anymore. Code is just as important.

Recent events in politics have made it crystal clear why we need a vibrant and engaging news media that can compete with the social networks for people’s attention. And in order to do that and stay relevant, we need to get smarter and bring new types of people into the newsroom.

But even though the partnership between our journalists and developers can be beautiful and innovative, and can yield completely new ways of telling stories, the relationship is not without pitfalls, because we speak different languages and employ even more divergent workflows.

So how do we make it work?

That is the overarching question I will spend my time in Cambridge trying to answer.

The MIT Media Lab is an obvious place to explore the dynamics of interdisciplinary collaboration and get a feel for the technologies of tomorrow. The potential is endless, but this class (Technology and Social Change) will probably help rein in the optimism too. Looking at the flip side of the tech that we call upon to save the news business sounds like a healthy thing to do, and something I very much look forward to this semester.

I’m writing to you amid the wreck of my well-intentioned startup…

In 2016, I founded a company as a senior at Stanford University. I had spent the three previous summers doing internships in the music industry. These internships had revealed to me how, behind the scenes, the ‘culture machine’ struggled with entrenched social inequalities. I was also astounded by the lack of advanced technology and wondered what role technology could play. Many close friends of mine at the time were musicians struggling to ‘make it’: despite being talented, it was difficult for them to get booked, and even harder to get fair pay given the nature of the gigs. Seeing the industry from these perspectives inspired me to create an AI-fueled “demand forecasting” technology that I hoped would help the situation. I believed my tech would create opportunities that would bolster the now hardly-existent musical middle class, decrease the overbearing power of entrenched monopolies, and eliminate some of the parasitic roles that reduce the feasibility of a creative career in the present day.

After a year of building out our model using a testing dataset from a large, infamous ticketing company, we stumbled upon gold: our model could forecast the demand for concerts far more accurately than even the most skilled industry employees. While we celebrated this success at the time, in retrospect my mistakes at this precise moment make it more of an embarrassment. To make our model as good as it could be, we needed access to a massive amount of high-quality ticketing data… and yet I had failed to see that the only people who had that kind of data were the very monopolies we were trying to undermine. This ultimately meant that if we wanted to make something that worked, the practical use of what we made would be determined by the company that owned the data.

As a naïve student, I thought at the time that these business partners shared my vision for the tech. Yet this definitely wasn’t the case: it became clear that our potential business partners had no intention of pursuing our vision. Instead, they wanted to use our AI model to more efficiently drive up ticket prices, which I strongly felt hurt customers and music culture at large. I was personally upset by the possibility that I had created something that would be used to hurt musicians and music lovers, damaging the world I identified with most. Disillusioned, I began trying to find an equally viable business partner that wouldn’t use the tech in this problematic way. I was eventually able to find a better home for the team and the model. My startup exited in February of 2018 bound by a non-disclosure agreement, so I cannot say much more than that. Today, our model has been trained on the company’s dataset and is working at a success rate beyond what we had thought possible. …But it still isn’t clear how this tech will ultimately be used. Though our model won’t be used to hurt anyone, it will certainly not perform the positive social role I had envisioned.

Realizing that Money Isn’t Everything

I grew up in a household where money was often the concern. As a child I remember living in an illegal basement apartment where my sister and I shared a bedroom while our mother slept in the living room. For years, my mother remained in a verbally and physically abusive relationship just so we could have another source of income. We rarely talked about plans for the future or career options, but I knew one thing for certain – I refused to be poor or to depend on someone else for money. I saw the way that financial worries negatively affected my mother’s life and promised to never let that happen to myself.

Fast forward to my senior year of college. Somehow I had stumbled into Computer Science as a major and had a job offer making six figures at Microsoft. That was more than double what my mother made at the time. Obviously I took the job. It paid well, had great benefits, and provided a kind of security that no one in my family had previously had.

Working at Microsoft wasn’t as glorious as I imagined it would be, though. While the work was challenging and my coworkers were enjoyable to be around, what I actually did on a day-to-day basis was simply meaningless. In fact, much of the work I did – and even didn’t do – went unnoticed. I realized that if no one noticed what I was doing, then clearly it wasn’t important at all.

About the same time I started doubting my work at Microsoft, I began volunteering for an organization called TEALS. With TEALS, I taught Computer Science at a public high school before heading to work each day. Within a few months, I found that I enjoyed teaching more than my actual job. Thus I decided to quit my job at Microsoft, and take a huge pay cut, to be a Computer Science teacher.

Because I do not have a teaching credential, I could not be employed in a public school and thus took a job at a private high school. Although the work was more fulfilling than my work at Microsoft, it did not allow me to truly give back as I did with TEALS. I knew that I was helping people, but most of the students did not fit into the demographic of those I truly wanted to help – those in situations closer to the one I had as a child.

While teaching, I also learned more about money. Although I was making a lot less than I did at Microsoft, I was still happy and comfortable. In fact, I realized that most of what I had spent my money on was simply stupid – clothes I never wore, expensive meals that weren’t very good, overpriced cocktails, etc. When I thought about what I really enjoyed having and doing, most of it did not cost very much at all. When I worked at Microsoft, I had never even saved any money outside of my company-sponsored 401k. As a teacher, I ended up with extra money each month that went into an investment account. Simply put, I realized that making a lot of money didn’t matter to me. As long as I could support myself and be comfortable, I would be happy.

With this in mind, I decided to leave my job and take a pay cut yet again to attend grad school. I felt that I could, and should, do more with my technological skills to truly help others and enact social change. I don’t yet know how I will do that, but I feel that this course will be a great way for me to start exploring my options.

That time a puzzled engineer wanted to solve hunger

After my fourth spacecraft had launched, I felt guilty. My next mission was to create technology to visit Mars, yet hunger was still a problem on a degrading Earth. Why didn’t we solve hunger instead? Was I being ungrateful or rightfully puzzled? I understand it is a privilege to contribute directly to the space program. And spacecraft number four was on a mission to measure precipitation, which eventually helped predict hurricanes and save lives. I do not know if that is enough of a contribution to humanity. I do know I felt compelled to start a food security organization to obliterate hunger.

I began asking new questions that I need help answering. Let’s assume zero hunger – on any planet – is achieved in fifty years: what are the tools to get there? Which ones should we work on now? What if we already have all the tools we need? If we grow our own vegetables, will grocery stores stop offering mass-produced spinach? Is the power of collective action an effective strategy for food security? What if we lived in a world where Americans, North Koreans, and Venezuelans all had access to clean food? Who benefits? What if food were not an avenue for oppression? If we asked ten different people with ten different philosophies what a perfect world looks like, wouldn’t it include greenery and nutritious food to exercise our molars and satisfy our stomachs? Lastly, am I asking the right questions and focusing on the right problem? After all, I have always been well-fed.

What brought me here?

Hi, I’m Abigail, an MEng student in EECS. I grew up in Texas and Seoul, Korea, which were two quite different experiences.

Education is highly valued and highly competitive in Korea. Back when I was in high school especially, school grades and national exam grades were completely relative, so that only the top 4% of students received an A, the next 7% a B, and so on.

Moreover, schools had a very rigid structure. My school, for example, had regular classes 8am-5pm, special after-school classes 6-9pm, and mandatory study hall in complete silence 9-11pm. I went to a private school, so it was a bit more extreme, but most schools had the same schedule, just off by a couple of hours. (So it’s not too surprising that when I first came to MIT as a freshman, I thought, ‘Wow, MIT is so… chill?’)

I do not agree with a lot of the philosophy behind the Korean educational system, but one upside is the rich abundance of high-quality learning resources readily available—spanning workbooks, online videos, and private tutors. After all, education is a big market, and the high competition applies to content providers as well.

I would like to find a way to utilize those existing networks and resources as more than simple boosters to help students get ahead in the competition. I encountered great resources and lecturers that helped me connect different subjects and develop critical thinking, yet their strengths are overlooked or merely advertised as means to better grades. Perhaps there is a way to shed light on these alternative values and eventually shift the approach to education.

Intro post

Hi! My name is Kathy and I’m a junior at Harvard. I’m studying a mix of CS and ethnic studies, and am interested in the intersections of art and critical studies of race, gender, class, and the like with emerging and existing technologies. I come to this class hoping to learn as much as I can about what the technology and social change space looks like, what work has been done in the field, and what work is possible and important to pursue. Coming into this course, I’ve been thinking about issues like predictive policing (“This is a Story about Nerds and Cops: PredPol and Algorithmic Policing” by Jackie Wang) and the vast wealth disparities driven by the tech industry (Elon Musk and Tesla in Reno, NV). I hope to orient my studies towards gaining the resources to navigate this kind of landscape and work towards undoing some of the very real harm that uncritical uses of technology can cause and have caused.