Values, and a hackathon on housing

Values

It was a wonderful exercise to name some values that I want to bring to my work. My current work falls far short of these values, but stating them can help me navigate towards them. I draw these values from two streams: the stated goals and values of the Christian faith (acknowledging that many Christians have acted in opposition to those values), and aspects of many organizations and communities that I admire.

 

  • Incarnate. Don’t solve other people’s problems; get close enough that their problems become yours. Be vulnerable. “Nothing about us without us.”
  • Seek just systems. Justice is more than “doing good”; it requires identifying oppression and fighting it. Identify how present reality emerges both from individual decisions (of powerful and marginalized people alike) and from the overlapping systems of culture, law, market, beliefs, habits, networks, environment, code, processes, etc. Think about how any proposed action echoes in all these systems.
  • Be humble and kind. Don’t assume. Listen. Admit bias. Identify what’s broken inside first. Don’t boast. Celebrate others. Be slow to diagnose and “fix”. Forgive. Treat better than you’re treated. Be slow to anger and blame.
  • Be transparent. Be honest. Share data and code.
  • Be grounded. Keep motivations connected with reality. Use evidence (both data and story) to make decisions. Ask questions.
  • Serve to empower; seek flourishing, not dependence. Act in ways that increase the autonomy of the least empowered. Humanize.

 

A Convening on Housing

Context: I’ve walked with friends through transitions both into and out of homelessness. I, the problem-solver, often wanted to find even a small set of problems to fix or blame, but the situations have been complex. In the process, though, I came to see affordability and access as major practical barriers. In one friend’s attempt to return to housing after an eviction, he found that rents even in the “affordable” areas where he was looking had skyrocketed. He encountered too many predators trying to scam needy and vulnerable people. And when he found options that were somehow both affordable and in reasonable condition, the landlords required a perfect rent payment history. Basically, you need to already be housed in order to get housed. Meanwhile, “luxury” aspirational housing is going up all around the region.

So: let’s do a hackathon / policy summit on housing. Specific goal: loosen the connection between money/privilege and the right to housing that works. But since we’re incarnational, let’s hold it not in elite spaces like the Media Lab, but in local community spaces like schools, churches, and homes in communities like Dorchester and East Boston that are facing cross-pressures of history and current investment. Since we’re kind and slow to blame, and seek to do nothing about people without them, let’s invite not only the policymakers, architects, and planners, or the housed and the homeless, but even the people we consider to be the “bad guys” — landlords, luxury condo developers, even Airbnb hosts (whom some blame for rising rents) — and find ways to not let them feel like the bad guys. Since we seek just systems, let’s invite everyone to share ways that current systems work for them and ways that those systems hurt them. Since we want to be grounded, let’s center discussions around empirical numbers and data, but let no data point go without a human story that either confirms or questions it. Since we seek to empower, let’s find ways to invite the people who come to the table with the least power (e.g., the homeless, tenants at risk of eviction) to take ownership — but since we want to be kind, we also need to challenge them not simply to blame those in power.

The typical hackathon model can be very prideful: “we’re gonna solve this massive problem in a weekend.” Instead, what if the goal were to help us participants learn more about what makes the problem hard, and to build relationships that can guide and empower our future, more deliberate actions? Making can still form a core element, both as a way to explore the challenges of the problem at hand (e.g., let’s make an interactive game that illustrates what’s hard about housing policy) and as a way of trying out how each participant’s skills and background might contribute.

“Fair” is no substitute for “Just”

“Algorithmic fairness” has become a hot topic, but it’s not really solving the right problem. I’ve only tangentially worked in this area, so apologies in advance for my ignorance.

Recent accusations that algorithms exhibit racial and other biases have led to widespread efforts to make machine learning (and other sociotechnical systems) more “Fair, Accountable, and Transparent”. But “fair” does not imply “just”. If we were to start history over from a blank slate, some of the fairness criteria that have been developed might help avoid some kinds of discrimination. But we must deal with the real history of past unfairness.

People and organizations in power are increasingly using algorithms powered by machine learning to exert their influence. The math inside the algorithms may be morally neutral, but whoever owns the data and whoever defines the objective that the algorithm attempts to maximize have power. Fundamentally, the objective of most algorithms is to score best at predicting present data given past data. But if past data bear the scars of past oppression (e.g., redlining), an algorithm can score highly by predicting that those scars will persist (e.g., that a person in a historically redlined area will not repay a home loan). Algorithms perpetuate the biased status quo.
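To make this concrete, here is a minimal sketch (in Python, with entirely invented data) of how an accuracy-maximizing model learns a historical scar. Every variable and number here is hypothetical, chosen only to illustrate the mechanism:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicant pool: `redlined` marks residence in a formerly
# redlined neighborhood; true repayment ability is drawn identically
# for everyone, regardless of neighborhood.
redlined = rng.integers(0, 2, size=n)
ability = rng.normal(size=n)

# But the *recorded* outcomes carry the scar: decades of disinvestment
# and predatory lending depress observed repayment in redlined areas.
repaid = (ability - 1.5 * redlined + rng.normal(0, 0.5, size=n) > 0).astype(int)

features = np.column_stack([ability, redlined])
model = LogisticRegression().fit(features, repaid)

# The model earns its accuracy partly through a large negative weight
# on `redlined`: it scores well by predicting that the scar persists.
print(model.coef_)
```

Nothing in the optimization is malicious; the model is simply rewarded for reproducing the world as the data recorded it.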

Efforts towards fairness in machine learning have attempted to define alternative objectives and measures that, for example, strive towards “equal opportunity” in loan prediction (a code sketch of this criterion follows the list below). But the full scope of the problem is very broad:

  • Different treatment at the hands of intelligent systems (loan approval etc)
  • Different treatment by people using intelligent systems (e.g., predictive policing, risk assessment tools)
  • Different usefulness of intelligent systems to minorities (e.g., speech recognition not working as well for minorities, or photo libraries erroneously grouping all my Native American friends’ faces together)
  • Different influence on relationships (e.g., the “Algorithmic glass ceiling in social networks”)
  • Different self-perceptions (e.g., when I search for my dream profession, the people I see don’t look like me)

… just to name a few. The problem directly affects those who are minorities (and thus not well represented in some data), historically oppressed (and thus vulnerable to predictive injustice), and disenfranchised (and thus unable to voice how the decisions of powerful systems affect them). But it also affects majority groups: for example, biased resume classifiers may keep minorities out of some workplaces, and lack of diversity has documented negative effects on creativity and team productivity. Organizations will also face accusations of injustice through the legal system and through social and political activism.
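As promised above, here is a minimal sketch of one such criterion, the “equal opportunity” measure of Hardt, Price, and Srebro (2016). The variable names are illustrative, not from any particular library:

```python
import numpy as np

def true_positive_rate(y_true, y_pred, in_group):
    """Approval rate among the applicants in one group who truly
    would have repaid (the group's "true positive rate")."""
    qualified = in_group & (y_true == 1)
    return y_pred[qualified].mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true positive rates between two groups.
    "Equal opportunity" asks for this gap to be zero: among people
    who would in fact repay, both groups get approved equally often."""
    return (true_positive_rate(y_true, y_pred, group == 0)
            - true_positive_rate(y_true, y_pred, group == 1))
```

Note what this measure does not check: whether the labels in y_true themselves encode past injustice. A model can satisfy equal opportunity exactly while treating a scarred historical record as ground truth, which is precisely the gap between “fair” and “just”.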

Those who hold the data and design the algorithms are, in one sense, in the best position to address the injustice done or furthered by their algorithms. Most large tech companies now have teams working on “fairness”, and Google and Microsoft publish prolifically in this area. But can criteria and measures developed by the current tech elite really ensure equitable (not just equal) treatment for all? And can they even be “unfair” in ways that seek to heal past injustice? Solutions will also require government involvement, both to drive policy about what sorts of systems may interact with citizens in what ways, and to drive procurement of the tools that government itself will use. Ensuring that all this actually does good in practice, though, requires researchers, journalists, and others to act as auditors and watchdogs.

Predictable consequences of work towards algorithmic fairness include:

  • Increase in attempts (internal and external) to hold organizations accountable for their algorithmic decision-making. Along with this will come an increase in attempts to access sensitive data in order to be able to audit these decisions, which often occupies an ethical and legal grey zone — so law will develop to govern these practices.
  • For those in sizeable minorities, this increase in accountability will lead to a reduction in obvious harm.
  • For those who are in groups that are not clearly quantifiable in commonly collected demographic data, the effect will be more mixed. Since detecting and documenting harm typically requires identifying who is harmed, groups who are harder to identify will be harder to protect. Some approaches may reduce harm for these groups as well (e.g., Distributionally Robust Optimization; see the sketch after this list), but many will not, especially if the algorithm is able to discern enough of a difference to predict differently for them.
  • Algorithm design choices, “interpretable” visualizations of machine learning algorithms, and aggregate “fairness” measures will be offered as evidence in legal discrimination cases. This evidence will be misunderstood.
  • Technology suppliers who are able to document the “fairness” of their technologies will be chosen as lower-risk options by government and private procurement processes. Since established suppliers with lots of data and resources will be better able to make this documentation, and thus gain access to yet more data, the “rich will get richer”.
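For concreteness, here is a minimal sketch of the worst-group objective behind group Distributionally Robust Optimization, referenced in the list above. This simplest variant assumes group labels are available at training time; the appeal for hard-to-identify groups comes from variants that instead bound the loss over all sufficiently large subpopulations without needing labels. The code is an illustration of the idea, not any particular paper’s implementation:

```python
import numpy as np

def worst_group_loss(per_example_loss, group_ids, n_groups):
    """Average loss of the worst-off group. Minimizing this, rather
    than the overall average loss, keeps a small group's errors from
    being drowned out by a large group's successes."""
    group_means = np.array([
        per_example_loss[group_ids == g].mean() for g in range(n_groups)
    ])
    return group_means.max()
```

A model trained against this objective is only as good as its worst group, which changes the incentives: ignoring a small group is no longer cheap.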

 

I could have been working on that

I’m a grad student in Computer Science, studying how people use intelligent systems like predictive typing, and how using those systems shapes the people who use them. But as I have been approaching the end of my PhD, something has been weighing on me: I still don’t know how to use all I’ve been learning to have a positive impact in the world.

I often felt a certain discontent when I reflected on my research. Somehow, despite finding an advisor who seemed to care about doing good in the world, my work was solving first-world problems at best (in the rare moments my work was successful). I would pray something like, “God, why can’t I be doing work that actually helps people?!” — with a mix of frustration at my poor choice of projects, envy of those who were doing that kind of work, and hope that maybe, somehow, I might find a way to do something “good”…

I tried various things. I helped organize a hackathon for Christians in tech to support the work of various nonprofits. The hackathon got a lot of interest, which I tried to leverage in order to build a community of Boston-area technologists and nonprofits that could solve problems together. I’ve befriended several homeless people, and tried to support the various organizations that try to help them. I tried to network with other tech-for-good and international development people in the area. But nothing went anywhere.

Earlier this year, as I read the ancient prophets like Isaiah, Amos, and Malachi calling out injustice and oppression in their days, I was struck by how apt their words still are today. At the same time, I was also reading Virginia Eubanks’ book Automating Inequality. As she denounced three different technological systems that oppressed the poor, I realized that I could see myself working on any of them while waking up each morning thinking I was helping people. Perhaps projects that sell themselves as being for social good have an especially high risk of actually doing harm.

Looking back, perhaps my prayers to find work with social impact weren’t getting answered because before I could work on using tech for good, I needed to grapple with how it could be used for harm, especially unintentionally. So even though many wise people would say I shouldn’t be taking another class at this point in my career, that’s why I’m here.