Regulating Algorithmic Discrimination

We are starting to rely on algorithms to decide for us: hiring the next employee, detecting cancer, enrolling people in insurance, and granting parole. We do not know, however, how algorithms make these decisions, and many of them are clearly biased. ProPublica, a nonprofit investigative newsroom, revealed that software currently used in our judicial system to predict whether offenders will commit subsequent crimes was racially biased, giving many African Americans a higher risk score even though they did not go on to reoffend. Similarly, Xiaolin Wu and Xi Zhang of Shanghai Jiao Tong University published a study in which they taught a neural network to identify criminals from their photos with a reported 90 percent accuracy rate. Though the dataset was racially homogeneous, there were questions about whether the machine was simply picking up on the white collars that non-criminals were more likely to wear.
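To make the kind of disparity ProPublica described concrete: at its core it is a gap in false positive rates between groups, that is, how often people who did not go on to reoffend were nevertheless scored as high risk. A minimal, purely illustrative audit of that gap might look like the sketch below (the data here is made up, not the COMPAS dataset):

```python
# Illustrative only: a minimal audit of the kind of disparity ProPublica reported,
# using made-up predictions and outcomes. A "false positive" here is a person
# flagged as high risk who did not go on to reoffend.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples."""
    fp = defaultdict(int)   # flagged high risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical toy data: (group, predicted_high_risk, reoffended)
toy = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]
print(false_positive_rate_by_group(toy))  # -> {'A': 0.666..., 'B': 0.333...}
```

Even this toy audit makes the point: with identical overall behavior, one group can be wrongly flagged twice as often as another, which is invisible unless someone checks per-group rates.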

These algorithms affect the entire world. While some foreign governments have made regulatory efforts, such as the GDPR issued by the European Union and Germany’s ethics rules for autonomous vehicles (which specifically ban algorithmic preferences for the lives of some people over others), much of the investigative work is led by academia and by non-profit, non-governmental organizations dedicated to the cause, such as the EFF, the ACLU, and Algorithm Watch.

Training on algorithmic discrimination


Five values to keep in mind when thinking about algorithmic discrimination:

  1. Transparency: Many current problems come from the fact that the process (dataset, models, usage; who, what, why, …) is kept private. Establishing some degree of transparency would alleviate these structural problems.
  2. Explainability: Many systems are treated as black-box magic, and we are still unable to explain most of their results. Explainability is essential to validating their use.
  3. Questioning: Systematic errors often persist simply because many people readily accept algorithmic results as “correct” without questioning or investigating them.
  4. Non-abuse: Ethical acquisition of data and fair presentation and use of results all require not abusing or exploiting the individuals who serve as data sources.
  5. Vulnerability: It is important to remember that any solution implemented at this point will be far from complete.


While “a convening” is helpful in bringing like-minded people into a conversation or collaboration, and is perhaps attuned to the democratic spirit of voluntary participation, it should be reserved for specific issues and clear purposes. Some caveats if it were applied to topics such as algorithmic discrimination:

  1. Lack of diversity: These gatherings tend to attract people from similar academic standings. The problem, however, affects everyone.
  2. Preaching to the choir: Many of these gatherings have a very overt political and economic stance, and participants, coming from similar backgrounds as mentioned above, often hold very similar sets of values. The event could easily become an echo chamber, missing the critical conversation that comes from conflicting values.
  3. Idealism: Undebated values could easily produce unrealistic solutions.
  4. Separation between implementers and the affected: The people most severely affected by the problem would not make up the majority of these gatherings, so the resulting solutions risk not reflecting the needs of the people they intend to help.
  5. Short-term solutions (especially hackathons): Participants are pushed to imagine solutions implementable within the timeframe of the gathering, but many of these problems require a much more complex approach over a much longer timescale.

It may be a completely different category of approach, but I would argue for a more fundamental, mandatory training system instead of a voluntary convening, something similar to the anti-discrimination or sexual-abuse-prevention trainings required at workplaces, schools, etc.

queer rights, algorithmic discrimination, waste reduction

Queer Rights

While queer rights in the US have not necessarily advanced to where most want them to be, my primary focus here is on queer rights in Korea, where I spent a significant number of years. Korea is still very conservative on these issues: discrimination based on sexual orientation or gender identity is not outlawed (in fact, the military bans homosexuality and conducted purges as recently as 2017), and close to 60% of the population is against same-sex marriage.

Law

Implementation of fundamental anti-discrimination laws (which would shape the norm) could be an eventual goal, but it is a difficult one that many rights organizations have been working towards for over a decade. Another less-discussed but possibly helpful step would be a more rigorous separation of church and state, because far-right Christians are often the strongest opponents of anti-discrimination legislation. In the past, their anti-homosexuality arguments based on the Bible have been taken seriously by the government in withdrawing proposed anti-discrimination laws.

Norm

One of the difficulties in establishing queer rights is low visibility. Given that acceptance from the general public is as important a factor as establishing anti-discrimination laws (though the two are correlated), it is difficult to advocate for a group that does not seem to exist. Increasing public visibility—whether it is introducing more media content or more individuals (especially public figures) coming out of the closet—would help significantly.

Code

Online community services have been a major driving force in organizing support groups and strategic rights movements. Changes I would like to see implemented include differentiated use of the words “sex” and “gender” in official documents, a wider variety of sexual-orientation and gender-identity options on social media websites, and many more.

Market

More companies could be invited to advertise at Pride. Companies would have a monetary incentive to participate, even if somewhat incidentally, and to advocate for queer people as a distinct market. There is precedent for a more organic emergence of queer- (especially gay-) targeted marketing in the UK, where it was popularly referred to as pink economics.


Algorithmic Discrimination

Algorithms are supplanting humans in making decisions. However, these systems have often been found to perpetuate bias, especially socioeconomic discrimination, while forgoing explanation or critical interpretation.

Law & Market

The vast majority of these algorithms are developed and implemented in commercial settings, so the market is of great significance. Could law regulate the market by requiring publication of a model’s training/testing data and periodic performance reports? To critique this suggestion: there are so many models out there, and such verification is a nontrivial amount of work with little incentive behind it, that even if the data were released, not much might be done with it. On the other hand, data might start coming from more credible sources.

Restrict the use of algorithms in hiring processes unless they are proven to be fair enough?
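One existing, concrete notion of “fair enough” in hiring is the four-fifths rule from US employment guidelines (the EEOC’s adverse-impact test): the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch of such a check, with hypothetical group names and numbers, could look like this:

```python
# A sketch of an adverse-impact check based on the four-fifths rule:
# each group's selection rate should be at least 80% of the highest group's rate.
# Group names and counts below are hypothetical.

def adverse_impact(selected, applicants, threshold=0.8):
    """selected/applicants: dicts mapping group -> counts. Returns groups failing the rule."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

selected   = {"group_x": 45, "group_y": 20}
applicants = {"group_x": 100, "group_y": 80}
print(adverse_impact(selected, applicants))
# group_y's selection rate (0.25) is ~56% of group_x's (0.45), so it is flagged.
```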

Norm

The current public opinion—machine learning is magic!—or the lack of any considered opinion at all may be the main source of trouble. Publicly recognizing that 1) “correct” results are neither necessarily correct nor justifiable, especially when the process cannot be explained, and 2) these systems are not destined to solve all of our problems would help proliferate a culture of critical implementation and usage.

Code

Ideally, code would become more explainable in the future. In the meanwhile, since a lack of formal analysis is probably another reason for continued biased performance, developing guidelines and infrastructure to gauge performance at each step would help; a sketch of such a check follows below. Regarding transparency, the development of open-source models could help.
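As a sketch of what gauging performance at each step could mean in practice, one could log a per-group metric at every evaluation checkpoint and fail loudly when the gap between groups exceeds a chosen tolerance. The function, threshold, and numbers below are hypothetical, not an established standard:

```python
# Hypothetical per-step fairness gauge: compare a metric across groups at each
# evaluation checkpoint and raise if the gap exceeds a chosen tolerance.

def check_group_gap(metric_by_group, max_gap=0.05, step=None):
    """metric_by_group: dict mapping group -> metric (e.g., accuracy) at this step."""
    gap = max(metric_by_group.values()) - min(metric_by_group.values())
    if gap > max_gap:
        raise ValueError(f"Step {step}: group gap {gap:.3f} exceeds tolerance {max_gap}")
    return gap

# Example use inside a (hypothetical) training/evaluation loop:
for step, metrics in enumerate([{"a": 0.91, "b": 0.90}, {"a": 0.93, "b": 0.84}]):
    check_group_gap(metrics, max_gap=0.05, step=step)  # raises at step 1 (gap 0.09)
```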


Waste Reduction

Americans produce 7 pounds of trash per day on average, and nearly 70% goes to landfill.

Law

There is so much that law could do: mandate composting; tax waste by volume or weight; restrict (by taxing or banning) excessive packaging, single-use materials, and non-recyclable materials; and give incentives for developing biodegradable materials.

Norm & Market

There are many unsustainable norms in the US compared to other similarly developed countries: consumerism; a short buy-then-throw-out cycle exacerbated by the tendency to buy cheaper, lower-quality products; a lack of recycling principles; overuse of single-use, disposable products; and excessive, non-recyclable packaging.

Code

Increasing the number of waste-to-energy facilities and developing (and competitively pricing) biodegradable materials would help.

Man of My Words


While instances of overt, aggressive sexism have decreased over the past half-century, sexism persists in more embedded forms. Internalized sexism is one of the more subtle forms of contemporary sexism, not extensively examined but surprisingly influential, as some studies illustrate.

Internalized sexism is also prevalent in the perception of voice. A persuasive voice for public speaking is often described as booming—loud, deep, and resonant—a quality more often physically tied to male voices than to female ones. Female voices—quiet, less resonant, “weak”—are thus often granted less authority. Female voices also lack representation, with women accounting for only 30% of speaking characters in top-grossing films, for example. Many women subconsciously associate female voices, including their own, with weakness or a lack of authority, and ultimately connect themselves with those qualities.

Man of My Words is a wearable self-feedback voice changer for women that aims to disrupt this self-association between voice and hierarchy. When a wearer speaks into the device, she hears her own voice in real time, shifted into a male register, and thereby has the perception of speaking in a male voice. Through this experience, women can hopefully attribute to themselves the authority and power they usually associate only with men.
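The core signal-processing step behind such a device is a downward pitch shift of the wearer’s voice. The following is only a rough offline sketch of that effect using librosa and soundfile (assumed dependencies, with hypothetical file names); the actual wearable would need a low-latency streaming implementation rather than this batch approximation:

```python
# A rough offline sketch of the core effect: shift a recorded voice down a few
# semitones toward a typical male register. The real device would do this with
# low-latency streaming hardware; this only approximates the idea on a file.
import librosa
import soundfile as sf

def shift_to_lower_register(in_path, out_path, semitones=-4):
    y, sr = librosa.load(in_path, sr=None, mono=True)               # load the recording
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)  # negative = lower pitch
    sf.write(out_path, shifted, sr)

# Hypothetical file names.
shift_to_lower_register("my_voice.wav", "my_voice_lowered.wav", semitones=-4)
```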

Questions, questions

1. Where am I? How far is my (unintended) reach?


2. I spent most of my queer childhood in a homophobic, misogynistic country. As I lingered on the periphery of online queer communities for years, I witnessed nameless, countless friends get outed, quit school (often a “recommended” leave), run away from home, survive on part-time jobs, quit because of abuse, take other part-time jobs, repeat, and, eventually, disappear. Fifteen-year-old Hane thought: if I became an Educated Adult, maybe I would gain enough respect in my field to not get immediately fired if I were discovered.

2.1. I am academically the 99th percentile outcome of the entire community, if not an outlier. There will be socioeconomic consequences.

2.1.1. It was obvious by high school. I wanted to become a public role model and improve the reputation of queer people.

2.1.2. I want to give myself the responsibility of improving queer lives in that country.

2.1.2.1. Can this be done with technology?

2.2. What is the responsibility that comes with academic privilege? Is there any?

2.2.1. Is this elitist?

2.3. It is not without effort that I live here as a small, colored, assigned-female-at-birth, queer person, but life has dramatically improved since I have moved to Boston. I may be a token minority (minus religion), but at least I am a model one?

2.3.1. Repeat 2.1.2.


3. Engineering is my profession. It also happened that I went to a pretty good engineering school.

3.1. I am a master’s student in the Media Lab, Opera of the Future. Before that, I was an electrical engineering undergrad at MIT.

3.2. I inevitably embed my biases in the systems I develop, and my work may receive more credit than it warrants in itself.

3.2.1. This is privilege.

3.2.2. What is the responsibility that comes along with technological-academic privilege? Explanations? Clarity? Dedication? Infallibility? Good Morals?

3.2.3. Am I elitist?

3.3. The current development of machine learning (and so-called AI) algorithms bothers me. They make decisions based on biases while being taken for fair science, and they often disadvantage the already disadvantaged. The algorithms themselves are more often than not unexplainable and uninterpretable, making matters worse.

3.3.1. What can be done?