Regulating Algorithmic Discrimination

We are starting to rely on algorithms to make decisions for us: who to hire, whether a scan shows cancer, who qualifies for insurance, and who is granted parole. We do not know, however, how these algorithms reach their decisions, and many of them show clear bias. ProPublica, a nonprofit investigative newsroom, revealed that risk-assessment software used in the U.S. judicial system to predict whether offenders would commit further crimes was racially biased, assigning many African Americans high risk scores even though they did not go on to reoffend. Similarly, Xiaolin Wu and Xi Zhang of Shanghai Jiao Tong University published a study in which they trained a neural network to identify criminals from facial photos with roughly 90 percent accuracy. Although the dataset was racially homogeneous, questions were raised about whether the network was simply picking up on the white collars that non-criminals were more likely to be wearing.
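
The bias ProPublica documented can be stated precisely: among people who did not reoffend, Black defendants were far more likely to have been labeled high risk than white defendants. Here is a minimal sketch of that kind of false-positive-rate comparison; the records, group labels, and column names below are hypothetical stand-ins, not ProPublica's actual data.

```python
# A minimal sketch of a disparate-impact audit like ProPublica's analysis
# of recidivism risk scores. All data here is hypothetical; a real audit
# would use published records of defendants, scores, and outcomes.

import pandas as pd

# Hypothetical records: demographic group, whether the model labeled the
# person high risk, and whether they actually reoffended within two years.
records = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   0,   0,   1,   0,   0,   0],
    "reoffended": [0,   1,   0,   1,   0,   1,   0,   0],
})

# False positive rate per group: among people who did NOT reoffend, what
# fraction did the model still label high risk? A large gap between the
# groups is the kind of disparity ProPublica reported.
non_reoffenders = records[records["reoffended"] == 0]
fpr_by_group = non_reoffenders.groupby("group")["high_risk"].mean()
print(fpr_by_group)
```

A model can be well calibrated overall and still fail this check, which is one reason aggregate accuracy numbers alone say little about fairness.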

These algorithms affect people worldwide. While some governments have made regulatory efforts, such as the European Union's GDPR and Germany's ethics rules for autonomous vehicles (which specifically prohibit algorithms from valuing the lives of certain people over others), much of the investigative and oversight work is led by academia and by nonprofit, non-governmental organizations dedicated to the cause, such as the EFF, the ACLU, and AlgorithmWatch.
