Just Machine Learning in an Unjust World?
Artificial intelligence systems are meant to transcend some of the imperfections of the human mind, being grounded in mathematics and operating on data rather than emotion or subjective perception. But as more algorithms are woven into daily life, the limits of their objectivity are being revealed. Algorithms can amplify the biases already present in society.
For example, a ProPublica investigation last year found that proprietary software used to predict future criminals was biased against black people. This talk will illustrate these bias-related problems and show that part of the difficulty in creating fair algorithms is the concept of fairness itself.
What’s considered fair and precise in computer science may not translate well to justice in the real world. One way to address this problem is to put computer scientists into conversation with ethicists, philosophers, and others from fields that have historically examined justice and fairness.
Speaker: Tina Eliassi-Rad