Sexist, racist, and discriminatory artificial intelligence has a new opponent: the ACLU.
Earlier this month, the 97-year-old nonprofit advocacy organization launched a partnership with AI Now, a New York-based research initiative that studies the social consequences of artificial intelligence.
In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to label black defendants as likely to reoffend, flagging them at almost twice the rate of white defendants (45% versus 24%).
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, can get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.”
https://www.fastcodesign.com/901342...o-civil-liberty-the-aclu-has-a-plan-to-fix-it
https://www.theguardian.com/inequal...ots-how-ai-is-learning-all-our-worst-impulses
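To make the feedback-loop idea from the Guardian piece more concrete, here is a minimal toy simulation in Python. It is purely illustrative and assumes a simplified model I made up for this post (it is not PredPol's actual algorithm): two neighbourhoods have identical true crime rates, but one starts with a few more historical reports, so it gets more patrols, which generate more reports, which attract more patrols.

```python
import random

# Hypothetical toy model of a predictive-policing feedback loop
# (illustration only; not the real PredPol algorithm).
true_crime_rate = [0.10, 0.10]   # identical underlying crime rates
reports = [55, 45]               # neighbourhood 0 starts with slightly more reports
patrols_per_day = 10

random.seed(0)
for day in range(1000):
    # "Predict" hotspots from past report counts and allocate patrols proportionally.
    total = sum(reports)
    patrols = [round(patrols_per_day * r / total) for r in reports]
    # Crime is only recorded where officers are present to observe it.
    for area in range(2):
        for _ in range(patrols[area]):
            if random.random() < true_crime_rate[area]:
                reports[area] += 1   # each new report feeds the next day's prediction

print(reports)  # report counts diverge even though the true rates are equal
```

Running this, the neighbourhood that happened to start with more reports ends up with far more, even though nothing about the underlying crime rates differs; that's the "doing what the math tells us" problem in miniature.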