Georgina Bourke
Just an engineer: The Politics of AI, Kate Crawford
In July, I saw Kate Crawford talk as part of the You and AI lecture series at the Royal Society.
Kate delivered an excellent talk about the ethical challenges that need to be addressed in the field of machine learning and the need for accountability in the way machine learning is used.
A brief summary of my notes is below, but please do watch Kate’s full talk.
“It’s commonplace that AI touches healthcare, criminal justice...what’s different are the authoritarian governments we have now. How is AI amplifying this power?”
Kate began with a story about researchers who developed a model to predict people’s association with criminal gangs. The model was adopted and used in policing. If someone is classified as a gang member, they can be given additional criminal charges or heavier sentences.
The dataset used to train the researchers’ model is known to contain errors: 42 of the subjects in the dataset are recorded as under the age of one. The model that is being actively used to predict whether someone is in a gang has learnt its ‘truth’ from a flawed dataset.
When one of the researchers was challenged about the decision to use that dataset, and about the implications if the resulting predictions are wrong, they replied, “I’m just an engineer”.
This highlighted the theme of the talk: who should be responsible for the decisions a machine learning model makes? Another issue is that bias in machine learning systems is currently invisible and therefore unaccountable.
Kate went on to talk about the Lena test image. The Lena image comes from a 1972 issue of Playboy magazine and was first used to test an early image processing algorithm in 1973. Since then it has been widely used to test other image processing algorithms. Critical decisions about what training data is acceptable to use can continue to influence research decades later.
“The cultural lessons of our past are being used to train our future” 
In terms of solutions for addressing bias in machine learning, Kate talked about techniques like “scrubbing to neutral”. But “whose idea of neutrality” is it? Removing bias from your systems is effectively deciding the version of society you want to see. These are political decisions, but at the moment they’re not treated as such.
Other solutions discussed included widening datasets so that they are more diverse. The issue there is that it means collecting more data about already marginalised groups.
“Equal surveillance is not equality” 
The talk ended on a positive note about the power we have to effect change.
René Carmille was a French engineer who worked as a punch card computer expert in Lyon in 1944. At the time, IBM worked with the Nazis, enabling them to use punch card computer systems to update census data in order to find and round up Jewish citizens. Carmille hacked the machines, reprogramming them so that they would never punch information from Column 11 [which indicated religion] onto the census cards, saving thousands of lives.
“We’re plagued with technical inevitability. We need to start with the world we want, then ask how can technology support this?”