Can artificial intelligence develop without bias, without a tendency to favour one set of values over another? An Op-Ed piece in the New York Times makes its view clear enough in its choice of title: “Artificial Intelligence’s White Guy Problem”. From face-recognition software that mistook black people for gorillas to predictive software built on assumptions about recidivism that would provoke outrage if articulated by human agents, these egregious mistakes have at least done us all a service: they alert us to the fact that algorithms are not by nature free of moral considerations. They are the distinct outputs of the people who craft them, reflecting the choices and betraying the values of their creators.
Bias, however unintentional, proceeds from privileged perspectives, and it reaches beyond colour and race. The Times Op-Ed cites the gender discrimination revealed in last year’s academic study of Google job advertisements, which found that men were more likely than women to be shown adverts for the more highly paid jobs. Elsewhere, the Financial Times reported on Facebook’s alleged bias in favouring liberal over conservative views, while a story in Salon asked whether “Google results could change an election.”
Work is being done on what a thoughtful blog on the London School of Economics website refers to as “algorithmic fairness – a systematic way to formalise hidden assumptions about fairness and justice so that we can evaluate an algorithm for how well it complies with these assumptions.” But vigilant circumspection is still required: humanity must never leave it to algorithms alone to protect human values. To the (mostly) wealthy, white, older, male population driving AI’s development, spinning the wheels of privilege, humanity’s existential risks may seem to lie some way off in the future. For the less privileged, the reality is that the impacts of implicit AI bias are being felt now.
Lively Stuff from Planet BAM!
- AI should produce good penalty takers: humans will continue to miss
Timely article by the American Council on Science and Health examines the brain’s balancing act between an “operating” process, which gets squeezed by pressure in the moment, and a parallel “monitoring” process that gives rise to “ironic error”. If it can happen to Ronaldo . . .
- Brain’s effect on dieting: it protects the “defensible zone”
Our brains operate in a comfort zone created by thousands of years of evolution, our childhood programming, and the breathless inducements of Big Food. What chance do diets alone have?
- Genes do it, even developmental dormant-since-birth genes do it
Science Magazine reports on research into genes that remain active for days after an animal’s death