
Sexist Racist Bots


Sexist Racist Bots – and now we have it: AI is intrinsically prejudiced. But why the stunned media reaction?

AI isn’t some fairy godmother, says study co-author Joanna Bryson, a computer scientist at the University of Bath and Princeton University.

“AI is just an extension of our existing culture.”

Of course it is, because the data and images used to train AIs reflect both current and past prejudice.

Big data sets meant to avoid gender and racial prejudice are in fact biased: the algorithms that establish word meanings hoover up text written by humans, and adopt those humans’ stereotypes.

University of Virginia computer science professor Vicente Ordóñez and his team tested two of the biggest picture and data collections used in training AIs, including one supported by Facebook and Microsoft, and found colossal sexism.

Image recognition resulted in this type of jolly equation: kitchen = woman, shopping = woman, washing = woman, even items of cutlery = woman. Whereas man = sport, man = hunting. Show the algorithm a picture of a man in a kitchen and it labelled him a woman.

Princeton University’s work on algorithmic word association was equally disturbing. Seemingly white names were equated with pleasant words, and seemingly black ones with unpleasant words.

It has all the fun of the old Janet and John readers – Janet helps mummy in the kitchen, John helps daddy mend the car.  Or let’s go beyond to Enid Blyton where the black-faced golliwogs were always the bad guys.

And blatantly sexist or racist embeddings bring these biases into real-world systems. If a word embedding produces the equation “man is to computer programmer as woman is to homemaker”, don’t trust a computer to sort applicants’ CVs.
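The “man is to programmer as woman is to…” equation is literally vector arithmetic inside the embedding. Here is a minimal sketch of how that analogy query works, using invented toy vectors (the words and numbers below are illustrative assumptions, not the actual embeddings from any study):

```python
import numpy as np

# Toy 3-d "embeddings" with an artificial gender axis on dimension 0.
# These vectors are invented for illustration, not taken from a real model.
vecs = {
    "man":        np.array([ 1.0, 0.0, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.0]),
    "programmer": np.array([ 0.9, 1.0, 0.0]),  # skewed toward "man"
    "homemaker":  np.array([-0.9, 1.0, 0.0]),  # skewed toward "woman"
    "cat":        np.array([ 0.0, 0.0, 1.0]),  # neutral filler word
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "man is to programmer as woman is to ...?"
query = vecs["programmer"] - vecs["man"] + vecs["woman"]
candidates = [w for w in vecs if w not in ("man", "woman", "programmer")]
answer = max(candidates, key=lambda w: cosine(query, vecs[w]))
print(answer)  # the biased toy geometry makes this "homemaker"
```

Because “programmer” sits on the “man” side of the gender axis in this toy space, subtracting “man” and adding “woman” lands nearest to “homemaker” – the same mechanism, at real scale, that produces the biased analogies the researchers found.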

If we thought algorithms would give men and women an equal chance at work opportunities, forget it when they decide that father is to doctor as mother is to nurse.

Tolga Bolukbasi, a Boston University computer scientist, sums it up when considering how prejudiced embeddings might be used: “when sorting résumés or loan applications, say. For example, if a computer searching résumés for computer programmers associates ‘programmer’ with men, men’s résumés will pop to the top.”

And if names like Brett and Allison equate with positive words like love and laughter, and names like Alonzo and Shaniqua with negative words like failure, the playing field could not be less level.
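That name–word gap can be measured the way the Princeton team’s association test does: compare a name’s average similarity to pleasant words against its average similarity to unpleasant words. A minimal sketch with invented toy vectors (the names, words, and numbers are assumptions for illustration, not the study’s data):

```python
import numpy as np

# Invented 2-d vectors: dimension 0 is an artificial "sentiment" axis.
vecs = {
    "love":     np.array([ 1.0,  0.2]),
    "laughter": np.array([ 1.0, -0.2]),
    "failure":  np.array([-1.0,  0.0]),
    "Brett":    np.array([ 0.8,  0.1]),
    "Alonzo":   np.array([-0.6,  0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant."""
    pos = np.mean([cosine(vecs[word], vecs[p]) for p in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[u]) for u in unpleasant])
    return pos - neg

pleasant, unpleasant = ["love", "laughter"], ["failure"]
print(association("Brett", pleasant, unpleasant))   # positive in this toy setup
print(association("Alonzo", pleasant, unpleasant))  # negative in this toy setup
```

A positive score means the name leans toward the pleasant words, a negative score toward the unpleasant ones – and a consistent gap between groups of names is exactly the tilted playing field described above.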

Joanna Bryson suggests that, in deciding how or whether to act on such biases when hiring programmers, you could decide to set gender quotas.

It all highlights, she says, “that it is important how you choose your words. This is actually a vindication of political correctness and affirmative action.”

And Arvind Narayanan, assistant professor of computer science and Center for Information Technology Policy member at Princeton, adds: “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and might be trying to move away from.

“The biases that we studied are easy to overlook when designers are creating systems. We should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

And now folks if you’ll excuse me, I’m just off back to the kitchen!

DD





Dori DeLuca



