The demons in AI’s data: how to make AI less biased

By Carl Smith, illustrated by Ing Lee

Reid Blackman says very few organisations take a holistic and forward-thinking approach to AI bias.

“The methods they use for identifying bias are insufficient,” says Reid Blackman.

“Their bias-mitigation strategies and tactics are not sufficient.”

He says most organisations treat it as a purely technical problem.

Dr Wiebke Hutiri says teams sometimes need new skillsets to take a socio-technical approach.

“The more technology we have, the more we need to have social sciences and humanities to understand what to do with it,” she says.

Dr Arpita Biswas says greater diversity among technology development teams can help too.

But she says they have to be meaningfully involved.

“Some of the companies claim that ‘we have the most diverse set of people on the team’, so it's like ‘most fair’,” she says.

“I think that's definitely a starting point, but it's not the only thing.”

Education and awareness are important for organisations using AI too.

“Some companies just don’t realise they are at risk,” says Reid Blackman.

“They have an unwarranted degree of confidence in the data scientists’ ability to solve these problems.”

Some companies are drafting their own checklists or blueprints to try to reduce bias.

But Professor Aimee van Wynsberghe says this isn’t enough.

“If companies are only meant to have ‘ethics principles’ or to self-regulate, they’re not going to,” she says.

“There should be like a fairness auditing process in every automated decision,” says Dr Biswas.
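To give a sense of what one check in such an audit might look like, here is a minimal sketch in Python: it compares a system’s approval rates across demographic groups, one common fairness test. The decisions, group labels and tolerance below are invented for illustration, not taken from any real audit.

```python
# A minimal, illustrative fairness check: compare a system's
# positive-decision rates across demographic groups.
# The data, group names and 0.1 tolerance are made up for this sketch.

decisions = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]                    # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

def approval_rate(decisions, groups, group):
    """Share of people in `group` who received a positive decision."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

rates = {g: approval_rate(decisions, groups, g) for g in sorted(set(groups))}
gap = max(rates.values()) - min(rates.values())                # gap between best- and worst-treated group

print("Approval rates by group:", rates)
print(f"Gap between groups: {gap:.2f}")
if gap > 0.1:                                                  # tolerance chosen only for illustration
    print("Flag for review: decisions differ noticeably between groups.")
```

A real audit would go much further, looking at many groups, many fairness definitions and the data behind the decisions, but the basic idea is the same: measure how outcomes differ, and flag the gaps for human review.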

The EU’s new AI Act lays out core principles to make sure AI is developed responsibly, with citizens’ interests in mind.

It applies more oversight to riskier uses and aims to make AI “safe, transparent, traceable, non-discriminatory and environmentally friendly”.

Canada, Brazil, China, parts of the US, and Australia are working on similar guidelines.

But the foundations of this powerful new technology are being laid right now.

So, many are arguing it’s time to act.

“It's possible to develop higher level principles,” says Dr Wiebke Hutiri.

“What I would like to see much more of is a proactive view of what our future should look like,” she says.

Aimee van Wynsberghe was a member of the High-Level Expert Group on AI that advised on the EU’s legislation, and she says now is the time to put guardrails in place.

“I absolutely believe it’s not too late,” says Prof van Wynsberghe.

“But we are at a crucial moment right now.”

Open and transparent datasets, algorithms, and bias mitigation tools could make AI safer for everyone.

The idea is that everyone in the AI community can chip in.

But there are big concerns about open datasets scraping private, personal, or copyrighted data from the internet.

Dr Wiebke Hutiri says we could try something like a ‘Data Daemon’ approach.

This is where personal data could only be used by the individual it belongs to.

Referencing Philip Pullman’s Northern Lights series, she says we might picture this as a little dataset pet that stays with us at all times.

So, this kind of AI could limit harmful generalisations.

“Personalisation is one way to combat bias,” says Dr Hutiri.

It’s just one creative approach.

Other simpler steps include having a ‘Devil’s Advocate’ in development teams to find flaws.

Or setting core principles around inclusiveness to guide tech development.

Or more testing and engagement with diverse users.

Responsible AI researchers say people have to decide what fairness looks like in technology.

We can’t just rely on software developers or computer scientists to figure everything out.

They argue we all have to pay attention.

This is just one way AI - and technology in general - can hurt people.

The first step towards untangling complex problems like this is awareness. It’s education.


Credits

  • Reporter & creator: Carl Smith
  • Illustrator & visual storytelling: Ing Lee

This project was supported and first published by the MIP.labor program.

  • Web developer: Stefan Auerbach
  • Scientific advisor: Professor Didar Zowghi, Science Team Leader – Diversity and Inclusion in Artificial Intelligence, CSIRO Australia
  • With thanks to Ítalo Carajá.
