Maya was on her way to uni…
…when something took over her phone - until she switched it off.
More than 50% of universities report experimenting with AI software to assess prospective students, despite the ethical concerns.
AI Ethicist Professor Aimee van Wynsberghe from the University of Bonn says many systems assessing applicants are moving past just skimming CVs.
“The next stage is when you do a video interview which is then filtered through artificial intelligence models to make predictions based on facial cues, how you’re speaking, whether or not you’re making eye contact,” she says.
Experts agree many historical datasets used in AI models are WEIRD.
WEIRD - short for Western, Educated, Industrialised, Rich, and Democratic - is an acronym from psychology describing how skewed many research datasets are.
Even though AI is used everywhere, in all sorts of settings, the data it relies on is often dated and warped like this.
Examples like this might look extreme.
But Amazon’s AI recruitment tool - built using existing employees’ CVs - kept selecting more men than women in trials.
No matter how they tinkered with the model, they couldn’t make it stop doing this - so they canned the project.
Hidden dangers like this are hard to fix. Sometimes they’re even hard to spot.
Reid Blackman, author of Ethical Machines, points out the AI-generated formulas running these programs are often immensely complex.
Given the potential harms, AI Ethicist Dr Arpita Biswas from Harvard University says we need to make sure we can properly understand how these programs are working - at least at a basic level.
“Every algorithm should be capable of explaining why they are making certain decisions,” says Dr Biswas.
“There's no standard which says that you have to do this… And that's a problem,” she says.
History is biassed.
Humans are biassed.
Sometimes programmers introduce these biasses by choosing which data to focus on, or by tinkering with an algorithm.
For example, Professor John MacCormick accidentally constructed a racially biassed algorithm.
It detected head movements by picking out skin-coloured pixels - but this programming decision meant it failed to detect non-white skin.
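To see how a choice like that bakes bias in, here is a minimal illustrative sketch in Python. This is not Professor MacCormick’s actual code, and the colour thresholds are invented for the example:

```python
import numpy as np

def skin_pixel_mask(frame_rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels the tracker treats as 'skin'.

    The RGB thresholds below are made up for this illustration and are
    deliberately narrow: they only match lighter skin tones, so anyone
    with darker skin never shows up as 'skin' to the tracker at all.
    """
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)

    # Hard-coded rules like these quietly encode whoever the developer
    # happened to test the program on.
    return (r > 150) & (g > 100) & (b > 90) & (r > g) & (g > b)

# A head tracker built on this mask simply won't "see" faces whose
# colours fall outside the thresholds above.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
mask = skin_pixel_mask(frame)
```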
Arpita Biswas says all these biasses are being baked into the foundational AI models that are now starting to make decisions in our lives.
This is the technology now filtering into businesses and organisations.
“People will just take a model which is already a pre-packaged thing, take a dataset [and] use it,” Dr Biswas says.
Steps like these are well known to help reduce bias in AI.
Many in the field say the best path forward is people working alongside AI - keeping one another in check.
But organisations are rapidly developing AI tools because they’re seen as a cost-effective way to boost efficiency.
All of these extra steps to tame AI models mean more time, money, and training.
Plus, there’s rarely any obligation or incentive to take these expensive steps.
So, right now, most companies treat mitigating bias as a ‘nice-to-have’ rather than essential.
It’s often left just to software developers to fix or ‘optimise’ biassed models after they’ve already caused problems.
Dr Abraham Glasser from Gallaudet University is hard of hearing and he studies accessibility and inclusion in technology.
He says this approach can - and does - lead to harm among underrepresented groups, including people with disabilities.
“This goes more generally for any technology that is in development,” says Dr Glasser.
“It finds the issues after the program is already developed and has been sent out into the world,” he says.
Ethical or responsible AI teams have been among the first to go during mass cost-saving layoffs across the tech sector.
Some AI companies are using automated ‘quantitative bias detection tools’.
These might, for example, help detect whether a bank’s AI lending algorithm is approving loans for one group in society at a much lower rate than for others.
Many of these are freely available, including IBM’s AI Fairness 360 toolkit.
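To give a sense of what one of these quantitative checks computes, here is a rough sketch in plain Python - not the AI Fairness 360 API, and the decisions and group labels are hypothetical:

```python
from collections import defaultdict

# Hypothetical decisions from a lending model: (group label, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approved / total for g, (approved, total) in counts.items()}
print(rates)  # approval rate per group: {'group_a': 0.75, 'group_b': 0.25}

# One common summary: the ratio of the lowest to the highest approval rate.
# A ratio well below 1.0 (the 'four-fifths rule' uses 0.8 as a rough cut-off)
# is a signal to investigate - not proof of fairness or unfairness either way.
disparate_impact = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # 0.33
```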
But AI ethics experts, including Dr Wiebke Hutiri, an AI Research Scientist now at SonyAI, point out that quantitative tools can’t be the only line of defence.
“We might say we think a system is fair if everybody gets the same thing,” says Dr Hutiri.
“Or we might say we think a system is fair if those that have suffered in the past get more than those that haven't suffered in the past,” she says.
In other words, “fair” can mean many things.
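As a toy illustration - with hypothetical numbers, not from Dr Hutiri - two common formal definitions of fairness can disagree about the very same set of decisions: one asks that every group is selected at the same rate, the other that qualified people in every group have the same chance.

```python
# Toy numbers, invented for this example: 100 applicants per group.
selected = {"group_a": 50, "group_b": 50}               # how many were selected
qualified = {"group_a": 80, "group_b": 40}              # how many were qualified
qualified_and_selected = {"group_a": 50, "group_b": 30}

# Definition 1 - "everybody gets the same thing": equal selection rates.
selection_rate = {g: selected[g] / 100 for g in selected}
# {'group_a': 0.5, 'group_b': 0.5} -> passes this test.

# Definition 2 - equal chances for the qualified in each group.
qualified_rate = {g: qualified_and_selected[g] / qualified[g] for g in qualified}
# {'group_a': 0.625, 'group_b': 0.75} -> fails this test.

print(selection_rate, qualified_rate)
```

The same decisions look fair under the first definition and unfair under the second - which is why choosing a fairness metric is a judgement call, not just a calculation.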
Dr Hutiri says we shouldn’t just rely on computer scientists or tech developers to optimise some kind of fairness algorithm themselves.
Research and policy papers are full of dire warnings about this.
The US-based Brookings Institution recently published a report saying…
Others go even further.
A 2022 report from the US National Institute of Standards and Technology says many organisations are becoming over-reliant on technological solutions that don’t actually fix the core problems.
It states: “the idea that quantitative measures are better and more objective than other observations is known as the McNamara Fallacy.”
The report continues: “This fallacy, and the related concept technochauvinism, are at the center of many of the issues related to algorithmic bias.”
Most AI ethicists are seriously worried about this blind optimism - the assumption that technology has all the solutions.
The future we’re stepping into will be filled with AI - but Afua Bruce says the foundations of this future are very shaky.
AI is a tool.
Any tool can be used for good or bad.
But if there are inherent faults like these in a tool, then we’re setting ourselves up for a more unfair and unequal world.
Scientific research that affects people is regulated by strict ethics approvals and review boards, and it has to go through rounds of controlled trials.
But Arpita Biswas points out that technology development doesn’t have these same safeguards.
It’s just released into our lives, sometimes leading to real world harm.
“You cannot do a shallow job of just testing it within your team... that has been the tech industry’s approach for almost like two, three decades,” says Dr Biswas.
“Fairness is not a [software] bug. It's an important issue. It's harming people.”
“So we cannot really use the same mentality when it comes to fairness,” she says.
Credits
This project was supported and first published by the MIP.labor program.