AI bias will explode, but only unbiased AI will survive?

The most sophisticated AI systems are only as good as the data they are trained on, and if that data has been collected in a biased or compromised way, then the results are unlikely to fit the real world we are attempting to model.

Devising new ways to monitor for bias, and to eliminate it at the source, is key to creating AI software that accurately reflects reality, rather than the biased human view of reality that AI promises to help us transcend.

Welser tells me, “One of the hopes of AI is that it will help us make decisions in less biased ways, because AI won’t have human biases. So if you’re making decisions on mortgages, or who should get bail, or who you should recruit, all of those things have biases built into them.

“AI systems would hopefully be able to make those decisions with less bias, but the challenge is that the AI gets trained on data, and if that data has a bias then your AI will be biased.

“We spend a lot of time right now working on how the systems we’re training aren’t inadvertently learning bias, and also protecting them from players who might be trying to teach them bias that we don’t want them to have.”

AI systems trained in this way to provide an unbiased, objective model of the world are likely to be the most successful. They will help us address the moral and ethical problems faced by any industry or field of research that uses AI to tackle social issues or to make decisions affecting human lives.
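To give a concrete sense of what “monitoring for bias” can look like in practice, here is a minimal, illustrative sketch: it measures demographic parity difference, the gap in positive-decision rates between two groups in a model’s outputs. The function, data, and group labels are hypothetical examples rather than anything described in the article, and a real bias audit would look at many more metrics than this one.

# Illustrative sketch only: a simple demographic parity check on model decisions.
# All names and data below are hypothetical.

def demographic_parity_difference(decisions, groups, positive=1):
    # Assumes exactly two groups appear in `groups`.
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == positive) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Example: approval decisions for applicants from two groups.
decisions = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60: group A is approved far more often

A large gap like this does not by itself prove the model is unfair, but it is the kind of signal that, per Welser’s comments, teams watch for so that a system trained on historical data has not inadvertently learned the biases baked into it.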
