
Erase unconscious bias from your AI datasets

Biased training datasets can have serious consequences in people's lives, explains All Things Open Lightning Talk speaker Lauren Maffeo.


Artificial intelligence failures often generate a lot of laughs when they produce silly mistakes. However, "the problem is that machine learning gaffes aren't always funny … They can have pretty serious consequences for end users when the datasets that are used to train these machine learning algorithms aren't diverse enough," says Lauren Maffeo, a senior content analyst at GetApp.

In her Lightning Talk, "Erase unconscious bias from your AI datasets," at All Things Open 2018, October 23 in Raleigh, NC, Lauren described some of the grim implications and advocated for developers to take measures to protect people from machine learning and artificial intelligence bias.

To learn more about this issue, watch Lauren's talk and read her Opensource.com article, "The case for open source classifiers in AI algorithms," which delves further into this problem.



About the author

Lauren Maffeo - Lauren Maffeo has reported on and worked within the global technology sector. She started her career as a freelance journalist covering tech trends for The Guardian and The Next Web from London. Today, she works as a senior content analyst at GetApp (a Gartner company), where she covers the impact of emerging tech like AI and blockchain on small and midsize business owners. Lauren has been cited by sources including Forbes, Fox Business, DevOps Digest, The Atlantic, and Inc.com. She has spoken...