Erase unconscious bias from your AI datasets

Biased training datasets can produce serious consequences in people's lives, explains All Things Open Lightning Talk speaker.


Artificial intelligence failures often generate a lot of laughs when they make silly mistakes. However, "the problem is that machine learning gaffes aren't always funny … They can have pretty serious consequences for end users when the datasets that are used to train these machine learning algorithms aren't diverse enough," says Lauren Maffeo, a senior content analyst at GetApp.
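Lauren's point about datasets that "aren't diverse enough" can be made concrete with a quick representation audit before training. Here is a minimal sketch in Python; the function name, the sample data, and the 20% threshold are illustrative assumptions, not anything from her talk:

```python
from collections import Counter

def representation_report(labels, threshold=0.2):
    """Return groups whose share of the dataset falls below `threshold`.

    This is only a first-pass check: a real audit would also examine
    intersections of attributes, label quality, and collection methods,
    not just raw counts.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < threshold}

# Hypothetical sensitive-attribute column from a training set
sample = ["group_a"] * 90 + ["group_b"] * 10
print(representation_report(sample))  # flags group_b at a 0.1 share
```

A report like this won't fix bias on its own, but it can flag an underrepresented group before a model quietly learns to ignore it.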

In her Lightning Talk, "Erase unconscious bias from your AI datasets," at All Things Open 2018, October 23 in Raleigh, NC, Lauren describes some of the grim implications and advocates for developers to take measures to protect people from machine learning and artificial intelligence bias.

To learn more about this issue, watch Lauren's talk and read her article, "The case for open source classifiers in AI algorithms," which delves further into this problem.

Machine bias is a widespread problem with potentially serious human consequences, but it's not unmanageable.

About the author

Lauren Maffeo - Lauren Maffeo has reported on and worked within the global technology sector. She started her career as a freelance journalist covering tech trends for The Guardian and The Next Web from London. Today, she works as a service designer for Steampunk, a human-centered design firm building civic tech solutions for government agencies. Prior to Steampunk, Lauren was an associate principal analyst at Gartner, where she covered the impact of emerging tech like AI and blockchain on small and midsize...