Erase unconscious bias from your AI datasets

Biased training datasets can produce serious consequences in people's lives, explains All Things Open Lightning Talk speaker Lauren Maffeo.
Artificial intelligence failures often generate a lot of laughs when algorithms make silly mistakes. However, "the problem is that machine learning gaffes aren't always funny … They can have pretty serious consequences for end users when the datasets that are used to train these machine learning algorithms aren't diverse enough," says Lauren Maffeo, a senior content analyst at GetApp.

In her Lightning Talk, "Erase unconscious bias from your AI datasets," at All Things Open 2018, October 23 in Raleigh, NC, Lauren describes some of the grim implications and advocates for developers to take measures to protect people from machine learning and artificial intelligence bias.
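One practical precaution along these lines is to audit how well each group is represented in your training data before you train on it. The sketch below is not from Lauren's talk; it's a minimal, hypothetical example (the `audit_group_balance` function and the `"group"` attribute are illustrative names) showing how such a check might look.

```python
from collections import Counter

def audit_group_balance(samples, group_key, threshold=0.2):
    """Return groups whose share of the dataset falls below `threshold`.

    `samples` is a list of dicts, and `group_key` names the demographic
    attribute to audit (both hypothetical for this sketch).
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    # Flag any group that makes up less than `threshold` of the data
    return {group: count / total
            for group, count in counts.items()
            if count / total < threshold}

# Toy dataset: nine samples from group "A", one from group "B"
data = [{"group": "A"}] * 9 + [{"group": "B"}]
print(audit_group_balance(data, "group"))  # {'B': 0.1}
```

A real audit would go further (for example, checking label balance within each group), but even a simple share-of-dataset check like this can surface an obvious skew before a model is trained on it.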

To learn more about this issue, watch Lauren's talk and read her Opensource.com article, "The case for open source classifiers in AI algorithms," which delves further into this problem.

Lauren Maffeo has reported on and worked within the global technology sector. She started her career as a freelance journalist covering tech trends for The Guardian and The Next Web from London. Today, she works as a service designer for Steampunk, a human-centered design firm building civic tech solutions for government agencies.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
