4 questions about AI ethics and how open source can help

Open source resources could provide effective solutions to some key ethical considerations for artificial intelligence.

As a high school student, I've become very interested in artificial intelligence (AI), which is emerging as one of the most impactful innovations of recent times. This past summer, I was selected for the AI4ALL program, where we learned how to develop AI systems using Python.

For my final project, I created an object-detection program and integrated it with a virtual drone simulation. Throughout the project, I used open source frameworks, including TensorFlow, Keras, Scikit-learn, and PyTorch, to develop the object-detection machine learning (ML) algorithm.

By doing this project and using other open source frameworks like Linux, Apache Kafka, and Elasticsearch, I realized the impact of open source technologies on AI system development. Access to these powerful, flexible technologies enables people to experiment with and develop AI systems.

I discovered another important aspect of AI when I attended a seminar at William & Mary about AI and ethics. As we discussed some of the ethical concerns surrounding AI systems, I started wondering: How do I make sure that the systems I'm developing are ethical?

To answer my question, I interviewed three experts: Alon Kaufman, PhD, CEO of Duality Technologies; Iria Giuffrida, PhD, a law professor at William & Mary; and an executive in the fintech industry (name kept confidential upon request). I am very grateful to these experts for giving me their valuable time, fueling my curiosity, and inspiring me with their journeys in this field.

How important a role does ethics currently play in AI systems?

As more advances are made with AI, society is starting to appreciate the importance of ethical implications. However, the discussions about ethical values in AI system development are still an academic exercise, Dr. Kaufman says. Today, much of AI, and especially ML models, remains a "black box" because advanced algorithms and complex AI technologies (such as deep learning) produce models that are not self-explainable. Without having explainable AI/ML models, it's hard to introduce ethical considerations into the conversation. 

For Dr. Kaufman, the industry's focus today is on data security and protection. As more smart technology is developed, it's getting easier to acquire data. The large amounts of data handled in AI systems make them vulnerable to cyber breaches.

While it is certainly vital to build data protection measures for AI systems, it is becoming critical to shift the focus to other ethical implications, such as privacy and machine bias.

Is ethical AI development a good goal for the future, or is it wishful thinking?

The short answer is that it is a reasonable goal to create ethical AI systems, but it will take time.

"Ethics" is a difficult concept to define. One approach is to look at ethics through a three-pronged methodology: security, privacy, and fairness. Most of the industry is now focused on security, and a practical next step could be to address the issues of privacy and ensuring data anonymity. "Privacy-enhancing technologies, such as secure computing and synthetic data, could offer assistance in this process," explains Dr. Kaufman. Only after finding solutions in the security and privacy areas of ethics is it reasonable to venture into machine fairness and bias.
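To give a rough sense of the synthetic-data idea Dr. Kaufman mentions, the sketch below is a minimal, hypothetical illustration (the features and distributions are invented, and real privacy-enhancing tools model correlations and offer formal guarantees). It fits per-column means and spreads to a stand-in "real" dataset, then samples a statistically similar synthetic dataset that contains no real records:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for sensitive "real" data: rows are people, columns are
# two invented numeric features (e.g., age and income).
real = rng.normal(loc=[40.0, 60000.0], scale=[10.0, 15000.0], size=(1000, 2))

def make_synthetic(data, n_rows, rng):
    """Sample a synthetic dataset matching each column's mean and spread.

    Preserves simple per-column statistics while containing none of the
    original records. This is only a sketch of the concept, not a
    production privacy technique.
    """
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    return rng.normal(loc=mu, scale=sigma, size=(n_rows, data.shape[1]))

synthetic = make_synthetic(real, n_rows=1000, rng=rng)
```

An analyst could then train or test a model on `synthetic` without ever touching the original records, which is the core trade-off these technologies aim for.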

There are many factors to consider in the development of unbiased AI. "The first step is the deployment of active-learning models into society, followed by the actual development of fair systems," says the fintech industry expert. He explains that when ML equipped with active learning is deployed, society's own biases are often imposed on the model, even unknowingly, creating very flawed systems. When developers create AI systems, their preconceived notions must never be injected into the system code, says Dr. Kaufman. These are some of the key areas that must be addressed in developing unbiased models.
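The experts' point that society's biases get imposed on deployed models can be made concrete with a toy example. The sketch below is entirely hypothetical (the "hiring" scenario, features, and numbers are invented): it generates labels that penalize one group despite identical skill, then measures the resulting gap in approval rates, a simple demographic-parity check. A model trained to imitate these labels would inherit the same gap:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy hiring data: one "skill" feature plus a group attribute (0 or 1).
n = 2000
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Historically biased labels: identical skill, but group 1 is approved
# less often because of a built-in penalty.
bias_penalty = np.where(group == 1, 1.0, 0.0)
label = (skill - bias_penalty + rng.normal(scale=0.1, size=n) > 0).astype(int)

# A model fit to imitate these labels would reproduce them, so the labels
# themselves stand in for its predictions here.
pred = label

# Demographic parity difference: gap in approval rates between groups.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
parity_gap = rate_0 - rate_1
print(f"approval rate gap: {parity_gap:.2f}")
```

The gap exists even though the model code contains no explicit reference to group membership: the bias arrives through the data, which is exactly why reviewing training data is as important as reviewing code.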

How early should students be exposed to ethical considerations in AI/ML?

It's increasingly important to learn not only about computer science principles but also about how to build ethical models. Professor Giuffrida says computer scientists and AI developers have not been explicitly trained to think about the ethics behind their systems. This is why the development of bias-proof systems cannot be a linear process, she says; such systems must go through multiple levels of review to minimize the injection of bias into the machine process.

Professor Giuffrida emphasizes that AI development should be treated as an interdisciplinary study, meaning that it is not just developers who are in charge of creating accurate and ethical systems—that's an enormous responsibility to put on one group. But introducing ethical concepts to aspiring AI engineers at a foundational level could accelerate building sustainable, functional, and ethical systems.

Will ethics come at the expense of innovation?

This was the main question I kept coming back to throughout the interviews. During the AI4ALL course, we explored the recent Genderify fiasco. Genderify was AI software intended to identify a user's gender based on several data points, including name, email address, and username. When it launched for public use, the results were highly inaccurate: for example, the name "Meghan Smith" was assigned female, but "Dr. Meghan Smith" was assigned male. These biases led to the service's failure. More comprehensive ethical reviews and testing before launch could have made this type of product viable.
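The Genderify failure suggests the kind of pre-launch test that could have caught it: an invariance check asserting that adding a title to a name must not change the prediction. The sketch below is hypothetical; `predict_gender` is a deliberately flawed toy rule that mimics the reported behavior, not the real service:

```python
def predict_gender(name):
    """Toy stand-in for a Genderify-like model (not the real service)."""
    # Deliberately flawed rule mimicking the reported bug: honorifics
    # such as "Dr." push the prediction toward "male".
    if name.lower().startswith("dr."):
        return "male"
    if name.split()[0].lower() in {"meghan", "maria", "sarah"}:
        return "female"
    return "unknown"

def title_invariance_check(predict, base_name, titles=("Dr.", "Prof.")):
    """Pre-launch fairness test: a title must not change the prediction.

    Returns the list of titles that flip the prediction; an empty list
    means the check passed.
    """
    base = predict(base_name)
    return [t for t in titles if predict(f"{t} {base_name}") != base]

failures = title_invariance_check(predict_gender, "Meghan Smith")
```

Here the check returns a non-empty failure list, which is exactly the signal a review process would need before launch.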

All three experts voiced the idea that ethical boundaries are becoming necessary. According to the fintech executive, "Reasonable ethical constraints will not limit innovation—companies must eventually figure out how to innovate within these boundaries." In this fashion, ethical considerations could increase sustainability and success for a product—rather than putting a complete stop to AI development. Venture capitalist Rob Toews offered an interesting perspective on regulating AI, and I recommend this article if you want to learn more about this topic.

Ultimately, introducing ethics into pure AI is a daunting but necessary step to secure AI's viability in our world. Open source resources could provide effective solutions to some key ethical considerations, including comprehensive system testing. For example, the AI Fairness 360 open source toolkit by IBM assesses discrimination and biases in ML models. Open source communities have the ability to develop tools like this to provide greater coverage and support for ethical AI systems. In the meantime, I believe it is my generation's responsibility to build bridges between creating revolutionary AI technologies and considering ethical implications that will stand the test of time.
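Toolkits like AI Fairness 360 report bias metrics such as disparate impact, the ratio of favorable-outcome rates between groups. The sketch below computes that metric in plain Python as an illustration of what such a toolkit measures; it is not AI Fairness 360's actual API, and the example decisions and group labels are invented:

```python
def disparate_impact(predictions, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A common rule of thumb (the "80% rule") flags values below 0.8 as
    potentially discriminatory. This is a plain-Python sketch of the
    metric, not the AI Fairness 360 toolkit's API.
    """
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy example: 1 = favorable decision; group "A" is privileged.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(preds, groups, privileged="A")
```

With these invented numbers the ratio falls well below 0.8, the kind of result that would prompt a deeper fairness review of the model.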

Sahana Sreeram is a first-year at Duke University. She enjoys programming in Python and Java and has gained experience building AI systems in the AI4ALL and MIT FutureMakers programs. Sahana is also part of her school's CyberPatriot team, where she learns how to secure interfaces using Linux.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.