Teaching algorithmic ethics requires an open approach

Developing socially responsible approaches to artificial intelligence requires transparent and inclusive education about algorithmic systems.

Artificial intelligence (AI) tools and other algorithmic systems are increasingly impacting social, political, and economic structures around us. Simultaneously, and as part of this impact, these systems are increasingly used to inform—or directly make—decisions for policymakers and other institutional leaders.

This trend could have profoundly positive impacts on humanity. Consider, for example, the ways in which AI applications have already proven revolutionary in medical diagnosis. But alongside the benefits these systems promise come serious risks: the growing, unchecked use of algorithms in this fashion threatens to dangerously amplify inequality and concentrate power in the hands of a few. Related problems may follow, such as the increased commodification of personal information absent consumer protections, or the buildout of digital surveillance infrastructures that are more often than not turned against already marginalized or oppressed populations.

One of the most promising mechanisms for combating this algorithmic erosion of individual agency and power is open education. Policymakers and advisors educated on these ethical technology issues can make informed regulatory decisions, technologists can increase their awareness of the impacts of their designs, and citizens and consumers can adequately understand how algorithmic systems are impacting their everyday lives. Where knowledge is power, education can provide that knowledge.

Twenty-first century educators have both a responsibility and an opportunity to empower this kind of learning about technology ethics in an inclusive and interdisciplinary fashion. Crucially, this education must be open: guided by principles like transparency, inclusivity, adaptability, collaboration, and community. Government regulation, greater ethical pressure within big tech organizations, and other interventions cannot succeed on their own. Education, particularly education that is open, is essential to addressing the broader challenges brought on by our increased interaction with and reliance on algorithms.

Today's state of affairs

Algorithms and AI tools are already changing both the concentration and the homogeneity of decision-making power in our institutions. For example, judges in the United States are using so-called risk assessment algorithms (RAAs) to aid their decision-making around prison sentencing. These automated systems—which vary in sophistication from basic input-function-output formulas to neural networks that use deep learning—will take an individual's profile and run some form of risk assessment on that person. This could be that person's likelihood of recommitting a crime, or it could be the degree to which they're inclined towards violent criminal behavior. Essentially, the pitch is that the algorithms reduce the workload for judges with many cases on the docket and limited time to read individuals' criminal records. Such a pitch also plays, explicitly or not, on the notion that mathematical formulas and algorithms are somehow objective.
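To make that "input-function-output" pattern concrete, here is a deliberately simplified sketch in Python. Every feature, weight, and threshold in it is invented for illustration; it does not reflect how COMPAS or any real risk assessment algorithm is implemented.

```python
# A hypothetical, toy risk assessment in the "input-function-output" style
# described above. The features, weights, and thresholds are invented for
# illustration; they do not reflect COMPAS or any real RAA.

def risk_score(prior_arrests: int, age: int, employed: bool) -> float:
    """Return a score between 0 and 1, where higher means higher assessed risk."""
    score = 0.0
    score += 0.08 * prior_arrests        # more prior arrests -> higher score
    score += 0.02 * max(0, 30 - age)     # younger defendants scored as riskier
    score += 0.0 if employed else 0.15   # unemployment treated as a risk factor
    return min(score, 1.0)               # clamp to the 0-1 range


def risk_band(score: float) -> str:
    """Map a numeric score to the kind of label a judge might see."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"


if __name__ == "__main__":
    score = risk_score(prior_arrests=3, age=24, employed=False)
    print(f"score={score:.2f}, band={risk_band(score)}")
```

Even in this toy form, the design choices are plain to see: which inputs count, how heavily each one weighs, and where the cutoffs between "low" and "high" fall are all decisions made by whoever writes the function.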

Yet when these systems take data from our world, such as a person's number of prior arrests, at face value and use them as proxies for outputs like "likelihood of re-offense," they introduce unfairness into algorithmic decisions. As ProPublica revealed in a 2016 investigation of COMPAS, an RAA used to aid prison sentencing in American courts, this bias manifests as disparate impacts on already marginalized groups: COMPAS falsely flagged black defendants as future criminals at nearly twice the rate it did white defendants, while white defendants "were mislabeled as low risk more often than black defendants." Because the underlying data (e.g., number of prior arrests) does not, and for the foreseeable future will not, have equal distributions across demographic groups, using it uncritically introduces a risk of systematic bias into the decision machine. Also worth noting is that the COMPAS system used in this particular case is made by a for-profit company that likely has little incentive to disclose or address this issue of its own volition.
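ProPublica's core finding concerned error rates, not just overall accuracy: who gets falsely flagged, and how often, differs by group. A minimal sketch of that kind of check appears below; the records are fabricated toy data, not the COMPAS dataset, and comparing false positive rates across groups is just one of several fairness measures an auditor might compute.

```python
# A minimal sketch of the kind of group-level error analysis behind findings
# like ProPublica's: compare false positive rates (people flagged high risk who
# did not re-offend) across demographic groups. The records below are
# fabricated toy data, not the COMPAS dataset.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("B", False, False), ("B", False, False),
    ("B", True,  False), ("B", False, True),  ("B", True,  True),
]

false_positives = defaultdict(int)  # flagged high risk but did not re-offend
negatives = defaultdict(int)        # everyone who did not re-offend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

Audits like this are only possible when predictions and outcomes are available to inspect, which is one reason the transparency discussed later in this piece matters so much.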

Here, as with many other uses of algorithms in public and private institutions (welfare distribution, housing allocation for the homeless, resume reviewing, news feed curation, and much more), decision-making becomes even more concentrated than it already is. Take the example of judges once again. The judicial institution already involves a select number of judges making decisions for collective groups many multiples larger, depending on the jurisdiction of the court in which they serve. Yet when many of these judges from different courts depend on a single COMPAS system for decision assistance in prison sentencing, usually buying into the myth of algorithmic objectivity because they haven't been educated otherwise, decision-making influence is concentrated even further into the hands of the few who build the algorithm. (And what happens in the near future, when judges use this kind of system not just as a reference point on risk or recidivism, but to obtain concrete sentencing recommendations?)

This is exacerbated by the fact that the groups designing these technologies are often culturally and racially homogeneous, identifying predominantly as white and male. Though consistent and accurate estimates are difficult to come by, many reports indicate that diversity in the technology sector, and in technology roles generally, particularly executive roles, remains poor. And, as with any institution, decisions tend to be better tailored to populations that look like the decision makers. This can affect everything from the construction and makeup of the technology itself to the terms and services that underpin its use.

Again, these issues are not unique to algorithms. As in other settings, concentrated and homogeneous decision-making lends itself to biased or unfair decisions, this time embedded in the code: sexist hiring algorithms, malfunctioning welfare distribution systems, search engines that reinforce racial and gender stereotypes, and more. The algorithms themselves malfunction, causing disparate impacts on already marginalized groups, because nobody is steering the design process otherwise.

Looking forward

Going forward, there is serious risk that institutional decision-making becomes further concentrated among the developers building algorithms, algorithms that increasingly shape institutional decisions (especially around public policy) for many people. And even if decision-making structures in technology and elsewhere become more diverse and inclusive, and that's a big if, the issue of decision-making concentrated in algorithms and their developers persists. This won't affect every institution, certainly, and the impacts on different institutions and the resulting policy outcomes will look different in each case. But this is the path we're headed down.

In a very immediate sense, Joy Buolamwini writes in The New York Times, artificial intelligence is poised to worsen social inequality should its design and use go unchecked. And on a broader scale, as Yuval Noah Harari so eloquently highlights in The Atlantic, contemporary digital technologies, without the right checks and design principles, may very well erode human agency and the structures of liberal democracy as we know them. Yet both authors, and many others, agree: it's not too late. We have not crossed some threshold (if one even exists) at which algorithms are so entrenched in the world that we cannot change how they are designed, used, or regulated. On the contrary, actions that prevent automated systems from worsening social inequality and denying people agency are certainly possible today.

Solutions through open ethical tech education

Educating students about the power and pervasiveness of algorithmic activity is both a responsibility and an opportunity for open-minded teachers and technology ethicists. And that work should both embrace open organizational values—transparency, inclusivity, adaptability, collaboration, and community—and embed them in educational initiatives and materials aimed at fostering an ethics that addresses the potentially dangerous impacts of AI applications and other algorithmic systems on our world.

Secrecy around various algorithms has arguably led to many of the problems we see today: disparate impacts on different groups, as with the risk assessment algorithms used in prison sentencing, compounded by a lack of public, easily accessible information about how these algorithms were designed and deployed. Because information about system design is often hidden or otherwise unavailable, identifying and understanding these systems' negative effects is more difficult. Ethical tech education, in the spirit of countering this secrecy, should therefore embrace transparency, making the content included in coursework, and how that coursework is structured, open to scrutiny by others. Feedback on educational materials in such an emerging area will only strengthen these initiatives.

Those developing ethical technology education programs should also be transparent about everything following the design stage. Sharing both failures and successes with others working on these problems of ethical technology education is important: What worked? What didn't? How well did the course bridge STEM-humanities divides? How relatable were the problems to students of different backgrounds? How "technical" was the material? What kinds of technologies provoked the most discussion? What kind of buy-in (administrators, students, etc.) was most important to getting this coursework implemented? The answers to these questions have the potential to help other educators working on these problems, not to mention those in government, industry, and other sectors also striving to develop ethical tech education for their constituents. Transparency is a powerful principle to embrace here.

Ethical technology education should also embrace inclusivity. Part of the problem with algorithm design and deployment today (as previously referenced) is the small size and relative homogeneity of the groups making design and deployment decisions. Few people from the general population have input or influence, and those who do usually aren't representative of that population. As a result, there is an almost inherent desire, implicit or explicit, to tailor these algorithms to the needs of those who share experiences with the designers, while neglecting, or even designing against, the needs of those outside that circle.

Education on technology ethics therefore shouldn't be built by similarly small and homogeneous groups that merely regurgitate mainstream narratives about technology, like the need to innovate absent regulation while accepting that some things "break" in the process. Instead, the design and maintenance of ethical technology education should pursue and embrace inclusivity in design, content, and structure. To understand the impact of risk assessment algorithms on prisoners, for instance, including only the perspectives of white system designers would not do the topic justice; the perspectives of those affected should also be a consideration (in this case, for example, black individuals whose "risk" scores are so grossly miscalculated by the algorithm). Similarly, drawing only on technologists excludes the views of those in fields from sociology to journalism, and therefore misses important perspectives on technology. More inclusive curricular design and maintenance may therefore be not just fairer but better, more comprehensively assessing the impact of algorithms on different groups. This is essential if we are to fight the concentrated and homogeneous decision-making threatened by many algorithmic systems.

Adaptability, to use one final example of an open organization principle, is essential for those seeking to educate about ethical technology issues. Technology is evolving at breakneck speed, and artificial intelligence applications and other algorithms in particular are often deployed with little prior testing or oversight. To ensure ethical tech education does not quickly become outdated, and to ensure it remains accessible and relatable to those with varying degrees of knowledge, there must be collaborative processes for quickly updating it to cover new technologies, new implementations of those technologies, and new effects of those technologies. Robust feedback loops from administrators, students, and others with stakes in ethical tech education can help here. In a similar vein, continuous conversation with those working on technology issues, and continued iterations of the coursework in response, serve the growth mindset needed to keep this kind of education current. As algorithmic fairness, data privacy, and other issues evolve, education on ethical technology should adapt in response.

Of course, open education alone is not enough. An inclusive and diverse approach to managing the risks of artificial intelligence's and other algorithms' growing role in society, one that actively engages and leverages input from a breadth of stakeholders, from citizens to regulators to tech developers, should treat education as just one component.

Simultaneously, we should not forget the potential positive effects of increased use of and reliance on AI and other algorithms. We should pursue and embrace the ways in which systems can, technically speaking, be designed with fairness, privacy protections, security, transparency, and other human-centered design principles in mind. But as we head down dangerous paths of unchecked algorithmic use, open ethical tech education is a crucial way to make a positive mark on the world going forward.

Justin Sherman is a senior at Duke University, a Fellow at the Duke Center on Law & Technology at Duke University's School of Law, and a Cybersecurity Policy Fellow at New America.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
