Justin Sherman


Justin Sherman is a senior at Duke University, a Fellow at the Duke Center on Law & Technology at Duke's School of Law, and a Cybersecurity Policy Fellow at New America. He is the Co-Founder and President of Ethical Tech (https://ethicaltech.duke.edu/), Duke's nonpartisan student-faculty initiative focused on research, education, and policy development at the intersection of ethics and technology. His writing has been published by a variety of popular and academic outlets, including The Washington Post, The Atlantic, Foreign Policy, WIRED, and the Council on Foreign Relations.

Authored Comments

Well said. That's why it's so important that we educate policymakers and the public (and, arguably, even many tech developers themselves) that technology is NOT inherently objective: 1s and 0s don't take biased human perceptions and just "unbias" them.

Absolutely, Greg, I entirely agree. Privacy is an enormous concern when it comes to machine learning algorithms that rely on massive data sets, which often contain personally identifiable information (e.g., in medicine, healthcare, finance, and law enforcement). In that vein, there's also the question of how we tokenize or anonymize PII while still (a) preserving the ability of the AI models to make sense of the data and (b) ensuring that malicious actors can't inject adversarial data into the network, e.g., feeding malicious training data to a "general" self-driving car neural network in the cloud, which over time could cause cars to crash into people. All of this and more is a serious issue, as you said, given that our world is headed in this direction!
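
To sketch what I mean by tokenization, here's a minimal Python example of one common approach, keyed-hash pseudonymization. The field names and the sample record are purely illustrative, not from any real data set:

```python
import hmac
import hashlib

# In practice the key would come from a secrets manager, never source code.
SECRET_KEY = b"replace-with-a-managed-secret"

# Hypothetical schema: which fields in a record count as PII.
PII_FIELDS = {"name", "ssn", "email"}

def tokenize(value: str) -> str:
    """Map a PII value to a stable, irreversible token.

    The same input always yields the same token, so joins and
    frequency-based features still work, but the original value
    cannot be recovered without the key. Truncated to 16 hex chars
    for readability; a real pipeline would keep the full digest.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Replace PII fields with tokens; pass non-PII features through."""
    return {
        field: tokenize(value) if field in PII_FIELDS else value
        for field, value in record.items()
    }

if __name__ == "__main__":
    patient = {"name": "Jane Doe", "ssn": "123-45-6789", "age": 47, "diagnosis": "J45"}
    print(anonymize_record(patient))
```

Because the tokens are deterministic, the model can still learn from the structure of the data, which gets at point (a) above; point (b), defending against poisoned training data, is a harder and largely separate problem.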