Open source code isn’t a warranty


Automotive software issues, such as the Jeep hack and Volkswagen cheating on emissions tests, have made headlines this year, which means the public is thinking about software in cars like never before. Some experts have argued that mandating that such software be open source is a solution to the problem. Although there are definite benefits to public scrutiny of the software, code visibility alone is no guarantee. As Sam Liles explained to me in a recent email, open source code didn’t prevent ShellShock.

Dr. Liles was formerly a professor of cyber forensics at Purdue University, where he and his students researched the security of automotive and other Internet of Things devices. He says that defense-in-depth is dead, meaning we can no longer rely on multiple layers of security for protection. Our phones and other personal devices, for example, know everything about us: where we go, with whom we communicate, even when we're having sex. These devices, and all of the information they contain, live inside our personal and work networks. A compromised phone can access troves of information or spread threats to other connected devices.

The sheer volume of these devices represents a challenge in itself. "Who is going to do incident response at this level?" Liles asks. For that matter, who is going to audit all of that code? In The Cathedral and the Bazaar, Eric S. Raymond wrote, "Given enough eyeballs, all bugs are shallow," a claim he dubbed Linus's Law, but we cannot rely on enough eyeballs alone. If projects as important and established as OpenSSL lacked the resources to prevent bugs like Heartbleed, who is going to examine the millions of lines of software that run the devices we take for granted every day?

Although the 2011 NASA and NHTSA investigation into a rash of unintended acceleration incidents involving Toyota cars found "no evidence that a malfunction in electronics caused large unintended accelerations," other researchers have identified ways to induce acceleration in automobiles through software. "If the Power Management ECU has been compromised," the IOActive report reads, "acceleration could be quickly altered to make the car extremely unsafe to operate." Clearly, software is a critical component of modern automotive safety.

Nevertheless, research such as that done by Liles' group remains relatively rare. Just analyzing the software is often difficult. "Forensics is almost never built into systems and often for the purpose of legal validity needs to be reverse engineered," Liles says. Additionally, the change in threats posed by the Internet of Things requires a fundamental shift in the way research is conducted. "Many of the 'old' information assurance and security rules, doctrine, and [what is] sometimes called science [are] based on myths, half-truths, and outdated technological concepts."

So where does open source fit into this? Accidental bugs, sometimes significant ones, will continue to exist whether or not the source code is open. Heartbleed, ShellShock, and many other high-profile vulnerabilities in open source software tell us this is the case. Intentional misbehavior would become riskier in the open, but openness is only helpful to the degree that we can validate that the published source code is what's actually running. This becomes increasingly important as cars become open systems, connected to our phones and to mobile Internet services.
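Reproducible builds offer one way to do that validation: if compiling the published source is bit-for-bit deterministic, anyone can rebuild the image and compare it against what the device actually runs. Here is a minimal sketch of the comparison step in Python; the file names are hypothetical, and the hard part, making the build deterministic in the first place, belongs to the build system:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: an image dumped from the device, and the
# image produced by rebuilding the published source.
device_digest = sha256_of("ecu_firmware_dumped.bin")
rebuilt_digest = sha256_of("ecu_firmware_rebuilt.bin")

if device_digest == rebuilt_digest:
    print("The device is running exactly the published source.")
else:
    print("Mismatch: the running code differs from the published source.")
```

The comparison itself is trivial; the engineering effort lies in getting the same source to produce identical bytes every time, which very few projects achieve today.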

Ben Cotton is a meteorologist by training, but weather makes a great hobby. Ben works as the Fedora Program Manager at Red Hat. He is the author of Program Management for Open Source Projects. Find him on Twitter (@FunnelFiasco) or at FunnelFiasco.com.

11 Comments

This is great! I shared it with many of my friends already.

Hi.

Open source is not what makes the software secure. People are what make it secure. Open source is a tool that guarantees people have access to the source code so they can make it secure.

Regards

In reply to Don Watkins

Martin, thanks for reading! You're absolutely right. Access to source code is an important precondition for security, but there's much more required beyond that.

In reply to Martin Iturbide (not verified)

I am pretty skeptical of the 'many eyes' hypothesis with regard to bugs in general, and there is an additional issue when trying to apply it to security specifically: in that case, many of the eyes will be malicious in intent. Security is always a race between good and evil.

Before anyone says I am advocating the discredited 'security by obscurity', let's look at what it means to say that it doesn't work. It does not mean that any use of obscurity is pointless - the whole point of private keys, of course, is that they are obscured. What it means is that if you are trusting only to the obscurity of your implementation, you don't have security. This observation is not an invitation to make things easier for your opponents. Open-sourcing code cannot help with security unless you can be sure that it leads to more and better scrutiny by white-hats than black-hats, and there is no guarantee even then.
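To make that distinction concrete, consider a keyed MAC: the algorithm is entirely public, and the security rests on the secrecy of the key alone. A toy sketch using Python's standard library (the message is, of course, made up):

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public; only the key is secret.
# That is legitimate obscurity: a secret parameter, not a secret design.
key = secrets.token_bytes(32)
message = b"unlock doors"

tag = hmac.new(key, message, hashlib.sha256).digest()

# Anyone holding the key can verify the tag. An attacker who can read
# every line of this code, but not the key, cannot forge a valid tag.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```

Relying on a secret algorithm instead would collapse the moment the code leaked or was reverse engineered.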

I'm with you. My contributions to this site make it fairly obvious that I'm an advocate of open source software generally, but I'm under no delusions that it's a panacea. I think there is definite merit to the "many eyes" hypothesis, but it tends to be most beneficial pre-release. In other words, many eyes make design bugs shallow, but by the time the code has shipped, there aren't a whole lot of eyes. Sticking with the ESR theme, I'm generally more in favor of the bazaar approach, but I do think the cathedral has some extra benefit in this sort of situation.

In reply to ARaybold

To follow up on my comment about security being a race, I am aware that the security company Coverity (and perhaps other vendors) runs a service whereby open-source projects can have their code scanned for vulnerabilities. I wonder if they have a mechanism to identify attempts by an attacker to submit the code (probably obfuscated and mixed in with unrelated code) from an open-source project they wish to attack, as if it were their own project?

In reply to bcotton

That's a very good question. Their FAQ doesn't answer it. Some sort of integration with Palamida would seem like a good idea.

In reply to ARaybold

The former LinuxBIOS (now coreboot) project does auditable builds of their work, so you can look and see if you're running the code that was published. I'm not aware of anyone else doing so as of last year, when I enquired. Others may have picked up the techniques since.

I seem to recall seeing some traffic about Fedora and Debian looking into that, but I don't recall any specifics off-hand. It would be a great article for this site, if you're willing to write it.

In reply to David Collier-Brown (not verified)

Great read, Ben!

When investigating the Toyota issues, NASA did not examine the code while it was running on the target (in the car). Instead of using JTAG or ICE debuggers, they ran the code on a SIMULATOR! Anyone who has fixed a bug or two in firmware knows that hardware-software interactions cannot simply be simulated. Their ineptitude is the reason they did not uncover the root cause of the bug.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.