Michael Daconta at GCN has posted a brief call to arms for the software industry. Here's the gist:
> Although I am a believer in free markets and the benefits of competition, industry has a responsibility to work together on the foundational layers to build security, quality and reliability from the ground up to advance the professionalism of the field. In essence, the information technology industry must emulate other engineering disciplines, or technological disasters and cybersecurity holes will worsen.
Daconta is uneasy with the number of platforms and methods available to software developers, and sees ever more options and disruptions in the near future; IPv6 and 64-bit computing seem to trouble him particularly. We're already balkanized and disorganized, he suggests; how can we possibly expect to produce reliable and useful software with all this messy innovation happening? The answer, of course, is control. Lots of it. Specifically, three proposals:
- Licenses for software developers
- A new, reliable, layered software platform developed by the NSF and DARPA
- Treating software as engineering, not art
Gracious. I barely know where to start. Let's try to imagine the software development world in five years, with these proposals in place.

Software development is now a licensed activity. Like an architect or a mechanical engineer, you have to pass an exam and perhaps post a bond to practice the discipline. There's probably a professional association, like the American Bar Association or the American Medical Association, to administer the credentials. This licensing regime is actually a pretty good idea, because all software has to be developed according to some very specific methods, with plenty of testing and documentation to back it up. So rather than letting any fool with a compiler write software, we make would-be developers spend a year or two learning the right way to write code. The process is cumbersome, but every piece of software that gets compiled is perfect. At least, as perfect as we know how.

The licensing and formal methods are only possible, of course, because we have a government-directed platform that we must build on. Anyone who wants to run software in the government must do so on this stack. It sounds as though Daconta would like a broad mandate for NASA-style code development.

You can probably see where I'm going with this. Another way to tell the story might be: there is now a government-owned platform that every government program is mandated to use, from clouds to mobile phones. Any software built on that platform must submit to rigorous, independent testing before it is deployed. Imagine ISO 9000 and Common Criteria having a baby with teeth. Anyone writing software in the United States must be licensed to do so. As a result, the pool of available programming talent is decimated, and the costs of developing software for government naturally rise.
The pool is further diminished because developers who want to work with the latest hardware or software no longer work for agencies or contractors -- they've wandered back to the private sector, where they can enjoy the fruits of the free market. The government software platform quickly begins to show its age, since the only developers on the platform are those who are paid to use it. The platforms that people truly want to use are out in the open market, innovating at their leisure, beyond the reach of government agencies.

Since the government can no longer consume most commercially available software, it is back in the business of writing software itself. More accurately, it is back in the business of hiring system integrators to write that software on its behalf. It's the 1970s all over again. Budgets explode, and innovation grinds to a halt. It's all the agencies can do just to tread water. System integrators, of course, are delighted: they're now commanding outrageous salaries for the few programmers trained and willing to work on this mandated government platform. Let's hope there's a better way.

None of this diminishes the problem that Daconta is hinting at, of course. Software reliability is certainly something to worry about. But there is no single solution or set of policy prescriptions that will solve the problem. I don't think that imposing additional controls on the development of software makes sense, certainly not in all cases. There are already robust certification regimes and methods for software that does very important work: flying an airplane, controlling a nuclear reactor, and so forth. We don't need that kind of scrutiny on my game console or desktop. It's important to note that these robust certifications evaluate only the software itself, not the people who make it. This is what's great about software: we can examine the final product before it's distributed. I don't really mind if my software is written by a clever 7-year-old.
If it's doing the job it's supposed to, that's fine with me. This focus on ends, rather than means, is something you can't have with a building or an airplane. With software, we can change our minds with far fewer consequences. We can thoroughly test and scrutinize it before it's in a customer's hands. When we do find a flaw, it's easier to patch software than a 777 or a skyscraper. We should take advantage of that fact, not try to make software rigid and inflexible just because we know how to manage rigid and inflexible things. Because software has these unique properties, we have the freedom to bring more or less scrutiny to bear, depending on the circumstances. Which is what we're doing already, through programs like the federally funded Software Engineering Institute at Carnegie Mellon.

So where Daconta sees mayhem and chaos, I see creativity and innovation. There are many opportunities to improve the reliability of our software, but none of them depend on controlling the process by which we arrive at a particular piece of code. Projects like David Wheeler's OpenProofs, for example, can provide the tools we need to be mathematically sure software is doing what we intended. The Linux Test Project brings that kind of scrutiny to Linux, and it was inspired in no small part by governments' Common Criteria mandates. This is, in fact, how the process should work: the government sets the requirements for reliability and assurance, and the private sector innovates its way toward those requirements.

If we only create software that we can understand perfectly, we lose the ability to be creative, to innovate, and to take advantage of the collective intelligence and cleverness of millions of software developers. We will never eliminate risk in software, but we can manage that risk -- not through more stringent controls, but by encouraging as many smart people as possible to address the problem.
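To make the OpenProofs idea concrete: formal verification means proving a specification about a program for every possible input, rather than testing it on a handful of them. Here is a minimal sketch in Lean 4 -- a toy of my own for illustration, not code from OpenProofs itself -- that proves a trivial property of a trivial function:

```lean
-- Illustrative toy, not from OpenProofs: we state and prove a
-- specification that holds for *every* input, something no finite
-- test suite can guarantee.
def double (n : Nat) : Nat := n + n

-- Specification: `double` always returns an even number.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

Scaling this style of assurance from toy functions up to real systems is precisely the hard tool-building problem that projects like OpenProofs set out to address -- which is why it's tooling, not licensing, that moves the needle.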