One bug, millions of dollars lost: An argument for open source solutions


On August 1, Knight Capital Group, a financial services company, lost $440 million in less than an hour because of a software bug. As I understand it, this bug could have been avoided if more thorough testing had been done before release but, as the Omaha World-Herald reports, the company "rushed to develop a computer program so it could take advantage of a new Wall Street venue for trading stocks...and failed to fully work out the kinks in its system."

In an op-ed piece, Ellen Ullman, a former software engineer and author, talks about how the SEC's call for companies like Knight to "fully test their computer systems before deploying coding changes" is an impossibility. Ellen writes:

First, it is impossible to fully test any computer system. To think otherwise is to misunderstand what constitutes such a system. It is not a single body of code created entirely by one company. Rather, it is a collection of "modules" plugged into one another. Software modules are purchased from multiple vendors; the programs are proprietary; a purchaser (like Knight Capital) cannot see this code. Each piece of hardware also has its own embedded, inaccessible programming. The resulting system is a tangle of black boxes wired together that communicate through dimly explained "interfaces." A programmer on one side of an interface can only hope that the programmer on the other side has gotten it right.

While I agree with Ellen and teach my students that all software comes with some sort of risk and should be evaluated with that in mind, I do think that, were the system(s) in this debacle written using an open source model instead of a proprietary one, these holes might have been caught, the release deadline would have been met (or even pushed back for the betterment of the product), and this whole thing would have been avoided. Systems don't have to be a 'tangle of black boxes'—they can be an efficient network of transparent tubes instead.

I may be preaching to the choir here when I remind you of Eric Raymond's statement that 'given enough eyeballs, all bugs are shallow.' Using the open source development model means that everyone can see what everyone else is doing in the system and can more easily communicate how changes are going to affect each interconnected module. I'm not saying that all bugs can always be found or that any system is perfect, just that using the open model means that problems can be found more quickly, communication is easier, collaboration is more effective, and bugs are patched more efficiently.

I take this incident with Knight Capital as yet another argument for software that affects the public (if not all software) being developed in an open fashion!

Nicole C. Baratta (Engard) is a Senior Content Strategist at Red Hat. She received her MLIS from Drexel University and her BA from Juniata College. Nicole volunteers as the Director of ChickTech Austin. Nicole is known for many different publications including her books "Library Mashups", "More Library Mashups", and "Practical Open Source Software for Libraries".


Testing the software in isolation is not enough. It has to be tested in the presence of all the other software used in stock trading; otherwise, anything could happen. Here's an example, and it doesn't even involve computers; all actions were automatic and based on local conditions. Result: 30 million people without power for 12 hours.
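The commenter's point is easy to sketch. In this hypothetical example (the module names and fee are invented for illustration), two modules each pass their own unit tests, yet the composed system fails because they disagree about units across the interface — exactly the "black boxes wired together" problem:

```python
# Hypothetical sketch: each module is "correct" against its own spec,
# but they make different unit assumptions across the interface.

def pricing_module(price_cents):
    """Vendor A: expects an integer price in cents, adds a 2% fee."""
    return round(price_cents * 1.02)

def order_module(price_dollars):
    """Vendor B: works in dollars, calls the pricing module directly."""
    return pricing_module(price_dollars)  # fault: dollars passed where cents expected

# Vendor A's own unit test passes:
assert pricing_module(1000) == 1020  # $10.00 in cents -> $10.20, correct

# But the composed system silently computes nonsense:
assert order_module(10.00) == 10     # a $10 order priced as 10 cents plus fee
```

Neither module is "wrong" by its own documentation; only end-to-end testing in the presence of the other module, or being able to read the other side's code, exposes the mismatch.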

A key point is that the open source methodology can be followed even if the source code isn't externally open. Knight Capital Group could engage with vendors and its own development team using inner source (as <a href="">CollabNet coined the concept</a>). Basically, you practice development with code and collaboration that is internally open. You <a href="">release early and often</a>. You make it possible for non-development team members to watch the project, test early versions, brainstorm ideas that help avoid catastrophic mistakes, and so forth.

As Nicole points out, building on top of open source gets all the benefits of shallow bugs, and building a solution that is also open source means the shallow bugs might end up being your own. Starting with an inner source model helps an organization learn how to act in <a href="">the open source way</a>, so they are more skilled and more bold when it comes to collaborating externally on software projects.

The one line from the op-ed that really bothered me was "<em>there is no such thing as a body of code without bugs</em>".

First of all, to take a line from the great Edsger Dijkstra, the term "bug" is dishonest: it deflects responsibility for the failure away from the programmer, when the bug is more properly called a "fault", and a program with a fault is more properly called "wrong".

But on a broader level, I think it's a crying shame that we, as a society, have become so accustomed and desensitized to software failures that we don't hold computer programs to the same standards of quality that we expect from roads and bridges, cars and planes, power infrastructure and water treatment facilities.

What I love about openness is that it provides accountability for the engineering behind software. Open source developers have the advantage that they can leverage that openness while holding their own work to a higher standard, establishing more <em>confidence</em> in their code.

You can rant the same about the word "typo". They both come from the same thing, human fallibility. And yes, I think it's a crying shame that we, as a society, have become so accustomed and desensitized to writing failures that we don't hold literature to the same standards of quality that we expect from roads and bridges, cars and planes, power infrastructure and water treatment facilities. :)

I would also add that, although there is no algorithm that proves the correctness of arbitrary software, if you use the right tools correctly you can have very tight software. The first thing that comes to my mind is SPARK ( ), a subset of Ada that can be used to formally prove the correctness of software.
Of course, you could make a mistake in the formal specs of your code, but I expect that the probability of making two complementary mistakes (one in the code and one in the spec) such that they pass the formal check is quite low...
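That "two complementary mistakes" argument can be illustrated even outside SPARK. As a rough sketch in plain Python (the `with_contract` decorator and `largest` function are invented for illustration, not real library code), the specification is written independently of the implementation, so a fault has to appear in both before it goes unnoticed:

```python
# Sketch of executable contracts: the spec (pre/postconditions) is
# authored separately from the implementation, so a single mistake
# in either one is caught at run time.

def with_contract(pre, post):
    def decorate(f):
        def wrapped(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return wrapped
    return decorate

@with_contract(
    pre=lambda xs: len(xs) > 0,  # spec: input must be non-empty
    post=lambda m, xs: m in xs and all(m >= x for x in xs),  # spec: m is a maximum
)
def largest(xs):
    # Implementation, written without looking at the spec above.
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best
```

If the loop had a fault (say, `<` instead of `>`), the postcondition would flag it on the first non-trivial input; only a matching mistake in the spec itself would let it slip through, which is the low-probability coincidence the comment describes.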

I agree with the previous comments: a 'bug' is now a given in any software.
However, I find this leads to the two famous arguments, closed source versus open source. If closed source code were so secure and superior because no one can see the code to exploit it, then why are vendors so concerned when their closed source code is released in an 'exploitative' way?
I see no cause for concern in letting people see the source code if the OS, app, etc. is written correctly. Two people may not write in the same way, and sometimes being 'open' to suggestions gives you a broader view of the overall picture than a 'narrow' view of a single widget.
I have found that a collaborative effort will produce better results when each individual's contribution is appreciated. In the real world that is often far from the truth; most of the time one person wants to be promoted as the one-horse show, taking all the glory.

If you think all code is strewn with bugs, why do you fly, or, for that matter, trust a traffic light? Open or closed, testing and quality control are absolutely necessary and are often minimized in a project that is late. Perhaps Apple and Oracle are in fact onto something, owning everything in their products from the hardware and operating system to the compilers and the developed code.

P.S. I should have used the word "defects" rather than "bugs". My mistake. . .

I somewhat disagree with the author's assertion. Open source does not necessarily mean more defect-free software. Open source software will probably be more defect-free if enough competent people have examined the source code and the changes are managed by a competent team.

Makers of closed, proprietary software have an incentive to get things right: otherwise their product will not sell, and they may end up at the wrong end of a lawsuit.

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.