In his keynote speech at the Red Hat Summit in Boston, Red Hat CEO Jim Whitehurst made the case that of the $1.3 trillion USD spent in 2009 on Enterprise IT globally, $500 billion was essentially wasted (due to new project mortality and Version 2.0-itis). Moreover, because the purpose of IT spending is to create value (typically $6-$8 for each $1 of IT spend), the $500 billion waste in enterprise IT spending translates to $3.5 trillion of lost economic value. He goes on to explain that with the right innovations—in software business models, software architectures, software technologies, and applications—we can get full value from the money that's being wasted today, reinforcing the thesis that innovation trumps cost savings.
But then along comes Accenture's Chief Technology Architect Paul Daugherty, who in his keynote presents a list of the top five reasons that customers choose open source software (a choice now made by 78% of their customers):
#1 (76%): better quality than proprietary software.
#5 (54%): lower total cost of ownership.
So which is it? Does innovation trump cost savings? Or does quality trump cost savings?
According to the research of Dr. David Upton, if you practice path-based innovation (also known as continuous innovation, or Kaizen), then quality and innovation are one and the same thing. Or, mathematically, innovation is the integral of quality improvement over time. Unfortunately, Dr. Upton's research also shows that most executive compensation structures do not reward disciplined continuous improvement, but rather efforts that are typically "win big/lose big". And perversely, they tend to reward upfront those who place the bets rather than those who are around when the bet can actually be judged. This encourages executives to make innovation a risky business when it could be a reliable engine of sustainable value creation. And it conditions those in the trenches to fear and loathe the Next Big Thing, especially when it has an executive sponsor. This in turn leads to the worst-case scenario of IT departments conservatively protecting systems that were never appropriate in the first place. But there is a better way.
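To put that relationship in symbols (my own illustrative notation, not a formula taken from Dr. Upton's research), let Q(t) be some measure of system quality at time t; then the innovation accumulated by time T is:

```latex
% Illustrative notation only: I(T) is accumulated innovation,
% Q(t) is system quality at time t.
I(T) \;=\; \int_{0}^{T} \frac{dQ}{dt}\, dt \;=\; Q(T) - Q(0)
```

Read this way, a steady stream of small quality improvements, sustained over time, integrates into exactly the kind of large result that "win big/lose big" bets promise, but without the variance.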
In his keynote, Jim correctly points out that modular, layered architectures are much more amenable to incremental improvement. Not only do many eyes make all bugs shallow, but many hands make the burden light. Highly modular systems encourage massive participation, and the sum of many, many small improvements can be a large improvement indeed. This was made absolutely clear in Boston this week as Red Hat explained its Cloud Foundations platform: a single large change enabled by thousands of smaller changes, themselves enabled by thousands more smaller changes still. Red Hat's engineering model embraces incremental innovation, and the integral across all the communities who contribute is simply mind-blowing.
But when we break down these innovations into their constituent elements, what we often find is that at the finest level of detail, there is no distinction between the atomic change from which the innovation is derived and a very specific, very concrete improvement to the quality of the system. Indeed, it is better (and more accurate) to think of quality not as fixing something that is broken (as if it will never need to be touched again), but rather as making an adaptation that is an improvement. Of course it is important to eliminate defects in order to build a quality product, but it is equally important to eliminate inflexible or wrong assumptions that reduce fitness in future contexts. When everybody is able to make such adaptations, the result is nothing short of transformation.
I've spent a lot of time in the free/open source software community: nearly 10 years as a principal developer of the GNU C and C++ compilers and the GNU debugger, and more than 10 years since then teaching others from my experience. One of the most profound insights I've gained about the relationship between open source software development and software quality came from assimilating an analysis in the paper Two case studies of open source software development: Apache and Mozilla, published in TOSEM in July 2002. For a full explanation, please see this transcript of a keynote speech I gave in 2009. For the purposes of this article, I want to focus on the fact that the paper counted 388 different contributors to Apache, with Developer #1 doing 20% of everything and Developer #388 making a change so insignificant that it could not really be seen in the graphs. The paper explains that the open source projects it studied produced deliverables faster, with fewer bugs, which were themselves fixed faster, than the comparable proprietary software it also studied. And it observes that because open source software like Apache did not restrict participation, bugs that might never make it onto the MUSTFIX list when developer resources are scarce (as surely they are when every developer must be paid out of profits) can still be fixed by some developer somewhere in the world who cares about that particular issue. And so I accepted what the paper explained, and what I knew from my own experience: that open source is far and away the best way to clean up all the corner cases that inevitably arise in complex software projects. Hooray for continuous improvement! But that was only half the story.
After teaching what this paper taught a few dozen times as part of New Hire Orientation at Red Hat, I came to a new insight, which is the flip side of the story. Imagine you have your little world of code you maintain, and you find one day that something is wrong. You search and search, and you conclude that the problem is not with the code you've written, but lies beyond, in some library or application you did not write. You might find the problem is with Apache, and having made that determination, you could verify your hypothesis by looking at the code and observing the behavior, and if you were right, you could become Developer #389 by fixing that defect, as so many have before you. But suppose instead you find the problem lies in some proprietary software. That is where your ability to improve the system ends. Moreover, you still have a problem. WTF?! (What's The Fix?!)
You can document the problem, making customers suspicious of your own software, or you can place a work-around in your own code. The work-around is not a "correct" fix, but it might give you the behavior you need, and now instead of fixing a problem, you've actually created a second problem which, for the time being, cancels out the first, maybe. You cannot know for sure because you cannot see the original problem, only the shadows that it casts. Now imagine there are hundreds of modules with hundreds of opportunities for fixes which instead generate work-arounds. It is easy to see that there could be hundreds of times the number of defects or potential defects lurking in the system when, if the source code were available, there need be none at all!
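To make the work-around pattern concrete, here is a contrived C sketch. The vendor routine, its name, and its bug are all hypothetical, invented for illustration: with no access to the vendor's source, the only remedy available is a wrapper in your own code that compensates for the defect rather than removing it.

```c
/* Contrived illustration of the work-around pattern described above.
 * vendor_get_name() is a hypothetical stand-in for a closed-source
 * routine that is documented to return a NUL-terminated string but
 * does not always do so. */
#include <stdio.h>
#include <string.h>

/* Simulated proprietary routine we cannot see or fix: it copies at
 * most buflen bytes of the name but never guarantees a trailing NUL. */
static void vendor_get_name(char *buf, size_t buflen)
{
    const char name[] = "apache-module";
    memcpy(buf, name, buflen < sizeof name ? buflen : sizeof name - 1);
    /* Bug: buf may not be NUL-terminated. */
}

/* Our code: without the vendor's source we cannot fix the real
 * defect, so we wrap the call and force termination ourselves.
 * The second "problem" masks the first; it does not remove it. */
static void get_name_workaround(char *buf, size_t buflen)
{
    vendor_get_name(buf, buflen);
    buf[buflen - 1] = '\0';   /* paper over the missing NUL */
}

int main(void)
{
    char buf[8];
    get_name_workaround(buf, sizeof buf);
    printf("%s\n", buf);      /* safe to print, but the defect lives on */
    return 0;
}
```

In an open source stack, the same defect could be fixed once, in vendor_get_name() itself, and every downstream consumer would benefit; here, every consumer must carry its own copy of the work-around.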
Thus, open source not only permits developers to fix bugs where they lie, but also provides a strong incentive (and a culture) not to pollute one's own work just because a bug lies in another module. The cumulative result has been quality differences of 100x or more compared with proprietary software, as measured by Coverity. Such a difference in quality is noticeable. And empowering. And encouraging: not only to fix what is wrong, but to improve what could be better. And all of this functions as an encouragement to raise quality and innovation to the point where IT delivers on its real promise: creating value.