Open Thread Thursday: Release early, release often?


You may be familiar with the Thomas Edison quote: "I have not failed. I've just found 10,000 ways that won't work." In the open source way, the principle is sometimes referred to as rapid prototyping, or "release early, release often." The idea is that faster prototypes can lead to faster failures. And faster failures lead to faster solutions.

What do you think? Do you agree with the philosophy? And if so, how can we help organizations see small failures as steps toward big successes?

Share your thoughts below.

Creative Commons License


Rebecca
Open Source Champion

I think the US system of needing good stock prices every quarter really impacts this. I've read elsewhere that Japanese investors expect long-term returns and are not bothered by short-term dips, so Japanese execs are freer to invest in their businesses and, I would imagine, to fail early and often to produce more innovative ideas.

Unidentified

I think that in some cases, fear is a determining factor in not accepting a failure as a step toward the final result.
If I have something to do, and my organization is too closed, I will not take the necessary risk to try different ways of doing it. I will try the default way, which I consider safe.
Maybe in an open organization my failure would be understood by my colleagues and used as a learning step.

Unidentified

Test out the latest Ubuntu and Debian.

Which would you prefer for a home/office computer (not a test bed)?

You should have the answer.

dragonbite
Open Minded

One option open source projects often have is the ability to make updates that don't add anything new (a feature, widget, or eye candy) but instead clean up the code or re-align it to facilitate future changes and add-ons.

I can't see something like Microsoft Office 2010 being basically Office 2007 with some unnoticeable changes so that Office 2012 will be able to take advantage of some new technology or paradigm, or just to make 2010 more secure.

So with the open source model it is possible to get it out to the people, get real-world feedback and experience and to build or backtrack as necessary.

With the Office example, the only way for them to test that security or code cleanup is to do it, test it locally and with small subsets of testers, and then sell it to a public who must see enough value to spend the money on it.

So long as the controllers are able to say, "OK, this is a failure. What happened, why, and how are we going to work around it?", a failure really isn't a failure.

With open source, though, while one project is flailing it is still possible for somebody else to look in with the right attitude and turn that failure into a success (either in the project or in a fork of it).

Marc Dekens

I guess it will depend on the type of product or service you want to develop. I wouldn't want to step into the building process too early if my house were to be built as part of the Open Architecture Network, or if someone dreams up an open source health clinic/hospital. Or an open source car production line.

But when the risks are low, I agree. The additional benefit is that commercial companies might refrain from developing a similar product, which is better for humanity as a whole.

Come to think of it: how many applications are a really creative and original open source effort? It seems to me that a lot of activity is a reaction to a commercial service or product. Not that this is bad, but if you want to change the world, you want to start a movement that has a new and hopeful perspective, I guess. Like there is a difference in outlook between "fighting against war or terror" and "promoting peace." Shift the paradigm, think outside the box, that sort of thing. Design the future that you want to see. When you release your fantastic idea too often, would the result in the end have become what you meant? Or merely what is doable?

For what it's worth.


gunnar
Open Source Evangelist

I totally agree that you have to consider risk. One way to keep the risks low, so you can benefit from "failing forward," is to keep your tasks small. I think if you're able to compose a large, complex, and high-risk project from many smaller pieces, you can ensure that each individual piece benefits from early failures without compromising the larger project. This also ensures that any individual piece can be swapped out for a better one.

Interchangeable parts: still a good idea.
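The "interchangeable parts" idea above can be sketched in code. This is a minimal, hypothetical example (all names are invented for illustration, not taken from any real project): each small piece sits behind a stable interface, so a piece that "fails early" can be swapped out without touching the larger project.

```python
# A hedged sketch of composing a larger project from small,
# swappable pieces behind a stable interface. InMemoryStorage
# and App are invented names for illustration only.

from typing import Protocol


class Storage(Protocol):
    """The stable interface the larger project depends on."""
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...


class InMemoryStorage:
    """A quick prototype piece: fine for an early release,
    easy to replace later if it turns out to be a failure."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]


class App:
    """The larger project. It depends only on the Storage
    interface, never on any particular implementation."""
    def __init__(self, storage: Storage) -> None:
        self.storage = storage

    def remember(self, key: str, value: str) -> None:
        self.storage.save(key, value)

    def recall(self, key: str) -> str:
        return self.storage.load(key)


# If InMemoryStorage fails, we swap in any other piece that
# satisfies the same interface; App itself never changes.
app = App(InMemoryStorage())
app.remember("motto", "release early, release often")
print(app.recall("motto"))
```

The design choice is the point: because the risky experimentation happens inside a small, replaceable piece, an early failure there never compromises the larger project.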

MyRoyFriend

In engineering, you don't try 10,000 different things and find out which one works. If you're designing an electronic circuit, you don't "guess" about which components to put where and hope for the best; if you happen to fluke the one-in-10,000 correct combination, you're lucky.

Engineering is not based on luck; it's based on scientific and engineering principles. You don't design a bridge by "trying things out to see if they work." You use specific methods of design, testing, and analysis, and cycle through those methods to end up with a viable and working product.

It was OK for Edison to try 10,000 different filaments for light globes and find one that works, but you could not apply that same technique to buildings, bridges, airplanes, medical drugs, software, hardware, or anything really.

Edison was very persistent and tried lots of things; many things were tried that, with more scientific knowledge and theory behind him, would have been rejected.

For example, it would have been easy not to test light filaments that would not conduct electricity. He tried horse hair, string, paper, and so on. Things that don't conduct electricity could have been ruled out.

With software the procedure should be the same: it should be specifically designed and engineered, and software development should not be "trial and error." This shows in FOSS a lot; "release early and often," as opposed to "release finished and tested," is failing FOSS.

I would certainly not want to use drugs from a drug company that used "trial and error" in developing its medications, just as I don't trust software that applies the same trial-and-error methods.

FOSS needs to go from being an amateur "craft" to a professional engineering discipline, with all the responsibilities of an engineer who must guarantee that the skyscraper he designs will not fall down. Software (FOSS) needs to work to those professional standards of quality, and not release second-rate ("we tried this and it worked, for us") code and products on normal users.

If you engineer a skyscraper, and you make a mistake, and it falls down in the first small wind, you would probably end up in prison for negligent manslaughter. Bugs are not a thing that should be ALLOWED at all, in software, hardware, or any engineering discipline.

Take a lesson from quality assurance standards,

"Doing the right things right, first time, every time".

gunnar
Open Source Evangelist

I don't know about you, but in my experience, there are very few software projects that benefit from the methods of structural engineers. In some cases, you're right: it makes sense to plan thoroughly in advance and execute perfectly. But don't underestimate exactly what that takes. These folks, for example:

I think you'll agree that the effort to bring that kind of discipline is extraordinary. And almost never necessary.

You should read The Mythical Man-Month by Fred Brooks. They made me read it in computer science 101, and I was glad that they did. It's a thorough treatment of your argument. I think you'll like it.

Finally, I wouldn't characterize the development model as "trial and error." It's not a million monkeys writing code. It's a bunch of flawed people with imperfect information trying their best, just like every other development model. The difference in open source is that the model expects failures and mistakes, and provides a means of quickly remediating them. In the process, it allows the best ideas from a large and fluid community of interest to float to the top -- something top-down planning simply can't do efficiently.

MyRoyFriend

The acceptance of "bugs" is very specific to software development. Most if not all other engineering disciplines don't speak of "bugs"; they speak of design errors. Those errors and mistakes are to be identified and rectified, or you have not done your job.

You don't design an electronic system without very specific design criteria, specifications, error budgets, probably extensive computer simulation, PCB design, and so on.

All of that is what is called engineering; it's a discipline, a specific method to attain a functional system.
As with hardware, you design it to work over a range of acceptable parameters, and you meet the initial design specification when you create a device that does what you set out to do.

If, in the design stage, you find problems or errors, that is when you fix them; there is no point in building a circuit with a "bug" built into it.

This applies to software. Software engineering is a discipline; "hacking code" is a method of writing code in an ad hoc fashion.

The Mythical Man-Month, I thought, expressed the concept that it is *NOT* true that two programmers are twice as fast as one in the development of software.

Also, with any design, software or hardware, the best and ONLY person who should be correcting design flaws (bugs in software) is the person who designed it in the first place.

It's not acceptable to call yourself an electronics engineer if your designs have flaws and don't work all the time.

It's also not acceptable for software engineers to produce products that do not meet specifications.

And there should be specific, detailed specifications, including a design plan and a timeline.
Without those, the chances of your project failing are close to 100%.

You have no meter or gauge to determine whether you have met the specifications, because you don't have any.

I'm an embedded systems engineer, as well as an analogue electronics engineer, with over 30 years in the electronics and software development industry, in military, scientific, and industrial applications.

I design embedded systems, so I have to design the "system" and the components of that system, including the embedded processors and software.
You don't get a chance to apply patches to thousands of embedded processors with burned PROMs containing code; you have to get it 100% correct the first time.

QA: "Doing the right things RIGHT, first time."
Owning your own mistakes, and fixing them before release, is second nature to engineers; that IS their job. Bugs are a failure and an embarrassment, and creating designs with bugs could easily result in prison.

As someone said, I certainly expect the software in a 747 jet to be "bug" free, just as I expect the code I write to control a dam's floodgates to be 100% correct.
Or the skyscraper I'm in not to have "bugs" in its design.
Or a bridge, or any other engineering discipline. Software development badly needs to focus more on quality than quantity.

gunnar
Open Source Evangelist

You're exactly right that there's a class of software that requires the kind of rigor you describe. But it's a subset of all the software being developed.

I agree that there is always room for better developers, more scrutiny on security and reliability measures, and so on. This has to be balanced, though. Not all software needs DO-178B certification.

shanna

I can see both sides. When software sits closer to the machine (i.e., as mentioned, embedded programs):

    cost of failure is higher / tolerance for error is lower
    set of expected interactions is lower/ability to exactly specify is higher
    user interface is low priority/low feature

When software sits closer to people some of these things are turned around.

    greater tolerance for error
    interface high priority/high feature
    set of interactions is high, unexpected interactions

The main point I am thinking of is that *how people want to interact with software* is very different from "how a machine interacts with software"; different models are necessary to specify the "does it do what I want it to?" part, simply because people don't know what they want till they see it.

Wanos

When a structural engineer designs a structure, what is its purpose? If it's a bridge, you use it to cross a divide. If it's a building, it's to put things in & protect them from the elements.
Software, especially FOSS, is subject to one thing that no "true" engineer (& by this I mean one who works with the physical world rather than a virtual one) has to contend with: variation.

Imagine trying to design a car that was also a boat, as well as a submarine, and that also had to accept other peripherals (i.e., wings) to allow it to fly. Then add that you need to be able to exchange the engine: an electric, a hybrid, a miniature nuclear reactor, or a standard petrol-burning engine. This is then driven by a person to allow them to get from point A to B. That is the purpose of the product, but it must allow you to do this any way you feel is best.
In the same way FOSS needs to run in multiple environments on multiple types of hardware.

Roy, as an embedded systems designer, how many times have you had to make your code run on "any" possible system? I bet it's never, because that defeats the purpose of an embedded system. You need to make it do its thing perfectly & that's fine. It has a singular purpose, easily defined & set by rules. You work to these & you have goals etc. that allow your product to be benchmarked.

I think that a short & frequent release cycle that allows for testing on many different types of systems & fixing them quickly is very beneficial. When required, most software designers will make a system "perfect" when they can, & that requires that we have control over the environment to a larger degree. Banking software is a thought that comes to mind.

I believe that a fast prototype to allow testing is good, & I employ this when pushing out a new product. Many times an idea from a tester has sparked a whole new way of thinking about what we are building. Testers are invaluable, & a faster cycle allows us to incorporate their ideas more easily & put them back out to more testers.
Funnily enough, it seems that once it reaches a critical stage it then gets a little slower as it gets closer to its intended function. This is then finalised into a "product" & labelled as stable.

Why don't we do this to a building? Sometimes I think we do. Look at the building being renovated down the road. It may get a new office section downstairs & apartments above but it's still performing the same function. Protection from elements.

Sure, software has a purpose, but as it uses a language to complete its task, it is fraught with the same difficulties as talking to your spouse. You will normally get it correct, but sometimes the idea in your head & the words that came out made sense to you but not to her. I'm just thankful that code is less expensive to apologise to.