European Commission stands against vendor lock-in


After a decade-long battle, terms of a settlement agreement were finally reached last week between the European Commission and Microsoft regarding anticompetitive practices. The official settlement is a win for European consumers, but the simultaneous Public Undertaking on Interoperability issued by the company leaves much to be desired for the open source community.

The official settlement is an agreement regarding the bundling of Windows and the Internet Explorer browser, which the Commission had argued gave Microsoft an unfair advantage in the web browser market.

European users of Microsoft Windows will soon see a pop-up ballot screen, like the one pictured (submitted to the European Commission in Annex B and C of Microsoft's proposal), allowing them to choose one of twelve browsers to use: Internet Explorer, Safari, Chrome, Firefox, Opera, AOL, Maxthon, K-Meleon, Flock, Avant Browser, Sleipnir, and Slim Browser. According to the Commission, "This should ensure competition on the merits and allow consumers to benefit from technical developments and innovation both on the web browser market and on related markets, such as web-based applications."

You can read the case documents on the Commission's website, and the AP writeup offers a good synopsis of the web browser issues and the effects of the settlement.

At the same time, Microsoft published its revised Public Undertaking on Interoperability, which the Commission's press release is careful to point out is not a part of the settlement, although the government "welcomes this initiative to improve interoperability." The Public Undertaking says the company will provide developers, including those in the open source community, access to technical documentation for products such as Windows, Windows Server, Office, Exchange, and SharePoint. It sounds very positive for open source developers, until you see that the Patent Pledge redefines open source to exclude commercial applications.

“An ‘open source project’ is a software development project the resulting source code of which is freely distributed, modified, or copied pursuant to an open source license and is not commercially distributed by its participants. If You engage in the commercial distribution or importation of software derived from an open source project or if You make or use such software outside the scope of creating such software code, You do not benefit from this promise for such distribution or for these other activities.” [note: emphasis mine]

First, are we really back here again, arguing whether free/open source software can be sold for profit? Perhaps we should revisit the Open Source Definition maintained by the Open Source Initiative, which pretty clearly states that you can sell the software if you want. Or not. It's your choice.

This Patent Pledge definition takes away that choice by attempting to exclude commercial applications. Doesn’t that sound “anticompetitive” to you? Isn’t it ironic that this is being promoted to coincide with the company’s settlement of an anticompetition suit? Of course, this isn’t new. You can read more about it at Groklaw.

But, before your blood starts to boil, the key thing to remember here is that progress is slowly being made in the government space. Although the undertaking isn't part of the settlement with the Commission, the government body noted that it "will carefully monitor the impact of this undertaking on the market and take its findings into account in the pending antitrust investigation regarding interoperability." With that and the official settlement, the European Commission has sent a strong message to the tech industry that it will not tolerate practices that lead to vendor lock-in. The government body instead stood for consumers, who should have freedom of choice in the products they want to use. Kudos.



As the co-chair of OSA's Interoperability Group (note: OSA recently merged with OW2), I share your concern over the 'Patent Pledge' wording. Jaspersoft is one of many commercial open source ("Open Core" in our case) members of OSA that have worked closely over the last three years to define and implement interoperability standards. These standards include Single Sign-On, Data Integration, Portal Integration, and others across a suite of ERP, CRM, BI, System Monitoring, and Data Integration applications. We're now starting to work to enhance OW2's interoperability strategy and roadmap.



Andy Updegrove once told me that "open standards can exist without open source, but open source can't exist without open standards." So thanks for the work you guys are doing on interoperability standards. We're all better off for it.


Open source can exist without standards.

Standards are designed to allow competition in the same space at the same time by different entities, and to allow those working in nearby spaces to interface efficiently.

Open source is clearly enhanced by open standards.

However, it gets tricky when we talk about open standards and closed source.

Closed source almost automatically leads to interoperability problems even among open standards because most open standards allow for extension mechanisms ("room for growth and innovation") and underspecify in some areas. Also, anything written in English has ambiguities, potential overspecification, and other errors.
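This gap can be sketched in a few lines. The example below is entirely hypothetical (a made-up document format, not any particular standard): the spec permits vendor extension fields, one consumer rejects anything it does not recognize, and another silently ignores extensions. Both can plausibly claim conformance, yet only one interoperates with the extended document.

```python
# Hypothetical sketch of how an extension mechanism in an "open standard"
# produces interop failures between conforming implementations.

CORE_FIELDS = {"title", "body"}  # fields the imagined core spec defines

def strict_read(doc: dict) -> str:
    """A consumer that rejects fields outside the core spec."""
    unknown = set(doc) - CORE_FIELDS
    if unknown:
        raise ValueError(f"unrecognized fields: {sorted(unknown)}")
    return doc["body"]

def tolerant_read(doc: dict) -> str:
    """A consumer that silently skips extension fields."""
    return doc["body"]

# A document produced by a vendor using the extension mechanism:
doc = {"title": "memo", "body": "hello", "x-vendor-style": "blink"}

print(tolerant_read(doc))        # interoperates despite the extension
try:
    strict_read(doc)
except ValueError as e:
    print("interop failure:", e)
```

Since the imagined spec underspecifies what to do with unknown fields, neither implementation is "wrong"; yet whichever one matches the dominant vendor's behavior gets to call the other the broken product.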

So what is different with open source? Open source makes it much easier to interface correctly despite the imperfections of open standards, because all the blueprints are accessible. Further, you are more likely to discover some types of problems more quickly (because of this same source code access).

The (inferior) alternative to code access in the closed source world is to build very thorough tests to try to catch the main problems (note that open source can also build thorough tests). But we find two different scenarios.

Closed source competitors with an interest in cooperating can work aggressively at passing tests and trying to make the standard very clear and correct; a monopolist, however, stands to gain from every interoperability failure that exists. The reason is that when interoperability fails, the status quo (what opens yesterday's documents or runs today's already-created code) is what is much more likely to stay, with the competition being branded the "broken product," even if the competition was actually built much more closely to spec. And accidental interop failures are very easy to come by: bugs naturally arise from expected human error even if standards were perfect, never mind strategic sloppiness, the ordinary extensions mentioned before, or outright abuse of the extension mechanism to thwart interop even in "almost" ordinary cases that the core standard would otherwise have covered.

So when it comes to interoperability, open source enables it in all circumstances, even if sometimes inefficiently. Closed source, however, can lead to very serious interoperability problems even within the scope of open standards, and open standards tend to fall short in a number of ways, making it easier for closed source products to fail to interoperate noticeably more often than their open source counterparts.

Note that "innovation" through extensions to standards is frequently desired by those who favor closed source, specifically because it allows lock-in. [With or without standards, a desire for lock-in is a motivation for using closed source in the first place, rather than competing on quality of service. Monopolists simply leverage this much more successfully than most other closed source developers.]

Also note that sometimes interoperability failures are subtle. They may not be obvious immediately and may rear their heads at inopportune moments, leading buyers to reconsider their use of the new software.

Finally, note that in practice we do find interoperability failures with open source too, but the key difference is this: if you had very important documents you wanted to preserve going forward and the new open software failed somehow, it is at least reasonably possible to have someone (perhaps for a fee) find the problems and fix them (assuming you didn't have the time or skill to do the work yourself and no one else had done it either). Risks that exist with open source are mitigated by the openness of the source code. In fact, the companies that contribute the most code to a project sometimes capitalize on their investments by being able to offer the highest quality of service related to such software (and they can be replaced if they don't perform). In contrast, try getting anyone but the proprietary vendor of your software to help you fix a migration problem. Sometimes third parties come to the rescue, but their capabilities are limited (meanwhile, the vendor might actually not want you to fix some types of problems if doing so would mean lost sales). And let's not forget that closed source is much better established; the differences will grow as open source continues to gain the support of more vendors.
