The UK Cabinet Office has made no secret of its enthusiasm for open source software. They've provided a Government Action Plan, included open source in their ICT Strategy, and even provided an Open Source Procurement Toolkit for government buyers. They see the same benefits as their US counterparts: a more competitive software market, more innovation, more interagency collaboration, fewer silos, better security, and more opportunities for domestic software development firms. The UK, however, hasn't yet seen the kind of open source adoption we have in the United States despite similar challenges and similar market conditions.
A few weeks ago, I had a chance to spend a week with some very energetic and talented folks in UK government and industry. There is frustration on both sides. Those in the government are eager to see greater open source adoption, and those in industry feel unable to deliver, for a variety of reasons. It's still early days for the UK policy, but US-style adoption of open source still feels very far away.
Some (myself included) have suggested that a UK and European penchant for top-down policymaking at the expense of market-driven approaches that focus on removing barriers and creating incentives is responsible. In the US, the argument goes, open source started in academic and research work, entered IT enterprise by providing core Internet services at the edge of the network, and eventually began supporting mission-critical workloads. US policymaking ratified, rather than led, a transformation that was already underway, so there was never a concern about open source being mainstream or viable. To paraphrase French politician Alexandre Auguste Ledru-Rollin, the US policy apparatus had to learn where its people were headed so it could lead them.
After spending a week discussing this issue, I now realize that this explanation isn't sufficient. We have to account for over a decade of nearly identical market conditions that should have been driving the UK towards open source adoption. After all, a pro-market policy of benign neglect and non-intervention is precisely what the UK previously had in place. The Cabinet Office, obviously, wouldn't feel compelled to act if the market were adequately addressing the problem.
During our meetings, "culture" was invoked regularly as another reason for slower adoption. As in the US, "culture" is a frequent scapegoat for organizational inertia. This is a pet peeve of mine. "Culture" is a regrettably acceptable way of throwing up one's hands in frustration. You can't "fix" a culture. You can, though, fix the structure: a complex set of market dynamics, policies, and procedures that create the behavior in question.
So if we consider this a structural question, it's worth reviewing what this structure looks like. As in the US, UK government IT is not conducted by government, but by private sector system integrators — "SIs," in the jargon — who have been contracted to fulfill a set of requirements provided by the government. When the government needs a new IT system to support an entitlement program, for instance, they will create a tender that defines what capability is required, and the implementation details are left to the integrator. The assumption is that the SIs will reduce costs and improve service delivery because of their specialization and experience on other public sector projects. The transition of IT systems from the public sector to SIs has been, for the most part, a success, both here and in the UK. But while these structures are similar, I'm now convinced that subtle differences between the US and the UK procurement regimes can account for their wildly different rates of open source adoption.
In an arrangement of this kind, there is a constant tension between the government and industry over how much control the government should have over implementation details. In the UK, I was surprised at the bright line that was drawn between government and industry responsibility. "Risk" was a frequent topic of conversation. Should the government intervene in any aspect of the implementation, they are introducing the dreaded "risk," which seemed to be an expensive and distasteful proposition for the government. The integrators, predictably, discourage government intervention, arguing that they should be free to make the best choices on the government's behalf. After all, didn't the government put this work to bid so they could get out of the IT business?
In the US, on the other hand, it's not unusual for the government to be heavily involved in the implementation details. Procurements may not name specific vendors or providers, but the language of the tender can make it clear if the government has a preference for one general approach over another, especially in the context of an agency's IT portfolio management strategy. The US Army, for instance, manages its IT portfolio through their Common Operating Environment framework, which defines standards, platforms, and other details that in the UK would be firmly in the purview of industry.
A colleague of mine was very excited about a new contracting practice in the UK: the government invites industry to a public meeting where new contracts are discussed, and industry can ask questions about those contracts. It took me a moment to realize that he was talking about a common US practice: Industry Days. After years of frustration, acquisition tools like Industry Days emerged in the US to encourage discussion between government and industry, especially before contracts are put out to bid. US procurement staff now consider these collaborations a prerequisite to sensible contracting. I heard a number of UK government representatives discuss the prospect of a "portfolio management" approach in their acquisitions. This is good news. It's hard to imagine an effective portfolio management strategy without US-style freedom to engage vendors and intervene in their implementations.
This brings us to another difference between the two outsourcing strategies: the size and duration of the contract vehicles. In the United States, there are literally thousands of contracts for thousands of missions, even within the same agency or department. After a number of spectacularly bad long-term contracts, under which the integrator operates with little competition for years at a time with predictably poor results, there is now a clear preference in the legislative and executive branches for shorter contracts. This is especially true for IT, which has a notoriously short lifecycle. In the UK, I was frankly shocked at the size and scope of the contracts in place. Once an integrator or team of integrators has secured a contract, it seems, their incumbency is guaranteed for years at a time. These large, long-term contracts strongly encourage the integrator to engage in large, long-term contracts with its software vendors, further locking the government into IT systems it does not control. I understand that the UK is moving towards more, smaller contracts, but the effect these monolithic contracts have already had on innovation in the UK government is predictable.
US agencies, then, have an ability to insist on new technology and encourage new approaches in a way that their UK counterparts do not. They have the freedom to manage their own IT portfolio as much or as little as they like. They can also create consequences if an industry partner does not comply with the government's vision. Whether or not these government interventions in their own IT projects are successful, the additional freedom certainly makes it more likely that innovations will be introduced into a program. More frequent competitive bids also act as an incentive for industry to introduce new approaches, so as to differentiate themselves from their competitors. It's not hard to imagine much more innovation and open source adoption in the UK government under a more interventionist and more volatile, though slightly riskier, contracting regime.
The UK security regime also came up frequently. This isn't a surprise. Certification and accreditation (C&A) is central to any government IT procurement. In the US, this responsibility is distributed across many Designated Accrediting Authorities (DAAs) that all work from a national set of controls and procedures, like FISMA and NIST Special Publication 800-53. The accreditation process is fragmented and subjective, but the DAAs rely heavily on standard certifications like the internationally recognized Common Criteria to create a base level of assurance for the products they use. The diversity and subjectivity of the process can be frustrating for all involved, but it also allows more freedom for vendors and customers, increasing the chance that a new technology will be introduced.
In the UK, C&A responsibility falls to just one organization: the Communications-Electronics Security Group (CESG). Like their counterparts in the US, they rely on accreditation. Unlike the US, they seem to struggle with their certification regime. The CESG issued some surprisingly honest and insightful clarifying guidance in November of 2009 which describes this challenge. The lack of a well-understood baseline of certifications has put an incredible burden on the CESG's accreditation process and on vendors that want to enter the UK government market. The new Commercial Product Assurance (CPA) process, which hopes to remedy this, has been progressing slowly, as evidenced by the lack of certified products on the CPA website. You can imagine how this has encumbered the adoption of open source.
If we take all these procurement and C&A challenges together, I believe that the issue of open source adoption is part of a larger question about how a government might introduce new ideas, approaches, and technology into the government IT enterprise. Any attempt to introduce more open source software should address these more fundamental issues of certification, procurement, competition, and portfolio management.
This doesn't mean that the UK Cabinet Office should abandon its open source policy, or that the UK should adopt a US-style procurement or C&A structure. For anyone who's worked with the US Federal Acquisition Regulations or the DOD security regime, the prospect of exporting that particular monster is absurd. The US, in fact, has much to learn from the UK approach. I'm envious, for instance, of having one authority for certification and accreditation procedures. If the Cabinet Office is interested in reduced costs, more innovation, and everything else open source software can offer, it should focus its efforts on accelerating the reform of its acquisition and security regimes. That reform should include:
- a renewed focus on internationally recognized certification as a way of reducing the accreditation burden and lowering barriers to entry
- more turnover and competition on IT contract vehicles
- far more dialogue between government and industry, both before and after a contract award
- far more involvement from the government in the decisions that affect the fate of its IT systems
Ironically, the procurement reforms I've recommended for the UK could be lifted directly from the US Office of Management and Budget's 25-point Plan for IT Reform. This is because the UK and the US are, in the end, much more similar than they realize. They both struggle with the same IT market, the same demanding set of mission requirements, and the same shrinking budgets. The policymakers on both sides have much to learn from each other's success. That is, after all, the open source way.