Recognizing that well-connected government can enhance the efficient and effective delivery of services to citizens, governments around the world have become increasingly interested in ensuring that their ICT systems are built and maintained in a manner that results in the highest levels of interoperability, data access and interchange, and “digital sovereignty.” One policy tool commonly used by governments, the eGovernment Interoperability Framework (eGIF), was pioneered by the UK in 2000 and has since been replicated in more than two dozen other countries. These policy tools most often address only the technical domain (technical interconnection), but some have also addressed semantic challenges (i.e., the meaning of data) and organizational challenges (e.g., business processes).
Over the past few months, I reviewed a range of (but by no means all) national eGIFs, charted some of the resulting data, and shared it at a workshop at the FutureGov2009 conference in Indonesia. What jumped off the page at me? First, the review suggests that policymakers are focusing too much on technical interoperability, which, although a significant challenge a decade ago, has increasingly been worked out. Governments (and consultants) often turn to policy tools largely modeled on what the UK did in 2000 (focused mostly on the technical domain), to the detriment of the far more substantial semantic and organizational issues that today are the main barriers to eGovernment interoperability. It’s not hard to see the common ancestry that many of these technically oriented eGIFs share. Just one example is the so-called “8µ Law” standard, a non-existent standard that initially appeared on the UK list and then somehow wound up under consideration in at least seven other countries in the context of their eGIFs. The appearance of this “standard” in so many countries raises important questions about the utility and effectiveness of these policy tools, nearly a decade after they were first rolled out.
Even more intriguing, one of the most touted benefits of these technically focused frameworks (that they give governments a mechanism to mandate specific technical standards, which in turn results in better interoperability) doesn’t appear to be supported by the data. Instead, the data present a case of “the tail wagging the dog.” Rather than dictating the standards that will lead to better interoperability, these standards lists largely capture the commonly used standards that the market has already embraced and agreed to use for interoperability purposes. If you look at the group of standards shared across roughly 80% of the standards lists (a dozen or so standards), you don’t find many surprises. What you find are core networking and interconnection standards (HTML, HTTP, XML, etc.) that have long been supported by any product hoping to achieve widespread market share. While the many, many other standards that appear on only one or two lists raise some interesting questions, there is little evidence that eGIFs were, or are, being used by governments to drive adoption and use (through requirements or mandates) of particular standards with an aim toward improving technical interoperability. It’s possible that the technically focused approach contributed a whole lot less to improvements in eGovernment interoperability over the last decade than some governments and practitioners believe.
There is a ton of work to be done in the next decade to make eGovernment work better. As we start out on that decade, it is critical that we have the right mix of policy tools at hand. My preliminary look back suggests we need to take a more thoughtful look at whether the current incarnation of eGIFs is the right starting point.