“The best way to get value from data is to give it away,” from the Guardian.
From the article:
Last Friday I wrote a short piece for the Datablog giving some background and context for a big open data policy package that was announced yesterday morning by Vice President Neelie Kroes. But what does the package contain? And what might the new measures mean for the future of open data in Europe?
The announcement contained some very strong language in support of open data. Open data is the new gold, the fertile soil out of which a new generation of applications and services will grow. In a networked age, we all depend on data, and opening it up is the best way to realise its value, to maximise its potential.
There was little ambiguity about the Commissioner’s support for an ‘open by default’ position for public sector information, nor for her support for the open data movement, for “those of us who believe that the best way to get value from data is to give it away”. There were props to web inventor Tim Berners-Lee, the Open Knowledge Foundation, OpenSpending, WheelMap, and the Guardian Datablog, amongst others.
Open government data at no or low cost represents a real opportunity for value-add data vendors, particularly those using topic maps.
Topic maps enable the creation of data products that can be easily integrated with data products created from different perspectives.
Not to mention reuse of data analysis to create new products to respond to public demand.
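To make the integration point concrete, here is a minimal sketch of topic-map-style merging in Python: topics from two independently produced data sets are merged whenever they share a subject identifier, so properties recorded from different perspectives end up on one topic. The identifiers, property names, and data below are all hypothetical, and the merge logic is deliberately simplified.

```python
def merge_topic_maps(*maps):
    """Merge topics that share any subject identifier (simplified sketch)."""
    merged = {}  # subject identifier -> merged topic
    for topic_map in maps:
        for topic in topic_map:
            # look for an already-merged topic sharing an identifier
            existing = None
            for sid in topic["identifiers"]:
                if sid in merged:
                    existing = merged[sid]
                    break
            if existing is None:
                existing = {"identifiers": set(), "properties": {}}
            existing["identifiers"].update(topic["identifiers"])
            existing["properties"].update(topic["properties"])
            # index the merged topic under every identifier it carries
            for sid in existing["identifiers"]:
                merged[sid] = existing
    # return each merged topic once
    seen, result = set(), []
    for t in merged.values():
        if id(t) not in seen:
            seen.add(id(t))
            result.append(t)
    return result

# Two "perspectives" on the same plant, using a shared (hypothetical) URI:
plants = [{"identifiers": {"http://example.org/plant/fukushima-daiichi"},
           "properties": {"design": "BWR Mark I"}}]
inspections = [{"identifiers": {"http://example.org/plant/fukushima-daiichi"},
                "properties": {"last_inspection": "2010-03"}}]

combined = merge_topic_maps(plants, inspections)
# a single topic now carries both the design and the inspection date
```

Because merging is keyed on declared subject identity rather than on matching column names or file layouts, neither data producer needs to know about the other in advance.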
For example, after the recent misfortunes with flooding and nuclear reactors in Japan, there was an upsurge of interest in the safety of reactors in other countries. The information provided by news outlets was equal parts summary and reassurance. A data product that mapped together known issues with the plants in Japan, their design, inspection reports on reactors in some locale, plus maps of their locations, etc., would have found a ready audience.
Creation of a data product like that, in time to catch the increase in public interest, would depend on prior analysis of large amounts of public data. Analysis that could be re-used for a variety of purposes.
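The reuse idea can be sketched briefly: if the analysis of public data (designs, inspection findings, locations) already exists as keyed tables, assembling the new product is a fast join rather than a fresh research project. The plant names, findings, and coordinates below are invented for illustration.

```python
# Hypothetical pre-analyzed tables, each keyed by plant name.
designs = {"Plant A": "BWR Mark I", "Plant B": "PWR"}
findings = {"Plant A": ["containment venting issue"], "Plant B": []}
locations = {"Plant A": (35.7, 139.7), "Plant B": (48.5, 2.9)}

def build_product(plants):
    """Join the existing analyses into a new data product."""
    return [{"plant": p,
             "design": designs.get(p),
             "known_issues": findings.get(p, []),
             "location": locations.get(p)}
            for p in plants]

report = build_product(["Plant A", "Plant B"])

# Reuse the same tables for a different question: which plants share
# the reactor design implicated in the Japanese incident?
at_risk = [r for r in report if r["design"] == "BWR Mark I"]
```

The point is that the expensive step, analyzing the raw public data into those tables, happens once; each spike in public interest only requires a cheap recombination.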