Guest post: What linked data does, what linked data needs
I've been talking about linked data from inside and outside the news and media industry for most of my career. I have spoken to the executive boards of publishers who have done just fine for centuries without the esoterica of description logics and persistent URIs. It can be a very tough conversation.
The news and publishing industry wants better, faster and more accurate methods of gathering information (cheaper would be nice too). This way, its journalists and editors can produce the important and useful content and information their organisations have staked their reputations upon. The provision of 'important and useful information' is an ideal stated in the principles of Linked Open Data espoused by Tim Berners-Lee. In the last few years, the conversations have been getting easier, and more often than not the invitation comes from someone within the organisation who 'gets it'. I believe this is because companies like ODI member Ontotext have a proven track record of providing enterprises with the means to bootstrap client organisations' knowledge with the Linked Open Data cloud. Organisations are beginning to appreciate that openness does not mean a lack of control.
A recently launched example is Newz. It is a shared content platform bringing together news produced by the leading newspaper publishers in the Netherlands. Content is semantically tagged and linked to the Linked Open Data cloud (e.g. DBpedia, Freebase, GeoNames, MusicBrainz, Europeana, Cornetto, etc.).
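To make the idea of semantic tagging concrete, here is a minimal, purely illustrative sketch of what linking content to the Linked Open Data cloud can look like. It is not Newz's actual data model: the article URI, the `mentions` predicate, and the tagging output are all hypothetical. The point is simply that once entity mentions are resolved to shared LOD URIs, content can be connected by identity rather than by brittle string matching on names.

```python
# Illustrative only: a toy RDF-style tagging model, not Newz's actual schema.
# Each tag is a (subject, predicate, object) triple linking an article
# to a Linked Open Data URI.

ARTICLE = "https://example.org/articles/42"  # hypothetical article URI

# Hypothetical output of an entity-tagging step: mentions in the article
# text resolved to URIs in DBpedia and GeoNames.
triples = [
    (ARTICLE, "mentions", "http://dbpedia.org/resource/Amsterdam"),
    (ARTICLE, "mentions", "http://sws.geonames.org/2759794/"),
    (ARTICLE, "mentions", "http://dbpedia.org/resource/Rembrandt"),
]

def linked_resources(graph, subject, predicate):
    """Return every object linked from `subject` via `predicate`."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Two articles that both mention http://dbpedia.org/resource/Amsterdam are
# related regardless of whether their text says "Amsterdam" or "the capital".
for uri in linked_resources(triples, ARTICLE, "mentions"):
    print(uri)
```

Because the URIs are shared identifiers, the same lookup works across publishers' archives and across the external LOD sources listed above, which is what makes a multi-publisher platform feasible.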
Ontotext's OWLIM forms the backbone of Newz's semantic media platform. The project had to convince a dozen competitors that they should open their content, their assets, to a centralised platform. The Dutch have an expression for such a task, 'wheelbarrow full of frogs' (something akin to 'herding cats'), but they succeeded. Additionally, Newz had to convince the newspaper publishers of the value of using data from outside the walled garden (i.e. from the LOD data sources mentioned above). This data had to be shown to be valuable, correct and reliable. Here too the project succeeded. The initial stage of the project has gone live and feedback from the publishers has been positive. It is a significant achievement in many respects and bodes well for the sustainability of Linked Open Data.
However, the platform does not yet publish data or content back to the Linked Open Data graph, although this is something the consortium is interested in exploring in the future.
There are still battles to be fought, but it is important to see that technologists, open data advocates, governments and private companies are allies.
In the spirit of that multipartite discussion, the ODI and organisations such as the BBC, EuroMoney and Affectv are participating in an event to discuss the use and potential of Linked Data. They will describe how they are currently using semantics within their organisations and discuss what further work is needed to bring this emerging technology to the mainstream. For more details visit the Meetup page.
Jarred McGinnis is an independent consultant in Semantic Technologies and visiting Research Fellow at King’s College London. Previously he was the Head of Research, Semantic Technologies, at the Press Association, investigating the role of technologies such as text-mining and Linked Data in the news industry. Dr. McGinnis received his PhD in Informatics from the University of Edinburgh in 2006.