Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

March 21, 2015

Turning the MS Battleship

Filed under: Interoperability,Microsoft,WWW,XML,XPath — Patrick Durusau @ 8:46 am

Improving interoperability with DOM L3 XPath by Thomas Moore.

From the post:

As part of our ongoing focus on interoperability with the modern Web, we’ve been working on addressing an interoperability gap by writing an implementation of DOM L3 XPath in the Windows 10 Web platform. Today we’d like to share how we are closing this gap in Project Spartan’s new rendering engine with data from the modern Web.

Some History

Prior to IE’s support for DOM L3 Core and native XML documents in IE9, MSXML provided any XML handling and functionality to the Web as an ActiveX object. In addition to XMLHttpRequest, MSXML supported the XPath language through its own APIs, selectSingleNode and selectNodes. For applications based on and XML documents originating from MSXML, this works just fine. However, this doesn’t follow the W3C standards for interacting with XML documents or exposing XPath.

To accommodate a diversity of browsers, sites and libraries wrap XPath calls to switch to the right implementation. If you search for XPath examples or tutorials, you’ll immediately find results that check for IE-specific code to use MSXML for evaluating the query in a non-interoperable way:
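The snippet from the original post is not reproduced above, but the pattern those search results show looks roughly like the sketch below. This is not Microsoft's code; the helper name and details are mine:

```typescript
// Sketch of the non-interoperable pattern: branch on whether the browser
// exposes the standard DOM L3 XPath API or the legacy MSXML methods.
function selectAll(doc: Document, xpath: string): Node[] {
  const results: Node[] = [];
  if (typeof doc.evaluate === "function") {
    // Standards path (DOM L3 XPath), now also supported by Project Spartan.
    const snapshot = doc.evaluate(
      xpath, doc, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
    for (let i = 0; i < snapshot.snapshotLength; i++) {
      results.push(snapshot.snapshotItem(i)!);
    }
  } else if (typeof (doc as any).selectNodes === "function") {
    // Legacy IE/MSXML path: selectNodes is not part of the W3C DOM.
    const nodes = (doc as any).selectNodes(xpath);
    for (let i = 0; i < nodes.length; i++) {
      results.push(nodes[i]);
    }
  }
  return results;
}
```

The point of the interoperability work described in the post is that the first branch should eventually be the only one sites need.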

It seems like a long time ago that a relatively senior Microsoft staffer told me that turning a battleship like MS takes time. No change, however important, is going to happen quickly. Just the way things are in a large organization.

The important thing to remember is that once change starts, it takes on a momentum of its own and is more likely to continue, even though it was hard to get started.

Yes, I am sure the present steps towards greater interoperability could have gone further, taken another direction, etc., but they didn’t. Rather than complain about the present change for the better, why not use it as a wedge to push for greater support for more recent XML standards?

For my part, I guess I need to get a copy of Windows 10 on a VM so I can volunteer as a beta tester for full XPath (XQuery?/XSLT?) support in a future web browser. MS as a full XML competitor and possible source of open source software would generate some excitement in the XML community!

September 12, 2013

…Wheat Data Interoperability Working Group

Filed under: Agriculture,Data Integration,Interoperability — Patrick Durusau @ 3:50 pm

Case statement: Wheat Data Interoperability Working Group

From the post:

The draft case statement for the Wheat Data Interoperability Working Group has been released.

The Wheat data interoperability WG is a working group of the RDA Agricultural data interest group. The working group will take advantage of other RDA working groups’ production. In particular, the working group will be watchful of working groups concerned with metadata, data harmonization and data publishing.

The working group will also interact with the WheatIS experts and other plant projects such as TransPLANT and agINFRA, which are built on standard technologies for data exchange and representation. The Wheat data interoperability group will exploit existing collaboration mechanisms like CIARD to get as much stakeholder involvement in the work as possible.

If you want to contribute comments, do not hesitate to contact the Wheat Data Interoperability Working Group via its “Wheat data interoperability” working group page.


I know, agricultural interoperability doesn’t have the snap of universal suffrage, the crackle of a technological singularity or the pop of first contact.

On the other hand, with a world population estimated at 7.108 billion people, agriculture is an essential activity.

The specifics of wheat data interoperability should narrow down to meaningful requirements: requirements with measures of success or failure, rather than progress measured towards or away from less precise goals.

June 4, 2013

Full Healthcare Interoperability “…may take some creative thinking.”

Filed under: Health care,Interoperability — Patrick Durusau @ 3:41 pm

Completing drive toward healthcare interoperability will be challenge by Ed Burns.

From the post:

The industry has made progress toward healthcare interoperability in the last couple years, but getting over the final hump may take some creative thinking. There are still no easy answers for how to build fully interoperable nationwide networks.

At the Massachusetts Institute of Technology CIO Symposium, held May 22 in Cambridge, Ma., Beth Israel Deaconess Medical Center CIO John Halamka, M.D., said significant progress has been made.

In particular, he pointed to the growing role of the Clinical Document Architecture (CDA) standard. Under the 2014 Certification Standards, EHR software must be able to produce transition of care documents in this form.

But not every vendor has reached the point where it fully supports this standard, and it is not the universal default for clinician data entry. Additionally, Halamka pointed out that information in health records tends to be incomplete. Often the worker responsible for entering important demographic data and other information into the record is the least-trained person on the staff, which can increase the risk of errors and produce bad data.

There are ways around the lack of vendor support for healthcare data interoperability. Halamka said most states’ information exchanges can function as middleware. As an example, he talked about how Beth Israel is able to exchange information with Atrius Health, a group of community-based hospitals in Eastern Massachusetts, across the state’s HIE even though the two networks are on different systems.

“You can get around what the vendor is able to do with middleware,” Halamka said.

But while these incremental changes have improved data interoperability, supporting full interconnectedness across all vendor systems and provider networks could take some new solutions.

Actually “full” healthcare interoperability isn’t even a possibility.

What we can do is decide how much interoperability is worth in particular situations and do the amount required.

Everyone in the healthcare industry has one or more reasons for the formats and semantics they use now.

Changing those formats and semantics requires not only changing the software but training the people who use the software and the data it produces.

Not to mention the small task of deciding on what basis interoperability will be built.

As you would expect, I think a topic-map-as-middleware solution, one that ties diverse systems together in a re-usable way, is the best option.
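To make that concrete, here is a minimal sketch of the merge step such middleware would perform. The record shapes, identifier URNs, system names and merge rule below are invented for illustration; they are not drawn from CDA, any HIE, or any vendor system:

```typescript
// Minimal topic-map-flavored merge: two systems describe the same patient
// under different local keys; a shared subject identifier lets middleware
// fold them into one topic without touching either source system.
interface SourceRecord {
  system: string;                       // which source system the record came from
  localId: string;                      // that system's own key
  subjectIdentifiers: string[];         // shared identifiers (e.g. an MRN URI)
  properties: Record<string, string>;   // whatever fields the system exposes
}

interface Topic {
  subjectIdentifiers: Set<string>;
  properties: Record<string, string>;
  sources: string[];
}

function mergeRecords(records: SourceRecord[]): Topic[] {
  const topics: Topic[] = [];
  for (const rec of records) {
    // Merge rule: any shared subject identifier means "same subject".
    const existing = topics.find(t =>
      rec.subjectIdentifiers.some(id => t.subjectIdentifiers.has(id)));
    const topic = existing ?? {
      subjectIdentifiers: new Set<string>(),
      properties: {},
      sources: [],
    };
    if (!existing) topics.push(topic);
    rec.subjectIdentifiers.forEach(id => topic.subjectIdentifiers.add(id));
    Object.assign(topic.properties, rec.properties); // last writer wins; a real merge needs policy
    topic.sources.push(`${rec.system}:${rec.localId}`);
  }
  return topics;
}

// Hypothetical example: the same patient seen by two systems.
const merged = mergeRecords([
  { system: "SystemA", localId: "A-1001",
    subjectIdentifiers: ["urn:example:mrn:12345"],
    properties: { dob: "1970-01-01" } },
  { system: "SystemB", localId: "PX-77",
    subjectIdentifiers: ["urn:example:mrn:12345"],
    properties: { allergy: "penicillin" } },
]);
// merged holds one topic carrying both properties and both source keys.
```

The source systems stay untouched; the map is the only thing that has to know how their identifiers line up, which is what makes it re-usable.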

Convincing the IT-system innocents who write healthcare policy that merely demanding interoperability isn’t an effective strategy would be a first step.

What would you suggest as a second step?

March 31, 2013

Opening Standards: The Global Politics of Interoperability

Filed under: Data Silos,Interoperability,Silos,Standards — Patrick Durusau @ 10:26 am

Opening Standards: The Global Politics of Interoperability Edited by Laura DeNardis.

Overview:

Openness is not a given on the Internet. Technical standards–the underlying architecture that enables interoperability among hardware and software from different manufacturers–increasingly control individual freedom and the pace of innovation in technology markets. Heated battles rage over the very definition of “openness” and what constitutes an open standard in information and communication technologies. In Opening Standards, experts from industry, academia, and public policy explore just what is at stake in these controversies, considering both economic and political implications of open standards. The book examines the effect of open standards on innovation, on the relationship between interoperability and public policy (and if government has a responsibility to promote open standards), and on intellectual property rights in standardization–an issue at the heart of current global controversies. Finally, Opening Standards recommends a framework for defining openness in twenty-first-century information infrastructures.

Contributors discuss such topics as how to reflect the public interest in the private standards-setting process; why open standards have a beneficial effect on competition and Internet freedom; the effects of intellectual property rights on standards openness; and how to define standard, open standard, and software interoperability.

If you think “open standards” have impact, what would you say about “open data?”

At a macro level, “open data” has many of the same issues as “open standards.”

At a micro level, “open data” has unique social issues that drive the creation of silos for data.

So far as I know, a serious investigation of the social dynamics of data silos has yet to be written.

Understanding the dynamics of data silos might, no guarantees, lead to better strategies for dismantling them.

Suggestions for research/reading on the social dynamics of data silos?

March 23, 2013

Increasing Interoperability of Data for Social Good [$100K]

Filed under: Challenges,Contest,Integration,Interoperability,Topic Maps — Patrick Durusau @ 2:23 pm

Increasing Interoperability of Data for Social Good

March 4, 2013 through May 7, 2013 11:30 AM PST

Each Winner to Receive $100,000 Grant

Got your attention? Good!

From the notice:

The social sector is full of passion, intuition, deep experience, and unwavering commitment. Increasingly, social change agents, from funders to activists, are adding data and information as yet one more tool for decision-making and increasing impact.

But data sets are often isolated, fragmented and hard to use. Many organizations manage data with multiple systems, often due to various requirements from government agencies and private funders. The lack of interoperability between systems leads to wasted time and frustration. Even those who are motivated to use data end up spending more time and effort on gathering, combining, and analyzing data, and less time on applying it to ongoing learning, performance improvement, and smarter decision-making.

It is the combining, linking, and connecting of different “data islands” that turns data into knowledge – knowledge that can ultimately help create positive change in our world. Interoperability is the key to making the whole greater than the sum of its parts. The Bill & Melinda Gates Foundation, in partnership with Liquidnet for Good, is looking for groundbreaking ideas to address this significant, but solvable, problem. See the website for more detail on the challenge and application instructions. Each challenge winner will receive a grant of $100,000.

From the details website:

Through this challenge, we’re looking for game-changing ideas we might never imagine on our own and that could revolutionize the field. In particular, we are looking for ideas that might provide new and innovative ways to address the following:

  • Improving the availability and use of program impact data by bringing together data from multiple organizations operating in the same field and geographical area;
  • Enabling combinations of data through application programming interfaces (APIs), taxonomy crosswalks, classification systems, middleware, natural language processing, and/or data sharing agreements;
  • Reducing inefficiency for users entering similar information into multiple systems through common web forms, profiles, apps, interfaces, etc.;
  • Creating new value for users trying to pull data from multiple sources;
  • Providing new ways to access and understand more than one data set, for example, through new data visualizations, including mashing up government and other data;
  • Identifying needs and barriers by experimenting with increased interoperability of multiple data sets;
  • Providing ways for people to access information that isn’t normally accessible (for example, using natural language processing to pull and process stories from numerous sources) and combining that information with open data sets.

Successful Proposals Will Include:

  • Identification of specific data sets to be used;
  • Clear, compelling explanation of how the solution increases interoperability;
  • Use case;
  • Description of partnership or collaboration, where applicable;
  • Overview of how solution can be scaled and/or adapted, if it is not already cross-sector in nature;
  • Explanation of why the organization or group submitting the proposal has the capacity to achieve success;
  • A general approach to ongoing sustainability of the effort.

I could not have written a more topic-map-oriented challenge. You?

They suggest the usual social data sites:

February 26, 2013

Ocean Data Interoperability Platform (ODIP)

Filed under: Interoperability,Open Data — Patrick Durusau @ 1:53 pm

Ocean Data Interoperability Platform (ODIP)

From the post:

The Ocean Data Interoperability Platform (ODIP) is a 3-year initiative (2013-2015) funded by the European Commission under the Seventh Framework Programme. It aims to contribute to the removal of barriers hindering the effective sharing of data across scientific domains and international boundaries.

ODIP brings together 11 organizations from the United Kingdom, Italy, Belgium, The Netherlands, Greece and France with the objective to provide a forum to harmonise the diverse regional systems.

The First Workshop will take place from Monday 25 February 2013 to and including Thursday 28 February 2013. More information about the workshop at 1st ODIP Workshop.

From the workshop page, a listing of topics with links to further materials:

Gathering a snapshot of our present-day semantic diversity is an extremely useful exercise, whatever your ultimate choice of “solution.”

December 24, 2012

Geospatial Intelligence Forum

Filed under: Integration,Intelligence,Interoperability — Patrick Durusau @ 2:32 pm

Geospatial Intelligence Forum: The Magazine of the National Intelligence Community

Apologies but I could not afford a magazine subscription for every reader of this blog.

The next best thing is a free magazine that may be useful in your data integration/topic map practice.

Defense intelligence has been a hot topic for the last decade and there are no signs that it is going to change any time soon.

I was browsing through Geospatial Intelligence Forum (GIF) when I encountered:

Closing the Interoperability Gap by Cheryl Gerber.

From the article:

The current technology gaps can be frustrating for soldiers to grapple with, particularly in the middle of battlefield engagements. “This is due, in part, to stovepiped databases forcing soldiers who are working in tactical operations centers to perform many work-arounds or data translations to present the best common operating picture to the commander,” said Dr. Joseph Fontanella, AGC director and Army geospatial information officer.

Now there is a use case for interoperability, being “…in the middle of battlefield engagements.”

Cheryl goes on to identify five (5) gaps in interoperability.

GIF looks like a good place to pick up riffs, memes, terminology and even possible contacts.

Enjoy!

September 28, 2012

2013 Workshop on Interoperability in Scientific Computing

Filed under: Conferences,Interoperability,Science,Scientific Computing — Patrick Durusau @ 10:52 am

2013 Workshop on Interoperability in Scientific Computing

From the post:

The 13th annual International Conference on Computational Science (ICCS 2013) will be held in Barcelona, Spain from 5th – 7th June 2013. ICCS is an ERA 2010 ‘A’-ranked conference series. For more details on the main conference, please visit www.iccs-meeting.org. The 2nd Workshop on Interoperability in Scientific Computing (WISC ’13) will be co-located with ICCS 2013.

Approaches to modelling take many forms. The mathematical, computational and encapsulated components of models can be diverse in terms of complexity and scale, as well as in published implementation (mathematics, source code, and executable files). Many of these systems are attempting to solve real-world problems in isolation. However the long-term scientific interest is in allowing greater access to models and their data, and to enable simulations to be combined in order to address ever more complex issues. Markup languages, metadata specifications, and ontologies for different scientific domains have emerged as pathways to greater interoperability. Domain specific modelling languages allow for a declarative development process to be achieved. Metadata specifications enable coupling while ontologies allow cross platform integration of data.

The goal of this workshop is to bring together researchers from across scientific disciplines whose computational models require interoperability. This may arise through interactions between different domains, systems being modelled, connecting model repositories, or coupling models themselves, for instance in multi-scale or hybrid simulations. The outcomes of this workshop will be to better understand the nature of multidisciplinary computational modelling and data handling. Moreover we hope to identify common abstractions and cross-cutting themes in future interoperability research applied to the broader domain of scientific computing.

How is your topic map information product going to make the lives of scientists simpler?

September 2, 2012

HTML [Lessons in Semantic Interoperability – Part 3]

Filed under: HTML,Interoperability,Semantics — Patrick Durusau @ 12:06 pm

If HTML is an example of semantic interoperability, are there parts of HTML that can be re-used for more semantic interoperability?

Some three-year-old numbers on usage of HTML elements:

Element Percentage
a 21.00
td 15.63
br 9.08
div 8.23
tr 8.07
img 7.12
option 4.90
li 4.48
span 3.98
table 3.15
font 2.80
b 2.32
p 1.98
input 1.79
script 1.77
strong 0.97
meta 0.95
link 0.66
ul 0.65
hr 0.37
http://webmasters.stackexchange.com/questions/11406/recent-statistics-on-html-usage-in-the-wild

Assuming they still hold true, the <a> element is by far the most popular.

Implications for a semantic interoperability solution that leverages the <a> element?

Leave the syntax the hell alone!

As we saw in parts 1 and 2 of this series, the <a> element has:

  • simplicity
  • immediate feedback

If you don’t believe me, teach someone who doesn’t know HTML at all how to create an <a> element and verify its presence in a browser. (I’ll wait.)
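If you don’t have a student handy, the exercise itself is tiny. The original version is just hand-typing one line of HTML into a file and opening it; here is a sketch of the scripted equivalent, with a placeholder URL and link text:

```typescript
// The whole lesson: make a link, look at the page, see the link.
const firstLink = `<a href="https://example.com/">my first link</a>`; // placeholders
document.body.insertAdjacentHTML("beforeend", firstLink);
// Open or reload the page in a browser: the link is there and clickable.
// That round trip is the immediate feedback discussed in parts 1 and 2.
```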

Back so soon? 😉

To summarize: The <a> element is simple, has immediate feedback and is in widespread use.

All of which makes it a likely candidate to leverage for semantic interoperability. But how?

And what of all the other identifiers in the world? What happens to them?

September 1, 2012

HTML [Lessons in Semantic Interoperability – Part 2]

Filed under: HTML,Interoperability,Semantics,Web Server — Patrick Durusau @ 10:11 am

While writing Elli (Erlang Web Server) [Lessons in Semantic Interoperability – Part 1], I got distracted by the realization that web servers produce semantically interoperable content every day. Lots of it. For hundreds of millions of users.

My question: What makes the semantics of HTML different?

The first characteristic that came to mind was simplicity. Unlike some markup languages, ;-), HTML did not have to await the creation of WYSIWYG editors to catch on. In part I suspect because after a few minutes with it, most users (not all) could begin to author HTML documents.

Think about the last time you learned something new. What is the one thing that brings closure to the learning experience?

Feedback, knowing if your attempt at an answer is right or wrong. If right, you will attempt the same solution under similar circumstances in the future. If wrong, you will try again (hopefully).

When HTML appeared, so did primitive (in today’s terms) web browsers.

Any user learning HTML could get immediate feedback on their HTML authoring efforts.

Not:

  • After installing additional validation software
  • After debugging complex syntax or configurations
  • After millions of other users do the same thing
  • After new software appears to take advantage of it

Immediate feedback means just that: immediate feedback.

The second characteristic is immediate feedback.

You can argue that such feedback was an environmental factor and not a characteristic of HTML proper.

Possibly, possibly. But if such a distinction is possible and meaningful, how does it help with the design/implementation of the next successful semantic interoperability language?

I would argue by whatever means, any successful semantic interoperability language is going to include immediate feedback, however you classify it.

Elli (Erlang Web Server) [Lessons in Semantic Interoperability – Part 1]

Filed under: Erlang,Interoperability,Semantics,Web Server — Patrick Durusau @ 8:04 am

Elli

From the post:

My name is Knut, and I want to show you something really cool that I built to solve some problems we are facing here at Wooga.

Having several very successful social games means we have a large number of users. In a single game, they can generate around ten thousand HTTP requests per second to our backend systems. Building and operating the software required to service these games is a big challenge that sometimes requires creative solutions.

As developers at Wooga, we are responsible for the user experience. We want to make our games not only fun and enjoyable but accessible at all times. To do this we need to understand and control the software and hardware we rely on. When we see an area where we can improve the user experience, we go for it. Sometimes this means taking on ambitious projects. An example of this is Elli, a webserver which has become one of the key building blocks of our successful backends.

Having used many of the big Erlang webservers in production with great success, we still found ourselves thinking of how we could improve. We want a simple and robust core with no errors or edge cases causing problems. We need to measure the performance to help us optimize our network and user code. Most importantly, we need high performance and low CPU usage so our servers can spend their resources running our games.

I started this post about Elli to point out the advantages of having a custom web server application if your needs aren’t met by one of the standard ones.

Something clicked and I realized that web servers, robust and fast as well as lame and slow, churn out semantically interoperable content every day.

For hundreds of millions of users.

Rather than starting from the perspective of the “semantic interoperability” we want, why not examine the “semantic interoperability” we have already, for clues on what may or may not work to increase it?

When I say “semantic interoperability” on the web, I am speaking of the interpretation of HTML markup, the <a>, <p>, <ol>, <ul>, <div>, <h1-6>, elements that make up most pages.

What characteristics do those markup elements share that might be useful in creating more semantic interoperability?

The first characteristic is simplicity.

You don’t need a lot of semantic overhead machinery or understanding to use any of them.

A plain text editor and knowledge that some text has a general presentation is enough.

It takes a few minutes for a user to learn enough HTML to produce meaningful (to them and others) results.

At least in the case of HTML, that simplicity has led to a form of semantic interoperability.

HTML was defined with interoperable semantics but unadopted interoperable semantics are like no interoperable semantics at all.

If HTML has simplicity of semantics, what else does it have that led to widespread adoption?

May 9, 2012

Converged Cloud Growth…[Ally or Fan Fears on Interoperability]

Filed under: Cloud Computing,Interoperability — Patrick Durusau @ 2:58 pm

Demand For Standards—Interoperability To Fuel Converged Cloud Growth

Terminology is often a mess in stable CS areas, to say nothing of rapidly developing ones such as cloud computing.

Add to that all the marketing hype that creates even more confusion.

Thinking there should be opportunities in the process for standardizing terminology and for mapping vendor terminology to it.

Topic maps would be a natural for the task.
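As a rough sketch of what that might look like, here is a minimal crosswalk from vendor-specific terms to a standardized term. The vendors, terms and vocabulary below are invented placeholders, not an actual published taxonomy:

```typescript
// Minimal terminology crosswalk: map vendor-specific cloud terms to a
// single standardized concept, keeping each vendor term as a variant name.
interface Concept {
  standardTerm: string;
  variants: { vendor: string; term: string }[];
}

// Hypothetical entries; a real map would be built with the vendors at the table.
const crosswalk: Concept[] = [
  {
    standardTerm: "virtual machine instance",
    variants: [
      { vendor: "VendorA", term: "elastic compute node" },
      { vendor: "VendorB", term: "on-demand server" },
    ],
  },
  {
    standardTerm: "object storage bucket",
    variants: [
      { vendor: "VendorA", term: "blob container" },
      { vendor: "VendorB", term: "storage vault" },
    ],
  },
];

// Look up the standard term for whatever a vendor's marketing calls it.
function standardize(vendorTerm: string): string | undefined {
  const hit = crosswalk.find(c =>
    c.variants.some(v => v.term.toLowerCase() === vendorTerm.toLowerCase()));
  return hit?.standardTerm;
}

console.log(standardize("blob container")); // "object storage bucket"
```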

Interested?

February 26, 2012

Where to Publish and Find Ontologies? A Survey of Ontology Libraries

Filed under: Interoperability,Ontology,Semantic Colonialism,Semantic Web — Patrick Durusau @ 8:27 pm

Where to Publish and Find Ontologies? A Survey of Ontology Libraries by Natasha F. Noy and Mathieu d’Aquin.

Abstract:

One of the key promises of the Semantic Web is its potential to enable and facilitate data interoperability. The ability of data providers and application developers to share and reuse ontologies is a critical component of this data interoperability: if different applications and data sources use the same set of well defined terms for describing their domain and data, it will be much easier for them to “talk” to one another. Ontology libraries are the systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies. Thus ontology libraries can serve as a link in enabling diverse users and applications to discover, evaluate, use, and publish ontologies. In this paper, we provide a survey of the growing—and surprisingly diverse—landscape of ontology libraries. We highlight how the varying scope and intended use of the libraries affects their features, content, and potential exploitation in applications. From reviewing eleven ontology libraries, we identify a core set of questions that ontology practitioners and users should consider in choosing an ontology library for finding ontologies or publishing their own. We also discuss the research challenges that emerge from this survey, for the developers of ontology libraries to address.

Speaking of semantic colonialism, this survey is an accounting of the continuing failure of that program. The examples cited as “ontology libraries” are for the most part not interoperable with each other.

Not that I think greater data interoperability would be a bad thing; it would be a very good thing, for some issues. The problem, as I see it, is the fixation of the Semantic Web community on a winner-takes-all model of semantics. It could well be (warning, heresy ahead) that RDF and OWL aren’t the most effective ways to represent or “reason” about data. Just saying; no proof, formal or otherwise, to be offered.

And certainly there is a lack of data written using RDF (or even linked data) or annotated using OWL. I don’t think there is a good estimate of all available data, so it is difficult to give a good figure for exactly how little of the overall amount of data is in Semantic Web formats.

Any new format will only be applied to the creation of new data so that will leave us with the ever increasing mountains of legacy data which lack the new format.

Rather than seeking to reduce semantic diversity, what appears to be a losing bet, we should explore mechanisms to manage semantic diversity.

January 5, 2012

Interoperability Driven Integration of Biomedical Data Sources

Interoperability Driven Integration of Biomedical Data Sources by Douglas Teodoro, Rémy Choquet, Daniel Schober, Giovanni Mels, Emilie Pasche, Patrick Ruch, and Christian Lovis.

Abstract:

In this paper, we introduce a data integration methodology that promotes technical, syntactic and semantic interoperability for operational healthcare data sources. ETL processes provide access to different operational databases at the technical level. Furthermore, data instances have their syntax aligned according to biomedical terminologies using natural language processing. Finally, semantic web technologies are used to ensure common meaning and to provide ubiquitous access to the data. The system’s performance and solvability assessments were carried out using clinical questions against seven healthcare institutions distributed across Europe. The architecture managed to provide interoperability within the limited heterogeneous grid of hospitals. Preliminary scalability result tests are provided.

Appears in:

Studies in Health Technology and Informatics
Volume 169, 2011
User Centred Networked Health Care – Proceedings of MIE 2011
Edited by Anne Moen, Stig Kjær Andersen, Jos Aarts, Petter Hurlen
ISBN 978-1-60750-805-2

I have been unable to find a copy online, well, other than the publisher’s copy, at $20 for four pages. I have written to one of the authors requesting a personal use copy as I would like to report back on what it proposes.

November 10, 2011

Putting Data in the Middle

Filed under: Data,Interoperability — Patrick Durusau @ 6:45 pm

Putting Data in the Middle

Jill Dyche uses a photo of Paul Allen and Bill Gates as a jumping off point to talk about a data-centric view of the world.

Remarking:

IT departments furtively investing in successive integration efforts, hoping for the latest and greatest “single version of the truth” watch their budgets erode and their stakeholders flee. CIOs praying that their latest packaged application gets traction realize that they’ve just installed yet another legacy system. Executives wake up and admit that the idea of a huge, centralized, behemoth database accessible by all and serving a range of business needs was simply a dream. Rubbing their eyes they gradually see that data is decoupled from the systems that generate and use it, and past infrastructure plays have merely sedated them.

I really like the “successive integration efforts” line.

Jill offers an alternative to that sad scenario, but you will have to read her post to find out!

May 23, 2011

ISO initiative OntoIOp (Ontology interoperability)

Filed under: Interoperability,Ontology — Patrick Durusau @ 7:46 pm

ISO initiative OntoIOp (Ontology interoperability)

Prof. Dr. Till Mossakowski posted the following note to the ontolog-forum today:

Dear all,

we are currently involved in a new ISO standardisation initiative concerned with ontology interoperability.

This initiative is somehow orthogonal and complementary to Common Logic, because the topic is interoperability. This means interoperability both among ontologies (i.e. concerning matching, alignment, and suitable means to write these down) as well as among ontology languages (e.g. OWL, UML, Common Logic, or F-logic, and translations among these). The idea is to have all these languages as part of a meta-standard, such that ontology designers can bring in their ontologies verbatim as they are, and yet relate them to other ontologies (e.g. check that an OWL version of some ontology is entailed by its first-order formulation).

The first official meeting for this is already mid next month in Seoul, and we now quickly have to move forward getting some countries into the boat. It will be essential to have experts from all relevant communities involved in this effort.

If you are interested in this initiative, the rough draft [1] for the standard and a related paper [2] will give you some more info. Please have a look and let me know what you think. We also look for people who want to officially take part in the development of the standard, either actively or just by voting on behalf of your national standardisation body.

All the best,
Till

[1] http://www.dfki.de/sks/till/papers/OntoIOp.pdf
[2] http://www.dfki.de/sks/till/papers/ontotrans.pdf

I haven’t had time to review the documents, but given the time frame I wanted to bring this to your attention sooner rather than later.

When you have reviewed the documents, comments welcome.
