## Archive for the ‘Semantic Web’ Category

### Beyond Enterprise Search…

Tuesday, May 21st, 2013

From the post:

Searching through all your content is fine – until you get a mountain of it with similar content, differentiated only by context. Then you’ll need to understand the meaning within the content. In this post I discuss how to do this using semantic techniques…

Organisations today have realised that for certain applications it is useful to have a consolidated search approach over several catalogues. This is most often the case when customers can interact with several parts of the company – sales, billing, service, delivery, fraud checks.

This approach is commonly called Enterprise Search, or Search and Discovery, where your content across several repositories is indexed in a separate search engine. Typically this indexing occurs some time after the content is added. In addition, it is not possible for a search engine to understand the full capabilities of every content system. This means complex mappings are needed between content, metadata and security. In some cases, this may be retrofitted with custom code as the systems do not support a common vocabulary around these aspects of information management.

Content Search

We are all used to content search, so much so that for today’s teenagers a search bar with a common (‘Google like’) grammar is expected. This simple yet powerful interface allows us to search for content (typically web pages and documents) that contain all the words or phrases that we need. Often this is broadened by the use of a thesaurus and word stemming (plays and played stems to the verb play), and combined with some form of weighting based on relative frequency within each unit of content.
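The stemming-plus-frequency idea above can be sketched in a few lines of Python. This is a deliberately naive suffix-stripping stemmer for illustration only; real engines use something like the Porter stemmer, and the sample sentence is invented:

```python
from collections import Counter

SUFFIXES = ("ing", "ed", "es", "s")  # naive list; a real engine uses a Porter-style stemmer

def stem(word):
    """Strip a common verb/plural suffix so 'plays' and 'played' both map to 'play'."""
    w = word.lower()
    for suffix in SUFFIXES:
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: -len(suffix)]
    return w

def term_weights(document):
    """Relative frequency of each stemmed term within one unit of content."""
    stems = [stem(t) for t in document.split()]
    counts = Counter(stems)
    total = len(stems)
    return {term: n / total for term, n in counts.items()}

weights = term_weights("he plays while the band played and still plays")
```

Here `plays` and `played` collapse to the stem `play`, which then carries three of the nine term occurrences, exactly the kind of within-document relative frequency a search engine folds into its ranking.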

Other techniques are also applied. Metadata is extracted or implied – author, date created, modified, security classification, Dublin Core descriptive data. Classification tools can be used (either at the content store or search indexing stages) to perform entity extraction (Cheese is a food stuff) and enrichment (Sheffield is a place with these geospatial co-ordinates). This provides a greater level of description of the term being searched for over and above simple word terms.

Using these techniques, additional search functionality can be provided. Search for all shops visible on a map using a bounding box, radius or polygon geospatial search. Return only documents where these words are within 6 words of each other. Perhaps weight some terms as more important than others, or optional.
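A bounding-box search of the kind mentioned above reduces, at its simplest, to a coordinate range test. The shop data here is invented for illustration, and a production engine would use a spatial index such as an R-tree rather than a linear scan:

```python
def in_bounding_box(point, south_west, north_east):
    """True if a (lat, lon) point falls inside the box given by its SW and NE corners."""
    lat, lon = point
    return (south_west[0] <= lat <= north_east[0]
            and south_west[1] <= lon <= north_east[1])

# Hypothetical shops with (lat, lon) positions.
shops = {
    "corner_shop": (53.38, -1.47),
    "far_shop": (51.50, -0.12),
}

# "All shops visible on a map" becomes a filter over the box the map is showing.
visible = [name for name, pos in shops.items()
           if in_bounding_box(pos, (53.0, -2.0), (54.0, -1.0))]
```

Radius and polygon searches are the same idea with a distance test or point-in-polygon test swapped in for the range check.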

These techniques are provided by many of the Enterprise class search engines out there today. Even Open Source tools like Lucene and Solr are catching up with this. They have provided access to information where before we had to rely on Information and Library Services staff to correctly classify incoming documents manually, as they did back in the paper bound days of yore.

Content search only gets you so far though.

I was amening with the best of them until Adam reached the part about MarkLogic 7 adding Semantic Web capabilities.

I didn’t see any mention of linked data replicating the semantic diversity that currently exists in data stores.

Making data more accessible isn’t going to make it less diverse.

Although making data more accessible may drive the development of ways to manage semantic diversity.

So perhaps there is a useful side to linked data after all.

### A Trillion Triples in Perspective

Saturday, May 18th, 2013

Mozart Meets MapReduce by Isaac Lopez.

From the post:

Big data has been around since the beginning of time, says Thomas Paulmichl, founder and CEO of Sigmaspecto; what has changed is how we process the information. In a talk during Big Data Week, Paulmichl encouraged people to open up their perspective on what big data is, and how it can be applied.

During the talk, he admonished people to take a human element into big data. Paulmichl demonstrated this by examining the work of musical prodigy, Mozart – who Paulmichl noted is appreciated greatly by both music scientists, as well as the common music listener.

“When Mozart makes choices on writing a piece of work, the number of choices that he has and the kind of neural algorithms that his brain goes through to choose things is infinitesimally higher than what we call big data – it’s really small data in comparison,” he said.

Taking Mozart’s The Magic Flute as an example, Paulmichl discussed the framework that Mozart used to make his choices by examining a music sheet outlining the number of bars, the time signature, the instrument and singer voicing.

“So from his perspective, he sits down, and starts to make what we as data scientists call quantitative choices,” explained Paulmichl. “Do I put a note here, down here, do I use a different instrument; do I use a parallel voicing for different violins – so these are all metrics that his brain has to decide.”

Exploring the mathematics of the music, Paulmichl concluded that in looking at The Magic Flute, Mozart had 4.72391E+21 creative variations (and then some) that he could have taken with the direction of it over the course of the piece. “We’re not talking about a trillion dataset; we’re talking about a sextillion or more,” he says, adding that this is a very limited cut of the quantitative choices that his brain makes at every composition point.

“[A] sextillion or more…” puts the question of processing a trillion triples into perspective.
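The scale gap is easy to check with quick arithmetic: even a modest combinatorial model of composition blows past a trillion almost immediately. The 12-choices and 20-decision-points figures below are invented for illustration, not taken from Paulmichl's talk:

```python
choices_per_point = 12   # hypothetical options at each compositional decision
decision_points = 20     # hypothetical number of decision points in a passage

# Independent choices multiply, so the space grows exponentially.
variations = choices_per_point ** decision_points  # 12**20, on the order of 3.8e21

trillion = 10 ** 12
sextillion = 10 ** 21
trillions_represented = variations // trillion  # billions of trillions
```

Twenty decision points with a dozen options each is already sextillion territory, which is the sense in which a trillion triples looks small.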

Another musical analogy?

Triples are the one-finger version of Jingle Bells*:

(video omitted)

*The gap is greater than the video represents but it is still amusing.

Does your analysis/data have one finger subtlety?

### A self-updating road map of The Cancer Genome Atlas

Friday, May 17th, 2013

A self-updating road map of The Cancer Genome Atlas by David E. Robbins, Alexander Grüneberg, Helena F. Deus, Murat M. Tanik and Jonas S. Almeida. (Bioinformatics (2013) 29 (10): 1333-1340. doi: 10.1093/bioinformatics/btt141)

Abstract:

Motivation: Since 2011, The Cancer Genome Atlas’ (TCGA) files have been accessible through HTTP from a public site, creating entirely new possibilities for cancer informatics by enhancing data discovery and retrieval. Significantly, these enhancements enable the reporting of analysis results that can be fully traced to and reproduced using their source data. However, to realize this possibility, a continually updated road map of files in the TCGA is required. Creation of such a road map represents a significant data modeling challenge, due to the size and fluidity of this resource: each of the 33 cancer types is instantiated in only partially overlapping sets of analytical platforms, while the number of data files available doubles approximately every 7 months.

Results: We developed an engine to index and annotate the TCGA files, relying exclusively on third-generation web technologies (Web 3.0). Specifically, this engine uses JavaScript in conjunction with the World Wide Web Consortium’s (W3C) Resource Description Framework (RDF), and SPARQL, the query language for RDF, to capture metadata of files in the TCGA open-access HTTP directory. The resulting index may be queried using SPARQL, and enables file-level provenance annotations as well as discovery of arbitrary subsets of files, based on their metadata, using web standard languages. In turn, these abilities enhance the reproducibility and distribution of novel results delivered as elements of a web-based computational ecosystem. The development of the TCGA Roadmap engine was found to provide specific clues about how biomedical big data initiatives should be exposed as public resources for exploratory analysis, data mining and reproducible research. These specific design elements align with the concept of knowledge reengineering and represent a sharp departure from top-down approaches in grid initiatives such as CaBIG. They also present a much more interoperable and reproducible alternative to the still pervasive use of data portals.

Availability: A prepared dashboard, including links to source code and a SPARQL endpoint, is available at http://bit.ly/TCGARoadmap. A video tutorial is available at http://bit.ly/TCGARoadmapTutorial.
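The paper's core move, indexing file metadata as triples and then selecting arbitrary subsets by pattern, can be caricatured in pure Python. This is a toy stand-in for the actual RDF/SPARQL machinery, and the file names and predicates below are invented:

```python
# Each triple is (subject, predicate, object). None in a query pattern acts as a
# wildcard, roughly playing the role of a SPARQL variable.
triples = [
    ("file:brca_1.tar", "cancerType", "BRCA"),
    ("file:brca_1.tar", "platform", "RNASeq"),
    ("file:gbm_7.tar", "cancerType", "GBM"),
    ("file:gbm_7.tar", "platform", "RNASeq"),
]

def match(pattern, triple):
    return all(p is None or p == t for p, t in zip(pattern, triple))

def query(s=None, p=None, o=None):
    """Return all indexed triples matching the (s, p, o) pattern."""
    return [t for t in triples if match((s, p, o), t)]

# "Discovery of arbitrary subsets of files, based on their metadata":
rnaseq_files = [s for s, _, _ in query(p="platform", o="RNASeq")]
```

The real engine does this over the TCGA HTTP directory with RDF and a SPARQL endpoint, but the shape of the query, a pattern with variables against a triple index, is the same.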

Curious how the granularity of required semantics and the uniformity of the underlying data set impact the choice of semantic approaches?

### Putting Linked Data on the Map

Monday, May 13th, 2013

Putting Linked Data on the Map by Richard Wallis.

In fairness to Linked Data/Semantic Web, I really should mention this post by one of its more mainstream advocates:

Show me an example of the effective publishing of Linked Data – That, or a variation of it, must be the request I receive more than most when talking to those considering making their own resources available as Linked Data, either in their enterprise, or on the wider web.

There are some obvious candidates. The BBC, for instance, makes significant use of Linked Data within its enterprise. They built their fantastic Olympics 2012 online coverage on an infrastructure with Linked Data at its core. Unfortunately, apart from a few exceptions such as Wildlife and Programmes, we only see the results in a powerful web presence. The published data is only visible within their enterprise.

Dbpedia is another excellent candidate. From about 2007 it has been a clear demonstration of Tim Berners-Lee’s principles of using URIs as identifiers and providing information, including links to other things, in RDF – it is just there at the end of the dbpedia URIs. But for some reason developers don’t seem to see it as a compelling example. Maybe it is influenced by the Wikipedia effect – interesting but built by open data geeks, so not to be taken seriously.

A third example, which I want to focus on here, is Ordnance Survey. Not generally known much beyond the geographical patch they cover, Ordnance Survey is the official mapping agency for Great Britain. Formally a government agency, they are best known for their incredibly detailed and accurate maps that are the standard accessory for anyone doing anything in the British countryside. A little less known is that they also publish information about post-code areas, parish/town/city/county boundaries, parliamentary constituency areas, and even European regions in Britain. As you can imagine, these all don’t neatly intersect, which makes the data about them a great case for a graph based data model and hence for publishing as Linked Data. Which is what they did a couple of years ago.

The reason I want to focus on their efforts now is that they have recently beta-released a new API suite, which I will come to in a moment. But first I must emphasise something that is often missed.

Linked Data is just there – without the need for an API the raw data (described in RDF) is ‘just there to consume’. With only standard [http] web protocols, you can get the data for an entity in their dataset by just doing a http GET request on the identifier…

(images omitted)

Richard does a great job describing the Linked Data APIs from the Ordnance Survey.
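The "just there to consume" point can be made concrete with nothing but the standard library: an HTTP GET on the entity's identifier, asking for RDF via content negotiation. The URI below is illustrative rather than a real Ordnance Survey identifier, and the request is only constructed here, not sent:

```python
import urllib.request

# Hypothetical linked-data identifier for a place.
entity_uri = "http://data.example.org/id/7000000000037256"

request = urllib.request.Request(entity_uri)
request.add_header("Accept", "text/turtle")  # ask for RDF rather than HTML

# To actually dereference it (network permitting):
# with urllib.request.urlopen(request) as response:
#     rdf_text = response.read().decode("utf-8")
```

That really is all the consumer-side machinery Linked Data requires, which is Richard's point. My quibble below is about everything this snippet leaves out.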

My only quibble is with his point:

Linked Data is just there – without the need for an API the raw data (described in RDF) is ‘just there to consume’.

True enough but it omits the authoring side of Linked Data.

Or understanding the data to be consumed.

With HTML, authoring hyperlinks was only marginally more difficult than “using” hyperlinks.

And the consumption of a hyperlink, beyond mime types, was unconstrained.

So linked data isn’t “just there.”

It’s there with an authoring burden that remains unresolved and that constrains consumption, should you decide to follow “standard [http] web protocols” and Linked Data.

I am sure the Ordnance Survey Linked Data and other Linked Data resources Richard mentions will be very useful, to some people in some contexts.

But pretending Linked Data is easier than it is, will not lead to improved Linked Data or other semantic solutions.

### Non-Adoption of Semantic Web, Reason #1002

Monday, May 13th, 2013

Kingsley Idehen offers yet another explanation/excuse for non-adoption of the semantic web in On Hybrid Relational Databases. Interview with Kingsley Uyi Idehen by Roberto V. Zicari.

The highlight of this interview reads:

The only obstacle to Semantic Web technologies in the enterprise lies in better articulation of the value proposition in a manner that reflects the concerns of enterprises. For instance, the non-disruptive nature of Semantic Web technologies with regards to all enterprise data integration and virtualization initiatives has to be the focal point.

You may recall Kingsley’s demonstration of the non-complexity of authoring for the Semantic Web in The Semantic Web Is Failing — But Why? (Part 3).

Could it be users sense the “lock-in” of RDF/Semantic Web?

Q14. Big Data Analysis: could you connect Virtuoso with Hadoop? How does Virtuoso relate to commercial data analytics platforms, e.g. Hadapt, Vertica?

Kingsley Uyi Idehen: You can integrate data managed by Hadoop based ETL workflows via ODBC or Web Services driven by Hadoop clusters that expose RESTful interaction patterns for data access. As for how Virtuoso relates to the likes of Vertica re. analytics, this is about Virtuoso being the equivalent of Vertica plus the added capability of RDF based data management, Linked Data deployment, and shared-nothing clustering. There is no job that Vertica performs that Virtuoso can’t perform.

There are several jobs that Virtuoso can perform that Vertica, VoltDB, Hadapt, and many other NoSQL and NewSQL platforms simply cannot perform with regards to scalable, high-performance RDF data management and Linked Data deployment. Remember, RDF based Linked Data is all about data management and data access without any kind of platform lock-in. Virtuoso locks you into a value proposition (performance and scale) not the platform itself. (emphasis added to last sentence)

It’s comforting to know RDF/Semantic Web “lock-in” has our best interest at heart.

See Kingsley dodging the next question on Virtuoso’s ability to scale:

Q15. Do you also benchmark loading trillions of RDF triples? Do you have current benchmark results? How much time does it take to query them?

Kingsley Uyi Idehen: As per my earlier responses, there is no shortage of benchmark material for Virtuoso.

The benchmarks are also based on realistic platform configurations unlike the RDBMS patterns of the past which compromised the utility of TPC benchmarks.

Full Disclosure: I haven’t actually counted all of Kingsley’s reasons for non-adoption of the Semantic Web. The number I assign here may be high or low.

### The ChEMBL database as linked open data

Thursday, May 9th, 2013

The ChEMBL database as linked open data by Egon L Willighagen, Andra Waagmeester, Ola Spjuth, Peter Ansell, Antony J Williams, Valery Tkachenko, Janna Hastings, Bin Chen and David J Wild. (Journal of Cheminformatics 2013, 5:23 doi:10.1186/1758-2946-5-23).

Abstract:

Background Making data available as Linked Data using Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis.

Results This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferencable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying.

Conclusions We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support.

You already know about the fragility of ontologies so no need to repeat that rant here.

Having material encoded with an ontology, on the other hand, after vetting, can be a source that you wrap with a topic map.

So all that effort isn’t lost.

### VIVO (Update)

Tuesday, May 7th, 2013

Seeing the webinar on VIVO, Semantic Mashups Across Large, Heterogeneous Institutions:… reminded me it had been more than a year since I had visited the VIVO site (VIVO – An interdisciplinary national network).

I visited VIVO today and they appear to have the same seven (7) institutional sponsors as they did in March of 2012.

I did not find a list of other institutions that have downloaded and installed VIVO.

The University of Pennsylvania Veterinary Library has, but I did not see any others.

Does anyone else have a feel for the use of this project?

Thanks!

### Semantic Mashups Across Large, Heterogeneous Institutions:…

Tuesday, May 7th, 2013

Semantic Mashups Across Large, Heterogeneous Institutions: Experiences from the VIVO Service

May 22, 2013
1:00 – 2:30 p.m. (Eastern Time)

VIVO is a semantic web application focused on discovering researchers and research publications in the life sciences. The service, which uses open-source software originally developed and implemented at Cornell University, operates by harvesting data about researcher interests, activities, and accomplishments from academic, administrative, professional, and funding sources. Using a built-in, editable ontology for describing things such as People, Courses, and Publications, data is transformed into a Semantic-Web-compliant form. VIVO provides automated and self-updating processes for improving data quality and authenticity. Starting with a classic Google-style search box, VIVO users can browse search results structured around people, research interests, courses, publications, and the like — data that can be exposed for re-use by other systems in a machine-readable format.

This webinar, held by a veteran at the Albert R. Mann Library Information Technology Services department at Cornell, where the VIVO project was born, presents the perspective of a software developer on the practicalities of building a high-quality Semantic-Web search service on existing data maintained in dozens of formats and software platforms at large, diverse institutions. The talk will highlight services that leverage the Semantic Web platform in innovative ways, e.g., for finding researchers based on the text content of a particular Web page and for visualizing networks of collaboration across institutions.

The webinar is hosted by NISO, so you may want to check the registration fees before deciding to attend. Or have a member of your department attend and share what they learn.

### FuturICT:… [No Semantics?]

Monday, May 6th, 2013

FuturICT: Participatory Computing for Our Complex World

FuturICT is a visionary project that will deliver new science and technology to explore, understand and manage our connected world. This will inspire new information and communication technologies (ICT) that are socially adaptive and socially interactive, supporting collective awareness.

Revealing the hidden laws and processes underlying our complex, global, socially interactive systems constitutes one of the most pressing scientific challenges of the 21st Century. Integrating complexity science with ICT and the social sciences will allow us to design novel robust, trustworthy and adaptive technologies based on socially inspired paradigms. Data from a variety of sources will help us to develop models of techno-socioeconomic systems. In turn, insights from these models will inspire a new generation of socially adaptive, self-organised ICT systems. This will create a paradigm shift and facilitate a symbiotic co-evolution of ICT and society. In response to the European Commission’s call for a ‘Big Science’ project, FuturICT will build a large-scale, pan-European, integrated programme of research which will extend for 10 years and beyond.

Did you know that the term “semantic” appears only twice in the FuturICT Project Outline? And both times in the phrase “semantic web.”

Not a word of how models, data sources, paradigms, etc., with different semantics are going to be wedded into a coherent whole.

View it as an opportunity to deliver FuturICT results using topic maps beyond this project.

### Agile Knowledge Engineering and Semantic Web (AKSW)

Sunday, April 28th, 2013

Agile Knowledge Engineering and Semantic Web (AKSW)

From the webpage:

The Research Group Agile Knowledge Engineering and Semantic Web (AKSW) is hosted by the Chair of Business Information Systems (BIS) of the Institute of Computer Science (IfI) / University of Leipzig as well as the Institute for Applied Informatics (InfAI).

Goals

• Development of methods, tools and applications for adaptive Knowledge Engineering in the context of the Semantic Web
• Research of underlying Semantic Web technologies and development of fundamental Semantic Web tools and applications
• Maturation of strategies for fruitfully combining the Social Web paradigms with semantic knowledge representation techniques

AKSW is committed to the free software, open source, open access and open knowledge movements.

Complete listing of projects.

I have mentioned several of these projects before. On seeing a reminder of the latest release of RDFaCE (RDFa Content Editor), I thought I should post on the common source of those projects.

### The Motherlode of Semantics, People

Saturday, April 27th, 2013

1st International Workshop on “Crowdsourcing the Semantic Web” (CrowdSem2013)

Submission deadline: July 12, 2013 (23:59 Hawaii time)

From the post:

1st International Workshop on “Crowdsourcing the Semantic Web” in conjunction with the 12th International Semantic Web Conference (ISWC 2013), 21-25 October 2013, in Sydney, Australia. This interactive workshop takes stock of the emergent work and charts the research agenda with interactive sessions to brainstorm ideas and potential applications of collective intelligence to solving AI-hard semantic web problems.

The Global Brain Semantic Web—a Semantic Web interleaving a large number of human and machine computation—has great potential to overcome some of the issues of the current Semantic Web. In particular, semantic technologies have been deployed in the context of a wide range of information management tasks in scenarios that are increasingly significant in both technical (data size, variety and complexity of data sources) and economical terms (industries addressed and their market volume). For many of these tasks, machine-driven algorithmic techniques aiming at full automation do not reach a level of accuracy that many production environments require. Enhancing automatic techniques with human computation capabilities is becoming a viable solution in many cases. We believe that there is huge potential at the intersection of these disciplines – large scale, knowledge-driven, information management and crowdsourcing – to solve technically challenging problems purposefully and in a cost effective manner.

I’m encouraged.

The Semantic Web is going to start asking the entities (people) that originate semantics about semantics.

Going to the motherlode of semantics.

Now to see what they do with the answers.

### Brain: … [Topic Naming Constraint Reappears]

Wednesday, April 24th, 2013

Brain: biomedical knowledge manipulation by Samuel Croset, John P. Overington and Dietrich Rebholz-Schuhmann. (Bioinformatics (2013) 29 (9): 1238-1239. doi: 10.1093/bioinformatics/btt109)

Abstract:

Summary: Brain is a Java software library facilitating the manipulation and creation of ontologies and knowledge bases represented with the Web Ontology Language (OWL).

Availability and implementation: The Java source code and the library are freely available at https://github.com/loopasam/Brain and on the Maven Central repository (GroupId: uk.ac.ebi.brain). The documentation is available at https://github.com/loopasam/Brain/wiki.

Contact: croset@ebi.ac.uk

Supplementary information: Supplementary data are available at Bioinformatics online.

Odd how things like the topic naming constraint show up in unexpected contexts.

But as I read the article I saw:

The names (short forms) of OWL entities handled by a Brain object have to be unique. It is for instance not possible to add an OWL class, such as http://www.example.org/Cell to the ontology if an OWL entity with the short form ‘Cell’ already exists.

The explanation?

Despite being in contradiction with some Semantic Web principles, this design prevents ambiguous queries and hides as much as possible the cumbersome interaction with prefixes and Internationalized Resource Identifiers (IRI).

I suppose, but doesn’t ambiguity exist in the mind of the user? That is, they use a term that can have more than one meaning.

Having unique terms simply means inventing odd terms that no user will know.

Rather than unambiguous, isn’t that unfound?
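The constraint being quoted amounts to a flat namespace over short forms. In outline (this is not Brain's actual API, just a sketch of the shape of the rule):

```python
class ShortFormRegistry:
    """Maps unique short forms to full IRIs, rejecting duplicates the way
    the quoted Brain behaviour does."""

    def __init__(self):
        self._by_short_form = {}

    def add(self, iri):
        # The short form is the last path segment of the IRI, e.g. 'Cell'.
        short_form = iri.rsplit("/", 1)[-1]
        if short_form in self._by_short_form:
            raise ValueError(f"short form '{short_form}' already exists")
        self._by_short_form[short_form] = iri
        return short_form

registry = ShortFormRegistry()
registry.add("http://www.example.org/Cell")
```

A second `add` of any IRI ending in `/Cell` raises, which is exactly the point of the critique: the price of unambiguous short names is refusing otherwise distinct identifiers.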

### Collaborative annotation… [Human + Machine != Semantic Monotony]

Sunday, April 21st, 2013

Collaborative annotation for scientific data discovery and reuse by Kirk Borne. (Borne, K. (2013), Collaborative annotation for scientific data discovery and reuse. Bul. Am. Soc. Info. Sci. Tech., 39: 44–45. doi: 10.1002/bult.2013.1720390414)

Abstract:

Human classification alone, unable to handle the enormous quantity of project data, requires the support of automated machine-based strategies. In collaborative annotation, humans and machines work together, merging editorial strengths in semantics and pattern recognition with the machine strengths of scale and algorithmic power. Discovery informatics can be used to generate common data models, taxonomies and ontologies. A proposed project of massive scale, the Large Synoptic Survey Telescope (LSST) project, will systematically observe the southern sky over 10 years, collecting petabytes of data for analysis. The combined work of professional and citizen scientists will be needed to tag the discovered astronomical objects. The tag set will be generated through informatics and the collaborative annotation efforts of humans and machines. The LSST project will demonstrate the development and application of a classification scheme that supports search, curation and reuse of a digital repository.

A persuasive call to arms to develop “collaborative annotation:”

Humans and machines working together to produce the best possible classification label(s) is collaborative annotation. Collaborative annotation is a form of human computation [1]. Humans can see patterns and semantics (context, content and relationships) more quickly, accurately and meaningfully than machines. Human computation therefore applies to the problem of annotating, labeling and classifying voluminous data streams.

And more specifically for the Large Synoptic Survey Telescope (LSST):

The discovery potential of this data collection would be enormous, and its long-term value (through careful data management and curation) would thus require (for maximum scientific return) the participation of scientists and citizen scientists as well as science educators and their students in a collaborative knowledge mark-up (annotation and tagging) data environment. To meet this need, we envision a collaborative tagging system called AstroDAS (Astronomy Distributed Annotation System). AstroDAS is similar to existing science knowledge bases, such as BioDAS (Biology Distributed Annotation System, www.biodas.org).

As you might expect, semantic diversity is going to be present with “collaborative annotation.”

Semantic Monotony (aka Semantic Web) has failed for machines alone.

No question it will fail for humans + machines.

Are you ready to step up to the semantic diversity of collaborative annotation (humans + machines)?

### Open Annotation Data Model

Tuesday, March 19th, 2013

Open Annotation Data Model

Abstract:

The Open Annotation Core Data Model specifies an interoperable framework for creating associations between related resources, annotations, using a methodology that conforms to the Architecture of the World Wide Web. Open Annotations can easily be shared between platforms, with sufficient richness of expression to satisfy complex requirements while remaining simple enough to also allow for the most common use cases, such as attaching a piece of text to a single web resource.

An Annotation is considered to be a set of connected resources, typically including a body and target, where the body is somehow about the target. The full model supports additional functionality, enabling semantic annotations, embedding content, selecting segments of resources, choosing the appropriate representation of a resource and providing styling hints for consuming clients.

My first encounter with this proposal, so I need to compare it to my Simple Web Semantics.

At first blush, the Open Annotation Core Model looks a lot heavier than Simple Web Semantics.

I need to reform my blog posts into a formal document and perhaps attach a comparison as an annex.

### Semantic Search Over The Web (SSW 2013)

Monday, March 18th, 2013

3rd International Workshop on Semantic Search Over The Web (SSW 2013)

Dates:

Abstract Papers submission: May 31, 2013 – 15:00 (3:00 pm) EDT
(Short) Full Paper submission: June 7, 2013 – 15:00 (3:00 pm) EDT
Camera-ready copy due: August 2, 2013
Workshop date: During VLDB (Aug 26 – Aug 30)

From the webpage:

We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, composed of semantically interconnected resources. To date, the correlated and semantically annotated data available on the web amounts to 25 billion RDF triples, interlinked by around 395 million RDF links. The continuous publishing and integration of the plethora of semantic datasets from companies, government and public sector projects is leading to the creation of the so-called Web of Knowledge. Each semantic dataset contributes to extending the global knowledge and increases its reasoning capabilities. As a matter of fact, researchers are now looking with growing interest at semantic issues in this huge amount of correlated data available on the Web. Much progress has been made in the field of semantic technologies, from formal models to repositories and reasoning engines. While the focus of many practitioners is on exploiting such semantic information to contribute to IR problems from a document-centric point of view, we believe that such a vast, and constantly growing, amount of semantic data raises data management issues that must be faced in a dynamic, highly distributed and heterogeneous environment such as the Web.

The third edition of the International Workshop on Semantic Search over the Web (SSW) will discuss data management issues related to search over the web and their relationship to semantic web technologies, proposing new models, languages and applications.

The research issues can be summarized by the following problems:

• How can we model and efficiently access large amounts of semantic web data?
• How can we effectively retrieve information exploiting semantic web technologies?
• How can we employ semantic search in real world scenarios?

The SSW Workshop invites researchers, engineers, service developers to present their research and works in the field of data management for semantic search. Papers may deal with methods, models, case studies, practical experiences and technologies.

Apologies for the uncertainty of the workshop date. (There is confusion about the date on the workshop site, one place says the 26th, the other the 30th. Check before you make reservation/travel arrangements.)

I differ with the organizers on some issues but on the presence of: “…data management issues that must be faced in a dynamic, highly distributed and heterogeneous environment such as the Web,” there is no disagreement.

That’s the trick isn’t it? In any confined or small group setting, just about any consistent semantic solution will work.

The hurly-burly of a constant stream of half-heard, partially understood communications across distributed and heterogeneous systems tests the true mettle of semantic solutions.

Not a quest for perfect communication but “good enough.”

### Beacons of Availability

Sunday, March 17th, 2013

From Records to a Web of Library Data – Pt3 Beacons of Availability by Richard Wallis.

Beacons of Availability

As I indicated in the first of this series, there are descriptions of a broader collection of entities than just books, articles and other creative works locked up in the Marc and other records that populate our current library systems. By mining those records it is possible to identify those entities, such as people, places, organisations, formats and locations, and model & describe them independently of their source records.

As I discussed in the post that followed, the library domain has often led in the creation and sharing of authoritative datasets for the description of many of these entity types. Bringing these two together, using URIs published by the Hubs of Authority, to identify individual relationships within bibliographic metadata published as RDF by individual library collections (for example the British National Bibliography, and WorldCat) is creating Library Linked Data openly available on the Web.

Why do we catalogue? is a question I often ask, with an obvious answer – so that people can find our stuff. How does this entification, sharing of authorities, and creation of a web of library linked data help us in that goal? In simple terms, the more libraries can understand what resources each other hold, describe, and reference, the more able they are to guide people to those resources. Sounds like a great benefit and mission statement for libraries of the world but unfortunately not one that will nudge the needle on making library resources more discoverable for the vast majority of those that can benefit from them.

I have lost count of the number of presentations and reports I have seen telling us that upwards of 80% of visitors to library search interfaces start in Google. A similar weight of opinion can be found that complains how bad Google, and the other search engines, are at representing library resources. You will get some balancing opinion, supporting how good Google Book Search and Google Scholar are at directing students and others to our resources. Yet I am willing to bet that again we have another 80-20 equation or worse about how few, of the users that libraries want to reach, even know those specialist Google services exist. A bit of a sorry state of affairs when the major source of searching for our target audience, is also acknowledged to be one of the least capable at describing and linking to the resources we want them to find!

Library linked data helps solve both the problem of better description and findability of library resources in the major search engines. Plus it can help with the problem of identifying where a user can gain access to that resource to loan, download, view via a suitable license, or purchase, etc.

I am an ardent sympathizer with helping people find “our stuff.”

I don’t disagree with the description of Google as: “…the major source of searching for our target audience, is also acknowledged to be one of the least capable at describing and linking to the resources we want them to find!”

But in all fairness to Google, I would remind you of Drabenstott’s research that found for the Library of Congress subject headings:

Overall percentages of correct meanings for subject headings in the original order of subdivisions were as follows:

children: 32%
adults: 40%
reference: 53%
technical services librarians: 56%

The Library of Congress subject classification has been around for more than a century and just over half of the librarians can use it correctly.

Let’s not wait more than a century to test the claim:*

Library linked data helps solve both the problem of better description and findability of library resources in the major search engines.

* By “test” I don’t mean the sort of study, “…we recruited twelve LIS students but one had to leave before the study was complete….”

I am using “test” in the sense of a well designed and organized social science project with professional assistance from social scientists, UI test designers and the like.

I think OCLC is quite sincere in its promotion of linked data, but effectiveness is an empirical question, not one of sincerity.

### Algebraix Data Achieves Unrivaled Semantic Benchmark Performance

Saturday, March 16th, 2013

Algebraix Data Achieves Unrivaled Semantic Benchmark Performance by Angela Guess.

From the post:

Algebraix Data Corporation today announced its SPARQL Server(TM) RDF database successfully executed all 17 of its queries on the SP2 benchmark up to one billion triples on one computer node. The SP2 benchmark is the most computationally complex for testing SPARQL performance and no other vendor has reported results for all queries on data sizes above five million triples.

Furthermore, SPARQL Server demonstrated linear performance in total SP2Bench query time on data sets from one million to one billion triples. These latest dramatic results are made possible by algebraic optimization techniques that maximize computing resource utilization.

“Our outstanding SPARQL Server performance is a direct result of the algebraic techniques enabled by our patented Algebraix technology,” said Charlie Silver, CEO of Algebraix Data. “We are investing heavily in the development of SPARQL Server to continue making substantial additional functional, performance and scalability improvements.”

Pretty much a copy of the press release from Algebraix.

You may find:

Doing the Math: The Algebraix DataBase Whitepaper: What it is, how it works, why we need it (PDF) by Robin Bloor, PhD

ALGEBRAIX Technology Mathematics Whitepaper (PDF), by Algebraix Data

and,

Granted Patents

more useful.

BTW, The SP²Bench SPARQL Performance Benchmark will be useful as well.

Algebraix listed its patents but I supplied the links. Why the links were missing at Algebraix I cannot say.

If the claim that “…no other vendor has reported results for all queries on data sizes above five million triples…” is correct, isn’t scaling an issue for SPARQL?

### DRM/WWW, Wealth/Salvation: Theological Parallels

Thursday, March 14th, 2013

Cory Doctorow misses a teaching moment in his: What I wish Tim Berners-Lee understood about DRM.

Cory says:

Whenever Berners-Lee tells the story of the Web’s inception, he stresses that he was able to invent the Web without getting any permission. He uses this as a parable to explain the importance of an open and neutral Internet.

The “…without getting any permission” was a principle for Tim Berners-Lee when he was inventing the Web.

A principle then, not now.

Evidence? The fundamentals of RDF have been mired in the same model for fourteen (14) years, impeding the evolution of the “Semantic” Web, whatever its merits.

Another example? HTML5 violates prior definitions of URL in order to widen the reach of HTML5. (URL Homonym Problem: A Topic Map Solution)

Same “principle” as DRM support, expanding the label of “WWW” beyond what early supporters would recognize as the WWW.

HTML5 rewriting of URL and DRM support are membership building exercises.

The teaching moment comes from early Christian history.

You may (or may not) recall the parable of the rich young ruler (Matthew 19:16-30), where a rich young man asks Jesus what he must do to be saved?

Jesus replies:

One thing you still lack. Sell all that you have and distribute to the poor, and you will have treasure in heaven; and come, follow me.

And for the first hundred or more years of Christianity, so far as can be known, that rule, divesting yourself of property, was followed.

Until Clement of Alexandria. Clement took the position that indeed the rich could retain their goods, so long as they used them charitably. (Now there’s a loophole!)

That created two paths to salvation: one for anyone foolish enough to take the Bible at its word and another for anyone who wanted to call themselves Christians, without any inconvenience or discomfort.

Following Clement of Alexandria, Tim Berners-Lee is creating two paths to the WWW.

One for people who are foolish enough to innovate and share information, the innovation model of the WWW that Cory speaks so highly of.

Another path for people (DRM crowd) who neither spin nor toil but who want to burden everyone who does.

Membership as a principle isn’t surprising, considering how TBL sees himself in the mirror.

### Aaron Swartz’s A Programmable Web: An Unfinished Work

Wednesday, March 13th, 2013

Aaron Swartz’s A Programmable Web: An Unfinished Work

Abstract:

This short work is the first draft of a book manuscript by Aaron Swartz written for the series “Synthesis Lectures on the Semantic Web” at the invitation of its editor, James Hendler. Unfortunately, the book wasn’t completed before Aaron’s death in January 2013. As a tribute, the editor and publisher are publishing the work digitally without cost.

From the author’s introduction:

” . . . we will begin by trying to understand the architecture of the Web — what it got right and, occasionally, what it got wrong, but most importantly why it is the way it is. We will learn how it allows both users and search engines to co-exist peacefully while supporting everything from photo-sharing to financial transactions.

We will continue by considering what it means to build a program on top of the Web — how to write software that both fairly serves its immediate users as well as the developers who want to build on top of it. Too often, an API is bolted on top of an existing application, as an afterthought or a completely separate piece. But, as we’ll see, when a web application is designed properly, APIs naturally grow out of it and require little effort to maintain.

Then we’ll look into what it means for your application to be not just another tool for people and software to use, but part of the ecology — a section of the programmable web. This means exposing your data to be queried and copied and integrated, even without explicit permission, into the larger software ecosystem, while protecting users’ freedom.

Finally, we’ll close with a discussion of that much-maligned phrase, ‘the Semantic Web,’ and try to understand what it would really mean.”

Table of Contents: Introduction: A Programmable Web / Building for Users: Designing URLs / Building for Search Engines: Following REST / Building for Choice: Allowing Import and Export / Building a Platform: Providing APIs / Building a Database: Queries and Dumps / Building for Freedom: Open Data, Open Source / Conclusion: A Semantic Web?

Even if you disagree with Aaron, on issues both large and small, as I do, it is a very worthwhile read.

But I will save my disagreements for another day. Enjoy the read!

### Simple Web Semantics: Multiple Dictionaries

Tuesday, February 26th, 2013

When I last posted about Simple Web Semantics, my suggested syntax was:

<meta name="dictionary" content="URI">

<a dictionary="URI" href="yourURI">

While you can use any one of multiple dictionaries for the URI in an <a> element, that requires manual editing of the source HTML.

Here is an improvement on that idea:

The content of the content attribute on a meta element with a name attribute with the value “dictionary” is one or more “URLs” (in the HTML 5 sense), if more than one, the “URLs” are separated by whitespace.

The content of the dictionary attribute on an a element is one or more “URLs” (in the HTML 5 sense), if more than one, the “URLs” are separated by whitespace.

The thinking is that this enables authors of content to give users a choice of which dictionaries to use with particular “URLs.”
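The two rules above are simple enough to sketch in a few lines of JavaScript. This is a minimal sketch, not part of the proposal itself: the function names are illustrative, and the precedence rule (an element-level dictionary list overrides the document-wide meta declaration) follows the earlier syntax refinement post.

```javascript
// A minimal sketch of the dictionary syntax above. Function names are
// illustrative only; nothing here is part of any standard.

// Split a "dictionary" attribute or meta "content" value into
// individual dictionary URLs (whitespace-separated, per the proposal).
function parseDictionaryList(value) {
  if (!value || !value.trim()) return [];
  return value.trim().split(/\s+/);
}

// An element-level dictionary list, when present, overrides the
// document-wide <meta name="dictionary"> declaration.
function effectiveDictionaries(metaContent, elementDictionary) {
  const elementDicts = parseDictionaryList(elementDictionary);
  return elementDicts.length > 0
    ? elementDicts
    : parseDictionaryList(metaContent);
}

console.log(effectiveDictionaries(
  "http://example.org/dict-a",
  "http://example.org/dict-b http://example.org/dict-c"
));
// the element-level list wins when present
```

A script writer would of course read these values from the DOM; the parsing and precedence logic stays the same.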

For example, a popular account of a science experiment could use the term H2O and have a dictionary entry pointing to: http://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/SnowflakesWilsonBentley.jpg/220px-SnowflakesWilsonBentley.jpg, which produces this image:

Which would be a great illustration for a primary school class about a form of H2O.

On the other hand, another dictionary entry for the same URL might point to: http://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Liquid-water-and-ice.png/220px-Liquid-water-and-ice.png, which produces this image:

Which would be more appropriate for a secondary school class.

Writing this for an inline <a> element, I would write:

<a href="http://en.wikipedia.org/wiki/Water" dictionary="http://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/SnowflakesWilsonBentley.jpg/220px-SnowflakesWilsonBentley.jpg http://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Liquid-water-and-ice.png/220px-Liquid-water-and-ice.png">H2O</a>

The use of a “URL” and images all from Wikipedia is just convenience for this example. Dictionary entries are not tied to the “URL” in the href attribute.

That presumes some ability on the part of the dictionary server to respond with meaningful information to display to a user who must choose between two dictionaries.

Enabling users to have multiple sources of additional information at their command versus the simplicity of a single dictionary, seems like a good choice.

Nothing prohibits a script writer from enabling users to insert their own dictionary preferences either for the document as a whole or for individual <a> elements.

If you missed my series on Simple Web Semantics, see: Simple Web Semantics — Index Post.

Apologies for quoting “URL/s” throughout the post but after reading:

Note: The term “URL” in this specification is used in a manner distinct from the precise technical meaning it is given in RFC 3986. Readers familiar with that RFC will find it easier to read this specification if they pretend the term “URL” as used herein is really called something else altogether. This is a willful violation of RFC 3986. [RFC3986]

in the latest HTML5 draft, it seemed like the right thing to do.

Would it have been too much trouble to invent “something else altogether” for this new meaning of “URL?”

### Precursors to Simple Web Semantics

Thursday, February 21st, 2013

A couple of precursors to Simple Web Semantics have been brought to my attention.

Wanted to alert you so you can consider these prior/current approaches while evaluating Simple Web Semantics.

The first one was from Rob Weir (IBM), who suggested I look at “smart tags” from Microsoft and sent the link to Smart tags (Wikipedia).

The second one was from Nick Howard (a math wizard I know) who pointed out the similarity to bookmarklets. On that see: Bookmarklet (Wikipedia).

I will be diving deeper into both of these technologies.

Not so much as a historical study but to see what did/did not work, etc.

Other suggestions, directions, etc. are most welcome!

I have another refinement to the syntax that I will be posting tomorrow.

### Simple Web Semantics – Index Post

Monday, February 18th, 2013

Sam Hunting suggested that I add indexes to the Simple Web Semantics posts to facilitate navigating from one to the other.

It occurred to me that having a single index page could also be useful.

The series began with:

Reasoning about why something isn’t working is important before proposing a solution.

I have gotten good editorial feedback on the proposal and will be posting a revision in the next couple of days.

Nothing substantially different but clearer and more precise.

I am always open to comments but the sooner they arrive the sooner I can make improvements.

### Simple Web Semantics (SWS) – Syntax Refinement

Sunday, February 17th, 2013

In Saving the “Semantic” Web (part 5), the only additional HTML syntax I proposed was:

<meta name="dictionary" content="URI">

in the <head> element of an HTML document.

(Where you would locate the equivalent declaration of a URI dictionary in other document formats will vary.)

But that sets the URI dictionary for an entire document.

What if you want more fine grained control over the URI dictionary for a particular URI?

It would be possible to do something complicated with namespaces, containers, scope, etc. but the simpler solution would be:

<a dictionary="URI" href="yourURI">

Either the URI is governed by the declaration for the entire page or it has a declared dictionary URI.

Or to summarize the HTML syntax of SWS at this point:

<meta name="dictionary" content="URI">

<a dictionary="URI" href="yourURI">
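Combined in one minimal document, those two declarations might look like the sketch below. The dictionary URLs are placeholders, not real services:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- document-wide dictionary declaration -->
    <meta name="dictionary" content="http://example.org/dict-a">
  </head>
  <body>
    <!-- this link falls under the document-wide dictionary -->
    <a href="http://en.wikipedia.org/wiki/Water">H2O</a>
    <!-- this link declares its own dictionary, overriding the meta -->
    <a dictionary="http://example.org/dict-b"
       href="http://en.wikipedia.org/wiki/Ice">ice</a>
  </body>
</html>
```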

### Saving the “Semantic” Web (part 5)

Friday, February 15th, 2013

Simple Web Semantics

For what it’s worth, what follows in this post is a partial, non-universal proposal, useful only in some cases.

That may have been forgotten by this point but, in my defense, I did try to warn you.

1. Division of Semantic Labor

The first step towards useful semantics on the web must be a division of semantic labor.

I won’t recount the various failures of the Semantic Web, topic maps and other initiatives to “educate” users on how they should encode semantics.

All such efforts have, are now and will fail.

That is not a negative comment on users.

In another life I advocated tools that would enable biblical scholars to work in XML, without having to learn angle-bang syntax. It wasn’t for lack of intelligence, most of them were fluent in five or six ancient languages.

They were focused on being biblical scholars and had no interest in learning the minutiae of XML encoding.

After many years, due to a cast of hundreds if not thousands, OpenOffice, OpenDocumentFormat (ODF) and XML editing became available to ordinary users.

Not the fine tuned XML of the Text Encoding Initiative (TEI) or DocBook, but having a 50 million plus user share is better than being in the 5 to 6 digit range.

Users have not succeeded in authoring structured data, such as RDF, but have demonstrated competence at authoring <a> elements with URIs.

I propose the following division of semantic labor:

Users – Responsible for identification of subjects in content they author, using URIs in the <a> element.

Experts – Responsible for annotation (by whatever means) of URIs that can be found in <a> elements in content.

2. URIs as Pointers into a Dictionary

One of the comments in this series pointed out that URIs are like “pointers into a dictionary.” I like that imagery and it is easier to understand than the way I intended to say it.

If you think of words as pointers into a dictionary, how many dictionaries does a word point into?

Contrast your answer with the number of dictionaries into which a URI points.

If we are going to use URIs as “pointers into a dictionary,” then there should be no limit on the number of dictionaries into which they can point.

A URI can be posed to any number of dictionaries as a query, with possibly different results from each dictionary.
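As a sketch of that idea: given one subject URI and a list of dictionaries, build one query per dictionary. The `?uri=` query shape is an assumption for illustration only; nothing in this proposal fixes how a dictionary service accepts queries, and the example.org dictionaries are placeholders.

```javascript
// Hypothetical: pose one subject URI to several dictionaries.
// The "?uri=" query interface is an assumption for illustration;
// each dictionary service could define its own interface.
function dictionaryQueries(subjectUri, dictionaryUrls) {
  return dictionaryUrls.map(
    (dict) => dict + "?uri=" + encodeURIComponent(subjectUri)
  );
}

console.log(dictionaryQueries(
  "http://data.nytimes.com/47271465269191538193",
  ["http://example.org/dict-a", "http://example.org/dict-b"]
));
// each dictionary may answer the same URI differently
```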

3. Of Dictionaries

Take the URI http://data.nytimes.com/47271465269191538193 as an example of a URI that can appear in a dictionary.

If you follow that URI, you will notice a couple of things:

1. It isn’t content suitable for primary or secondary education.
2. The content is limited to that of the New York Times.
3. The content of the NYT consists of article pointers

Not to mention it is a “pull” interface that requires effort on the part of users, as opposed to a “push” interface that reduces that effort.

What if rather than “following” the URI http://data.nytimes.com/47271465269191538193, you could submit that same URI to another dictionary, one that had different information?

A dictionary that for that URI returns:

1. Links to content suitable for primary or secondary education.
2. Broader content than just New York Times.
3. Curated content and not just article pointers

Just as we have semantic diversity:

URI dictionaries shall not be required to use a particular technology or paradigm.

4. Immediate Feedback

Whether you will admit it or not, we have all coded HTML and then loaded it in a browser to see the results.

That’s called “immediate feedback” and made HTML, the early versions anyway, extremely successful.

When <a> elements with URIs are used to identify subjects, how can we duplicate that “immediate feedback” experience?

My suggestion is that users encode in the <head> of their documents a meta element that reads:

<meta name="dictionary" content="URI">

And insert either JavaScript or jQuery code that creates an array of all the URIs in the document, passes those URIs to the dictionary specified by the user and then displays a set of values when a user mouses over a particular URI.

Think of it as being the equivalent of spell checking except for subjects. You could even call it “subject checking.”

For most purposes, dictionaries should only return 3 or 4 key/value pairs, enough for users to verify their choice of a URI. With an option to see more information.
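A sketch of that display step, assuming a dictionary has already answered with a set of key/value pairs. The response shape and function name are invented for illustration; no dictionary response format is specified anywhere in this proposal.

```javascript
// Sketch of the "subject checking" feedback step: trim a dictionary's
// response for a URI down to a few key/value pairs for mouseover
// display. The response shape here is an assumption.
function subjectPreview(dictionaryResponse, maxPairs = 4) {
  return Object.entries(dictionaryResponse)
    .slice(0, maxPairs)
    .map(([key, value]) => key + ": " + value);
}

console.log(subjectPreview({
  label: "Water",
  formula: "H2O",
  type: "chemical compound",
  boilingPoint: "100 C",
  meltingPoint: "0 C"
}));
// only the first few pairs reach the user, with "more" on request
```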

True enough, I haven’t asked for users to say which of those properties identify the subject in question and I don’t intend to. That lies in the domain of experts.

The inline URI mechanism lends itself to automatic insertion of URIs, which users could then verify capture their meaning. (Wikifier is a good example, assuming you have a dictionary based on Wikipedia URIs.)

Users should be able to choose the dictionaries they prefer for identification of subjects. Further, users should be able to verify their identifications from observing properties associated with a URI.

5. Incentives, Economic and Otherwise

There are economic and other incentives that arise from “Simple Web Semantics.”

First, divorcing URI dictionaries from any particular technology will create an easy on ramp for dictionary creators to offer as many or few services as they choose. Users can vote with their feet on which URI dictionaries meet their needs.

Second, divorcing URIs from their sources creates the potential for economic opportunities and competition in the creation of URI dictionaries. Dictionary creators can serve up definitions for popular URIs, along with pointers to other content, free and otherwise.

Third, giving users the right to choose their URI dictionaries is a step towards returning democracy to the WWW.

Fourth, giving users immediate feedback based on URIs they choose, makes users the judges of their own semantics, again.

Fifth, with the rise of URI dictionaries, the need to maintain URIs, “cool” or otherwise, simply disappears. No one maintains the existence of words. We have dictionaries.

There are technical refinements that I could suggest but I wanted to draw the proposal in broad strokes and improve it based on your comments.

PS: As I promised at the beginning, this proposal does not address many of the endless complexities of semantic integration. If you need a different solution, for a different semantic integration problem, you know where to find me.

### Saving the “Semantic” Web (part 4)

Wednesday, February 13th, 2013

Democracy vs. Aristocracy

Part of a recent comment on this series reads:

What should we have been doing instead of the semantic web? ISO Topic Maps? There is some great work in there, but has it been a better success?

That is an important question and I wanted to capture it outside of comments on a prior post.

Earlier in this series of posts I pointed out the success of HTML, especially when contrasted with Semantic Web proposals.

Let me hasten to add the same observation is true for ISO Topic Maps (HyTime or later versions).

The critical difference between HTML (the early and quite serviceable versions) and Semantic Web/Topic Maps is that the former democratizes communication and the latter fosters a technical aristocracy.

Every user who can type and some who hunt-n-peck, can author HTML and publish their content for others around the world to read, discuss, etc.

That is a very powerful and democratizing notion about content creation.

The previous guardians, gate keepers, insiders, and their familiars, who didn’t add anything of value to prior publications processes, are still reeling from the blow.

Even as old aristocracies crumble, new ones evolve.

Technical aristocracies for example. A phrase relevant to both the Semantic Web and ISO Topic Maps.

Having tasted freedom, the crowds aren’t as accepting of the lash/leash as they once were. Nor of the aristocracies who would wield them. Nor should they be.

Which makes me wonder: why the emphasis on creating dumbed-down semantics for computers?

We already have billions of people who are far more competent semantically than computers.

Where are our efforts to enable them to traverse the different semantics of other users?

Such as the semantics of the aristocrats who have anointed themselves to labor on their behalf?

If you have guessed that I have little patience with aristocracies, you are right in one.

I came by that aversion honestly.

I practiced law in a civilian jurisdiction for a decade. A specialist language, law, can be more precise, but it also excludes others from participation. The same experience was true when I studied theology and ANE languages. A bit later, in markup technologies (then SGML/HyTime), the same lesson was repeated. ODF and topic maps, which occupy me now, are two more specialized languages.

Yet a reasonably intelligent person can discuss issues in any of those fields, if they can get past the language barriers aristocrats take so much comfort in maintaining.

My answer to what we should be doing is:

Looking for ways to enable people to traverse and enjoy the semantic diversity that accounts for the richness of the human experience.

PS: Computers have a role to play in that quest, but a subordinate one.

### Saving the “Semantic” Web (part 3)

Tuesday, February 12th, 2013

On Semantic Transparency

The first responder to this series of posts, j22, argues the logic of the Semantic Web has been found to be useful.

I said as much in my post and stand by that position.

The difficulty is that the “logic” of the Semantic Web excludes vast swathes of human expression and the people who would make those expressions.

If you need authority for that proposition, consider George Boole (An Investigation of the Laws of Thought, pp. 327-328):

But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! As truly, therefore, as the cultivation of the mathematical or deductive faculty is a part of intellectual discipline, so truly is it only a part. The prejudice which would either banish or make supreme any one department of knowledge or faculty of mind, betrays not only error of judgment, but a defect of that intellectual modesty which is inseparable from a pure devotion to truth. It assumes the office of criticising a constitution of things which no human appointment has established, or can annul. It sets aside the ancient and just conception of truth as one though manifold. Much of this error, as actually existent among us, seems due to the special and isolated character of scientific teaching—which character it, in its turn, tends to foster.
The study of philosophy, notwithstanding a few marked instances of exception, has failed to keep pace with the advance of the several departments of knowledge, whose mutual relations it is its province to determine. It is impossible, however, not to contemplate the particular evil in question as part of a larger system, and connect it with the too prevalent view of knowledge as a merely secular thing, and with the undue predominance, already adverted to, of those motives, legitimate within their proper limits, which are founded upon a regard to its secular advantages. In the extreme case it is not difficult to see that the continued operation of such motives, uncontrolled by any higher principles of action, uncorrected by the personal influence of superior minds, must tend to lower the standard of thought in reference to the objects of knowledge, and to render void and ineffectual whatsoever elements of a noble faith may still survive.

Or Justice Holmes writing in 1881 (The Common Law, page 1)

The life of the law has not been logic: it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow-men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.

In terms of historical context, remember that Holmes is writing at a time when works like John Stuart Mill’s A System of Logic, Ratiocinative and Inductive: being a connected view of The Principles of Evidence and the Methods of Scientific Investigation, were in high fashion.

The Semantic Web isn’t the first time “logic” has been seized upon as useful (as no doubt it is) and exclusionary (the part I object to) of other approaches.

Rather than presuming the semantic monotone the Semantic Web needs for its logic, a false presumption for owl:sameAs and no doubt other subjects, why not empower users to use more complex identifiers for subjects than solitary URIs?

It would not take anything away from the current Semantic Web infrastructure; it simply makes its basis, URIs, less semantically opaque to users.

Isn’t semantic transparency a good thing?

### Saving the “Semantic” Web (part 2) [NOTLogic]

Monday, February 11th, 2013

Saving the “Semantic” Web (part 1) ended by concluding that authors of data/content should be asked about the semantics of their content.

I asked if there were compelling reasons to ask someone else and got no takers.

The acronym NOTLogic may not be familiar. It expands to: Not Only Their Logic.

Users should express their semantics in the “logic” of their domain.

After all, it is their semantics, knowledge and domain that are being captured.

Their “logic” may not square up with FOL (first order logic) but where’s the beef?

Unless one of the project requirements is to maintain consistency with FOL, why bother?

The goal in most BI projects is ROI on capturing semantics, not adhering to FOL for its own sake.

Some people want to teach calculators how to mimic “reasoning” by using that subset known as “logic.”

However much I liked the Friden rotary calculator of my youth:

teaching it to mimic “reasoning” isn’t going to happen on my dime.

There are cases where machine learning techniques are very productive and fully justified.

The question you need to ask yourself (after discovering if you should be using RDF at all, The Semantic Web Is Failing — But Why? (Part 2)) is whether “their” logic works for your use case.

I suspect you will find that you can express your semantics, including relationships, without resort to FOL.

Which may lead you to wonder: Why would anyone want you to use a technique they know, but you don’t?

I don’t know for sure but have some speculations on that score I will share with you tomorrow.

In the mean time, remember:

1. As the author of content or data, you are the person to ask about its semantics.
2. You should express your semantics in a way comfortable for you.

### Saving the “Semantic” Web (part 1)

Friday, February 8th, 2013

Semantics: Who You Gonna Call?

I put “semantic” in quotes in ‘Semantic Web’ to emphasize that the web had semantics long before the puff pieces in Scientific American.

As a matter of fact, people traffic in semantics every day, in a variety of mediums. The “Web,” for all of its navel gazing, is just one.

At your next business or technical meeting, if a colleague uses a term you don’t know, here are some options:

1. Search Cyc.
2. Query WordNet.
3. Call Pat Hayes.
4. Ask the speaker what they meant.

Other than Tim Berners-Lee, I suspect the vast majority of us will pick #4.

Here’s another quiz.

If asked, will the speaker respond with:

1. Repeating the term, perhaps more loudly? (There is an American belief that English spoken loudly is more understandable to non-English speakers. The same holds for technical terms.)
2. Restating the term in Common Logic syntax?
3. Singing a “cool” URI?
4. Expanding the term by offering other properties that may be more familiar to you?

Again, other than Tim Berners-Lee, I suspect the vast majority of us will pick #4.

To summarize up to this point:

1. We all have experience with semantics and encountering unknown semantics.
2. We all (most of us) ask the speaker of unknown semantics to explain.
3. We all (most of us) expect an explanation to offer additional information that clues us in to the unknown semantics.

My answer to the question of “Semantics: Who You Gonna Call?” is the author of the data/information.

Do you have a compelling reason for asking someone else?

### The Semantic Web Is Failing — But Why? (Part 5)

Thursday, February 7th, 2013

Impoverished Identification by URI

There is one final piece of the Semantic Web failure puzzle to explore before we can talk about a solution.

In owl:sameAs and Linked Data: An Empirical Study, Ding, Shinavier, Finin and McGuinness write:

Our experimental results have led us to identify several issues involving the owl:sameAs property as it is used in practice in a linked data context. These include how best to manage owl:sameAs assertions from “third parties”, problems in merging assertions from sources with different contexts, and the need to explore an operational semantics distinct from the strict logical meaning provided by OWL.

To resolve varying usages of owl:sameAs, the authors go beyond identifications provided by a URI to look to other properties. For example:

Many owl:sameAs statements are asserted due to the equivalence of the primary feature of resource description, e.g. the URIs of FOAF profiles of a person may be linked just because they refer to the same person even if the URIs refer the person at different ages. The odd mashup on job-title in previous section is a good example for why the URIs in different FOAF profiles are not fully equivalent. Therefore, the empirical usage of owl:sameAs only captures the equivalence semantics on the projection of the URI on social entity dimension (removing the time and space dimensions). In this way, owl:sameAs is used to indicate partial equivalence between two different URIs, which should not be considered as full equivalence.

Knowing the dimensions covered by a URI and the dimensions covered by a property, it is possible to conduct better data integration using owl:sameAs. For example, since we know a URI of a person provides a temporal-spatial identity, descriptions using time-sensitive properties, e.g. age, height and workplace, should not be aggregated, while time-insensitive properties, such as eye color and social security number, may be aggregated in most cases.

When an identification is insufficient based on a single URI, additional properties can be considered.
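The aggregation rule quoted above can be sketched in a few lines. This is my illustration, not code from the paper; the property names and the list of time-sensitive properties are assumptions taken from the paper's examples:

```python
# Properties the paper cites as time-sensitive: values that can differ
# for the same person at different times, so they should not be merged
# across owl:sameAs links.
TIME_SENSITIVE = {"age", "height", "workplace"}

def merge_same_as(desc_a, desc_b):
    """Merge two descriptions linked by owl:sameAs, aggregating only
    time-insensitive properties (eye color, SSN, ...) and skipping
    time-sensitive ones that may have been recorded at different times."""
    merged = {}
    for desc in (desc_a, desc_b):
        for prop, value in desc.items():
            if prop in TIME_SENSITIVE:
                continue  # e.g. age from 2005 and age from 2013 must not mix
            merged.setdefault(prop, value)
    return merged

profile_2005 = {"name": "Jane Doe", "age": 25, "eye_color": "brown"}
profile_2013 = {"name": "Jane Doe", "age": 33, "workplace": "Acme"}

print(merge_same_as(profile_2005, profile_2013))
# {'name': 'Jane Doe', 'eye_color': 'brown'}
```

The point is not this particular function but that the merge decision needs information beyond the two URIs themselves.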

My question then is why do ordinary users have to wait for experts to decide their identifications are insufficient? Why can’t we empower users to declare multiple properties, including URIs, as a means of identification?

It could be something as simple as JSON key/value pairs with a notation of “+” for must match, “-” for must not match, and “?” for optional to match.

A declaration of identity by users about the subjects in their documents. Who better to ask?
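The "+/-/?" notation above can be sketched as plain JSON key/value matching. This is a toy interpretation of the proposal, not a specification; the prefixes follow the text ("+" must match, "-" must not match, "?" optional) and the property names are invented for illustration:

```python
def matches(spec, candidate):
    """Return True if candidate satisfies every constraint in spec.
    spec keys carry a one-character prefix: '+' the candidate must
    have this value, '-' it must not, '?' it is an optional hint."""
    for key, value in spec.items():
        marker, prop = key[0], key[1:]
        if marker == "+" and candidate.get(prop) != value:
            return False
        if marker == "-" and candidate.get(prop) == value:
            return False
        # '?' properties never veto a match; they only aid a reader.
    return True

# A user's declaration of identity for a subject in their document:
spec = {
    "+name": "Sheffield",
    "+type": "place",
    "-country": "USA",       # rule out Sheffield, Alabama
    "?population": 552698,   # optional corroborating detail
}

# A candidate description, URI included as just one property:
candidate = {"name": "Sheffield", "type": "place", "country": "UK",
             "uri": "http://example.org/sheffield"}

print(matches(spec, candidate))  # True
```

Nothing here requires FOL or an expert vocabulary: the author states which properties identify the subject, and anyone (or any program) can test a candidate against them.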

Not to mention that the more information a user supplies for an identification, the more likely they are to communicate successfully with other users.

URIs may be Tim Berners-Lee’s nails, but they are insufficient to support the scaffolding required for robust communication.

The next series starts with Saving the “Semantic” Web (Part 1).

### The Semantic Web Is Failing — But Why? (Part 4)

Thursday, February 7th, 2013

Who Authors The Semantic Web?

With the explosion of data, “big data” to use the oft-abused terminology, authoring semantics cannot be solely the province of a smallish band of experts.

Ordinary users must be enabled to author semantics on subjects of importance to them, without expert supervision.

The Semantic Web is designed for the semantic equivalent of:

An F16 cockpit has an interface some people can use, but hardly the average user.

The VW “Beetle” has an interface used by a large number of average users.

Using a VW interface, users still have accidents, disobey rules of the road, lock their keys inside and make other mistakes. But the number of users who can use the VW interface is several orders of magnitude greater than the F-16/RDF interface.

Designing a solution that only experts can use, if participation by average users is a goal, is a path to failure.

The next series starts with Saving the “Semantic” Web (Part 1).