One Billion Points of Failure!

In “No 303’s for Topic Maps?” I mentioned that distinguishing between identifiers and addresses with 303’s has architectural implications.

The most obvious one is the additional traffic that 303 redirects are going to add to the Web.

Another concern is voiced in the Cool URIs for the Semantic Web document when it says:

Content negotiation, with all its details, is fairly complex, but it is a powerful way of choosing the best variant for mixed-mode clients that can deal with HTML and RDF.
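To make the complexity concrete, here is a minimal sketch of the server-side choice that document describes: parsing a client’s Accept header and picking between an HTML and an RDF variant by q-value. The function names and the two-variant setup are illustrative assumptions, not part of any real server, and real negotiation handles wildcards, specificity, and malformed headers far more carefully.

```python
# Minimal sketch of content negotiation between HTML and RDF variants.
# Names and the available-variant list are illustrative assumptions.

def parse_accept(header):
    """Return (media_type, q) pairs from an Accept header string."""
    prefs = []
    for part in header.split(","):
        pieces = part.strip().split(";")
        media_type = pieces[0].strip()
        q = 1.0
        for param in pieces[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        prefs.append((media_type, q))
    return prefs

def choose_variant(accept_header,
                   available=("text/html", "application/rdf+xml")):
    """Pick the available media type the client prefers most."""
    best, best_q = None, 0.0
    for media_type, q in parse_accept(accept_header):
        for variant in available:
            # Exact match, full wildcard, or type wildcard (e.g. text/*).
            if media_type in (variant, "*/*",
                              variant.split("/")[0] + "/*"):
                if q > best_q:
                    best, best_q = variant, q
    return best

# An RDF-aware client gets RDF; an ordinary browser gets HTML.
print(choose_variant("application/rdf+xml;q=0.9,text/html;q=0.5"))
print(choose_variant("text/html,*/*;q=0.1"))
```

Even this toy version has to weigh q-values and wildcards; the real thing, as the document says, is fairly complex.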

Great: more traffic, and it isn’t going to be easy to implement. What else could be wrong?

The Semantic Web is missing the one feature that made the Web a successful hypertext system while more complex systems failed: the localization of failure.

If you follow a link and a 404 is returned, then what? The failure is localized: your document is still valuable and can be processed just as before.

What if you need to know if a URL is an identifier for “people, products, places, ideas and concepts such as ontology classes”? If the 303 fails, you don’t get that information.
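The rule a client applies here can be sketched in a few lines. This is an illustrative assumption about how such a client might classify a URL from the status code of a request on it (with None standing in for a failed request); the names are hypothetical, not from any real library. The point is visible in the last case: a 404 leaves your own document intact, but a failed 303 check leaves the identifier-or-address question with no answer at all.

```python
# Sketch of the 303 convention for classifying a URL by response status.
# "status" is the HTTP status code of a request on the URL, or None if
# the request itself failed. Names here are illustrative assumptions.

def classify(status):
    """Classify what a URL identifies, per the 303 convention."""
    if status is None:
        # Network, server, or software failure: the question of whether
        # this URL is an identifier or an address cannot be answered.
        return "unknown"
    if status == 200:
        return "information resource"       # an ordinary document
    if status == 303:
        return "non-information resource"   # a person, place, concept...
    if status == 404:
        # Failure is localized: your own document still processes fine.
        return "missing document"
    return "unknown"

print(classify(200))   # information resource
print(classify(303))   # non-information resource
print(classify(None))  # unknown
```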

That information is important enough that the W3C invented ways to fix RDF’s failure to distinguish between identifiers and resource addresses.

But the 303 fix puts you at the mercy of an unreliable network, unreliable software and unreliable users.

With triples relying on other triples, failure cascades. The system has one billion points of potential failure: one for each of the reported one billion triples.

The Semantic Web only works if our admittedly imperfect systems, built and maintained by imperfect people, running over imperfect networks, somehow don’t fail. I would rather take my chances with a technology that works for imperfect users, and that would be us. That technology is topic maps.
