“I do not think it means what you think it means” by Taylor Cowan is a deeply amusing take on Pellet, an OWL 2 Reasoner for Java.
I particularly liked the line:
I believe the semantic web community is falling into the same trap that the AI community fell into, which is to grossly underestimate the meaning of “reason”. As Inigo Montoya says in The Princess Bride, “You keep using that word. I do not think it means what you think it means.”
(For an extra 5 points, what is the word?)
Taylor’s point that Pellet will surface unstated assumptions in an ontology and verify that your ontology is consistent is a good one. If you are writing an ontology to support inference, that is exactly what you want.
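For the curious, that consistency check is only a few lines of code. A minimal sketch, assuming Pellet 2.x with its OWL API v3 bindings (the ontology IRI is invented for illustration):

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

public class ConsistencyCheck {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Hypothetical ontology location, for illustration only.
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                IRI.create("http://example.org/aircraft.owl"));
        // Pellet checks whether the ontology's axioms admit at least one model.
        OWLReasoner reasoner =
                PelletReasonerFactory.getInstance().createReasoner(ontology);
        System.out.println("Consistent? " + reasoner.isConsistent());
    }
}

If the axioms contradict one another, isConsistent() returns false, which is Pellet flagging exactly the kind of unstated assumption Taylor describes.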
Topic maps can support “consistent” ontologies, but I find encouragement in their support for how people actually view the world as well. That some people “logically” infer Boeing 767 -> “means of transportation” should not prevent me from capturing that other people “logically” infer Boeing 767 -> “air-to-ground weapon.”
A formal reasoning system could be extended to include that case, but can that be done as soon as an analyst has the insight, or must it be carefully crafted and tested to fit into the reasoning system while “the lights are blinking red”?
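To make the contrast concrete, here is a minimal sketch of that topic map style of capture in plain Java, with hypothetical class names rather than any particular topic maps API: both views of the same subject coexist, each valid within its own stated scope.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one subject can carry typed assertions that are
// each valid only within a stated scope, so "conflicting" views coexist.
public class ScopedView {
    static class Assertion {
        final String type;
        final String scope;
        Assertion(String type, String scope) {
            this.type = type;
            this.scope = scope;
        }
    }

    public static void main(String[] args) {
        List<Assertion> boeing767 = new ArrayList<Assertion>();
        // Neither view has to be rejected as "inconsistent" with the other.
        boeing767.add(new Assertion("means of transportation", "civil aviation"));
        boeing767.add(new Assertion("air-to-ground weapon", "threat analysis"));
        // A new scoped view can be appended the moment an analyst has the
        // insight, with no global re-validation of every other statement.
        for (Assertion a : boeing767) {
            System.out.println("Boeing 767 -> " + a.type
                    + " (scope: " + a.scope + ")");
        }
    }
}

Adding the second view is a one-line append, which is the point: capture happens at the speed of insight, not at the speed of ontology re-engineering.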
INCONCEIVABLE!
Comment by Alexander Johannesen — June 4, 2010 @ 7:08 am