Another aspect of the “oh woe is topic maps” discussion is the lack of interest in topic maps among geeks. There are open source topic map projects, presentations at geeky conferences, demos, etc., but no real groundswell of geek interest in topic maps. The same isn’t true for ontologies, RDF, description logic (ok, maybe less for DL), etc.
In retrospect, that isn’t all that surprising. Take a gander inside any of the software project categories at sourceforge.net. Any of those projects could benefit from more participation, yet every year sees more projects in the same categories, oftentimes covering the same capabilities.
Does any of that say to you: there is an answer and it has to be my answer? I won’t bother collecting the stats on the lack of code reuse, another aspect of this issue. It is too well known to belabor.
Topic maps made the fatal mistake of saying that answers are supplied by users, not developers. If you don’t think that was a mistake, take a look at any RDF vocabulary and tell me it was written by a typical user community. Almost without exception (I am sure there must be some somewhere), RDF vocabularies are written by experts and imposed on users. Hence their popularity, at least among experts.
Topic maps inverted the usual world view: since users are the source of the semantics in the texts they read, we should start with their views. Imposing world views is always more popular than learning them, particularly among the geek community. They know what users should be doing, and users had damned well better do it.
Oh, the other mistake topic maps made was to say there is more than one world view: multiple world views that can be aligned with one another. The ontologists scotched that idea decades ago, although they haven’t been able to agree on the one world view that should be in place. I suppose there may be multiple world views (small letters), but that set is composed of the correct World View and numerous incorrect world views.
That would certainly be the position of US intelligence and diplomatic circles, who map all “incorrect world views” into the correct World View, which may account for their notable lack of successes over the last fifty or so years.
We should market topic maps to audiences who are interested in their own goals, not the goals of others, even geeks.
Goals vary from group to group. Some groups want to engage in disruptive behavior, other groups wish to prevent disruptive behavior, some want to advance research, still others want to be patent trolls.
Topic maps: Advance your goals with military grade IT. (How’s that for a new topic map slogan?)
Well, as *the* person currently being “woe is us” on the mailing-list, I’d like to point out that I agree with this post completely. As mentioned in that spiel, there are tons of advantages to Topic Maps, from technical to human, and the reversal of ontology roles is just one of many reasons (but perhaps, as you point out, one of the stronger ones) people are *terrified* by it! Terrified because, well, Topic Maps makes changes all up and down the stack of software development and interaction, unavoidable changes that look correct when reasoned upon, but still scare people away. Any number of perceived obstacles (from training / skills needed to technology choices to testability to tried-and-true mind-sets and tools) become real obstacles by virtue of being different from what they’ve currently got.
And I think it is still doable, but it requires either brilliance over a long time or heavy backing by organisations with resources. The funny thing is that I think most companies in IT are quite happy with the shitty state of software development and interaction these days; it’s very profitable.
So, how can Topic Maps be *profitable*?
Comment by shelterit — June 7, 2011 @ 9:28 pm
More tomorrow, but part of the problem is that semantic impedance is a lost opportunity cost. Lost opportunities never appear as line items in a budget. So even though $100 bills are blowing around overhead (possible opportunities), managers will snipe at pennies and nickels on the ground. I have seen that in both for-profit and non-profit contexts.
It also depends on what you are selling and how. Are you selling authoring/development services? Information? Software? Something else? What works for one may not work for the others.
Comment by Patrick Durusau — June 9, 2011 @ 7:39 pm
Well, I was always of the impression that most Topic Mappers were selling solutions to rather large and complicated problems, with all the vagueness of that statement. 🙂 I was a consultant for many, many years, as well as in the top tier of the library world in terms of technology, and yes, there were tons and tons of problems that just screamed Topic Maps (especially in the library world, I might add).
However, nothing really came of it. I think people understand the technology, even where technology of one kind will save money over another, but I fear the ontological argument is more at play: it is too hard to find people who can think and work in a new paradigm. We’ve spent hundreds of years building a society around the structures and constraints we currently have, so introducing a new paradigm is a hard sell. (“What? No tables, no columns or rows, no schema, no look-up tables? No language barriers between data and the people who use it? You just define your constraints and use them? Inconceivable!”)
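To make that concrete, here is a minimal sketch of the “no schema, just constraints” idea in plain Python. It is not any particular Topic Maps API; the identifiers, role names, and the Puccini example are all illustrative:

```python
# Topics and associations as ordinary data, with no fixed schema.

topics = {}        # topic id -> set of names that topic is known by
associations = []  # (association type, {role: topic id, ...})

def topic(tid, *names):
    """Create or extend a topic; asserting it again just merges names."""
    topics.setdefault(tid, set()).update(names)
    return tid

def associate(assoc_type, **roles):
    """Assert a typed relationship between topics, with named roles."""
    associations.append((assoc_type, roles))

# The "schema" is whatever you assert; new names, roles, and
# association types can be added at any time, no migration needed.
puccini = topic("puccini", "Giacomo Puccini", "Puccini")
tosca = topic("tosca", "Tosca")
associate("composed_by", work=tosca, composer=puccini)

# Another community's view of the same subject merges by set union,
# not by reconciling table schemas:
topic("puccini", "普契尼")  # a Chinese name for the same topic
```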
I know it sounds simple, but I think it is mostly correct. We should never have tried to sell Topic Maps; we should all have worked on a killer app. We should have had the FOSS Ontopia about 10 years ago, and started from there.
Comment by shelterit — June 10, 2011 @ 12:04 am
At a library conference I saw what could have been a topic map application that harvested the library collection and then offered the ability to comment on items in the collection.
It wasn’t written against the OPAC’s data structures but against an export to a separate database. Why, you ask? Because the vendor’s data structure was so arcane that export was the only way to create a data structure that supported another purpose.
This wasn’t some liberal arts college with fewer than 1,000 students; it was probably one of the top 30 technology universities in the US, with IT resources to spare.
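For what it’s worth, here is a minimal sketch of that export-then-layer pattern, assuming the OPAC can dump records as CSV; the table and column names are hypothetical, not the vendor’s:

```python
import csv
import sqlite3

def load_export(csv_path: str, db_path: str) -> None:
    """Copy an OPAC export into a separate SQLite database we control."""
    conn = sqlite3.connect(db_path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS items (
            record_id TEXT PRIMARY KEY,
            title     TEXT,
            author    TEXT
        );
        -- Comments live beside, not inside, the vendor's data.
        CREATE TABLE IF NOT EXISTS comments (
            id        INTEGER PRIMARY KEY AUTOINCREMENT,
            record_id TEXT REFERENCES items(record_id),
            body      TEXT NOT NULL
        );
    """)
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = [(r["record_id"], r["title"], r["author"])
                for r in csv.DictReader(f)]
    conn.executemany("INSERT OR REPLACE INTO items VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()

def add_comment(db_path: str, record_id: str, body: str) -> None:
    """Attach a comment to a harvested item, without touching the OPAC."""
    conn = sqlite3.connect(db_path)
    conn.execute("INSERT INTO comments (record_id, body) VALUES (?, ?)",
                 (record_id, body))
    conn.commit()
    conn.close()
```

The point of the separation: the vendor’s structure stays untouched, and the layered data can serve purposes the vendor never anticipated.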
The problem (as I see it) with libraries is that they are facing vendor lock on one side and the inherent conservatism of not wanting to risk their data (understandable) on the other.
I think the “killer” app may be a bridge too far, in part because the advantages have to be so overwhelming that breaking vendor lock, overcoming the risk of change, and paying the dollar cost of change all happen at one time.
Fail on any of those three counts and adoption fails: you’re told it’s a good idea, but let someone else go first.
But mine is not the counsel of despair. Why not the “killer” add-on? So you already have vendor X software but it won’t do …? Have you tried our add-on? It has capability Y and can build out to integrate other information/capabilities. Information that you can seamlessly share with other libraries. Try that with vendor X. 😉
Could be FOSS, but I don’t object to vendor software, just poorly written/documented vendor software whose primary goal is vendor lock-in. Good software doesn’t need lock-in to succeed.
Comment by Patrick Durusau — June 10, 2011 @ 7:16 am