We justify full-text searching because users are unable to find a subject in an index.
Users don’t know what terms an indexer used for a subject in an index.
Users search full-text not knowing what terms hundreds if not thousands of people used for a subject.
It may just be me, but that sounds like the problem went from bad to worse.
There may be two separate but related saving graces to full-text searching:
- As I pointed out in Is 00.7% of Relevant Documents Enough?, a user may get lucky and guess a popular term or terms for some subject.
- It is very unlikely that any user will enter a full-text search query and get no results at all.
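The contrast between the two saving graces can be made concrete. Here is a minimal sketch, using an entirely hypothetical three-document corpus and indexer vocabulary, of why a controlled-vocabulary lookup fails when the user's term differs from the indexer's, while full-text search succeeds whenever any author happened to use the user's term:

```python
# Toy corpus (hypothetical data for illustration).
docs = {
    "d1": "treating myocardial infarction in elderly patients",
    "d2": "heart attack outcomes and aspirin therapy",
    "d3": "cardiac arrest response times in rural hospitals",
}

# An indexer assigns ONE preferred term per document.
# The user cannot see these choices in advance.
index = {
    "d1": "myocardial infarction",
    "d2": "myocardial infarction",
    "d3": "cardiac arrest",
}

def index_lookup(term):
    # Succeeds only if the user guesses the indexer's exact term.
    return [d for d, t in index.items() if t == term]

def full_text_search(term):
    # Succeeds if ANY author happened to use the user's term.
    return [d for d, text in docs.items() if term in text]

print(index_lookup("heart attack"))      # [] -- the indexer said "myocardial infarction"
print(full_text_search("heart attack"))  # ['d2'] -- one author used the user's phrase
```

The sketch also shows the flip side: full-text search here misses d1 and d3, which are about the same subject but worded differently, while the index retrieves both d1 and d2 for anyone who knows the preferred term.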
This raises several questions: Is a somewhat useful result more important than a better result? How do we measure the distance between the two? How much effort will users accept in order to obtain a better result?
If you know of any research along those lines please let me know about it.
My suspicion is that the gap between actual retrieval and users' estimates of it (Size Really Does Matter…) says something very fundamental about users, something we need to account for in search engines and interfaces.