John McCarthy's Notes on Formalizing Context says, in Entering and Leaving Contexts:
Human natural language risks ambiguity by not always specifying such assumptions, relying on the hearer or reader to guess what contexts makes sense. The hearer employs a principle of charity and chooses an interpretation that assumes the speaker is making sense. In AI usage we probably don’t usually want computers to make assertions that depend on principles of charity for their interpretation.
Natural language statements, outside formal contexts, almost never specify their assumptions. Even when they attempt to, as in formal contexts, the specification is always partial.
Complete specification of context or assumptions isn’t possible. That would require recursive enumeration of all the information that forms a context and the context of that information and so on.
It is really a question of the degree of charity being practiced to resolve any potential ambiguity.
If AI chooses to avoid charity altogether, I think that says a lot about its chances for success.
Topic maps, on the other hand, can specify both the result of the charitable assumption (the subject recognized) and the charitable assumption itself, which could be (though need not be) expressed as scope.
For example, if I see the token who and specify its scope as rock-n-roll-bands, that avoids any potential ambiguity, at least from my perspective. I could be wrong, or the token could have some other scope, but at least you know my charitable assumption.
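As a rough sketch of what I mean (the identifiers, scope names, and dictionary layout here are my own illustrations, not drawn from any particular topic map API), the token who can be recorded together with the scope under which I read it:

```python
# A minimal, hypothetical sketch of one scoped topic name (dictionary layout
# and identifiers are my own, not from any particular topic map library).
who_topic = {
    "id": "the-who",  # the subject I recognized behind the token
    "names": [
        # my charitable assumption, made explicit as scope
        {"value": "who", "scope": {"rock-n-roll-bands"}},
    ],
}

# A reader (or a program) can inspect the scope instead of guessing my assumption.
for name in who_topic["names"]:
    print(f"token {name['value']!r} read in scope {sorted(name['scope'])}")
```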
What is particularly clever about topic maps is that other users can combine my charitable assumptions with their own as they merge topic maps.
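A sketch of how that combination might look, again with made-up data and a deliberately naive merge rule (merge topics that share an id and keep everyone's scoped names), rather than the full topic map merging rules:

```python
# Hypothetical merge of two tiny topic maps, keyed by topic id.
# Real topic map merging is richer (subject identifiers, associations, etc.);
# this only shows scoped assumptions from two authors ending up side by side.
def merge_topic_maps(a: dict, b: dict) -> dict:
    merged = {tid: {"id": tid, "names": list(t["names"])} for tid, t in a.items()}
    for tid, topic in b.items():
        if tid in merged:
            merged[tid]["names"].extend(topic["names"])  # keep both sets of assumptions
        else:
            merged[tid] = {"id": tid, "names": list(topic["names"])}
    return merged

mine = {"the-who": {"id": "the-who",
                    "names": [{"value": "who", "scope": {"rock-n-roll-bands"}}]}}
yours = {"the-who": {"id": "the-who",
                     "names": [{"value": "The Who", "scope": {"sixties-british-bands"}}]}}

for topic in merge_topic_maps(mine, yours).values():
    for name in topic["names"]:
        print(topic["id"], repr(name["value"]), "in scope", sorted(name["scope"]))
```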
Think of it as stitching together a fabric of interpretation with a thread of charitable assumptions. A fabric that AI applications will never know.