Reading Rough Sets: Theoretical Aspects of Reasoning about Data by Zdzislaw Pawlak, I ran across this comparison of rough versus fuzzy sets:
Rough sets has often been compared to fuzzy sets, sometimes with a view to introduce them as competing models of imperfect knowledge. Such a comparison is unfounded. Indiscernibility and vagueness are distinct facets of imperfect knowledge. Indiscernibility refers to the granularity of knowledge, that affects the definition of universes of discourse. Vagueness is due to the fact that categories of natural language are often gradual notions, and refer to sets with smooth boundaries. Borrowing an example from image processing, rough set theory is about the size of pixels, fuzzy set theory is about the existence of more than two levels of grey. (pp. ix-x)
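To make the "size of pixels" point concrete, here is a minimal sketch (my own illustration, not from the book) of the standard rough-set construction: given a partition of a universe into indiscernibility granules, a target set is bracketed by a lower approximation (granules wholly inside it) and an upper approximation (granules that touch it). The function name and example data are mine; coarsening the partition coarsens the approximation.

```python
# Illustrative sketch of rough-set lower/upper approximations.
# `approximations` is a hypothetical helper name, not Pawlak's notation.

def approximations(granules, target):
    """Lower and upper approximation of `target` under a partition
    of the universe into indiscernibility granules."""
    lower, upper = set(), set()
    for g in granules:
        g = set(g)
        if g <= target:   # granule lies entirely inside the target
            lower |= g
        if g & target:    # granule overlaps the target at all
            upper |= g
    return lower, upper

# Fine-grained knowledge: every element is distinguishable ("small pixels").
fine = [{1}, {2}, {3}, {4}]
# Coarse knowledge: 1,2 are indiscernible, as are 3,4 ("large pixels").
coarse = [{1, 2}, {3, 4}]

target = {1, 2, 3}
print(approximations(fine, target))    # ({1, 2, 3}, {1, 2, 3}) -- exact
print(approximations(coarse, target))  # ({1, 2}, {1, 2, 3, 4}) -- rough
```

With the fine partition the target is exactly definable; with the coarse one it can only be bracketed, which is the sense in which indiscernibility, not vagueness, is what rough sets model.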
It occurred to me that the precision of our identifications, or perhaps better, the fixed precision of our identifications, is a real barrier to semantic integration. The precision I need for semantic integration will vary from subject to subject, depending upon what I already know, what I need to know, and for what purpose. Very coarse identification may be acceptable for some purposes but not others.
I don’t know what varying degrees of precision in subject identification would look like, or even how they would be represented. But I suspect solving those problems will be part of any successful approach to semantic integration.