Working with ontologies immediately teaches us something very important: giving meaning to things means placing them in the proper context.
Nothing new, of course: Gregory Bateson and the Palo Alto school, for example, taught us that the same word or concept placed in a different context takes on an entirely different meaning, and that the essence of schizophrenia lies precisely in losing the typically human capacity to create and distinguish contexts (or levels of abstraction). Today all of this sounds trivial, well known, almost tautological...
Nevertheless, when we build an ontology we try to convey the meaning of things to machines, to computers, and the importance of contextualization then emerges in all its evidence. After all, Ross Quillian's insight was precisely this: in order to teach machines the 'meanings of things' we have to make them behave as we do. When we don't know what a word or a concept means, we immediately start to build a context around it, a dense network of relationships, first digging into our memory and then using external resources (dictionaries? encyclopedias?). We try to rebuild a 'semantic network', and that is exactly what we are teaching computers to do.
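A Quillian-style semantic network can be sketched as a labeled graph in which a concept's meaning is nothing but its web of relations. The concepts and relation names below are purely illustrative, a minimal toy rather than any actual ontology:

```python
# A toy semantic network: edges are (source, relation) -> target.
# All concepts and relations here are hypothetical examples.
network = {
    ("canary", "is_a"): "bird",
    ("bird", "is_a"): "animal",
    ("bird", "can"): "fly",
    ("canary", "color"): "yellow",
}

def context_of(concept):
    """Return the immediate relational context of a concept:
    the set of (relation, target) pairs radiating out from it."""
    return {(rel, target) for (src, rel), target in network.items()
            if src == concept}

print(sorted(context_of("canary")))
# the 'meaning' of canary is its context: it is a bird, and it is yellow
```

Asking what "canary" means amounts to retrieving its context, exactly the move the text describes: building a network of relationships around the unknown term.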
Dealing with machines, we are obliged to use explicit, exact, unambiguous contexts: in essence, a semantic engineer uses taxonomic contexts, and more specifically subsumption trees whose relations obey the rigid requirements of description logic. These tools look poor and arid, yet they turn out to be extraordinarily powerful, even enlightening. It often happens that, while building the ontology of a knowledge domain we are very familiar with, we discover many aspects we hadn't noticed before, that we weren't aware of. In teaching machines, we also learn a lot. Such is the power of contextualization.
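The taxonomic contexts mentioned above can be sketched as a subsumption tree: each class has one direct parent, and the is-a relation is reflexive and transitive. This is a hypothetical toy hierarchy, not a real description-logic reasoner:

```python
# A toy subsumption tree: each class maps to its direct parent.
# The class names are illustrative assumptions, not a real ontology.
parent = {
    "Canary": "Bird",
    "Bird": "Animal",
    "Animal": "Thing",
}

def subsumes(general, specific):
    """True if `general` subsumes `specific`:
    is-a is taken as reflexive and transitive, so we just
    walk up the tree from `specific` looking for `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = parent.get(specific)
    return False

print(subsumes("Animal", "Canary"))  # True: Animal subsumes Canary
print(subsumes("Canary", "Bird"))    # False: subsumption is not symmetric
```

The rigidity is the point: because subsumption is a strict partial order, a machine can answer "is every canary an animal?" by a mechanical walk up the tree, with no ambiguity left to interpretation.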
'context', from Latin contextus, past participle of contexere, 'to weave together'