The Meeting of Minds


“I’ve broken it off.”

“What do you mean, you’ve broken it off? She was the best thing that ever happened to you. I loved her too, if the truth be known. You’re such an idiot! I have a mind to…”

“I mean I’ve broken the tip off my pen.”


Reciprocal understanding is one of the central goals of communication, and when a conversation is not kept on track it becomes a source of confusion. Though we easily recognize when a conversation goes off track, it is less clear how the underlying communicative processes actually run; we are hardly conscious of the mechanics of the mind working silently under the hood. This post is the first of a short series, an attempt to shed some light on them through a distillation of ‘The Geometry of Meaning’ by Prof. Gärdenfors, an interesting set of theories that can help us better understand what semantics is and how we process it. I have found these topics fascinating to explore, especially with the ultimate goal of building automatic systems that implement those processes in a way similar to ours, to solve problems for us.

Layers of Knowledge

In the short dialogue that opens this post, Bob and Alice align themselves on shared knowledge, but it is sometimes far from simple to establish apparently simple truths. Bob is ambiguous in his ‘I’ve broken it off’: he does not specify what he has actually broken off, and Alice, his interlocutor, wrongly infers he is leaving his girlfriend. To restore the basis of reciprocal understanding, Bob pulls the communication level down and raises Alice’s knowledge by clarifying that what is broken is the tip of his pen, not his relationship.

I guess everybody agrees on the importance of sharing a common background of knowledge, and Bob shows us how we shift from a high semantic layer to a lower one to meet our interlocutor and restart the coordination from there. This suggests that knowledge lies in hierarchical structures: the first rung of the ladder is populated with fundamental, cognitively irreducible terms, which combine into new and more abstract concepts as we climb toward the upper layers. Simon Winter explains this idea of layered knowledge by analyzing how a master craftsman teaches a novice to replace a violin’s strings, where non-verbal communication is also part of the game. He summarizes the context levels as:

  • Level 3. Non-verbal communication. People interact with minimal information exchange.
  • Level 2. Instruction. Coordination of action is achieved by instruction.
  • Level 1. Coordination of inner worlds. People inform each other so as to reach a richer or better coordination; this can also be achieved via questions.
  • Level 0. Meaning of words. At the lowest level, people negotiate the meanings of words and other basic communicative elements.

I don’t know whether there is a finite number of layers or the hierarchy is open-ended, but at a high level of shared knowledge, communication flows according to ‘the obvious goes without saying’, where implicit understandings are at their maximum. I’m also a bit skeptical about the lower bound of this ranking. If we take for granted that the meaning of a word can be decomposed into a finite set of conditions that are necessary and sufficient to describe it, this clashes with level 0, where concepts are irreducible. If I open a dictionary, every word, even the simplest one, is defined by means of other words, so circular loops of meaning references are inevitable: an unequivocal sign of inconsistency.
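The circularity worry can be made concrete with a toy model (all the words and definitions below are invented for illustration): treat a dictionary as a directed graph where each word points to the words used in its definition, then look for definitional cycles.

```python
# Toy dictionary-as-graph: each word maps to the words in its definition.
# The entries are invented; words with empty definitions stand in for
# the irreducible, level-0 primitives.
toy_dictionary = {
    "big": ["large"],
    "large": ["big"],          # circular: big -> large -> big
    "pen": ["tool", "write"],
    "tool": ["object"],
    "write": ["mark", "pen"],  # circular: pen -> write -> pen
    "object": [],              # treated as a primitive
    "mark": [],
}

def find_cycle(graph, start):
    """Return a definitional cycle reachable from `start`, or None."""
    path, seen = [], set()

    def dfs(word):
        if word in path:                    # back-edge: cycle found
            return path[path.index(word):] + [word]
        if word in seen:                    # already explored, no cycle here
            return None
        seen.add(word)
        path.append(word)
        for dep in graph.get(word, []):
            cycle = dfs(dep)
            if cycle:
                return cycle
        path.pop()
        return None

    return dfs(start)

print(find_cycle(toy_dictionary, "big"))   # ['big', 'large', 'big']
print(find_cycle(toy_dictionary, "mark"))  # None: grounded in a primitive
```

In a dictionary with no primitive terms, every word would sit on some cycle; only by grounding the bottom layer outside language does the regress stop.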

Communication as Coordination

Language and other forms of communication open the way to various types of engagement. Sophisticated collaborations enable the collective creation of value that no single individual could produce; they are fruits of communication. If I had to summarize in a single word what communication stands for, I would pick coordination. Coordination among interlocutors is a practical way to point at objects, places or actions, but also an alignment of intents, ideas, persuasions and even entertainment. More generically, coordination is a convergence of mental representations.

Coordination implies a transfer of information, which is expressed in several forms, of which language is the richest and most expressive. At the dawn of humanity, when communicative acts became more varied and detached from immediate practical purposes, the value of meanings, or semantics, became more salient in communication. Coordination among participants is an iterative process in which the meeting of minds ultimately converges on an alignment of meanings. Sounds almost romantic! Borrowing from maths, the metaphorical meeting point is also called a fixpoint: a value that a function maps exactly to itself, so that on this value the function behaves as the identity. Let’s unfold how the process occurs in our dialogue.
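As a minimal numeric illustration of the term (the standard mathematical notion, not part of Gärdenfors’ model), here is a fixpoint found by plain iteration: we keep applying the function until its output stops changing.

```python
import math

def fixpoint(f, x, tol=1e-10, max_iter=1000):
    """Iterate f from x until f(x) is (numerically) x itself."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:   # f maps x (almost) to itself: a fixpoint
            return fx
        x = fx
    raise RuntimeError("no fixpoint reached")

# Repeatedly applying cosine converges to x* with cos(x*) == x*,
# the so-called Dottie number, roughly 0.739085.
x_star = fixpoint(math.cos, 1.0)
print(abs(math.cos(x_star) - x_star) < 1e-9)  # True: the identity check holds
```

The conversational analogy is the same shape: the loop of expression and interpretation is iterated until an idea passes through it unchanged.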


  • Before Bob speaks with Alice, we can assume he has his own inscrutable beliefs, to which we have no access but which he is willing to express to his friend.
  • Bob draws on his linguistic toolset to transform his ideas into verbal sentences.
  • Once Bob has expressed his view, Alice can transform those sentences into her inner mental representation.

More mechanically, Bob applies the expressive function f to his mental idea and the resulting sentence is passed to Alice; she applies g’, the interpretative function, and acquires the meaning of that sentence, which she returns to Bob as she has understood it. Alice encodes her understanding into an expression, then Bob applies f’, the inverse function (cofunctor) of f, to acquire Alice’s expression. If the idea generated on the round trip coincides with the original one, the coordination has been successful. If the original and derived ideas don’t overlap, the alignment has failed and Bob will attempt to correct Alice.
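The round trip can be sketched in a few lines. Every mapping below is an invented placeholder: the ideas, the sentences, and the lookup tables standing in for f, g’ and their counterparts are toys, not an actual model of interpretation.

```python
# Bob's expressive function f: idea -> sentence
f = {"pen_tip_broken": "I've broken it off."}

# Alice's interpretative function g': sentence -> her idea (a misreading)
g_prime = {"I've broken it off.": "relationship_over"}

# Alice's expressive function, and Bob's interpretative function f'
g = {"relationship_over": "You've left her?!"}
f_prime = {"You've left her?!": "relationship_over"}

original_idea = "pen_tip_broken"
sentence = f[original_idea]         # Bob speaks
alice_idea = g_prime[sentence]      # Alice interprets (wrongly)
feedback = g[alice_idea]            # Alice expresses her understanding
derived_idea = f_prime[feedback]    # Bob interprets the reply

# Fixpoint check: coordination succeeds only if the idea survives the loop
print(derived_idea == original_idea)  # False -> Bob must issue a repair
```

Here the composition of the four functions is not the identity on Bob’s idea, so the dialogue has not reached its fixpoint and another round of correction is needed.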

To be effective in fixing Alice’s misunderstanding, the distance between Bob’s idea and what he derived from Alice’s feedback should be articulated enough to convey rich information: not just a quantitative distance from the successful fixpoint, but also a qualitative account of how the two ideas differ. This is why the discrepancy should be encoded as a vector, so that it also describes the status of the conversation. This introduces the next topic, conceptual spaces, which places the representation of semantic values between symbolic AI and neural networks.
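A minimal sketch of this idea, with an invented three-dimensional conceptual space (the dimensions and coordinates are made up for illustration): the component-wise difference gives the qualitative account of where the two ideas diverge, while its norm collapses into the single quantitative distance.

```python
# Hypothetical conceptual space; dimensions and values are invented.
DIMENSIONS = ("topic_is_relationship", "topic_is_object", "severity")

bob_idea = (0.0, 1.0, 0.2)    # a broken pen tip: an object, low severity
alice_idea = (1.0, 0.0, 0.9)  # a break-up: a relationship, high severity

# The discrepancy vector: each component says *where* they diverge...
delta = tuple(a - b for a, b in zip(alice_idea, bob_idea))

# ...while its Euclidean norm gives the single quantitative distance.
distance = sum(d * d for d in delta) ** 0.5

for dim, d in zip(DIMENSIONS, delta):
    print(f"{dim}: {d:+.2f}")
print(f"distance: {distance:.3f}")
```

A scalar distance alone would only tell Bob “we disagree this much”; the vector tells him “mostly about what the topic is”, which is exactly the information his repair needs.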