The Eternal Quest for Meaning
In 1668, John Wilkins published An Essay Towards a Real Character, and a Philosophical Language. In it, he proposed a universal language that would represent every concept with its own symbol.
Here is an excerpt from his treatise:
Needless to say, Wilkins did not succeed in this monumental and aspirational task. Today, his treatise survives mainly as an object of curiosity and satire.
Wilkins’s 17th-century quest, however quixotic, is startlingly relevant today. In order to help machines understand language, we use embeddings that represent words as vectors that encode meaning. Mapping concepts to points in Euclidean space seems no less ambitious than Wilkins’s quest for a universal language.
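The intuition behind embeddings can be illustrated with a minimal sketch. The vectors below are invented toy values, not outputs of any real model; the point is only that words with related meanings should sit closer together in the vector space, as measured by cosine similarity:

```python
import math

# Toy 3-dimensional "embeddings" -- illustrative values only, not from a real model.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: values near 1.0 mean similar direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "king" should be closer to "queen" than to "apple" in this toy space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Real embedding models work in hundreds or thousands of dimensions, but the geometric idea is the same: nearness in the space stands in for nearness in meaning.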
After all, what do we really mean when we say that two words or phrases are synonyms? Or that two search queries represent the same intent? Doesn’t all of our mathematics and engineering rely on the assumption that there is a ground truth or meaning at the foundation? Or is it turtles all the way down?
It’s easy for those of us studying AI today to imagine that we’re just starting this work, or at least that we don’t need to look back further than the work of John McCarthy and his colleagues in the 1950s.
But our quest to understand language and map it to meaning has a long and winding trajectory. Understanding the nature of language gets at the heart of understanding what makes us human.
Yet what was once the esoteric theoretical domain of philosophy and linguistics has become a practical matter as we develop applications for daily use that depend on natural language understanding.
Reflecting on that centuries-long trajectory is humbling, but it’s important to keep our current work in perspective. And, as we try to make sense of embeddings and other representations, we can take comfort in the observation that “Science doesn’t deliver even that much ‘truth’; it delivers empirically adequate generalisations, and that is all we need.” Indeed.