In 1668, John Wilkins published An Essay Towards a Real Character, and a Philosophical Language. In it, he proposed a universal language that would represent every concept with its own symbol.
Here is an excerpt from his treatise: …
Needless to say, Wilkins did not succeed in this monumental and aspirational task. Today, his treatise survives mainly as an object of curiosity and satire.
Wilkins’s 17th-century quest, however quixotic, is startlingly relevant today. To help machines understand language, we use embeddings that represent words as vectors that encode meaning. …
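To make the idea concrete, here is a toy sketch (not from the original post) of how vector similarity captures relatedness. The three-dimensional vectors below are invented for illustration; real embeddings like word2vec or GloVe have hundreds of dimensions, learned from corpus statistics rather than assigned by hand.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy vectors; real embeddings are learned, not hand-crafted.
king = [0.8, 0.6, 0.1]
queen = [0.7, 0.7, 0.2]
apple = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))  # high: related concepts
print(cosine_similarity(king, apple))  # lower: unrelated concepts
```

The point is that "meaning" becomes geometry: words with related meanings end up with vectors that point in similar directions.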
Most folks who work on search worry about relevance. But it’s surprisingly difficult to find a useful definition of relevance.
Merriam-Webster defines relevance as “the ability (as of an information retrieval system) to retrieve material that satisfies the needs of the user.”
William Goffman defines it as “a measure of information conveyed by a document relative to a query…[but] the relationship between the document and the query, though necessary, is not sufficient to determine relevance.”
These strike me less as definitions and more as an “I know it when I see it” standard. But they’ll have to do.
For feeds and recommendations, ranking is critical. All the inputs are implicit, so machine-learned ranking is the only practical way to optimize engagement.
A search engine elicits the searcher’s explicit intent, expressed as keywords, and this explicit intent is, by far, its most valuable input. Searchers, quite understandably, expect results that are relevant to their expressed intent. Ranking is still valuable, but it plays less of a role than for other applications.
Before a search engine retrieves and ranks results, query understanding maps the query to a representation of searcher intent. Good ranking depends on robust query understanding. …
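As an illustration only, here is a hypothetical sketch of that mapping, reducing query understanding to spelling correction and category tagging via dictionary lookups. Real systems use trained models for each step; every name and table below is invented.

```python
# Invented lookup tables standing in for trained models.
SPELL_FIXES = {"sneekers": "sneakers"}
CATEGORY_HINTS = {"sneakers": "footwear", "shoes": "footwear", "laptop": "electronics"}

def understand_query(query: str) -> dict:
    """Map a raw query string to a toy structured representation of intent."""
    tokens = [SPELL_FIXES.get(t, t) for t in query.lower().split()]
    categories = sorted({CATEGORY_HINTS[t] for t in tokens if t in CATEGORY_HINTS})
    return {"tokens": tokens, "categories": categories}

print(understand_query("red sneekers"))
# {'tokens': ['red', 'sneakers'], 'categories': ['footwear']}
```

Retrieval and ranking then work from this structured representation rather than from the raw string, which is why weaknesses in query understanding propagate downstream.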
In improv, there’s a fun game called “new choice”. At any point during the improvised scene, the host can call for a “new choice”. The player who last spoke has to backtrack and substitute a new line. At its best, the substitution takes the scene in a new, unexpected, and hilarious direction.
Interacting with a search engine or a digital assistant probably shouldn’t feel like watching — or participating in — improvisational comedy. But I often wish I could play “new choice” to nudge the information-seeking experience in a different direction.
In the past decade, the incredible progress in word embeddings and deep learning has fueled an interest in neural information retrieval. An increasing number of folks believe that it’s time to retire the traditional inverted indexes (aka posting lists) that search engines use for retrieval and ranking.
In its place, they advocate a model where search engines use neural networks to represent documents and queries as vectors, and then use nearest neighbor search — or more sophisticated ranking models — to retrieve and rank results.
This revolutionary approach is tempting, but — in my view — misdirected. …
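For concreteness, the vector-based retrieval model advocated above can be sketched as brute-force nearest-neighbor search over document embeddings. The vectors and document IDs here are invented; production systems use approximate nearest-neighbor indexes (e.g., HNSW) rather than exhaustive scans.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest_neighbors(query_vec, doc_vecs, k):
    """Score every document against the query vector; return the top k IDs."""
    scored = [(cosine(query_vec, vec), doc_id) for doc_id, vec in doc_vecs.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True)[:k]]

# Invented toy document embeddings.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.7, 0.3, 0.1],
    "doc_c": [0.0, 0.2, 0.9],
}
print(nearest_neighbors([1.0, 0.0, 0.0], docs, k=2))  # ['doc_a', 'doc_b']
```

Contrast this with an inverted index, which retrieves only documents containing the query terms; the vector model instead scores every document by geometric proximity.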
Recall measures the fraction of relevant results that are retrieved. Naturally, recall is correlated with the size of the result set. But we have to be careful not to overstate that correlation.
Consider the simplest case: a search that returns no results. A lack of results does not necessarily indicate a recall problem: there may simply be no results that relate to the searcher’s information need. Still, the upside from trying harder…
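Recall is straightforward to compute when relevance judgments are available. A minimal sketch, with invented document IDs; note the edge case where the set of relevant documents is empty, which mirrors the zero-results situation above.

```python
def recall(retrieved, relevant):
    """Fraction of relevant results that were retrieved."""
    relevant = set(relevant)
    if not relevant:
        return None  # undefined: there is nothing relevant to retrieve
    return len(set(retrieved) & relevant) / len(relevant)

relevant = {"d1", "d2", "d3", "d4"}
print(recall(["d1", "d2", "d9"], relevant))              # 0.5
print(recall(["d1", "d2", "d3", "d9", "d10"], relevant))  # 0.75
```

Note that retrieving more documents (the second call) raised recall, but at the cost of more irrelevant results in the set.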
In grade school, we were taught the three Rs: reading, writing, and ‘rithmetic. In search, we can be thankful that the three Rs actually start with the letter R: relevance, recall, and ranking.
Relevance is the prime directive of search: the guiding principle for a search engine is to return results that satisfy the searcher’s information need. That means understanding what the searcher wants and retrieving relevant results.
Achieving relevance is a trade-off between precision and recall. We’ll discuss recall in a moment, but precision is the measure that people associate most with relevance: the fraction of results that satisfy…
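The trade-off shows up clearly when we measure precision and recall at a cutoff k: deepening the result set tends to raise recall while diluting precision. A toy sketch with an invented ranking and invented relevance judgments:

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top k results that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant results found in the top k."""
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

ranked = ["d1", "d5", "d2", "d6", "d3"]  # invented ranked result list
relevant = {"d1", "d2", "d3"}            # invented judgments

for k in (1, 3, 5):
    print(k, precision_at_k(ranked, relevant, k), recall_at_k(ranked, relevant, k))
```

At k=1, precision is perfect but recall is low; at k=5, recall is perfect but precision has dropped. Tuning the result set size is choosing a point on that curve.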
Search developers tend to focus most of their efforts on the first page of results. As a result, they prioritize investment in ranking models, with the goal of improving quality and business metrics, such as relevance and conversion.
In information retrieval terms, this focus on the first page corresponds to an emphasis on precision, the fraction of results that are relevant. To be more precise — no pun intended — it corresponds to an emphasis on position-biased precision measures like discounted cumulative gain (DCG).
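DCG itself is a short formula: each result's graded relevance is discounted by the logarithm of its position, so relevant results near the top count for more. A minimal sketch using the common log2 discount (some variants use an exponential gain, 2^rel - 1, instead of raw relevance):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of graded relevances.

    Position i (0-indexed) is discounted by log2(i + 2), i.e., the
    first result is divided by log2(2) = 1, the second by log2(3), etc.
    """
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

# Same graded judgments, two different orderings.
print(dcg([3, 2, 0]))  # relevant results ranked first: higher DCG
print(dcg([0, 2, 3]))  # same results pushed down: lower DCG
```

Because the same set of results scores differently depending on order, DCG rewards ranking quality, not just retrieval quality, which is exactly the position bias described above.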
But precision isn’t the only measure of search quality. There’s also recall, which measures the fraction…
Francis Fukuyama, Barak Richman, and Ashish Goel recently published a piece in Foreign Affairs, which they ambitiously titled “How to Save Democracy From Technology: Ending Big Tech’s Information Monopoly”.
The gist of their proposal is to take away the role of giant platforms (i.e., Google, Facebook, Twitter) as gatekeepers of content by allowing users to choose from among middleware companies to manage information access. They see this approach as addressing the threat that the concentration of information platforms poses to democracy — and to society generally.
Ads? Hold That Thought…
I believe that this threat is compounded by an ad-supported…