What is Searcher Happiness?

Daniel Tunkelang
Jul 24, 2023

As a search specialist, my main goal is to improve searcher happiness. Sure, companies pay me to improve business metrics like conversion rate and revenue, but I mostly accomplish this by helping searchers find what they are looking for, and, in the case of ecommerce, buy it.

But what is searcher happiness? How do we define it, let alone measure it?

Life would be easier if searchers were rational.

Historically, information retrieval has embraced the probability ranking principle, which essentially tells us that a search application should return results in decreasing order of relevance. Since ranking is not the same as relevance, I will take the liberty of interpreting the probability ranking principle as telling us to return results in decreasing order of utility, or risk-adjusted expected utility, to be pedantic.
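In code, the probability ranking principle reduces to a simple sort. Here is a minimal sketch, where the candidate results and their utility scores are purely hypothetical:

```python
# Probability ranking principle, sketched: return results in decreasing
# order of their estimated utility scores.

def rank_by_utility(results):
    """Sort (doc_id, score) pairs in decreasing order of score."""
    return sorted(results, key=lambda pair: pair[1], reverse=True)

candidates = [("doc_a", 0.42), ("doc_b", 0.91), ("doc_c", 0.67)]
ranked = rank_by_utility(candidates)
# Highest-scoring result first: doc_b, then doc_c, then doc_a
```

Everything interesting, of course, hides inside the scores themselves.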

The probability ranking principle has an appealing theoretical simplicity, and it is grounded in classical rationality. Unfortunately, we humans are neither that simple nor that rational. Research on behavioral economics by economists and psychologists like Herb Simon, Amos Tversky, and Daniel Kahneman shows that we do not make decisions by assigning a utility to each choice independently and picking the choice with the highest utility. Instead, our choices are highly subject to framing and other considerations that violate classical rationality.

Most choices involve tradeoffs, and search applications encode those tradeoffs in the way they combine factors in the scoring functions they use for ranking. If, however, each result independently receives a score corresponding to its utility, then the search application is implicitly assuming that searchers are grounded in classical rationality. And, indeed, that is how nearly all search applications implement ranking.
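A typical encoding of those tradeoffs is a weighted combination of factors, scoring each result independently. The factor names and weights below are illustrative assumptions, not taken from any particular search engine:

```python
# Hypothetical scoring function: each result gets an independent utility
# score as a weighted sum of its factors. The weights encode the tradeoffs.

WEIGHTS = {"relevance": 0.6, "popularity": 0.3, "price_value": 0.1}

def score(result):
    """Combine a result's factors into a single utility score."""
    return sum(WEIGHTS[factor] * result[factor] for factor in WEIGHTS)

item = {"relevance": 0.9, "popularity": 0.5, "price_value": 0.2}
# score(item) = 0.6*0.9 + 0.3*0.5 + 0.1*0.2 = 0.71
```

Note that the score depends only on the result itself, not on the other choices on the page, which is exactly the assumption of classical rationality.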

Searchers are human, and humans exhibit heuristics and biases.

On the whole, we humans are not good at managing tradeoffs, and we are not particularly rational about it. Presented with a variety of competing factors, we use heuristics — often unconsciously — to reduce our cognitive load. We often apply the “fast and frugal” heuristic of picking the single most important factor and ignoring the rest. In an ecommerce setting, that can mean evaluating search results solely based on their price or popularity — assuming that the results meet the threshold of being relevant.
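That heuristic is easy to model: filter to results that clear a relevance threshold, then rank on a single factor and ignore everything else. The field names and threshold in this sketch are hypothetical:

```python
# "Fast and frugal" heuristic, sketched: keep only results that are
# relevant enough, then decide on a single factor (here, price),
# ignoring all other tradeoffs.

def fast_and_frugal(results, relevance_threshold=0.5):
    relevant = [r for r in results if r["relevance"] >= relevance_threshold]
    return sorted(relevant, key=lambda r: r["price"])

catalog = [
    {"id": "a", "relevance": 0.9, "price": 30.0},
    {"id": "b", "relevance": 0.4, "price": 10.0},  # cheap, but not relevant enough
    {"id": "c", "relevance": 0.7, "price": 20.0},
]
# Result order: c (cheaper), then a; b is filtered out despite its low price
```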

But even deciding which single factor is the most important can be a challenge for searchers, and it can depend on how the choices are framed. Indeed, an analysis of preference reversals showed that the importance people give to a factor can depend on whether they are evaluating each choice in isolation or comparing choices to one another. Rather than arriving at a choice with predefined preferences, we construct preferences in response to the set of available choices and tradeoffs.

Searchers do not need to make the “best” choice to be happy.

Given how humans make decisions, it is not clear that there is always a “best” search result for the searcher. But perhaps search applications should not even be trying to guide searchers to a best result. After all, their real goal is to optimize for searcher happiness — that is, to help searchers feel happy about the choices they ultimately make.

In 2006, Harr Chen and David Karger wrote “Less is More”, in which they explored alternatives to the probability ranking principle. It is a fascinating paper that I continue to recommend to anyone working on search applications. It proposes “k-call at n”, a binary metric representing whether at least k of the top n results are relevant. A key takeaway from the paper is that different searcher needs call for different values of k and n. For example, 10-call at 10 represents the need for perfect precision in the top 10 results, while 1-call at 10 represents the need for at least 1 relevant result in the top 10.
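The metric itself is tiny. A sketch, assuming binary relevance judgments listed in rank order:

```python
# k-call at n, as described in Chen and Karger's "Less is More":
# 1 if at least k of the top n results are relevant, else 0.

def k_call_at_n(relevance_judgments, k, n):
    """relevance_judgments: booleans for results in rank order."""
    return 1 if sum(relevance_judgments[:n]) >= k else 0

judgments = [True, False, True, True, False, False, True, False, False, False]
# 1-call at 10: at least one relevant result in the top 10, so the metric is 1
# 10-call at 10: demands all top-10 results be relevant, so here it is 0
```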

After reading the paper, I approached the authors with a suggestion: had they considered adapting their work to reflect what behavioral economics tells us about searchers? For example, searchers might be happiest if the search application presents a handful of choices followed by clearly irrelevant results, so that searchers feel like they are making a rational decision among a few good alternatives, but do not have to exert too much effort in the process, or feel regret about the choices they do not make.

One author’s reply was “I’m a computer scientist, not a psychologist!” or something to that effect. To the best of my knowledge, no one has explicitly pursued this approach to improve searcher happiness — though I suspect that a number of search application developers may have stumbled into it in their attempts to address search result diversity.


The human beings who use search applications are complicated and irrational. The probability ranking principle, despite its appealing theoretical simplicity and its grounding in classical rationality, is a poor fit to the complex, irrational humanity of searchers. Improving searcher happiness does not always mean maximizing the utility of results. It may not even be possible to consistently define that utility. So I urge my fellow search application developers to keep an open mind.