New Choice
In improv, there’s a fun game called “new choice”. At any point during the improvised scene, the host can call for a “new choice”. The player who last spoke has to backtrack and substitute a new line. At its best, the substitution takes the scene in a new, unexpected, and hilarious direction.
Interacting with a search engine or a digital assistant probably shouldn’t feel like watching — or participating in — improvisational comedy. But I often wish I could play “new choice” to nudge the information-seeking experience in a different direction.
Relevance Feedback
The idea of searchers providing interactive feedback to guide search engines is an old one in information retrieval: it’s called explicit relevance feedback. Unfortunately, it’s seen much less adoption than its non-interactive cousins: implicit relevance feedback, which infers relevance judgements from behavior, and pseudo-relevance feedback, which simply assumes that the top-ranked results are relevant and feeds them back into a second round of retrieval. Explicit relevance feedback — and human–computer information retrieval in general — tends to get short shrift from search researchers and practitioners as being too complex for both systems and searchers to handle.
But there’s really no substitute for explicit feedback. Inferring relevance judgements from behavior is at best delayed — by the time you’ve abandoned the search session, it’s a bit late to ask “wait, wait, why didn’t you click on that?” — and at worst misleading, since the reasons for engagement or lack thereof may have nothing to do with relevance. Pseudo-relevance feedback, despite being surprisingly effective, does not take any input from the searcher, but rather doubles down on the relevance model — as well as its biases. Only explicit feedback takes real — and real-time — input from the searcher and creates the opportunity for the search engine to act directly on that input.
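To make the pseudo-relevance feedback cousin concrete, here is a minimal sketch in the classic Rocchio style. Everything about the representation is an illustrative assumption — toy bag-of-words term vectors, dot-product scoring, and made-up weights — not how any particular search engine works:

```python
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector for a document or query."""
    return Counter(text.lower().split())

def score(query_vec, doc_vec):
    """Dot-product similarity between sparse term vectors."""
    return sum(weight * doc_vec[term] for term, weight in query_vec.items())

def pseudo_relevance_feedback(query, docs, k=2, alpha=1.0, beta=0.5):
    """Rocchio-style update: assume the top-k results are relevant,
    blend their centroid into the query vector, then re-rank."""
    q = tf_vector(query)
    ranked = sorted(docs, key=lambda d: score(q, tf_vector(d)), reverse=True)
    expanded = Counter({term: alpha * weight for term, weight in q.items()})
    for doc in ranked[:k]:
        for term, freq in tf_vector(doc).items():
            expanded[term] += beta * freq / k
    return sorted(docs, key=lambda d: score(expanded, tf_vector(d)), reverse=True)

docs = [
    "jaguar wild cat animal cat",
    "jaguar car engine speed",
    "wild cat animal pictures",
    "car dealership sales",
]
# After expansion, the "wild cat animal pictures" document outranks the
# car document, even though it never mentions "jaguar".
reranked = pseudo_relevance_feedback("jaguar animal", docs, k=1)
```

Note how the expanded query inherits terms like “cat” from the assumed-relevant top result — which is exactly how pseudo-relevance feedback doubles down on the relevance model and its biases, for better or worse.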
Usability Challenges
As noted above, many people dismiss explicit relevance feedback as being too complex. Indeed, if we want search interfaces to allow searchers to provide explicit relevance feedback, we have to address several usability challenges:
- Asymmetric feedback. When searchers find what they are looking for, they’re unlikely to make an extra effort to provide explicit feedback. On the other hand, when searchers can’t find what they want, they may be more inclined to provide feedback, especially if it leads to different results.
- Underspecified feedback. When searchers provide explicit feedback about a result, it’s likely to be binary, e.g., thumbs up or down, or perhaps swiping left or right. Unfortunately, binary feedback doesn’t indicate what a searcher liked or didn’t like about the result. Since obtaining non-binary feedback (e.g., asking why) adds complexity to the interface, it may be better to collect multiple data points and infer a pattern from them.
- Managing expectations. When searchers provide explicit feedback, they expect the search engine to listen to them. Specifically, they expect the search engine to immediately act on their feedback, in the context of their current search session. Many search engines are unable to incorporate this real-time feedback; other search engines may overreact to it.
These are some of the key usability challenges we have to address in order to make use of explicit relevance feedback.
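As an illustration of inferring a pattern from multiple binary data points, here is one hypothetical approach: tally which terms show up in liked versus disliked results. The helper name and the term-level representation are assumptions for the sketch, not a prescription:

```python
from collections import Counter

def infer_preferences(judgments):
    """Infer a term-level preference profile from binary judgments.

    `judgments` is a list of (result_text, liked) pairs. Terms appearing
    more often in liked results than disliked ones get positive weight,
    and vice versa -- a pattern inferred from many simple signals
    rather than from one rich "why" explanation.
    """
    profile = Counter()
    for text, liked in judgments:
        sign = 1 if liked else -1
        for term in set(text.lower().split()):
            profile[term] += sign
    return profile

judgments = [
    ("jaguar car engine", False),
    ("jaguar wild cat", True),
    ("jaguar sports car", False),
]
# The profile reveals that "car" results are the problem, even though
# no individual thumbs-down said why.
profile = infer_preferences(judgments)
```

No single thumbs-down explains itself, but three of them together start to.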
Baby Steps
It’s difficult to introduce searchers to new search interfaces, especially interfaces that ask searchers for more input rather than less. So how do we incubate explicit relevance feedback into the search experience?
Here are some ideas that directly address the above usability challenges:
- Embrace diversity. Searchers are more able and willing to provide binary feedback (e.g., “more like this”, “less like this”) when they see a diversity of results. Indeed, explicit relevance feedback works best for broad and ambiguous queries, so make sure to convey that breadth or ambiguity. That way, searchers have a sense of the intent space they are navigating.
- Get lots of binary feedback. It’s tempting to ask searchers for rich feedback, but it’s better to ask simple questions — lots of them. Just as an eye exam arrives at a precise prescription through a series of binary questions, we can use binary questions to solicit preferences. The tricky part is including enough redundancy to establish confidence in a pattern, while including enough variety to cover the preference space — all while not wearing out searchers’ patience. But let’s at least keep it simple!
- Do something. When searchers provide feedback, they expect something to happen — preferably immediately. Negative feedback on a result should, at the very least, remove that result from view. After sufficient feedback, searchers should see changes to the results that reflect their input. The interface needs to set expectations about how much feedback is sufficient, e.g., 5 to 10 judgments, and then meet those expectations with action.
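The three ideas above can be sketched together in a toy feedback session. Everything here is an illustrative assumption (results as plain strings, term-overlap scoring, a `MIN_JUDGMENTS` threshold of five); the point is the behavior: negative feedback removes a result immediately, and enough judgments trigger a re-rank:

```python
class FeedbackSession:
    """Toy session that acts on explicit feedback in real time."""

    MIN_JUDGMENTS = 5  # assumed threshold: "5 to 10 judgments"

    def __init__(self, results):
        self.results = list(results)
        self.judgments = []  # (result, liked) pairs

    def give_feedback(self, result, liked):
        self.judgments.append((result, liked))
        if not liked:
            # Negative feedback at least removes the result from view.
            self.results = [r for r in self.results if r != result]
        if len(self.judgments) >= self.MIN_JUDGMENTS:
            self._rerank()

    def _rerank(self):
        # Term profile: +1 for terms in liked results, -1 for disliked.
        profile = {}
        for text, liked in self.judgments:
            for term in set(text.lower().split()):
                profile[term] = profile.get(term, 0) + (1 if liked else -1)
        self.results.sort(
            key=lambda r: sum(profile.get(t, 0) for t in set(r.lower().split())),
            reverse=True,
        )
```

The design choice worth noticing is that the interface commits to a visible action at both moments: instantly on a thumbs-down, and predictably once the promised number of judgments is in.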
Search Is a Communication Process
Explicit relevance feedback is not an end in itself, but rather a means to help searchers better communicate their search intent. Done right, explicit relevance feedback improves that communication process. Researchers and practitioners raise reasonable objections about its complexity, but there’s really no substitute for explicit feedback. So let’s at least take baby steps to make it happen. Because searchers deserve a new choice!