Interview Questions for Search Relevance Engineers, Data Scientists, and Product Managers

Daniel Tunkelang
4 min read · Jul 19, 2018

Technical interviewing is hard, and it’s a subject I’m personally passionate about.

But this post isn’t about technical interviewing in general. It’s specifically about questions I’ve used to interview candidates for search relevance positions — which of course includes query understanding. If you’re hiring people for these positions, I encourage you to keep reading!

Search Metrics

Anyone who works on search relevance problems — whether they’re training machine-learned ranking systems or designing search user interfaces — should have an informed perspective on metrics. As Lord Kelvin said, “If you can not measure it, you can not improve it.” Search engineers and product managers should always define metrics to evaluate the systems they build. At the same time, they should be mindful of George Box’s admonition that “all models are wrong, but some are useful.”

Here are some search metrics questions that I’ve used:

  • What are the trade-offs between using click-through rate (CTR) and conversion rate as search success metrics?
  • How would you expect improving snippet quality to affect your metrics? And how would you go about measuring snippet quality?
  • Describe a situation where a change might increase the mean reciprocal rank (MRR) of click positions but also increase search abandonment. (Both metrics are computed in the sketch after this list.)
  • How do you use searcher behavior to measure a system that offers both autocomplete and a traditional search results page?
  • What are reasons to use explicit human relevance judgments to measure search quality, rather than just measuring behavior? What are the downsides?
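
To make the MRR and abandonment questions concrete, here’s a minimal sketch in Python. It assumes a made-up log format in which each search session is just the list of clicked result positions (1-based), with an empty list meaning the searcher abandoned:

    def mean_reciprocal_rank(sessions):
        """Average of 1/rank of the first click; abandoned sessions score 0."""
        scores = [1.0 / clicks[0] if clicks else 0.0 for clicks in sessions]
        return sum(scores) / len(sessions)

    def abandonment_rate(sessions):
        """Fraction of sessions with no clicks at all."""
        return sum(1 for clicks in sessions if not clicks) / len(sessions)

    # Hypothetical log: three sessions with clicks, one abandonment.
    sessions = [[1], [3, 5], [1, 2], []]
    print(mean_reciprocal_rank(sessions))  # 0.5833...
    print(abandonment_rate(sessions))      # 0.25

A change that pulls clicks to the top position on some queries while leaving other queries with no clicks at all would push both numbers up at once.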

None of these questions have a single right answer — they’re open-ended enough to give candidates a chance to use what they know. Still, you should expect candidates to go down some familiar paths. For example, a candidate should recognize that conversions are sparser than clicks, but that spending money is generally a stronger relevance signal than just clicking on a result. Indeed, the questions encourage candidates to work through trade-offs.
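
For instance, a back-of-the-envelope calculation (with made-up rates, treating each impression as an independent trial) shows why the sparser conversion signal is noisier relative to its own magnitude:

    import math

    def relative_std_error(p, n):
        """Standard error of a binomial rate estimate, relative to the rate."""
        return math.sqrt(p * (1 - p) / n) / p

    impressions = 10_000
    ctr = 0.30          # hypothetical click-through rate
    conversion = 0.015  # hypothetical conversion rate

    print(f"{relative_std_error(ctr, impressions):.1%}")         # ~1.5%
    print(f"{relative_std_error(conversion, impressions):.1%}")  # ~8.1%

At the same traffic level, the conversion estimate carries roughly five times the relative uncertainty, which is why experiments that target conversion metrics need more traffic or longer runs.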

Also, while these questions are intended for experienced candidates, you can use them to see how inexperienced candidates reason about search metrics from first principles.

Working with Text

The future of search may be image and video — or even mind reading — but the present is all about text. For most of us, building effective search systems means making sense of the text in documents (or product descriptions) and search queries.

Software engineers working on search should be comfortable processing text at many levels — characters, tokens, phrases, etc. Search product managers, while hardly expected to implement text processing algorithms, should still be familiar with the state of the art and be able to understand and articulate the trade-offs among techniques.

Here are some questions that I’ve used that relate to working with text:

  • How do you determine whether a bigram (or, more generally, an n-gram) represents a single concept? (One common approach is sketched after this list.)
  • How do you determine whether two words or phrases are synonyms? In general, how do you find relationships to use for query expansion?
  • How do you implement techniques like stemming and lemmatization, and what trade-offs do you have to manage? (See the second sketch after this list.)
  • How do you handle the tokenization challenges of phone numbers, part numbers, etc., where the spacing searchers use may be different from what’s in the document collection? (cf. this blog post, and see the sketch at the end of this section)
  • What are the benefits and drawbacks of replacing a traditional keyword index with one that embeds all of the documents as vectors?
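
For the bigram question, one familiar path is pointwise mutual information (PMI): if two words co-occur far more often than their individual frequencies predict, the bigram is a good candidate for a single concept. Here’s a minimal sketch, assuming you already have unigram and bigram counts from your collection (the counts below are invented):

    import math

    def pmi(bigram_count, count_w1, count_w2, total_tokens):
        """Pointwise mutual information of a bigram, given corpus counts.
        High PMI means the words co-occur far more than chance predicts."""
        p_bigram = bigram_count / total_tokens
        p_w1 = count_w1 / total_tokens
        p_w2 = count_w2 / total_tokens
        return math.log2(p_bigram / (p_w1 * p_w2))

    # A tight collocation: the words almost always appear together.
    print(pmi(500, 600, 700, 1_000_000))      # ~10.2
    # Two frequent words that rarely co-occur.
    print(pmi(5, 20_000, 30_000, 1_000_000))  # ~-6.9

In practice you’d also apply a frequency threshold, since PMI is unstable for rare events.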
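For the stemming and lemmatization question, a small NLTK sketch (assuming the WordNet data has been downloaded) makes the trade-offs visible: stemming is fast and needs no vocabulary but produces non-words and misses irregular forms, while lemmatization needs part-of-speech information and a dictionary:

    import nltk
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    nltk.download("wordnet", quiet=True)  # the lemmatizer needs WordNet data

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    for word, pos in [("studies", "n"), ("ran", "v"), ("relevance", "n")]:
        print(word, stemmer.stem(word), lemmatizer.lemmatize(word, pos=pos))
    # studies -> studi / study        (the stemmer produces a non-word)
    # ran -> ran / run                (the stemmer misses the irregular verb)
    # relevance -> relev / relevance  (aggressive stemming can conflate terms)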

Again, these are open-ended questions, allowing candidates to showcase what they know. A software engineer or data scientist should be able to answer them with technical depth, while a product manager should be able to frame the problems, work through options, and relate them to broader product concerns.
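
To make the tokenization question concrete: one common tactic is to normalize both queries and documents into a canonical form, so that differently spaced variants match. Here’s a minimal sketch for phone-number-like spans; the pattern is illustrative rather than production-grade (a real system would handle country codes, and a library such as phonenumbers would be a better starting point):

    import re

    # A span of 7-15 digits, possibly broken up by spaces, hyphens, dots,
    # or parentheses: "(415) 555-0123", "415.555.0123", "4155550123".
    PHONE = re.compile(r"(?<!\d)\(?(?:\d[\s\-.()]*){6,14}\d(?!\d)")

    def normalize_phones(text):
        """Collapse phone-number-like spans to bare digits. Applying this
        at both index time and query time makes the variants match."""
        return PHONE.sub(lambda m: re.sub(r"\D", "", m.group(0)), text)

    print(normalize_phones("Call (415) 555-0123 today"))  # Call 4155550123 today
    print(normalize_phones("call 415 555 0123"))          # call 4155550123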

Computing Relevance

Search relevance engineers, data scientists, and product managers are responsible for delivering relevant search results to users. As such, they should be familiar with general techniques for determining which results are relevant.

Here are some questions that I’ve used that relate to computing relevance:

  • Give some examples of query-dependent vs. query-independent search relevance factors, and discuss the implications for computation.
  • Describe and compare two ways (e.g., a simple heuristic and a principled approach) to incorporate query expansion into a relevance model.
  • How can you train a machine-learned ranking model for search? Elaborate on the differences between two approaches, e.g., pointwise and pairwise. (The pairwise reduction is sketched after this list.)
  • Discuss differences between using filtering and ranking to deliver search relevance, as well as the implications for evaluation and training models.
  • How can sorting search results by relevance lead to a lack of diversity in the results? What techniques can you use to ensure result diversity? (One standard technique is sketched at the end of this section.)
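
To illustrate the ranking question, here’s a minimal sketch of the pairwise reduction, using scikit-learn with invented features and labels: ranking becomes binary classification on feature differences between more- and less-relevant results for the same query:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-result features for one query, e.g.
    # [text match score, historical click rate, freshness].
    X = np.array([[2.1, 0.30, 0.9],
                  [1.5, 0.10, 0.2],
                  [0.4, 0.05, 0.7],
                  [2.8, 0.25, 0.1]])
    y = np.array([2, 1, 0, 2])  # graded relevance labels

    # For each pair with different grades, train on the feature difference,
    # labeled by which result is better.
    pairs, labels = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if y[i] > y[j]:
                pairs.append(X[i] - X[j]); labels.append(1)
                pairs.append(X[j] - X[i]); labels.append(0)

    model = LogisticRegression().fit(np.array(pairs), np.array(labels))

    # The learned weights define a scoring function; rank by descending score.
    scores = X @ model.coef_.ravel()
    print(np.argsort(-scores))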

How candidates answer these questions often reflects the specific search problems they’ve worked on, e.g., people who have worked on product search are likely to come up with different relevance factors than those who have worked on web search. That’s fine — the point is to allow candidates to use what they know, demonstrating their depth and ability to reason about problems in the space.

And again, while these questions are designed for experienced candidates, inexperienced candidates should still be able to come up with reasonable answers by working from first principles.
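
As one concrete answer to the diversity question above, here’s a minimal sketch of maximal marginal relevance (MMR), which greedily balances each result’s relevance against its similarity to results already selected. The scores and similarity function are invented:

    def mmr(candidates, relevance, similarity, k, lam=0.7):
        """Greedy MMR re-ranking. lam=1.0 is pure relevance, 0.0 pure diversity."""
        selected, remaining = [], list(candidates)
        while remaining and len(selected) < k:
            def marginal(c):
                max_sim = max((similarity(c, s) for s in selected), default=0.0)
                return lam * relevance[c] - (1 - lam) * max_sim
            best = max(remaining, key=marginal)
            selected.append(best)
            remaining.remove(best)
        return selected

    # Hypothetical data: results "b" and "c" are near-duplicates.
    rel = {"a": 0.9, "b": 0.85, "c": 0.84, "d": 0.5}
    sim = lambda x, y: 0.95 if {x, y} == {"b", "c"} else 0.1
    print(mmr(["a", "b", "c", "d"], rel, sim, k=3))  # ['a', 'b', 'd']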

Summary

Search relevance is a deep problem space, and it’s important that people working on it demonstrate a general aptitude to navigate that space. In this post, I’ve touched on three broad areas: metrics, text, and computing relevance. The above list of questions isn’t intended to be comprehensive, but I hope it gives you a good idea of how to assess search relevance engineers, data scientists, and product managers. Good luck, and happy hiring!
