Much of the innovation in information retrieval and recommender systems focuses on better targeting in order to improve relevance. Those of us who develop such systems often take for granted that the axiomatic goal of relevance is to deliver the right content to the right people at the right time. We debate the means to achieve that goal, but we rarely debate the goal itself.
But we do need to question the unstated assumption that relevance should be our overarching goal. Optimizing for relevance can lead us to dark places.
Let’s consider ad targeting on Facebook, which has been in the news of late. Last year, civil rights groups sued Facebook for providing self-service tools that allowed advertisers to target housing and employment ads based on a variety of user attributes that included gender, age, and race. More recently, a research study demonstrated that, even when advertisers did not explicitly target specific demographic groups, Facebook’s own algorithms implemented a targeting strategy that reflected stereotypes.
Do Facebook and its advertisers intentionally engage in discrimination? It’s possible, but I doubt it — or at least I doubt that intentional discrimination is the main factor at play. Advertisers seek to maximize the return on their investment, which they believe they can achieve by optimizing for relevance. As a result, their relevance models, whether hand-tuned or machine-learned, aim to maximize click-through or conversion rate.
But regardless of their creators’ intentions, relevance-optimized models can both reflect and perpetuate stereotypes. For example, once a model “decides” that women are less likely than men to click on job ads for police officers — which may indeed be the case — the model becomes less likely to show those ads to women, which in turn makes those opportunities less accessible to women. People can still discover the opportunities without the benefit of the ads, but targeted advertising nonetheless contributes to discrimination.
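The feedback loop described above can be made concrete with a small, deliberately simplified sketch. Everything here is a hypothetical illustration — the threshold, the group names, and all the numbers are assumptions, not data from any real system. The model serves an ad only when its predicted click-through rate clears a threshold; a group whose (stale, historically biased) estimate falls below that threshold never generates the clicks that would correct the estimate:

```python
# Hypothetical sketch of a self-sealing relevance model. All numbers are
# illustrative assumptions, not real measurements.

THRESHOLD = 0.04  # minimum predicted CTR required to serve the ad

# Learned from biased historical data: women appeared less likely to click.
predicted_ctr = {"men": 0.05, "women": 0.03}
true_ctr = {"men": 0.05, "women": 0.05}  # actual interest is equal today

impressions = {"men": 0, "women": 0}
for _ in range(1000):
    for group, p in predicted_ctr.items():
        if p >= THRESHOLD:  # the ad is served only above the threshold
            impressions[group] += 1
            # Served groups generate data, so their estimate tracks reality...
            predicted_ctr[group] += 0.1 * (true_ctr[group] - p)
        # ...but an unserved group produces no data, and its too-low
        # estimate is never updated: the stereotype seals itself in.

print(impressions)  # → {'men': 1000, 'women': 0}
```

The point of the sketch is that no one coded discrimination into the loop; the exclusion falls out of optimizing a click estimate against its own past outputs.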
Not Just Ads
Targeting doesn’t just take place in the context of advertising. Search engines and other recommender systems increasingly look to personalization as a way to improve relevance. Personalization often means that the user’s gender, age, and race — or proxies for them — serve as inputs to the relevance model.
Consider a search for “shoes” or “pants” on an ecommerce site. A search engine that knows the searcher’s gender or age can use this knowledge to deliver more relevant results for broad queries like these.
But it’s easy to slide down a slippery slope to amplifying stereotypes. For example, what if someone searches for “toys”? A personalized relevance model might favor stereotypically masculine and feminine toys, depending on the searcher’s gender — especially if results that align with those stereotypes tend to lead to higher click-through or conversion rates. Similarly, personalized relevance on a job search site could promote the same kinds of stereotypes as Facebook’s targeted advertising.
A Conflict of Objectives
By design, relevance models optimize for return on investment. Advertisers want to maximize the number of clicks or conversions per dollar spent. Search engines and recommender systems have similar objectives.
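In advertising, that ROI objective is commonly operationalized as ranking candidate ads by expected value per impression — roughly, bid times predicted click-through rate. A minimal sketch, with hypothetical bids and CTR estimates:

```python
# Hypothetical sketch: ranking candidate ads by expected value per
# impression (bid x predicted CTR). All bids and CTR estimates below are
# illustrative assumptions.

ads = [
    {"name": "ad_a", "bid": 2.00, "pctr": 0.010},  # expected value 0.020
    {"name": "ad_b", "bid": 0.50, "pctr": 0.050},  # expected value 0.025
    {"name": "ad_c", "bid": 1.00, "pctr": 0.020},  # expected value 0.020
]

# Rank by expected value. Note that the one term the system personalizes
# per user -- the CTR estimate -- directly moves an ad up or down.
ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["pctr"], reverse=True)
print([ad["name"] for ad in ranked])  # → ['ad_b', 'ad_a', 'ad_c']
```

Because the personalized CTR estimate multiplies directly into the ranking score, any demographic signal the model absorbs — explicit or proxied — flows straight into who sees what.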
In the best cases, these models also optimize for user happiness. That makes relevance sound like the right goal — after all, what user wants to see content that isn’t relevant to their interests?
Relevance models, however, have an impact that extends beyond users’ immediate interactions with them. Because we constantly interact with advertising, search, and recommender systems, these models increasingly act as filters that shape and mediate our access to the world around us.
For a single interaction, relevance is an individual concern. However, the aggregate effect of all of our interactions with relevance models is a collective concern. Our relevance models play an increasingly important role in shaping our society. Specifically, by optimizing for click-through and conversion rate, relevance models tend to reinforce and amplify social problems.
No Easy Answers
It’s hard to blame people or algorithms for pursuing relevance as an objective. Advertisers have good reasons to want to maximize the return on their financial investment, and the developers of search engines and recommender systems seek to make the most of their opportunities to attract user engagement. Moreover, many of the creators of these systems truly aim to align their benefits with those of their users.
But when doing so harms the overall ecosystem, it’s a problem. As a society, we risk sliding down the gradient and trapping ourselves in local optima — specifically, in a place where demographics determine opportunity. That is simply unacceptable in a society that values inclusion.
Those of us who build these systems need to think beyond the immediate optimization of our relevance models. We must consider the ecological costs. Sometimes, we have to make decisions that sacrifice our individual return on investment to ensure a healthy world for everyone.
Because, like freedom, inclusivity isn’t free.