Hallucinating a Post-Search World

Daniel Tunkelang
5 min read · Aug 30, 2024


When I first heard about 3D printing, I imagined something like a Star Trek replicator that could synthesize arbitrary objects — or at least meals — on demand. While 3D printing is a valuable and promising technology, the reality is more mundane. For the foreseeable future, we will continue purchasing manufactured products rather than printing them on demand.

Digital 3D Printing

That brings us to generative AI and search. Generative AI promises to synthesize arbitrary responses on demand, while search can only retrieve already-created documents or products from an index. How far can we take this digital 3D printing analogy?

Like 3D printing, generative AI depends on raw materials to synthesize its results. However, its raw materials are digital: it uses massive document collections to train a large language model (LLM) and documents retrieved at query time to augment the model’s knowledge.
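
To make that concrete, here is a minimal sketch of the retrieval-augmented pattern in Python. The document collection, the naive keyword retriever, and the generate() stub are illustrative placeholders rather than any particular product's API.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve documents relevant to the query, then hand them to a
# generative model as context. generate() is a stub standing in for
# whatever LLM API an application would actually call.

from typing import List

DOCUMENTS = [
    "3D printing builds objects layer by layer from a digital model.",
    "Large language models are trained on massive document collections.",
    "Search engines retrieve already-indexed documents at query time.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Naive keyword retriever: rank documents by query-term overlap."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))[:k]

def generate(query: str, context: List[str]) -> str:
    """Stub for an LLM call: a real system would prompt a model with the
    query plus the retrieved context and return the model's completion."""
    prompt = f"Answer '{query}' using:\n" + "\n".join(context)
    return prompt  # placeholder: the model's synthesized answer would go here

query = "how are language models trained?"
print(generate(query, retrieve(query, DOCUMENTS)))
```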

Generative AI produces results by excerpting, aggregating, combining, and generalizing those materials. Unlike 3D printing, where the path from input to output is essentially deterministic, generative AI transforms its inputs through a stochastic black box. This difference is what leads us to see generative AI as creative, but it also creates the challenge of hallucination.

Made to Order

The exciting value proposition of generative AI is addressing unique, unforeseen needs by constructing responses on demand. In contrast, search applications only produce previously indexed content as results.

The difference in practice varies from evolutionary to revolutionary. Generative AI can answer questions by extracting a short passage from a document and modifying it cosmetically, but the result is almost the same as returning a search result snippet. However, generative AI has more ambitious applications, such as abstractive summarization and code generation. Generative AI can even appear to perform reasoning, or at least deliver a persuasive simulation.

In any case, the defining characteristic of generative AI is that, like 3D printing, it creates responses on demand. It may not meet its overhyped expectations, but it is still very cool.

Economy of Scale

The downside of generating responses on demand is that it sacrifices the benefit of an economy of scale. Mass production is more cost-efficient than producing a comparable number of bespoke products. Digital production is less expensive than manufacturing physical objects, but generating a unique document at query time is still much more expensive than retrieval.

Generative AI does achieve some economy of scale. Training an AI model costs millions of dollars while using the model (i.e., inference) costs pennies per request. However, these pennies add up and eventually exceed the cost of training if a model is used at a sufficient scale. Moreover, inference is far more expensive than retrieving a document from storage.
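
To see how quickly those pennies add up, consider a back-of-the-envelope calculation. The figures below are illustrative assumptions, not measured costs:

```python
# Back-of-the-envelope comparison of a one-time training cost with
# per-request serving costs. All figures are illustrative assumptions.

training_cost = 5_000_000            # dollars, one-time
inference_cost_per_request = 0.01    # dollars per generated response
retrieval_cost_per_request = 0.0001  # dollars per retrieved result

# Requests at which cumulative inference spending matches the training cost.
break_even = training_cost / inference_cost_per_request
print(f"Inference spending reaches the training cost after {break_even:,.0f} requests.")

# Plain retrieval at the same volume costs orders of magnitude less.
print(f"Retrieval at that volume would cost about ${break_even * retrieval_cost_per_request:,.0f}.")
```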

Generative AI is becoming less expensive, as the cost of inference rapidly decreases through the use of smaller models and more efficient hardware. It is hard to imagine, however, that it will become so cost-efficient as to be practically indistinguishable from the much lower cost of retrieval.

Cost-Benefit Analysis

We have established that generative AI is more expensive than traditional search, much as 3D printing is more expensive than traditional mass production. But what about its benefits? Do they justify the cost?

E-commerce has attracted intense interest from search application developers because improvements in search translate directly into better user and business outcomes. After all, to buy things, people need to be able to find them. Can generative AI meaningfully improve on search as a way to enable shopping experiences?

In today’s e-commerce search applications, most queries are short, containing one to three words. These are mostly entities like product types and brands. Even if searchers prefer to engage using conversational natural language, that does not make their search intents more complex. Search applications do not have to try that hard to address simple search intents. If generative AI is to improve on e-commerce search applications, it will most likely be through use cases adjacent to product search, such as asking questions about a product, comparing products, or performing new kinds of searches, e.g., is this or a similar product available in a particular color?

What about other application domains? Searchers often want information contained in documents rather than the documents themselves. As we noted earlier, extracting a passage from a document is essentially the same as returning a search result snippet. However, summarization can be a meaningful improvement over search, especially if it synthesizes information from multiple documents. Better yet, generative AI can connect information across documents, similar to how relational databases join information from different tables. While some search platforms have limited join capabilities, they cannot reliably perform semantic joins.
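
As a rough sketch of what such a semantic join might look like, imagine an application that retrieves two documents that never mention each other and asks a model to connect them. The documents, the product names, and the llm() stub below are all hypothetical:

```python
# Sketch of a "semantic join": combining facts from two documents that
# share no explicit key. A relational database joins rows on matching
# column values; here the link (a 54 mm portafilter) has to be inferred
# from the text. llm() is a stub for a real model API.

doc_a = "The Acme Uno espresso machine ships with a 54 mm portafilter."
doc_b = "Our bottomless portafilter is available in 54 mm and 58 mm sizes."

def llm(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return "<the model's synthesized answer would appear here>"

question = "Which bottomless portafilter size fits the Acme Uno?"

prompt = (
    f"Document 1: {doc_a}\n"
    f"Document 2: {doc_b}\n"
    f"Question: {question}\n"
    "Answer using only the information in the documents above."
)

print(llm(prompt))
```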

However, the value of this powerful capability depends on two factors: how often users need it, and how well it works. Common information needs tend to map to documents that address them (e.g., Wikipedia entries), or to passages that can be extracted from those documents. Hence, the case for generative AI depends on how often users have needs uncommon enough that no single source addresses them, yet not so challenging that the model cannot satisfy them by synthesizing information from multiple sources. While companies are promoting generative AI as a replacement for web search, it seems unlikely that most of what web search does today requires a more complex, more expensive, and less reliable solution. There may be more opportunities in enterprise search, but that has always been a challenging market.

Looking Forward

So, do the benefits of generative AI justify the costs? For most search applications today, probably not. Of course, prediction is hard, especially about the future. Perhaps our information needs will become more complex as generative AI raises our expectations of human-computer interaction. After all, the search applications we now see as legacy technology are only a few decades old.

Nonetheless, I do not recommend rushing to throw away your inverted index. Search applications may have lost some of their sheen, but they are still a technology that fits the needs of countless information-seeking applications. Generative AI is powerful, but it is not clear if or when its benefits will outweigh its costs, at least when it comes to replacing search.

Still, given how quickly this space is evolving, it would be foolish to make bold predictions. The best that we can do is to keep ourselves informed and co-evolve our strategies with the technologies available to us.
