Trust Through Provenance

Daniel Tunkelang
3 min read · May 22, 2023


The recent advances in generative AI have led many people to fear all manner of deepfakes, whether text, images, audio, or video. People like Yuval Noah Harari feel that AI threatens the future of humanity.

On one hand, I feel that these fears are well-grounded: even if AI-generated fakes are not yet indistinguishable from reality, I believe that they soon will be. On the other hand, I do not think the content itself is the problem. What we need to focus on is provenance.

We cannot determine what is real just by looking at it.

As a species, we have always been susceptible to forgery, whether in the form of fake signatures, photoshopped images, impersonated voices, or all manner of social engineering. Our senses and intuitions are fallible, and there are a lot of highly motivated tricksters out there.

At this point, we should consider the possibility that any content presented to us is fake, especially if it is surprising and might prompt us to unusual action, as with the fake report of an explosion near the Pentagon.

While such an attitude might have been considered paranoid in the past, today it is simply realistic. It is already fairly cheap and easy to create convincing fake content, and advances in technology will continue to lower the barrier to forgeries. It is not paranoia if they are really out to trick you.

Consider the source before you consider the content.

If we cannot trust what we perceive with our senses, then what can we trust? I believe that the better question is: whom can we trust?

We learned to trust content in an age where it was difficult to create convincing forgeries. That difficulty allowed us to put less emphasis on ensuring that we trusted the source of the content.

That age is ending, if it has not ended already. Today, and even more so in the future, we need to consider the source before we consider the content.

For example, I trust that the Apollo moon landing happened because I trust the institutions that documented it, not because I believe the footage could not be convincingly faked. Conversely, I am skeptical of the authenticity of much of the content I see shared on social media, no matter how realistic it looks, if I cannot trace it back to a source I trust.

Lots of work to make this easy enough for mass adoption.

Shifting our focus from content to the source of the content will require a big change in how we consume and evaluate information. I suspect that most people have never even encountered the word “provenance”, let alone worried about the provenance of the content in their feeds.

The misinformation challenge facing us is serious and urgent. It is also one that we have the technology to solve. We have known how to create robust digital signatures for decades, as well as how to establish a public key infrastructure. We have the technical tools to track provenance.
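To make the point concrete, here is a minimal sketch in Python, using the widely available cryptography package, of the signature primitive that provenance tracking builds on: a publisher signs content with a private key, and any browser or app that already trusts the corresponding public key can verify that the content is unchanged and really came from that publisher. The publisher, content, and key-distribution details here are illustrative assumptions, not a specific proposal; a real system would layer certificate chains and standards on top of this primitive.

```python
# Minimal provenance sketch using digital signatures.
# Requires the `cryptography` package (pip install cryptography).
# The publisher, content, and key-distribution details are illustrative.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and distributes the public key
# through a trusted channel (e.g., a public key infrastructure).
publisher_private_key = ed25519.Ed25519PrivateKey.generate()
publisher_public_key = publisher_private_key.public_key()

# Publishing: sign the content bytes so the signature travels with them.
content = b"Example article body from a publisher the reader trusts."
signature = publisher_private_key.sign(content)

# Consumption: a browser, app, or device verifies the signature against a
# public key it already trusts before presenting the content as authentic.
try:
    publisher_public_key.verify(signature, content)
    print("Provenance verified: content is unchanged and from this publisher.")
except InvalidSignature:
    print("Verification failed: treat the content as untrusted.")
```

The hard part, as the rest of this post argues, is not the cryptography: it is distributing the keys, deciding which sources deserve our trust, and making verification invisible to the consumer.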

What we have not done is integrate these tools into browsers, apps, and devices in a way that makes it simple for consumers to determine whether the provenance of a piece of content has earned their trust. Moreover, we are unlikely to cure human laziness: if we cannot make the process of evaluating content easy and frictionless, most people will not bother.

A successful integration will require a careful blend of engineering and design, as well as thoughtful management of tradeoffs. It will also require buy-in, or regulation, in order to interoperate with the many ways we consume content. Moreover, no system will be perfect, nor will any single system or configuration meet everyone’s needs. After all, we do not always agree on whom to trust, which splinters our collective reality.

But we have to start somewhere. Focusing exclusively on the content is a dead end. The sooner we accept that, the sooner we can start building effective solutions that focus on provenance.

To update a phrase from the Cold War: we need to verify, then trust.
