Is The Valley Getting Cannier?
In 1970, roboticist Masahiro Mori coined the phrase “uncanny valley” (不気味の谷 in the original Japanese) to describe the phenomenon whereby a machine that seems close to human, but not quite, triggers a negative response. We see this effect especially in animated films such as The Polar Express.
Or at least we used to. Modern animated films and video games have achieved so much realism that we take it for granted. We are now at the point where deepfake photos and videos threaten to undermine our historical ability to distinguish reality from artifice. That doesn’t sound like the uncanny valley — indeed, it sounds far more ominous.
Generative models are not especially new — Claude Shannon talked about generative language models back in the 1940s. But 2022 has clearly been the year of generative AI, with models like DALL-E 2 and Stable Diffusion commodifying AI-generated art and ChatGPT teasing the commodification of AI-generated “knowledge”.
Have we achieved an inflection point where AI can produce human-level performance? Did the Singularity arrive when we weren’t looking?
I, for one, welcome our robot overlords; but I don’t think they’ve arrived quite yet. What we have instead is the ultimate gaslighting: AI-generated content that looks and feels human-generated, but still makes hilarious mistakes when it comes to substance.
Here’s a great example from ChatGPT:
Question: If A is greater than B and A is greater than C, then is B greater than C?
Answer: If A is greater than B and A is greater than C, it is possible for B to be greater than C, but it is not necessarily true. For example, if A is 5, B is 3, and C is 2, then A is greater than B and A is greater than C, but B is not greater than C.
ChatGPT correctly surmises that A > B and A > C does not necessarily imply that B > C. But its example is comically wrong, since of course 3 > 2.
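The slip is easy to verify mechanically. Here is a minimal Python sketch (the helper name is my own, not part of the original exchange) that tests whether a triple genuinely refutes the implication, and brute-forces a triple that does:

```python
from itertools import product

def refutes_implication(a, b, c):
    """True if (a, b, c) is a genuine counterexample:
    A > B and A > C both hold, yet B > C fails."""
    return a > b and a > c and not b > c

# ChatGPT's triple satisfies the premises, but B > C still
# holds (3 > 2), so it is not a counterexample.
print(refutes_implication(5, 3, 2))   # False

# Search small integer triples for a real counterexample:
# any one needs B <= C, e.g. swapping ChatGPT's B and C.
counterexamples = [t for t in product(range(1, 6), repeat=3)
                   if refutes_implication(*t)]
print(refutes_implication(5, 2, 3))   # True
print(counterexamples[0])             # (2, 1, 1)
```

Any triple with B ≤ C and A above both works; ChatGPT produced the right structure of answer with the one set of numbers that fails to support it.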
Here is a more subtle mistake:
Question: Is the cardinality of the set of rational numbers greater than the cardinality of the set of integers?
Answer: Yes, the cardinality of the set of rational numbers is greater than the cardinality of the set of integers. Cardinality refers to the number of elements in a set, and in this case, the set of rational numbers contains more elements than the set of integers. This is because the set of rational numbers is a larger set that includes all integers, as well as other numbers that can be expressed as simple fractions. Because the set of rational numbers is larger, it has a greater cardinality than the set of integers.
That explanation is well-written and convincing. But wrong.
Luckily, my dad loved to explain infinities to me when I was a kid, so I learned about countable and uncountable infinities. As it turns out, even though, as ChatGPT says, the set of rational numbers includes all integers, as well as non-integers that can be expressed as fractions, its cardinality is not larger than that of the set of integers. That’s because you can establish a one-to-one correspondence between the two sets. Both sets are countably infinite, and their cardinality is aleph-nought (ℵ₀).
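That one-to-one correspondence is concrete enough to compute. The sketch below — my own illustration, using the Calkin–Wilf sequence — walks through every positive rational exactly once, so each one gets a unique integer index; extend the walk with signs and zero and you have indexed all the rationals:

```python
from fractions import Fraction
from math import floor

def calkin_wilf(n):
    """Return the first n terms of the Calkin-Wilf sequence,
    which enumerates every positive rational exactly once."""
    q = Fraction(1)
    terms = []
    for _ in range(n):
        terms.append(q)
        q = 1 / (2 * floor(q) - q + 1)  # next rational in the walk
    return terms

print(", ".join(str(q) for q in calkin_wilf(8)))
# 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4
```

No repeats, nothing skipped: integer index n maps to the n-th term and back, which is exactly the bijection that makes both sets countably infinite.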
But I’m sure I would have accepted ChatGPT’s explanation if I had not already studied this problem. And I shudder to think what answers it could generate that I would accept, in areas where I lack domain expertise.
Because what we have today isn’t a system that reliably understands questions and correctly answers them — though a lot of its answers are correct. Rather, we have a system that produces answers that look and feel right. It’s AI-splaining, the ultimate form of gaslighting.
So, are we out of the uncanny valley? Is it getting cannier? I’m not sure what to call this new place where we’ve arrived. But we’d better figure out what to do about it. Because we’ll need to quickly develop some new skills to consume AI-generated content more critically.