In a recent Tom Fishburne cartoon, one employee says “AI turns this single bullet point into a long email I can pretend I wrote,” while another says, “AI makes a single bullet point out of this long email I pretend I read.”
Fishburne follows up by noting how we have moved beyond “TL;DR” (“too long; didn’t read”) to “TL;DW” (“too long; didn’t write”).
I hope that this sort of make-work is not the future — or present — of generative AI. But I wonder how much of the communication that we can’t blame on AI is already like this. How often do we communicate a simple idea with more words than needed, only to discover that our audience just wants to distill our verbosity to a concise summary? If so, then using AI on both sides just highlights the absurdity of our inefficient communication.
We marvel at how ChatGPT generates several pages from a one-sentence prompt. But I wonder how many of us reflect that the ability of ChatGPT to do so much with so little implies that its output is highly compressible. Randomization aside, the output carries no more information than the prompt that produced it.
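The compressibility claim is easy to demonstrate in miniature. Below is a toy sketch using Python’s standard-library `zlib`: the “expansion” is a hypothetical stand-in for verbose AI output (real model output is less repetitive, so the effect would be weaker, but the direction holds for padded, formulaic prose). The prompt and expansion strings are invented for illustration.

```python
import zlib

# A short prompt, and a hypothetical verbose expansion of it.
# (Stand-in text, not real model output.)
prompt = "Summarize our third-quarter results in an upbeat tone."
expansion = (
    "I am very pleased to share an overview of our third-quarter results. "
    "Our third-quarter results were strong across every business unit. "
) * 20

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more compressible."""
    data = text.encode("utf-8")
    return len(zlib.compress(data)) / len(data)

# The long, formulaic expansion squeezes down to a small fraction of its size,
# while the short prompt barely compresses at all.
print(f"prompt ratio:    {compression_ratio(prompt):.2f}")
print(f"expansion ratio: {compression_ratio(expansion):.2f}")
```

A general-purpose compressor only exploits surface redundancy; a reader armed with the same model and prompt could in principle reconstruct the whole output, which is the stronger sense in which the essay’s information fits in the prompt.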
If we can articulate our thoughts as short prompts, then why do we go to the trouble of blowing those prompts up into essays, photos, videos, or other formats with far less information density? And if we could get over this seemingly wasteful practice, would that significantly reduce the transformative potential of generative AI?
To sum it up, as perhaps I should have done in the first place, could we accept the compressibility of thought and just communicate with prompts?