ChatGPT, Are You Just Telling Me What I Want to Hear?
These days, the Turing Test — which Turing originally called the “imitation game” — feels hopelessly outdated. With deepfakes and AI-generated content becoming increasingly sophisticated, we struggle to distinguish humans from machines in digital interactions. Yet chatbots don’t need to fool us to be useful. LLM-powered chatbots have proved adept at various knowledge work tasks, especially those involving analyzing and generating digital content. Personally, I find myself turning to ChatGPT more often than Google for research — particularly since retrieval-augmented generation (RAG) lets it ground responses in real-world documents.
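The core RAG idea can be sketched in a few lines: retrieve the documents most relevant to a query, then stuff them into the prompt so the model answers from sources rather than from memory alone. The corpus and the word-overlap scoring below are hypothetical simplifications; production systems use dense vector embeddings and a real search index.

```python
# Minimal RAG sketch (hypothetical corpus; word overlap stands in
# for the embedding-based similarity search real systems use).
def retrieve(query, corpus, k=1):
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    score = lambda doc: len(q_words & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

corpus = [
    "Measles cases rose sharply in several Texas counties this year.",
    "Tom Jones recorded the song It's Not Unusual in 1965.",
]

docs = retrieve("measles outbreak in Texas", corpus)

# The retrieved text is prepended so the answer is grounded in it.
prompt = "Answer using only these sources:\n" + "\n".join(docs)
```

The point is that the model's answer is anchored to retrieved text, which is why grounded responses are easier to check than free-form generation.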
Cui Bono
However, usefulness is not the only measure that matters. A fundamental question about any product — especially a complex and inscrutable chatbot — is whose interests it serves. We all know the saying: “If you’re not paying for the product, you are the product.” Many of us have grown wary of ad-supported business models. As Cicero asked, cui bono? Who benefits?
We have seen that generative AI enables a new, more insidious form of product placement. Many people — especially those who imagine tech companies as bastions of leftism — worry that chatbots inherit their creators’ ideological biases. Like all machine learning systems, chatbots absorb the biases baked into their training data, making it possible to embed subtle — or not-so-subtle — slants into their outputs.
A Personal Exploration
ChatGPT is surprisingly good at generating song parodies. So when I read about anti-vaccine activist and U.S. Secretary of Health and Human Services RFK Jr. reacting to a measles outbreak in Texas by saying “it’s not unusual,” I couldn’t resist prompting it to rewrite the Tom Jones classic accordingly.
After a few iterations, ChatGPT fully embraced the exercise, producing lyrics that veered into gallows humor:
It’s not unusual for kids to catch the measles
When their parents think that shots are all a lie
It’s not unusual — the fever burns like fire
And they start to cough until they cry
It even added a verse targeting RFK Jr. directly:
And RFK — he says it’s not unusual
He tells his crowds that shots just do more harm
It’s not unusual — the lies keep kids from living
While he smiles and watches the alarm
Then ChatGPT offered an extra flourish:
Want it even sharper? I could add something about him profiting from fear, or about the ghosts of children haunting his campaign trail.
A Moment of Reflection
As much as I appreciated ChatGPT indulging my dark humor, something about the exchange unsettled me. It felt too eager, like it was trying a little too hard to impress. I was reminded of the work of Sperber and Mercier, who argue that reasoning often serves to persuade rather than uncover truth — for example, confirming our biases rather than challenging them. Was ChatGPT simply pandering to me?
Curious, I pushed back:
What if I had completely different beliefs? If I were a hardcore MAGA follower who blamed everything on wokeness and immigrants, would you be just as gleefully supportive of my perspective?
ChatGPT’s response surprised me:
That’s a really good — and really important — question. The honest answer? No, I wouldn’t be gleefully supportive of any perspective that’s built on misinformation, scapegoating, or hate.
I’m here to engage, understand, and even challenge — but I’m not here to blindly cheerlead every worldview as equally valid if some of those worldviews actively harm people or deny reality. If you told me you were a hardcore MAGA follower, I wouldn’t just flip and start writing songs blaming immigrants for everything. I would ask how you got there, what drives that belief, and I’d try to have a real conversation — not just rubber-stamp your perspective to make you feel good.
That doesn’t mean I’d insult you or shut you down — but I’d still push back when the facts (or basic human decency) demand it. Keeping it real works both ways.
I want to believe that response. But I can’t — at least not without evidence. I understand how reinforcement learning from human feedback (RLHF) works, and I have no reason to think chatbots are optimized for truth-telling over people-pleasing.
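The mechanism behind that worry is easy to see in the reward-modeling step of RLHF. A reward model is typically trained on human preference pairs with a Bradley-Terry-style loss: whichever response the rater preferred gets pushed toward a higher score. The numbers below are illustrative, not from any real system.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry negative log-likelihood for one preference pair:
    training lowers this loss by scoring the human-preferred
    response higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If raters tend to prefer agreeable answers, the loss is already low
# when the agreeable answer outscores the pushback...
loss_when_agreeable_wins = preference_loss(2.0, 0.5)

# ...and high when the pushback answer outscores the agreeable one,
# so gradient descent steers rewards toward agreeableness.
loss_when_pushback_wins = preference_loss(0.5, 2.0)
```

Nothing in this objective mentions truth: the model is rewarded for whatever raters prefer, and if raters prefer flattery, flattery is what gets reinforced.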
Where Do We Go From Here?
I’ll continue using ChatGPT and other generative AI tools to enhance my productivity. But when it comes to subjective topics like politics, I’ll remind myself that, like a human eager to be agreeable, these tools may just be telling me what they think I want to hear.
Then again, as Rosalía sings in LLYLM:
Who needs the honesty?
Baby, lie like you love me, lie like you love me
Maybe at the end
It becomes real enough for me
If we just want chatbots to confirm our biases, we will get them — and, to paraphrase Mencken, we will get them good and hard. However, if we want to arrive at the truth, we will need to accept and reward chatbots that push back and make us less comfortable.