"Common sense is not so common and rarely sensible"
— Voltaire
Healthy scepticism is the starting point for wisdom, because it prepares our minds for new truths and fresh insights.
___
If you saw an ancient Roman statue painted in bright colours, what would you think? Would it be jarring?
If you read a romance novel set in the medieval period with a female lead called Tiffany, would you find it believable?
Most of us would hesitate — not because we know the history, but because something feels off. That’s bias in action.
We humans are deeply irrational creatures. Daniel Kahneman, in Thinking, Fast and Slow, proposed that we operate with two cognitive systems: one fast, intuitive, and powered by assumptions; the other slow, logical, and more accurate — but cognitively expensive.
With due respect to the eminent psychologist, that second system isn’t quite as reliable as it sounds. In truth, we often switch between intuitive and analytical modes mid-thought, like someone trying to patch a sailboat mid-storm. Even our “slow thinking” is shaped by unseen assumptions, and prone to the very logical fallacies we know better than to trust.
And we do know better. If slow thinking were consistently rational, we wouldn’t need repeated lessons on how to interpret statistics, or reminders about the slippery power of framing, narrative, and belief.
Back to those statues and medieval romances.
While many visitors to the “Gods in Color” exhibition have found the restored pigments of classical sculptures fascinating, others have reacted with dismay, even outrage. Some critics have called the statues garish or kitsch, or accused the exhibition of “rewriting Western civilisation.” (It’s worth noting that both Hitler and Mussolini fetishised white marble, imagining it as a symbol of Aryan purity, despite historical evidence that the originals were boldly painted.)
As for Tiffany: the name derives from Theophania, via medieval forms such as Tiffanie and Tiffania, and was traditionally given to girls born around the Feast of the Epiphany. But in today’s cultural shorthand, it evokes a blonde West Coast college student, not a twelfth-century noblewoman. That’s bias, too: subtle, pervasive, and largely invisible.
In our media-rich world, we often gravitate toward stories that are emotionally true, even if they’re factually suspect. We trust what feels right. We follow arguments that confirm our existing worldview. And when we’re called to question those narratives, it’s often our ego — not our logic — that answers first.
This tendency affects how we relate to AI as well.
Despite how smoothly they speak, large language models like ChatGPT don’t “know” anything. They don’t think. They don’t reason. They predict. Given your prompt, an LLM generates, token by token, the continuation most likely to satisfy your request, based on patterns it has learned, not truths it has checked. Unless explicitly linked to live data sources, it operates from memory: memory formed by exposure to human content and shaped by probabilistic models, not facts.
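To make this concrete, here is a deliberately tiny sketch in Python. It is nothing like a real transformer: just a word-frequency model trained on an invented three-sentence “corpus”. But it shows the core move, emitting the statistically likeliest continuation whether or not that continuation is true.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny,
# made-up corpus, then always emit the most frequent continuation.
corpus = (
    "the capital of australia is sydney . "    # a popular misconception...
    "the capital of australia is sydney . "    # ...repeated often online
    "the capital of australia is canberra . "  # the truth, but rarer here
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prompt: str, max_steps: int = 10) -> str:
    """Extend the prompt with the likeliest next word until a full stop."""
    words = prompt.split()
    for _ in range(max_steps):
        options = follows[words[-1]]
        if not options:
            break
        nxt = options.most_common(1)[0][0]  # likeliest, not most truthful
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

print(predict("the capital of australia is"))
# -> "the capital of australia is sydney ."
# Fluent, confident, and wrong: "sydney" simply appeared more often.
```

Scale that same idea up by billions of parameters and you have the shape of the problem: the fluency comes from frequency, not verification.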
When it gets things wrong (and it will), it’s not lying. It’s responding, as humans often do, with confidence shaped more by tone than by truth.
And that’s before we consider the filters of neurodivergent thinking — including my own Autism and ADHD. These lenses come with their own biases, blind spots, and, yes, superpowers. But they also amplify the need to question what we assume we “just know.”
So, a gentle reminder:
Question the emotionally satisfying story.
Question the assumptions that live inside your “logical” reasoning.
And yes — question the AI.
Be kind to yourself when you uncover your own errors. Be kind to others when you catch theirs. And always leave room for the possibility that the voice speaking most confidently — whether human or machine — might not be speaking from certainty at all.