Unknown Unknowns
How can AI help us fill in the gaps?
Inspired by Andrew Maynard’s essay, “Spiky Surfaces and Jagged Edges: Moving Beyond What’s Known in an Age of AI”, I’ve added some thoughts on the 'niggle factor' — how human curiosity stumbles into unknown unknowns in ways AI still can’t emulate. A complementary view, not a contradiction.
___
In his thought-provoking essay “Spiky Surfaces and Jagged Edges”, which asks how we conceptualize knowledge generation in the age of AI, Andrew Maynard critiques the familiar metaphor of knowledge as a smooth, expanding circle: as we learn more, the boundary between the known and the unknown simply pushes outward.
But what if that boundary isn’t smooth at all?
Drawing on the work of AI researcher Subbarao Kambhampati, Maynard explores the idea that large language models (LLMs) operate within a zone of inferable knowledge — that is, knowledge that can be statistically or logically derived from what is already known. This space includes much of our formal and procedural knowledge, and it's where LLMs excel. But crucially, these systems struggle with — and may never independently reach — knowledge that lies beyond this boundary, especially in areas that require physical embodiment or lived experience.
Maynard then critiques the classic "circle" model of knowledge on three counts:
It assumes that discovery is uniformly outward and divergent.
It imagines a smooth and consistent knowledge frontier.
It is limited to two dimensions.
In contrast, he suggests we imagine the boundary of knowledge as more fractal — irregular, jagged, and spiked. Drawing inspiration from marine geologist Kim Kastens, he proposes that discovery doesn’t simply radiate out but also juts into the unknown in sharp, unpredictable directions. These “spikes” represent moments where existing knowledge presses into new territory. And intriguingly, where such spikes approach each other, there may be opportunities for “knowledge tunneling” — bridging separate ideas or domains in unexpected ways. While Maynard doesn’t claim that AI is currently performing this kind of tunneling, he poses it as a provocative possibility.
He also speculates that these spiky surfaces may extend into multiple dimensions — that the boundary between known and unknown may not just be complex, but n-dimensionally complex.
It’s a compelling model. And yet…
✦ The Question That Still Haunts Me
As rich as these theoretical models are, they don’t yet answer one essential question:
How do we stumble into the places where we don’t even know what we don’t know?
These so-called unknown unknowns — insights we didn’t know we were missing — often come not from inference or extrapolation, but from discomfort. From anomaly. From that niggling sense that something isn’t quite right.
Scientists, after all, rarely shout “Eureka!” More often, they quietly mutter:
“That’s odd.”
We might think of this as the niggle factor — a low hum of unease or curiosity that draws us to question the obvious.
✦ The Power of the Prepared Mind
As the saying attributed to Louis Pasteur goes, “Chance favours the prepared mind.” Alexander Fleming’s accidental discovery of penicillin didn’t come from methodical logic. It came from not throwing away the contaminated Petri dish — from sensing there was something there worth pausing over.
Or take Max Planck. At the end of the 19th century, physics was widely thought to be nearly complete. Planck’s work on blackbody radiation — a small anomaly in an otherwise tidy system — cracked the door to quantum theory and shattered classical assumptions.
These weren’t just acts of reasoning. They were acts of attention — often to things others had dismissed.
Science, at its core, is a game of “what if”:
What if I try X instead of Y?
What if this outlier matters?
What if I’ve been looking at it upside down?
A good hypothesis isn’t just a step in a method; it’s an excuse to follow a hunch.
✦ What AI Misses (So Far)
AI can model, infer, generate. It can spot patterns in petabytes of data that no human ever could. But it lacks that niggle. It doesn’t wonder.
Why?
They’re not continuous.
LLMs don’t exist between prompts. They’re not pondering while we sleep. No model says, unprompted:
“I’ve been thinking…”
They don’t have value hierarchies.
Humans are driven by meaning. We care that Mercury’s orbit doesn’t quite fit Newton’s laws — even if 99.99% of the cosmos seems to comply. That care — that value map — is missing from current AI systems.
They don’t notice anomalies unless asked to.
AI might process millions of data points, but it won’t fixate on the one thing that doesn’t quite belong — not unless we tell it to.
Sometimes, the most significant discoveries come not from pushing at the edge of the unknown, but from snagging on a loose thread woven deep within what we thought we already understood.
✦ Toward a Better Question
So rather than asking:
When will AI become like us?
We might better ask:
In what ways is AI different — and how can we use those differences to explore what we can't yet see?
If we think of inferable knowledge as solid land — territory we can walk, chart, and revisit — then unknown unknowns are more like the open sea. Featureless. Without paths. Governed by shifting currents and unexpected reefs.
AI, as it stands, is made for land. That’s what we trained it for: patterning across mapped terrain. It doesn’t know where to sail, or what to look for on the horizon. But it can sail with us. It can help us record what we find, map the islands we land on, and even detect patterns in the stars above.
It cannot lead the voyage. But it can help us make sense of where we’ve been — and where we might wish to go next.
We don’t need AI to replicate our minds. We need it to complement them.
Let AI roam the jagged edges of inferable knowledge. Let it model the chaotic, the entangled, the multidimensional. But let us remain the ones who notice the strange result. Who follow the hunch. Who mutter “that’s odd” — and keep pulling the thread.
Because sometimes, the future begins with a niggle.


