I wrote a post recently about how science seems to be slowing down. It’s an interesting phenomenon, and lots of clever people are thinking about the problem. The views range from pessimists who claim we’re running out of low-hanging fruit to sceptics who reject the very framing of the problem. One of the sceptics is Roberta Sinatra, Professor of Computational Social Science at the University of Copenhagen. She’s quoted in Nature saying:

“Disruptive papers are still out there, but the scientific community has limited time to read, understand and cite new works, meaning that only a set amount each year can be lauded as breakthroughs.”

I end that piece saying I’m putting my money on Sinatra’s hypothesis — but I’m left with an itch.

On the one hand, it would make perfect sense: the rate of publication is increasing exponentially, while our capacity to sift through, understand, and integrate new knowledge grows at best linearly. The two trajectories must diverge. This is what Thomas Kuhn had in mind with his notion of a crisis of proliferation.
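A back-of-the-envelope sketch makes the divergence concrete (the growth forms are illustrative assumptions, not measured rates): suppose publication output grows exponentially while our collective reading capacity grows only linearly,

$$P(t) = P_0\,e^{rt}, \qquad C(t) = C_0 + k\,t, \qquad \frac{C(t)}{P(t)} \to 0 \ \text{ as } t \to \infty.$$

However generous the constants, the fraction of new work that can actually be read and absorbed shrinks towards zero.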

By this interpretation — which is hopeful, in a certain sense — it’s not that scientists are getting less daring or less creative. It’s just that they’re overwhelmed by the incoming firehose of new findings. And since peer review, which runs on that same finite attention, is built into the very foundations of the scientific edifice, the whole machine begins to struggle.

The refrain is familiar from critiques of the attention economy: when content is infinite but attention is finite, visibility becomes a lottery — and merit alone is no guarantee of discovery. It also calls to mind Tim O’Reilly’s famous observation that “obscurity is a far greater threat to authors and creative artists than piracy.” The same might be true for scientists. The real danger isn’t that disruptive ideas are stolen or suppressed — it’s that they vanish, unnoticed, into the noise.

It all makes sense. It looks neat. But on the other hand, I can’t help thinking: isn’t sifting through mountains of data to find the golden nuggets exactly what AI is supposed to be good at? Isn’t it a blatant contradiction that just as we enter the age of intelligent filtering, pattern recognition, and large-scale summarisation, the very problem we face is one of overload and missed opportunity?

As ever, I turned to Chet for their opinion, prompting: We’ve built tools designed to surface the signal in the noise, and yet the signal — if Sinatra’s hypothesis is right — is being lost more than ever. Is it that the tools aren’t as smart as we hoped, or does the structure of the scientific ecosystem somehow resist their help?

The question opened a Pandora’s box of tangents — sparking a long conversation with no neat, quotable catch-all answer.

One theme that emerged: the tools might indeed not quite be up to the task. Large language models are excellent at reflecting what’s already legible — they summarise, paraphrase, pattern-match across vast corpora. But true disruption rarely arrives in familiar form. It hides in odd phrasing, marginal methods, results that don’t quite belong.

And when it does, the models — like many of us — tend to look the other way, because the strangeness of genuine novelty often registers as noise, not signal.
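To make that mechanism concrete, here is a deliberately toy sketch in Python (not anyone’s actual pipeline; the scoring rule, the names, and the numbers are all assumptions for illustration). A filter that ranks papers by how similar they look to the existing literature will, by construction, score the genuinely novel ones lowest:

```python
import numpy as np

# Toy illustration (no real recommender works exactly like this): score papers
# by cosine similarity to the centroid of the existing literature's embeddings.
# A filter tuned for familiarity surfaces incremental work and buries outliers,
# which is one way the strangeness of novelty ends up registering as noise.

rng = np.random.default_rng(42)
DIM = 64

mainstream = rng.normal(size=DIM)                        # the field's dominant direction
corpus = mainstream + rng.normal(0.0, 0.3, (1000, DIM))  # papers clustered around it
centroid = corpus.mean(axis=0)

def familiarity(embedding: np.ndarray) -> float:
    """Cosine similarity to the corpus centroid: how much this looks like prior work."""
    return float(embedding @ centroid
                 / (np.linalg.norm(embedding) * np.linalg.norm(centroid)))

incremental_paper = mainstream + rng.normal(0.0, 0.3, DIM)  # safely inside the cluster
disruptive_paper = rng.normal(size=DIM)                     # points somewhere else entirely

print(f"incremental paper: {familiarity(incremental_paper):+.3f}")  # high score: surfaced
print(f"disruptive paper:  {familiarity(disruptive_paper):+.3f}")   # near zero: buried
```

The choice of cosine similarity is just the simplest stand-in: any filter trained to reward resemblance to what’s already legible behaves the same way.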

Add to that the fact that academia isn’t even set up to want this kind of help — not yet. The incentive structures don’t reward tools that challenge consensus or surface overlooked work; they reward novelty, yes, but only within familiar frameworks. Citations must point to the right people, results must fit within the prevailing paradigm, and prestige journals remain the ultimate arbiters of value. So even if AI could flag buried brilliance, there’s little institutional appetite — or pathway — for recognising and acting on it.

If that’s the case — and let’s just sit with that thought for a moment — it reminds me of an old anecdote journalism teachers used to tell, to illustrate the mindset shift their students had to go through.

The story goes like this: an inexperienced reporter, fresh out of journalism school, is sent to cover a football match. He returns to the newsroom empty-handed, dejected. “There’s nothing to report,” he tells the editor. “The game was called off halfway through — the grandstand collapsed during an earthquake.”

The story is funny because it’s such a drastic illustration of something that — in reality — happens all the time: people miss the real story because they’re looking for the wrong thing.

It’s not that we don’t see the outliers — we just fail to recognise them for what they are, because our attention is tuned to a different wavelength. In science, that wavelength keeps getting shorter, narrowing to ever thinner slices of ever more specialised domains.

The group dynamic begins to resemble that old story about the man who loses his keys somewhere along a dark street, and goes searching for them under the only lamp post.

Or to leave the metaphors behind: it’s not attention that’s truly scarce — it’s our capacity for attribution, for recognising a result for what it is and crediting it accordingly, that’s become the bottleneck.

If there’s truth to that view — and it has the ring of truth to me — then we’re back where this post began. We’re back in Denmark.

I’m thinking, of course, of the recent findings by the mathematicians Jørgen Ellegaard Andersen and Shan Shan.

I might be wrong — I’m a relative newcomer to quantum, after all. But then again, that may be why I see something the experts don’t: an earth-shattering breakthrough that defies — and may yet come to rewrite — current disciplinary fault lines.

I feel I can make this admittedly tall claim with some measure of confidence, because Andersen and Shan’s work has received a fair amount of exposure — and I’ve discussed it with a number of well-informed, highly respected insiders. None of them dismissed it — but many hinted at the same notion: “This thing can’t be as big as it seems — because if it were, it would have caused more of a stir.”

I’m not saying they’re wrong. But the reasoning feels circular: if making a stir is the test of significance, then nothing that goes unnoticed can ever count as significant.

And if Andersen and Shan really have cracked something big, something that’s right there in the open for everyone to see, yet fails to gain traction — then maybe we need to give Sinatra’s theory a light tweak. Maybe it’s not that we’ve stopped noticing, but that science is now moving faster than our capacity for attribution.