This post started its life as field notes from the recently closed IQT Nordics conference. It ran into the thousands of words, until I decided to scrap it and start from a clean slate. Because as much as I’d love to tell you about all the wonderful quantum initiatives happening in my corner of the world, one event caught my attention more than most.
It happened back on the 26th of February, and while it hasn’t caused much of a stir in the broader community yet, it did make me sit up straighter.
That’s when Jørgen Ellegaard Andersen and his co-author Shan Shan posted two papers to the arXiv: Estimating the Percentage of GBS Advantage in Gaussian Expectation Problems, and Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems.
If you don’t have the stamina to wade through nearly two hundred pages of advanced mathematics, here’s the gist: Andersen and Shan claim to provide a rigorous proof of an exponential speedup over the Monte Carlo method, using Gaussian Boson Sampling.
Not a speculative blogpost. Not a toy model. A mathematically grounded approach, leveraging the combinatorics of hafnians and the physics of squeezed light — though how robust and practical the result really is remains to be seen.
Now, ever since 2011 — when Aaronson and Arkhipov showed that Boson Sampling could demonstrate quantum supremacy, or at least something awfully close to it — we’ve known this was a powerful idea. But as elegant as the theory has been, it’s long been held back by one glaring issue: a near-total lack of practical use cases that naturally fit the mould. Boson Sampling has been a hammer in search of a nail. Until now.
While my palms are sweaty with excitement to be the first in the blogosphere (make that the first, period) to take note of what just happened, I also want to take a moment to provide some background. This is Slow Thoughts, after all.
If you’re not into physics for the sake of it, here’s what you need to know about Boson Sampling: it’s a highly specialised quantum process that involves sending identical photons through a maze of beam splitters and phase shifters — essentially, a linear optical network — and then recording where they land. The resulting pattern of photon counts is governed by a probability distribution so complex that even the most powerful classical computers struggle to simulate it.
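For the terminally curious, here is a minimal numerical sketch of why that distribution is so unwieldy. In the original, Aaronson and Arkhipov flavour of Boson Sampling, the probability of any particular collision-free detection pattern is the squared magnitude of the permanent of a small submatrix of the interferometer’s unitary, and permanents are notoriously expensive to compute. Everything below (the mode count, the input and output modes, the random interferometer) is an arbitrary illustration, not drawn from any real experiment; the Gaussian variant at the heart of this post swaps the permanent for a hafnian built from the covariance of the squeezed light.

```python
import itertools
import numpy as np

def haar_random_unitary(m, seed=7):
    """Draw a Haar-random m x m unitary (QR decomposition with phase fix)."""
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def permanent(a):
    """Brute-force permanent -- fine for the tiny matrices used here."""
    n = a.shape[0]
    return sum(
        np.prod([a[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

# Three identical photons enter modes 0, 1, 2 of a 6-mode interferometer.
U = haar_random_unitary(6)
in_modes, out_modes = [0, 1, 2], [1, 3, 5]      # one photon per listed mode
sub = U[np.ix_(out_modes, in_modes)]            # rows: output modes, cols: input modes
prob = abs(permanent(sub)) ** 2                 # probability of that detection pattern
print(f"P(photons detected in modes {out_modes}) = {prob:.4f}")
```

The brute-force permanent above is harmless at three photons; its cost grows factorially (exponentially even with the best known algorithms) as the photon number climbs, which is where the classical struggle comes from.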
Boson Sampling looked significant because it suggested a pathway to demonstrate quantum computational supremacy without the need for a universal quantum computer.
Now, a universal quantum computer can come in many shapes, forms, and modalities. It might be gate-based or measurement-based, operate in the discrete- or continuous-variable regime, and be built from superconducting qubits, trapped ions, or photons. The details vary, but the core idea remains the same: it’s computationally complete — in the same way a classical digital computer is Turing complete. That is, given enough resources, it can simulate any other quantum system efficiently.
There have always been other types of machines too — stranger animals, lurking in the shadows. I’m thinking of annealers, optical processors, or the more exotic members of the Ising machine family. These aren’t general-purpose computers in the conventional sense. Some of them aren’t even strictly quantum. But they’re powerful in their own way: built to solve very particular types of problems — combinatorial optimisation, energy minimisation, graph embeddings — and often solving them fast.
Boson sampling, until now, sat in a similar category: a beautiful curiosity. A task-specific process that could — under the right conditions — do something hard for classical machines. But it always seemed like a quantum party trick with no obvious application.
Andersen and Shan have now taken that stunt and turned it into a workhorse.
They’ve shown how their take on so-called Gaussian expectation problems — which involve integrating functions against multivariate Gaussian distributions — scales exponentially better than traditional Monte Carlo simulation. In some cases, the speedup reaches a factor of 10¹⁰.
That’s not a typo.
Of course, not everyone is convinced. Several researchers I’ve spoken to remain sceptical, pointing to a long history of dequantised hype. But even they agree it’s… interesting.
Unlike many previous claims of “quantum advantage,” often based on highly contrived problems, this tackles a vast and genuinely significant class of computations: high-dimensional integration against Gaussians — foundational to fields like Bayesian inference, quantum chemistry, statistical mechanics, and machine learning.
All this, at a time when many quantum computer scientists I talk to feel the algorithmic side of the field is in a rut — so much so that even if we managed to build a fully functional quantum computer, it might not matter. There just isn’t much of value to run on it beyond factoring integers and breaking crypto.
That may be an exaggeration — but not by much.
Which brings us to something even more striking about Andersen and Shan’s work: they’ve achieved all this without access to a Gaussian Boson Sampler. While GBS machines do exist — and aren’t impossibly complex to build, at least by quantum standards — their algorithm is already delivering results on classical hardware.
That’s right. They haven’t even taken their shiny new thing for a spin on actual GBS hardware yet. They’re still in the simulator.
But the algorithm is so efficient — so tailored to the combinatorics of Gaussian distributions and the structure of the hafnian — that it outperforms classical Monte Carlo methods even when emulated classically.
In their own words (see Theorem 1.1 in 2502.19362v2), the expected squared error of their estimator falls off at a rate that makes the classical baseline look laughably inefficient by comparison. The exponential advantage holds up in simulations — and that’s before you bring in any actual quantum hardware.
And here’s the kicker: someone is already using it.
Predictably, the money people got there first. A bank is using the algorithm to guide live trading decisions. Shan tells me that their classical estimator — remember, this is still running without quantum hardware — is being used to manage sixty equity funds. The institution? Jyske Bank, Denmark’s second largest. The setting? Not some back-office sandbox or pilot programme, but actual trading, in the real market, with real money.
At IQT Nordics, Andersen held up his phone to the audience and told us how he gets daily updates — showing exactly how much money they’re making.
This is a quantum-native algorithm — derived from the sampling statistics of squeezed light and linear optics — already being put to use on today’s machines.
If you’re anything like me, that’s mind-blowing.
Or if you’re not like me — but you’ve been following the general tech scene — let me offer a comparison.
You might have heard the story of how a few fairly anonymous researchers at Google published a paper on the arXiv back in 2017. It was called Attention Is All You Need. You might have heard how that single, quiet release turned out to be the silent blast that set off the AI avalanche — eventually giving rise to the large language models we now talk to on a daily basis.
If you know that story — and I think it’s fair to say it’s entering the realm of received wisdom — then you have an idea of how impactful Andersen and Shan’s publication might turn out to be. If this sounds like the beginning of something, that’s because it is. What Andersen and Shan have done is crack open a door — not just to a new class of algorithms, but to a new way of thinking about what quantum computing is for. This isn’t about scaling qubits or stabilising gates. It’s about identifying hard, relevant problems — and matching them with machines that exist. Machines that don’t need to be universal to be useful.
To understand exactly how useful, let’s take a step back and look at what the Monte Carlo method actually does.
I’ve recently written about how this computational beast came back to haunt me in three consecutive books. That’s probably not a complete coincidence (though it may say something about my reading habits). It’s also because Monte Carlo is such a profoundly important invention.
It’s what allowed humans to build The Bomb, to predict the weather, and to model systems so complex that no other method would do. When you can’t solve a problem analytically — and can’t even write down the full set of equations — Monte Carlo gives you a way to simulate your way out. You throw random samples at the problem and, by averaging the outcomes, approximate the quantity you’re after. It’s rough. It’s blunt. But it works. And in the age of modern computing, it’s become an all-purpose hammer for the intractable.
As powerful as Monte Carlo is, however, it doesn’t scale well: as the dimensionality of your problem increases, the number of samples you need to keep the error in check grows even faster.
That’s because Monte Carlo’s accuracy improves only slowly: the error falls off as one over the square root of the number of samples. Double your precision, and you need four times as many runs. Try that in high dimensions, and the cost explodes.
This makes Monte Carlo methods not just slow, but painfully inefficient when tackling problems with many variables — exactly the kind of problems we care about in fields like finance, physics, and machine learning. You can throw more compute at it, but eventually, you hit a wall.
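To make that wall concrete, here is a small sketch, in plain NumPy and entirely unrelated to Andersen and Shan’s estimators, of a Gaussian expectation problem done the classical way: estimate the mean of a function of a multivariate Gaussian by brute-force sampling, and watch the error shrink only as one over the square root of the sample count. The dimension, covariance, and integrand are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Gaussian expectation problem: E[f(X)] with X ~ N(0, Sigma) in d dimensions.
d = 10
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)          # a well-conditioned covariance matrix

def f(x):
    """An even polynomial of the coordinates -- the kind of integrand GBS targets."""
    return np.sum(x**2, axis=1) + 0.1 * np.sum(x**4, axis=1)

# Reference value from one very large run, then error vs. sample count.
x_ref = rng.multivariate_normal(np.zeros(d), Sigma, size=1_000_000)
reference = f(x_ref).mean()

for n in (1_000, 4_000, 16_000, 64_000):
    x = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    estimate = f(x).mean()
    print(f"n = {n:6d}   |error| = {abs(estimate - reference):.3f}")
# Quadrupling n roughly halves the error: the 1/sqrt(n) wall in action.
```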
What Andersen and Shan bring to the table changes things so radically that I found myself asking Shan whether what they were doing still even counted as Monte Carlo. She laughed gently and explained that yes, at the end of the day, they’re still drawing random samples and estimating expectations. But they’re doing it differently. Radically differently.
Instead of sampling directly from the Gaussian distribution like traditional Monte Carlo does, Andersen and Shan sample from a cleverly engineered GBS-inspired distribution — one that’s tuned, both mathematically and physically, to the structure of the problem itself.
If classical, pre-quantum Monte Carlo was a bit like tossing darts blindly at a dimly lit target and hoping the law of large numbers eventually bails you out, the new game in town is more like shaping the dartboard to match the contours of the function — and using a quantum-trained hand to throw them.
The result? Fewer samples. Less noise. Faster convergence. And all without sacrificing rigour — in fact, with formal guarantees on error bounds that make standard Monte Carlo look not just blunt, but wasteful.
Andersen and Shan’s framework comes in two flavours: GBS-P and GBS-I. The former builds on direct probabilities from the GBS distribution, while the latter adapts importance sampling — a staple in classical Monte Carlo.
GBS-P estimates the expectation by looking at the likelihood of specific photon configurations, weighted appropriately by the structure of the function you’re trying to integrate. It’s like averaging outcomes directly from the GBS device’s distribution — one that’s been carefully shaped to match the problem.
GBS-I, on the other hand, plays a more indirect game. It reweights the samples to correct for the difference between the GBS distribution and the Gaussian one you actually care about — the same idea that powers importance sampling in classical statistics. But instead of brute-force random sampling, it uses a distribution that’s naturally better aligned with the underlying structure of the integral. So you get less waste, more signal.
Both methods come with formal guarantees on how fast they converge — and critically, both can be tuned to the complexity of the function itself, especially when that function is a polynomial or power series.
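To give a feel for the reweighting idea behind GBS-I, here is classical importance sampling in its plainest form. To be clear, this is only a loose analogue: the actual estimator draws photon-count patterns from a GBS distribution rather than shifting a Gaussian, and the target, proposal, and integrand below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: E[f(X)] with X ~ N(0, 1); f is a bump sitting far out in the tail.
f = lambda x: np.exp(-(x - 6.0) ** 2)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n = 50_000

# Plain Monte Carlo: sample N(0, 1) directly. Almost no draw ever reaches the bump.
x_mc = rng.normal(0.0, 1.0, size=n)
plain = f(x_mc).mean()

# Importance sampling: draw from a proposal N(6, 1) centred on the bump,
# then reweight each sample by target density / proposal density.
x_is = rng.normal(6.0, 1.0, size=n)
weights = normal_pdf(x_is, 0.0, 1.0) / normal_pdf(x_is, 6.0, 1.0)
reweighted = (f(x_is) * weights).mean()

exact = np.exp(-12.0) / np.sqrt(3.0)   # closed form for this particular integral
print(f"exact               = {exact:.3e}")
print(f"plain Monte Carlo   = {plain:.3e}")
print(f"importance sampling = {reweighted:.3e}")   # far closer, same sample budget
```

Same sample budget, same quantity being estimated, dramatically less variance, simply because the samples come from where the integrand actually lives. GBS-I plays the same game, except the “proposal” is the photon-count distribution of a squeezed-light interferometer.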
One of the elegant tricks in their method is tuning the average photon number in the GBS distribution to match the degree of the function being integrated. This isn’t just a technical curiosity — it’s central to why their estimators perform so well. In GBS, the number of photons detected corresponds to the order of interaction terms in the function you’re trying to approximate. So if your function involves, say, 8th-degree terms, you want a distribution that’s most likely to produce outcomes reflecting those 8-photon correlations.
By adjusting the photon number accordingly — effectively setting the “energy” of the system to match the complexity of the target function — they ensure that the samples are concentrated where the action is. It’s a bit like adjusting the focus of a lens to bring the relevant features into view: too shallow, and you miss the detail; too deep, and everything blurs. Get it just right, and the structure of the integral snaps into sharp relief.
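For a sense of the knob being turned: in a single-mode squeezed vacuum the mean photon number is sinh²(r), where r is the squeezing parameter, so dialling in a target photon number is a matter of inverting that relation. The sketch below shows only that textbook relation; the rule of aiming the per-mode photon number directly at the degree of the integrand is my own simplification, and the authors’ actual prescription for a full multimode device lives in the papers.

```python
import numpy as np

def squeezing_for_mean_photons(n_bar):
    """Squeezing parameter r such that a squeezed vacuum has mean photon number
    n_bar, using the textbook relation n_bar = sinh(r)**2 (per mode)."""
    return np.arcsinh(np.sqrt(n_bar))

# Illustration only: aim the per-mode photon number at the degree of the integrand.
for degree in (2, 4, 8):
    r = squeezing_for_mean_photons(degree)
    print(f"target mean photons = {degree}   squeezing r = {r:.3f}   "
          f"check sinh(r)^2 = {np.sinh(r)**2:.3f}")
```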
What all of this means is hard to say — not because it’s complicated (though it is), but because the implications might be too broad. In the paper, Andersen and Shan run extensive numerical simulations to back their theoretical results. One chart shows the proportion of the problem space where their method outperforms classical Monte Carlo. In some configurations, it’s nearly 100%. In others, the speedup stretches into double-digit orders of magnitude — and yes, I know I’ve said this already, but it bears repeating. That’s not just better — it’s a fundamentally different regime of computation.
If – and it’s a big if – these results generalise, then we’re not just looking at a better way to do finance. We’re looking at the first flicker of the world we’ve always been told quantum computing would bring.
A world where simulations that used to take years can be done in minutes. Where molecular binding affinities aren’t guessed at, but calculated. Where weather models, supply chains, traffic flows, and economic systems are no longer hand-waved approximations, but fine-grained, probabilistic realities — modelled in dimensions classical computers will never be able to reach.
We’ve been sold this future for decades now: energy breakthroughs, drug discovery, next-gen materials, AI architectures that don’t just mimic cognition but learn in ways grounded in quantum logic. But the road to that future has been paved with conditional clauses. If the hardware scales. If we can correct for noise. If someone figures out what to run.
Andersen and Shan have checked one of those boxes.
They haven’t solved all of quantum computing. But they’ve solved a problem that matters — and they’ve done it in a way that doesn’t require us to hold our breath for another decade.
There’s a strange beauty in the fact that they’re doing it so quietly. But then again, that’s often how breakthroughs in mathematics arrive. Once the insight lands, it suddenly feels inevitable. As if it had been sitting there all along, hiding in plain sight, just waiting for the right pair of eyes to notice.
***
Correspondence and reactions to this post
Scott Aaronson, author of Quantum Computing Since Democritus and one of the originators of the boson sampling algorithm, replied within hours of publication:
“I very much support this line of work! I strongly encourage people to continue looking for applications of BosonSampling, so long as they also subject their efforts to serious attempts to dequantize them. Regarding the use of Gaussian BosonSampling to estimate quantities (eg for Monte Carlo sampling): there have actually been a bunch of papers claiming things along those lines over the past 15 years. My biggest caution for you is that virtually all of the previous such claims were later “dequantized” — ie it turned out that a similar performance could be achieved classically, typically using variants of Gurvits’s algorithm for the permanent. The idea keeps getting rediscovered by people unfamiliar with that history. But getting a real application out of BosonSampling while ignoring its “sampling” nature (and eg only using it to estimate a single quantity) still seems like a huge challenge to me.”
Brian Siegelwax, independent quantum computing consultant and author of the Quantum Dragon newsletter, chimed in the day after:
“The unofficial definition of “quantum supremacy” includes the caveat that the problem doesn’t have to be useful. If commercially useful applications can be found, and if all classical challenges can be withstood–a big IF–and if the results are sufficiently qualitative, then we’d seem to have a mighty big deal indeed. Wouldn’t be the biggest, though: hello, Shor’s algorithm.”
Alessandro Prencipe, PhD, and Halvor Fergestad, founders of photonics startup LiNPhA, via WhatsApp (23 May):
“Looks very interesting, although we probably don’t understand the theory. From a practical point of view however, we’re guessing there’s a number of challenges still to overcome – wouldn’t be surprised if decades of work is still needed.”
Johan Håstad, a complexity theorist, two-time recipient of the Gödel Prize, and professor of theoretical computer science at KTH Royal Institute of Technology (24 May):
“When it comes to classical, deterministic computational problems—where the task is to compute a definite answer—there are only a few cases where quantum computers offer a significant advantage. The most famous examples are factoring and discrete logarithms.
Sampling problems—where the task is to generate a random element from a given probability distribution—are much less studied. These problems are often easy in practice; the hard part is usually specifying the distribution itself. I can’t recall seeing any major open problems of this kind at the conferences I attend.
That said, there are interesting physical systems to sample from, especially in the quantum realm of atoms, particles, and molecules. This feels like a natural fit for quantum computing, though it’s not my area. My hunch is that application-specific relevance will be key—it all depends on the system you’re sampling from.”
Oscar Diez, Head of Quantum Technologies at the European Commission, via e-mail (24 May):
“Andersen and Shan’s work represents a significant milestone in quantum computing, demonstrating that GBS can outperform classical Monte Carlo methods in solving Gaussian expectation problems. This development not only showcases the potential of photonic quantum computing, but also brings us closer to realising practical quantum advantage in various sectors. […] These developments highlight the EU’s commitment to maintaining a leading position in the global quantum landscape.”
Lawrence Gasman, president of Inside Quantum Technology, responding to my question “Isn’t this as big of a deal as deals can get in quantum?” (25 May):
“For me the interesting question is can one make big bucks out of this. A tentative answer seems to be yes. There are real world Monte Carlo analysis that cannot be run on classical machines. Not many barriers to entry here either. We will never ever reach supremacy at least not the one we are after.”
Michael Baczyk, director of Global Quantum Intelligence, responding to my question “did you hear what I heard?” (25 May):
“Thanks for reaching out. We are in the process of evaluating the claims made.”
Emilien Valat, PhD, researching computed tomography at KTH Royal Institute of Technology (25 May):
“If their claims hold, it’s like discovering a faster matrix multiplication algorithm — but for approximating distributions. In computer tomography, that could mean sharper images, faster scans, and lower radiation. The impact would be real.”
Sofie Lindskov Hansen, Senior Quantum Advisor and tech ambassador at the Danish Ministry of Foreign Affairs (1 June):
“The work by Andersen and Shan can be seen as a way of potentially elevating Boson Sampling from a supremacy benchmark to useful applications, which is wonderful and important for the field — but it’s still only a small subset of applications that are targeted, and demonstration on real hardware remains.”
Giulio Foletto, physicist and post-doctoral researcher at KTH Royal Institute of Technology (2 June):
“It is really not my field, this seems more for mathematicians, but I can see the potential usefulness of faster gaussian sampling, for instance for scientific simulations. I struggle to understand how much work is needed to ‘shape the dartboard’ and get the promised advantage. I suppose rigorous peer-review will tell, but the fact that this is being used already is surely promising.”
Two senior mathematicians, both speaking on condition of anonymity (as it’s not quite their field), in private conversation:
“It’s not quite our field, so difficult to say, but given that the community doesn’t seem to have picked up on the findings, we’re sceptical.”