How AI and Quantum Computing Are Accelerating Each Other


Key Takeaways

  • AI and quantum computing are no longer developing separately. They’re evolving together in real time, each accelerating the other.
  • AI is already helping stabilise and optimise quantum computing, turning a major limitation into a workable challenge.
  • Quantum systems are beginning to enhance specific AI tasks, especially in optimisation and complex search problems.
  • This creates a feedback loop between two still-developing technologies, where progress no longer follows a clear, step-by-step path.
  • As a result, technological progress is becoming faster, less predictable, and increasingly non-linear, making it harder to anticipate what comes next.

Up until now, AI has developed in waves. At the end of each wave, progress usually hit a wall, whether it was due to insufficient data, hardware limitations, optimisation problems growing too complex to solve efficiently, or costs starting to outweigh the benefits. And yet, progress never really stopped. Instead of giving up, developers always found a way forward, either by reworking ideas, refining techniques, or exploring entirely new directions. Right now, that way forward seems to be quantum computing.

In theory, quantum machines could search through huge numbers of possible scenarios simultaneously, a task that would take classical computers longer than the age of the universe. If those capabilities ever move beyond theory and hold up in practice, quantum computing could tackle problems that are currently out of reach and, in doing so, push AI forward in ways we can barely imagine today.

This may very well turn into reality one day. But that vision glosses over an important contradiction that doesn’t get the attention it deserves, yet becomes hard to ignore once you look closer.

Quantum Computing Is Advancing But It’s Still Fragile

Recently, quantum computers have begun transitioning from unstable, error-prone machines that were difficult to operate into more reliable systems. This shift is largely thanks to breakthroughs in quantum error correction (QEC), which have moved the industry from the “Noisy Intermediate-Scale Quantum” (NISQ) era into the “fault-tolerant foundation era”. Here are a few key aspects of the current state of quantum computing:

  • Google’s recent Willow chip demonstrated “below threshold” error correction, where adding more physical qubits (quantum bits) to a logical qubit actually reduces the overall error rate.
  • The “death problem”, where qubits decohere before finishing calculations, is being tackled through quantum error correction. Established techniques, like lattice surgery, allow operations to be performed while continuously managing errors, and more recent advances, such as quantum low-density parity-check (qLDPC) codes, are improving how efficiently those errors can be detected and corrected.
  • True quantum utility is currently realised through hybrid classical-quantum systems, where supercomputers offload specific, highly complex mathematical bottlenecks to quantum processors via the cloud, not as a replacement for classical computing but as a specialised extension of it.
  • The industry is moving away from counting physical qubits to measuring “QuOps” (Quantum Operations), which indicate the number of reliable, error-free operations a machine can perform, with goals targeting one million error-free operations (MegaQuOp).
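The “below threshold” behaviour in the first bullet can be illustrated with the textbook scaling model for surface codes: below the threshold, the logical error rate falls exponentially as the code distance grows; above it, adding qubits makes things worse. The constants here are illustrative assumptions, not measured values from Willow or any other chip:

```python
# Illustrative scaling model for quantum error correction:
#   logical error rate ≈ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the threshold, d the code distance.

def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    """Approximate logical error rate for a distance-d surface code (toy model)."""
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th): increasing the code distance suppresses errors.
for d in (3, 5, 7):
    print(f"p=0.5%, d={d}: logical error rate ≈ {logical_error_rate(0.005, d=d):.2e}")

# Above threshold (p > p_th): adding qubits makes the logical error rate worse.
# (Values above 1 just signal that the toy model has broken down.)
for d in (3, 5, 7):
    print(f"p=2.0%, d={d}: logical error rate ≈ {logical_error_rate(0.02, d=d):.2e}")
```

The crossover at the threshold is exactly what made the Willow result notable: it put hardware on the side of the curve where scaling up helps rather than hurts.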

But despite all this progress, quantum computers still require temperatures colder than outer space just to function. Their qubits collapse under the slightest disturbance passing through the room, whether it’s temperature fluctuations, electromagnetic noise, or cosmic rays. Not to mention that calibrating a quantum computer now is a bit like tuning an instrument that detunes itself while you’re playing it. You adjust one string, another one slips. You fix that, and something else drifts. For years, that instability was the defining wall of quantum computing, forcing us to ask the most obvious question: how do you stabilise something that refuses to stay stable?

The good news? As of 2025, that question has shifted from unanswerable to difficult but tractable. While the field hasn’t fully solved the problem, it has found ways to work around it and, increasingly, to push through it.

AI Steps In Before Quantum Is Ready

Here’s the part most people don’t expect: instead of waiting years for quantum computing to mature and fix its own limitations, engineers can now use AI to bridge the gap.

Machine learning models are already stepping in where traditional methods struggle. Developers are using AI techniques such as reinforcement learning agents, Bayesian optimisers, and convolutional neural networks to calibrate, control, and optimise quantum systems in real time. And even though we’re talking about a technology that’s still under development, neural network decoders can already identify and correct quantum errors faster than any human-designed algorithm.
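To give a feel for what AI-driven calibration means in practice, here is a deliberately minimal sketch, not any vendor’s actual calibration stack. A control setting has to track a hidden “qubit frequency” that drifts every step, much like the self-detuning instrument described earlier; a simple gradient-free learner probes both sides of its current setting and follows whichever direction improves fidelity. All names and numbers are illustrative assumptions:

```python
import math
import random

random.seed(0)

def fidelity(control, optimum):
    """Toy gate fidelity: peaks at 1.0 when the control matches the hidden optimum."""
    return math.exp(-((control - optimum) ** 2) / 0.02)

optimum = 5.00   # hidden "true" qubit frequency (arbitrary units), which drifts
control = 4.80   # our control setting, initially mis-calibrated

for step in range(200):
    optimum += random.gauss(0, 0.002)   # hardware drift: the string detunes itself
    # Gradient-free learner: probe slightly above and below the current setting...
    eps = 0.01
    up = fidelity(control + eps, optimum)
    down = fidelity(control - eps, optimum)
    # ...and step toward whichever side gave better fidelity.
    control += eps if up > down else -eps

print(f"final fidelity: {fidelity(control, optimum):.3f}")
```

Real systems replace this probe-and-step loop with reinforcement learning policies or Bayesian optimisers over many coupled parameters, but the core idea is the same: a learner that continuously re-tunes the instrument while it is being played.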

For instance, researchers at a quantum computing startup called Oratomic used AI as a key part of developing a new quantum algorithm. Rather than trying a handful of ideas manually, they used an AI tool to optimise their algorithms by testing thousands of different approaches, something that would have been impossible to do using traditional methods. What this means is that, quietly but significantly, AI is accelerating the development of the very quantum systems it is expected to eventually run on. So instead of waiting for quantum to catch up, AI is already playing an active role in building it.

What we’re witnessing right now is something rarely seen in the history of technology: AI is helping to build the systems that could one day make it exponentially more powerful. In other words, AI isn’t waiting to be upgraded; it’s forging its own upgrade.

But AI isn’t the only one pulling its weight. Quantum computing is already returning the favour, albeit modestly. Quantum processors are being used to accelerate specific AI tasks that classical hardware handles inefficiently, particularly optimisation problems, sampling in generative models, and combinatorial search. Researchers are also exploring quantum circuits as an alternative to classical neural networks, a field known as quantum machine learning, where the promise isn’t just speed but drastically reduced energy consumption. Although none of this is production-ready yet, the early results are promising enough to keep both fields invested in each other’s success.
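The quantum machine learning idea in the paragraph above can be made concrete with a toy, classically simulated example. Everything here is an illustrative assumption rather than production code: a single qubit encodes an input with one rotation, a trainable rotation follows, and the parameter-shift rule (a standard way to obtain gradients from quantum hardware) drives the training. For this one-qubit circuit the expectation value reduces to a cosine, which makes it easy to simulate without any quantum SDK:

```python
import math

def expect_z(theta, x):
    """⟨Z⟩ after encoding input x with RY(x), then applying trainable RY(theta)
    to |0⟩. For this one-qubit circuit it reduces to cos(x + theta)."""
    return math.cos(x + theta)

def grad_parameter_shift(theta, x):
    """Gradient of ⟨Z⟩ via the parameter-shift rule: evaluate the circuit at
    theta ± π/2 and take half the difference. This is how gradients are
    typically extracted from real quantum hardware."""
    return (expect_z(theta + math.pi / 2, x) - expect_z(theta - math.pi / 2, x)) / 2

# Train theta so the circuit's output matches a target expectation value.
theta, target, x, lr = 2.0, 1.0, 0.8, 0.5
for _ in range(100):
    error = expect_z(theta, x) - target          # squared-error loss residual
    theta -= lr * 2 * error * grad_parameter_shift(theta, x)

print(f"⟨Z⟩ = {expect_z(theta, x):+.3f} (target {target:+.1f})")
```

The interesting design point is that the gradient comes from running the circuit itself at shifted parameters, not from backpropagation through a classical simulation, which is what lets the same training loop run against actual quantum processors.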

Two Incomplete Technologies, One Unexpected Feedback Loop

To be fair, this isn’t quite the partnership we would have imagined. Instead of two mature technologies joining forces, as has traditionally been the case, we have two technologies still in their infancy propping each other up. And quite surprisingly, that’s working.

The reason it works comes down to a simple but convenient compatibility of weaknesses. On the one hand, quantum computing offers promise but struggles with stability. On the other hand, AI offers adaptability but struggles with scale and cost. In practice, that means quantum computing pushes AI researchers into new optimisation challenges, while AI gives quantum engineers a tool to manage the chaos that quantum hardware generates. Thus, each field somehow becomes the other’s workaround. And strangely enough, that seems to be enough to keep things moving in the right direction.

This Breaks the Usual Pattern of Innovation

To understand why this is unusual, we need to look back. Technological progress has usually been orderly, with one field waiting until it matured before enabling the next. In other words, one technology got built, it stabilised, and then the next one came in.

Take the transistor, for instance. It had to exist before the microprocessor. The microprocessor had to exist before the internet, and the internet before social media. Each technology waited for the previous one to settle before the next could develop.

But what we’re seeing today doesn’t follow that pattern. AI isn’t waiting for quantum computing, and quantum computing isn’t waiting for AI. They’re co-developing in real time, side by side, each pulling the other forward before either is finished. It’s as if the transistor, the microprocessor, the internet, and social media had all been developing simultaneously, in an almost perfect symbiosis, and we ended up with all four at once. That’s what’s happening right now with AI and quantum computing. Two technologies accelerating together, with no clean way to predict where the ceiling is, or whether there even is one.

And yes, I agree: it doesn’t feel stable. But neither should it.

Which raises the question: why does any of this matter? What makes this particular relationship worth paying attention to?

It matters because it’s changing the pace of progress, and with it, our ability to predict what comes next. When progress is layered, there is a certain clarity to it. You know what needs to be solved first, what comes next, and roughly when. Timelines may be imperfect, but they exist.

But when two new technologies start reinforcing each other even during the development stage, that clarity disappears and progress becomes non-linear.

Currently, a new breakthrough in AI might suddenly unlock new capabilities in quantum systems. Then, an advance in quantum might reshape what AI can realistically attempt. Each step forward solves some bottlenecks while quietly creating new ones further down the line, shifting priorities and rewriting expectations in ways that are difficult to anticipate.

What makes this particularly unusual is that the timeline stops being “first this, then that.” These technologies are building each other in real time, which means the roadmap is being rewritten as it is followed. When you look at it this way, it’s truly mind-blowing.

So Where Does This Lead?

That depends on who you ask.

Some see a path to breakthroughs that would have seemed implausible a decade ago, with biological systems simulated in molecular detail, materials engineered from the atomic level up, and optimisation problems that currently take years solved in hours.

Others are more cautious. For instance, John Preskill, the Caltech physicist who coined the term “NISQ” and is widely considered a pioneer of quantum computing, has been among the more measured voices. Progress, in his view, will come in stages, each expanding what’s possible, but true fault-tolerant, application-scale quantum computing remains a distant goal.

I believe that both views can coexist without cancelling each other out. Because the truth is, the timeline is uncertain. What isn’t uncertain is that these fields are no longer separate. Boundaries are starting to blur as researchers move more fluidly between the two fields, funding flows into both at the same time, and the hardest unresolved problems, like optimisation, error correction, and system control, now sit squarely in the overlap between them.

That overlap is worth watching. Because when two fields begin to share not just tools but also questions, the result is rarely a simple combination of the two. It usually becomes something else entirely, something that didn’t have a name before it existed.

So, I think this might be the most honest way to end this blog post: no one really knows what will come out of this loop. We can see the pieces, the direction, but the end result doesn’t quite fit into any of the categories we currently have.

What does seem clear is that it won’t look like today’s AI, and it won’t look like today’s quantum machines either. It will be shaped by both, but defined by neither.

And maybe that’s the most important point: we’re no longer building technology in neat, predictable layers. We’re building it in loops. And right now, whether we fully realise it or not, we’re building the shovel with the gold itself.

Human-in-the-Loop?

But here’s what that actually means, and why it matters.  

When progress was linear, humans were naturally in the loop. You built something, assessed it, debated it, approved it, and then moved forward. There was, of course, friction in that process, and friction, it turns out, is where governance lives. It’s where society gets to ask what is allowed, under what conditions, and who is accountable, before moving ahead. 

The feedback loop we’re now seeing between AI and quantum computing starts to compress that friction. This is not because anyone has decided to reduce it, but because the loop doesn’t pause for consensus. A breakthrough in AI accelerates quantum. A quantum advance reshapes what AI can practically pursue. Each cycle shortens the window between development and impact, leaving less time to fully understand, evaluate, and properly control what’s being built. In that environment, the role of the human begins to shift. While it doesn’t disappear entirely, it moves further away from certain points where key changes happen.

We’ve already seen a version of this earlier in this post. At Oratomic, AI tested thousands of algorithmic variations that no human team could realistically work through. The humans set the objective, but the AI decided what was worth exploring. That division of labour, with people defining the goals and the limits while AI systems explore the possible solutions, is already emerging in some of today’s most advanced research. And as the loop accelerates, that gap is likely to widen, with AI taking on more of the exploration while humans remain at the level of setting objectives and selecting from what the systems surface.

This is not a call for alarm, but a call for attention. The question of where the human fits in this new loop, of who is watching, who is deciding, and who is accountable when the roadmap keeps rewriting itself isn’t just a technical question; it’s also a question of governance. Right now, the technology is moving faster than the structures we use to understand, oversee, and regulate it. That gap is worth watching at least as closely as the technology itself.

Extra Sources and Further Reading

  • Reinforcement Learning for Quantum Technology – Cornell University
    https://arxiv.org/html/2601.18953v1
    This paper reviews how reinforcement learning can be used to tackle key challenges in quantum technology, such as state preparation, circuit design, control, and error correction, while highlighting recent advances, practical applications, and remaining open problems in the field.
  • Quantum Bayesian Optimization – MIT
    https://web.mit.edu/jaillet/www/general/neurips23b.pdf
    This study introduces a quantum-enhanced Bayesian optimisation algorithm (Q-GP-UCB) that leverages quantum techniques to significantly reduce regret and improve efficiency compared to classical methods, while providing theoretical guarantees on its performance.
  • Scalable Quantum Convolutional Neural Network for Image Classification – Science Direct
    https://www.sciencedirect.com/science/article/abs/pii/S0378437124007350
    This research proposes a scalable quantum convolutional neural network (SQCNN) for image classification that leverages multiple quantum devices to improve accuracy and overcome current hardware limitations, demonstrating strong performance on benchmark datasets.
  • Swinging Between Classical and Quantum Bits – Dell Technologies
    https://learning.dell.com/content/dam/dell-emc/documents/en-us/2022KS_Nannapaneni-Swinging_between_Classical_and_Quantum_Bits.pdf
    This document explains the fundamental differences between classical bits and quantum bits, illustrating key quantum concepts, like superposition and entanglement, through examples, such as superdense coding, where two classical bits can be transmitted using a single qubit.
