Taming Noise by Time-Optimizing Quantum Control Across Platforms

Meet a smarter way to steer quantum pulses

In the messy, noisy world of real quantum devices, the clock is as important as the pulse shape. Engineers have spent years squeezing every last bit of performance from pulse designs, but the stopwatch itself has often been treated as a background constraint rather than a variable to optimize. A new study from the Quantum Motion group in London and the Department of Materials at the University of Oxford, led by Minjun Jeon and Zhenyu Cai, flips that assumption. They argue that the best fidelity—how perfectly a quantum operation lands in its target state or gate—can hinge as much on how long you let the evolution run as on how you tailor the pulses themselves. The paper’s thesis can be summed up in one line: in quantum control, time is a resource you can and should optimize, not just a constraint you endure.

What’s new here is a practical, noise-aware twist on an already useful technique called CRAB—Chopped Random Basis. CRAB builds control pulses from a truncated Fourier set with randomly chosen frequencies. It’s valued for being practical to implement: rather than searching over every conceivable pulse shape in a vast, unconstrained space, it optimizes a modest number of Fourier coefficients. Jeon and Cai push CRAB further by weaving in realistic noise models and, crucially, by treating the evolution time T—the duration of the pulse sequence—as a tunable knob, not a fixed dial. The result is a method that can hunt for the globally best combination of pulse shape and duration, even when the system is not perfectly quiet.
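To fix ideas, here is a minimal sketch of a CRAB-style pulse parameterization: a truncated Fourier series whose harmonics are randomly detuned from the principal frequencies of the interval [0, T]. The function name, coefficient scales, and detuning range are illustrative choices for this article, not details taken from the paper.

```python
import numpy as np

def crab_pulse(t, T, A, B, detune):
    """CRAB-style pulse: truncated Fourier series with randomized harmonics.

    A, B are the coefficients the optimizer tunes; `detune` holds the random
    frequency offsets that stay fixed for a given CRAB run.
    """
    t = np.atleast_1d(t)
    n = np.arange(1, len(A) + 1)
    omega = 2 * np.pi * n * (1 + detune) / T   # randomized harmonic frequencies
    phases = np.outer(t, omega)                # shape (num_times, num_components)
    return np.sin(phases) @ A + np.cos(phases) @ B

rng = np.random.default_rng(0)
Nc = 4                                    # number of Fourier components
T = 1.0                                   # candidate evolution time (arbitrary units)
A, B = rng.normal(size=Nc), rng.normal(size=Nc)
detune = rng.uniform(-0.5, 0.5, size=Nc)  # random detunings, drawn once per run
times = np.linspace(0.0, T, 201)
pulse = crab_pulse(times, T, A, B, detune)
```

The optimizer then adjusts A and B (and, in the time-aware version discussed below, T itself) while the random detunings stay fixed, which keeps the search space small without locking the pulse onto a rigid grid of frequencies.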

Where this comes from is a collaboration between Quantum Motion (based in London) and the University of Oxford’s Department of Materials. The study is led by Minjun Jeon, with Zhenyu Cai as senior author. The team’s framing is pragmatic: quantum control matters across quantum communication, sensing, state preparation, and gate compilation, but real devices never live in the idealized, noiseless world of textbook models. Their work asks a pointed question: can we push the practical limits of control by letting time itself be an optimized variable, under the same constraints we apply to pulse shapes?

The core idea in plain language is deceptively simple. If the noise in your device behaves in a way that commutes with the gate you’re trying to implement—think of it as a kind of “static” that doesn’t scramble the path you’re steering the system along—then you can mathematically separate the noisy evolution from the unitary, pulse-driven evolution. That separation lets you predict how fidelity will decay with time without simulating the full noisy quantum state at every moment. In practice, you can then search not only for the best pulse shape but also for the best total operation time that maximizes fidelity, even in the presence of noise.

Why this matters now is that quantum hardware is not waiting for perfect theory. Noise, drift, and tiny, unavoidable imperfections in the control lines conspire to limit how reliable a quantum operation can be. If you ignore time as a tunable resource, you risk settling for a pulse that’s elegant on paper but suboptimal once the hardware’s quirks kick in. This work makes a compelling case that optimizing duration can unlock fidelity gains otherwise hidden in the fog of practical imperfections.

Why timing matters in quantum control

The authors ground the discussion in a tension that’s been simmering at the edge of quantum control theory for decades: the quantum speed limit versus real-world decoherence and drift. The speed limit sets the shortest possible time to morph one quantum state into another (or to implement a gate) given the system’s energy constraints. But that limit is a theoretical bound—one that assumes an ideal, noiseless world. In the lab, decoherence grows with time, drift drags the target out of reach, and the optimization landscape becomes a jagged, treacherous terrain full of local minima. The message is clear: shorter isn’t always better, and longer isn’t automatically worse. There exists an optimal sweet spot where you finish the job before noise erodes fidelity too much.
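For reference, two standard textbook formulations of the quantum speed limit for driving a state to an orthogonal one (the general idea the authors invoke, not necessarily the exact bounds used in the paper):

```latex
T \;\ge\; \frac{\pi\hbar}{2\,\Delta E} \quad \text{(Mandelstam--Tamm)},
\qquad
T \;\ge\; \frac{\pi\hbar}{2\,\langle E \rangle} \quad \text{(Margolus--Levitin)},
```

where ΔE is the energy uncertainty of the state and ⟨E⟩ its mean energy above the ground state. Both are closed-system statements: decoherence and drift, the very things that dominate in the lab, sit outside their assumptions.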

The paper’s numerical experiments paint this picture vividly. In one entanglement-generation task with two coupled Josephson charge qubits under a depolarizing channel, the best fidelity wasn’t achieved by the absolute shortest pulse. Instead, there was a distinct optimal duration, Topt ≈ 1.35 (in their normalized units), at which the infidelity dipped to about 1%. Add in the opposing tendency, noise growing with time, and the curve of achievable fidelity versus duration looks like a lifecycle: race toward the speed limit, then watch decoherence peel away the gains if you linger too long. The take-home is not merely “longer is worse” or “shorter is better,” but “there exists a best time to finish the job, given the noise and drift you’re contending with.”

Oscillations in the fidelity are another telltale feature. In several CZ-gate scenarios, the authors observe fidelity oscillations as a function of time. The reason is geometric: if the drift part of the Hamiltonian can’t be perfectly countered by the chosen control basis, the system undergoes extra, drift-driven rotations. Those rotations can bring you in and out of alignment with the target state, producing ripples in the fidelity curve. The ripples aren’t a failure mode but a diagnostic: by looking at how Fopt(T) behaves in the noiseless limit, you can tell whether your control set can, in principle, suppress drift, or whether you’ll always be fighting an inescapable oscillation.

Another practical insight concerns local traps, and time as a route around them. Standard CRAB optimizes pulse amplitudes at a fixed evolution time, but if the landscape is bumpy, the search can get stuck. Adding time as a dimension—optimizing over T alongside the pulse coefficients—gives the algorithm a new axis along which to escape stubborn local minima. In their tests, when basin-hopping (a global search strategy) was used to optimize both T and the pulse parameters, it consistently located robust, near-global optima across several problem classes. The contrast with separate optimization (tuning α at a fixed T, then scanning T afterward) was telling: simultaneous optimization tended to find better solutions and was less prone to getting trapped.

Gate-level insight spills from the state-to-state results into gate compilation. For two-spin-qubit gates in quantum dots, the same noise-aware, time-aware CRAB framework was applied to realize CZ-like operations. Depending on whether the Zeeman gradient was weak or strong relative to the exchange coupling (Ω ≪ J versus Ω ≫ J), the effective control Hamiltonian swapped between SWAP-like terms and Z ⊗ Z interactions, and the noise channels behaved differently. Yet in each regime, time optimization remained crucial: the fastest route often collided with fidelity losses from drift or decoherence, while a carefully timed longer route could harvest higher fidelity despite noise.

Noise-aware CRAB and the time dimension

Delving into the mechanics, the authors’ key move is to couple a practical, noisy version of CRAB with a time-search strategy that balances speed and resilience. In the standard CRAB setup, control pulses fi(t) are expanded as a truncated Fourier series with random frequencies. The optimization then tunes the Fourier coefficients to maximize fidelity. What Jeon and Cai add is a structured way to include noise: they focus on noise channels that commute with the gate Hamiltonian, a condition that appears in common physical noise models like certain Pauli channels and dephasing processes. When this commutation holds, the unitary (the control-driven evolution) and the dissipative (the noise-driven evolution) parts effectively live in compatible subspaces. That compatibility allows a remarkable simplification: the noisy fidelity can be expressed analytically as a sum of exponentials with decay rates λj that depend on the noise—without simulating the full, noisy quantum state trajectory at every moment.

The upshot of this analytic structure is efficiency. Instead of repeatedly solving the full Lindblad master equation for every trial pulse and every time step, the authors can reuse a compact, precomputed representation of how noise would dampen observables. In practice, they show that for Pauli-type noise, the fidelity decays with a relatively clean, multi-exponential pattern or, in some structured cases called group channels, with a single effective exponential envelope. They then fold this into a practical fidelity expression F(T, α) that matches the noiseless fidelity FU(T, α) in the noise-free limit but incorporates the noise-induced decay as time grows. This is a crucial enabler: it makes the time-dimension search tractable even as the system size grows, which would otherwise be computationally prohibitive if you tried to simulate every noisy trajectory in detail.
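Schematically, the structure they exploit looks like the following; this is an illustrative restatement of the text above, not the paper’s exact expressions:

```latex
F(T,\alpha) \;=\; \sum_j c_j(T,\alpha)\, e^{-\lambda_j T}
\;\;\xrightarrow{\;\lambda_j \to 0\;}\;\; F_U(T,\alpha),
\qquad\quad
F(T,\alpha)\,\big|_{\text{group channel}} \;\approx\; e^{-\lambda T}\, F_U(T,\alpha),
```

with the coefficients c_j set by the noiseless, pulse-driven evolution and the rates λj set by the noise model. Evaluating a candidate (T, α) then costs little more than a noiseless simulation.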

The math behind the practicality rests on a simple, powerful observation. If [H, Lk] = αk Lk for the jump operators Lk (a Pauli channel is a canonical example), then the unitary and dissipative parts of the Liouvillian essentially commute. This lets you write the noisy final state as the product of two commuting evolutions, e^(LH T) for the control and e^(LD T) for the noise, acting on the initial state, and you can compute the fidelity by evaluating the noiseless, time-evolved state against a modified observable that encodes the noise. In short, the noise-aware CRAB framework gives you a faithful, fast route to predict how any candidate pulse will perform under a realistic decoherence model, and it does so in time that scales much better with system size than brute-force density-matrix propagation would.
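In symbols, the argument above reads roughly as follows (the notation is chosen here for illustration and may differ from the paper’s):

```latex
\dot{\rho} \;=\; \mathcal{L}_H[\rho] + \mathcal{L}_D[\rho],
\qquad
[H, L_k] = \alpha_k L_k \ \text{for all } L_k
\;\;\Longrightarrow\;\;
[\mathcal{L}_H, \mathcal{L}_D] = 0,
\qquad
\rho(T) \;=\; e^{\mathcal{L}_D T}\, e^{\mathcal{L}_H T}\, \rho(0),

F(T) \;=\; \operatorname{Tr}\!\left[ P_{\mathrm{tgt}}\, \rho(T) \right]
      \;=\; \operatorname{Tr}\!\left[ \left( e^{\mathcal{L}_D^{\dagger} T} P_{\mathrm{tgt}} \right) e^{\mathcal{L}_H T}\, \rho(0) \right],
\qquad
P_{\mathrm{tgt}} = |\psi_{\mathrm{tgt}}\rangle\langle\psi_{\mathrm{tgt}}| .
```

The noise only dresses the measured observable (the adjoint channel acting on Ptgt), so each candidate pulse needs just one noiseless propagation of the initial state.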

The authors don’t stop at fidelity calculations in the abstract. They also show how this framework can be folded into a time-optimized CRAB (TCRAB) protocol. A pair of optimization strategies anchors their approach: basin-hopping, a global search that can jump across rough landscapes and avoid being trapped in local basins, and a local solver (L-BFGS-B) that refines promising candidates. With TCRAB, one can either optimize T and α simultaneously or fix T, optimize α for that fixed duration, and then scan T for the best outcome. The former tends to be more robust in practice, according to their experiments, while the latter can be faster on clean, convex-like problems. Their results suggest that basin-hopping-assisted TCRAB can reliably land near the global optimum across a spectrum of problems, from entanglement generation to gate synthesis, under a variety of noise models.
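To make the workflow concrete, here is a self-contained toy sketch of the joint (T, α) search in the spirit of TCRAB, using SciPy’s basin-hopping wrapped around an L-BFGS-B local solver. The single-qubit transfer problem, the single-exponential noise envelope, and all parameter values are assumptions made for this illustration, not the paper’s setup.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import basinhopping

# Toy problem: drive |0> -> |1> with a CRAB-style pulse on sigma_x against a
# fixed sigma_z drift, with noise modeled as a single exponential envelope.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
PSI0, PSI_TGT = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
DRIFT = 0.5 * SZ                       # drift the pulse cannot switch off
LAM = 0.05                             # assumed effective noise decay rate
NC = 3                                 # number of CRAB Fourier components
RNG = np.random.default_rng(1)
DETUNE = RNG.uniform(-0.5, 0.5, NC)    # random CRAB detunings, fixed per run

def pulse(t, T, coeffs):
    """Truncated Fourier series with randomized harmonics (CRAB-style)."""
    A, B = coeffs[:NC], coeffs[NC:]
    w = 2 * np.pi * np.arange(1, NC + 1) * (1 + DETUNE) / T
    return np.sin(w * t) @ A + np.cos(w * t) @ B

def fidelity(T, coeffs, steps=200):
    """Noise-aware fidelity: noiseless overlap damped by exp(-LAM * T)."""
    dt = T / steps
    psi = PSI0.copy()
    for k in range(steps):
        H = DRIFT + 0.5 * pulse((k + 0.5) * dt, T, coeffs) * SX
        psi = expm(-1j * H * dt) @ psi
    return np.exp(-LAM * T) * abs(PSI_TGT.conj() @ psi) ** 2

def objective(x):
    """Joint objective over x = (T, Fourier coefficients)."""
    T, coeffs = x[0], x[1:]
    return 1.0 if T <= 0.05 else -fidelity(T, coeffs)   # guard against T -> 0

x0 = np.concatenate(([2.0], RNG.normal(scale=0.5, size=2 * NC)))
result = basinhopping(objective, x0, niter=25,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print(f"optimal duration ~ {result.x[0]:.3f}, fidelity ~ {-result.fun:.4f}")
```

The fixed-T variant follows the same pattern: hold the first entry of x constant, optimize only the coefficients, and sweep T externally; the paper’s observation is that the joint search is typically the more robust of the two.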

Mapping gates to states is another clever move in their toolkit. Gate compilation asks: can we implement a target two-qubit gate with a given control set and noise model, at high fidelity? Through the Choi–Jamiolkowski isomorphism, that question is reframed as a state-to-state transfer problem. This mapping lets the same CRAB machinery ride herd on gates by treating the Choi state of the target gate as the final state to reach. It also clarifies how the noise channel acts on the Choi state, which lives on twice as many qubits for a two-qubit gate, and how to apply the corresponding Pauli-noise formalism to that doubled system. In other words, the same physical insight—commuting noise simplifies the math—lets them handle both state synthesis and gate synthesis with a unified, efficient engine.
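As a small illustration of the mapping (one common convention for the Choi state; the paper may normalize or order the subsystems differently), the target “state” for compiling a CZ gate can be built like this:

```python
import numpy as np

def choi_state(U):
    """Normalized Choi state of a unitary U: kron(I, U) applied to the
    maximally entangled state |Omega> = sum_i |i>|i> / sqrt(d)."""
    d = U.shape[0]
    omega = np.eye(d).reshape(d * d) / np.sqrt(d)   # |Omega> as a flat vector
    return np.kron(np.eye(d), U) @ omega

CZ = np.diag([1, 1, 1, -1]).astype(complex)
target = choi_state(CZ)   # a 16-dimensional state on 4 qubits
print(np.round(target, 3))
```

Reaching this four-qubit state with high fidelity, under the doubled-system version of the noise model, then stands in for implementing the CZ gate itself.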

What comes next for quantum control

The authors close with a candid roadmap for expanding this line of work. First, they envision pushing time optimization beyond CRAB to other popular quantum-control suites such as GRAPE and Krotov methods. If the commutation tricks carry over, the same savings in computational cost could turn time-optimized control into a general practice, not a rare optimization tweak. They also point to richer noise landscapes—noise that only approximately commutes with the gate Hamiltonian, non-Pauli channels, and non-Markovian effects—as fertile ground for extending their analytic toolkit. A natural next move they mention is incorporating Pauli-twirling-like techniques to actively promote better commutation between noise and control, effectively “mixing” the noise into a friendlier form for optimization.

Beyond algorithmic extensions, the paper invites a broader shift in how we think about robustness in quantum devices. If you can reliably choose an evolution time that threads the needle between speed and decoherence, you gain a practical buffer against the capriciousness of real hardware. In other words, time optimization isn’t a cosmetic improvement; it’s a lever that can unlock higher fidelities with the same hardware and the same pulse library. The authors’ demonstrations—state transfer in spin models, entanglement generation in superconducting qubits, and gate compilation in quantum dots—are not just proof-of-concept; they are a stimulus for rethinking the design of control protocols in noisy, near-term quantum devices.

Bottom line: this work reframes a stubborn problem in quantum control—how long to run a pulse to maximize fidelity in the real world—as a problem you can and should solve jointly with pulse shaping. By showing that certain noise models let you decouple unitary control from dissipation, and by coupling that insight to a robust, time-aware optimization strategy, Jeon, Cai, and their collaborators offer a practical blueprint for squeezing more performance out of today’s quantum hardware. If progress in quantum computing rests on careful engineering, then time optimization is one more lever for extracting fidelity without pushing the qubits past their limits.