Room 2, Site Marcelin Berthelot
Open access, subject to seat availability

Abstract

We revisit quantum linear system solvers from the perspective of convex optimization, and in particular of gradient descent-type algorithms. We connect previous work of Childs, Kothari, and Somma to gradient descent on the convex function $\|Ax - b\|_2^2$: their algorithm can be viewed as expressing the resulting polynomial in the Chebyshev basis and then truncating it at an appropriate degree. We then improve the constants hidden in the big-O. Instead of starting from the basic gradient descent algorithm, we start from the optimal Chebyshev iteration method (which can be viewed as an accelerated gradient descent algorithm). We show that this yields a bounded polynomial by showing that the $1$-norm of its coefficients in the Chebyshev basis is small.
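For intuition, here is a minimal classical sketch of Chebyshev iteration, the accelerated method used as the starting point above. It assumes, purely for illustration, that $A$ is symmetric positive definite with known spectral bounds lmin and lmax; the function name, the toy system, and the iteration count are arbitrary choices, not the paper's setup. After $k$ steps the iterate is a degree-$k$ polynomial in $A$ applied to $b$, and it is this polynomial, written in the Chebyshev basis, that a quantum solver would implement.

    import numpy as np

    def chebyshev_iteration(A, b, lmin, lmax, num_iters=100):
        # Classical Chebyshev iteration for Ax = b, assuming A is
        # symmetric positive definite with spectrum in [lmin, lmax];
        # standard three-term recurrence (see e.g. Saad, "Iterative
        # Methods for Sparse Linear Systems", Chebyshev acceleration).
        theta = (lmax + lmin) / 2.0   # center of the spectrum
        delta = (lmax - lmin) / 2.0   # half-width of the spectrum
        sigma = theta / delta
        x = np.zeros_like(b)
        r = b - A @ x                 # initial residual
        rho = 1.0 / sigma
        d = r / theta                 # first update direction
        for _ in range(num_iters):
            x = x + d
            r = r - A @ d
            rho_next = 1.0 / (2.0 * sigma - rho)
            # Three-term Chebyshev recurrence for the direction.
            d = rho_next * rho * d + (2.0 * rho_next / delta) * r
            rho = rho_next
        return x

    # Toy usage on a well-conditioned SPD system (illustrative only).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M.T @ M + 50.0 * np.eye(50)   # SPD with an eigenvalue floor
    b = rng.standard_normal(50)
    evals = np.linalg.eigvalsh(A)
    x = chebyshev_iteration(A, b, evals[0], evals[-1])
    print("residual norm:", np.linalg.norm(A @ x - b))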

As in previous work, one can then use the quantum singular value transformation (QSVT) framework to solve the QLS problem. Such an approach requires the computation of certain angles, which is known to be somewhat numerically unstable. We highlight that implementing linear combinations of Chebyshev polynomials is a reasonable alternative as long as the $1$-norm of the coefficients is small: for the QLS problem, this approach costs only an additional logarithmic number of ancilla qubits. We then give analytical evidence that the $1$-norm of the Chebyshev coefficients is small for many functions: powers of $x$, $e^x$, $\log(x)$, the sign function, the rectangle function, and others.
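As a quick numerical illustration of this last point (a sanity check, not the paper's analytic bounds), one can compute Chebyshev interpolants with numpy and inspect the coefficient $1$-norm; the degree and the domains below are arbitrary choices. Since $|T_k| \le 1$ on the domain, the $1$-norm upper-bounds the polynomial's maximum there, which is exactly the boundedness the linear-combination construction needs.

    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    # Degree-30 Chebyshev interpolants, as a proxy for truncated
    # Chebyshev series; print the 1-norm of the coefficients.
    examples = [
        ("exp(x) on [-1,1]",   np.exp,           [-1.0, 1.0]),
        ("x^10 on [-1,1]",     lambda x: x**10,  [-1.0, 1.0]),
        ("log(x) on [0.05,1]", np.log,           [0.05, 1.0]),
    ]
    for name, f, dom in examples:
        p = Chebyshev.interpolate(f, deg=30, domain=dom)
        print(f"{name:22s} ||c||_1 = {np.abs(p.coef).sum():.3f}")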

This is joint work with Sander Gribling and Iordanis Kerenidis.

Speaker

Daniel Szilagyi

IRIF - UP/CNRS