Paper Title

Improved quantum algorithms for linear and nonlinear differential equations

Paper Authors

Krovi, Hari

Paper Abstract

We present substantially generalized and improved quantum algorithms over prior work for inhomogeneous linear and nonlinear ordinary differential equations (ODEs). Specifically, we show how the norm of the matrix exponential characterizes the run time of quantum algorithms for linear ODEs, opening the door to an application to a wider class of linear and nonlinear ODEs. In Berry et al. (2017), a quantum algorithm for a certain class of linear ODEs is given, where the matrix involved needs to be diagonalizable. The quantum algorithm for linear ODEs presented here extends to many classes of non-diagonalizable matrices. The algorithm here is also exponentially faster than the bounds derived in Berry et al. (2017) for certain classes of diagonalizable matrices. Our linear ODE algorithm is then applied to nonlinear differential equations using Carleman linearization (an approach taken recently by us in Liu et al. (2021)). The improvement over that result is two-fold. First, we obtain an exponentially better dependence on error. This kind of logarithmic dependence on error has also been achieved by Xue et al. (2021), but only for homogeneous nonlinear equations. Second, the present algorithm can handle any sparse, invertible matrix (that models dissipation) if it has a negative log-norm (including non-diagonalizable matrices), whereas Liu et al. (2021) and Xue et al. (2021) additionally require normality.
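The "negative log-norm" condition above refers to the logarithmic norm of the coefficient matrix. For the spectral norm it has a closed form and directly bounds the matrix exponential whose norm, as the abstract notes, characterizes the run time. The following is the standard definition, included here for orientation rather than quoted from the paper:

```latex
% Logarithmic norm (log-norm) with respect to the spectral norm:
\[
  \mu(A) \;=\; \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h}
         \;=\; \lambda_{\max}\!\left(\frac{A + A^{\dagger}}{2}\right),
\]
% which controls the matrix exponential for all t >= 0:
\[
  \lVert e^{At} \rVert \;\le\; e^{\mu(A)\, t},
\]
% so mu(A) < 0 forces ||e^{At}|| <= 1 (dissipation) even when A is
% non-normal or non-diagonalizable, where spectral arguments fail.
```

Carleman linearization, the reduction used above, embeds a quadratic ODE into a linear system on the monomials of the state and then truncates. Below is a minimal classical sketch for a scalar quadratic ODE; the coefficients a and b, the truncation order N, and the closed-form comparison are illustrative assumptions, not parameters from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Carleman linearization of the scalar quadratic ODE
#   du/dt = a*u + b*u**2,
# where a < 0 supplies the dissipation (negative log-norm) assumed above.
# The monomials y_k = u**k satisfy dy_k/dt = k*a*y_k + k*b*y_{k+1};
# truncating at order N gives a finite *linear* system dy/dt = C y.

a, b = -1.0, 0.4   # hypothetical coefficients; a < 0 => dissipative
u0, t = 0.8, 2.0   # initial condition and evolution time
N = 8              # Carleman truncation order

# Upper-bidiagonal truncated Carleman matrix (rows indexed by k = 1..N).
C = np.zeros((N, N))
for k in range(1, N + 1):
    C[k - 1, k - 1] = k * a          # coefficient of y_k
    if k < N:
        C[k - 1, k] = k * b          # coupling to y_{k+1}, dropped at k = N

y0 = np.array([u0 ** k for k in range(1, N + 1)])  # y_k(0) = u0**k
u_carleman = (expm(C * t) @ y0)[0]   # first component approximates u(t)

# Closed-form solution of du/dt = a*u + b*u**2 for comparison.
u_exact = 1.0 / ((1.0 / u0 + b / a) * np.exp(-a * t) - b / a)

print(f"Carleman (N={N}): {u_carleman:.6f}   exact: {u_exact:.6f}")
```

The truncation error shrinks rapidly with N roughly when |b*u0| < |a|, mirroring the R < 1 convergence condition of Liu et al. (2021); a quantum algorithm would evolve the truncated linear system with a quantum linear-ODE solver rather than computing expm classically.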
