TCS+ Talk: Richard Peng
Title: Solving Sparse Linear Systems Faster than Matrix Multiplication
Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special case of graph-structured linear systems, in the general setting the bit complexity of solving an n-by-n linear system Ax=b is n^omega, where omega ≈ 2.372864 is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly(n) condition number.
We present an algorithm that solves linear systems in sparse matrices asymptotically faster than matrix multiplication for any omega > 2. This speedup holds for any input matrix A with o(n^(omega-1) / log(kappa(A))) non-zeros, where kappa(A) is the condition number of A. For poly(n)-conditioned matrices with O(n) non-zeros and the current value of omega, the bit complexity of our algorithm to solve to within any 1/poly(n) error is O(n^2.331645).
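As a quick illustration of the quantities in the statement (this is a minimal numpy sketch, not part of the talk's algorithm): the condition number kappa(A) is the ratio of extreme singular values, and for a poly(n)-conditioned matrix log kappa(A) = O(log n), so the sparsity threshold o(n^(omega-1) / log kappa(A)) is n^(omega-1) up to logarithmic factors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))

# kappa(A) = sigma_max / sigma_min, the condition number appearing in
# the sparsity threshold o(n^(omega-1) / log kappa(A)).
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma[0] / sigma[-1]
assert np.isclose(kappa, np.linalg.cond(A))

# With omega ~ 2.372864, the threshold is roughly n^1.372864 non-zeros
# (up to the log kappa factor), well below the n^2 entries of a dense matrix.
omega = 2.372864
print(f"kappa(A) = {kappa:.2f}, n^(omega-1) = {n ** (omega - 1):.1f}")
```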
Our algorithm can be viewed as an efficient randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by the algorithm of [Eberly-Giesbrecht-Giorgi-Storjohann-Villard, ISSAC '06, '07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest eigenvalue gap of semi-random matrices.
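To make the block Krylov idea concrete, here is a minimal dense numpy sketch: build the basis [B, AB, A^2 B, ..., A^(m-1) B] from a random n-by-s sketch B and solve the projected least-squares problem. The test matrix, block sizes, and direct lstsq solve are illustrative assumptions; the talk's actual algorithm instead exploits the low-displacement-rank structure of the resulting block-Hankel system to beat matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, m = 60, 10, 6  # dimension, block size, Krylov steps (s*m >= n)

# Well-conditioned SPD test matrix with eigenvalues in [1, 2]
# (illustration only; not the sparse inputs the result addresses).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 2.0, n)) @ Q.T
b = rng.standard_normal(n)

# Block Krylov basis: columns [B, AB, ..., A^(m-1) B] for a random
# n-by-s sketch B.
B = rng.standard_normal((n, s))
blocks, K = B, [B]
for _ in range(m - 1):
    blocks = A @ blocks
    K.append(blocks)
K = np.hstack(K)  # n x (s*m) Krylov basis

# Solve min_y ||(A K) y - b||_2 and lift back: x = K y.
y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x = K @ y
print("residual:", np.linalg.norm(A @ x - b))  # small when K spans R^n
```

For a generic sketch B the basis K has full rank once s*m >= n, so the lifted solution x solves Ax = b up to rounding error; the speedup in the talk comes from never forming or solving this system densely.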