Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition. / Bartels, Simon; Boomsma, Wouter; Frellsen, Jes; Garreau, Damien.

In: Journal of Machine Learning Research, Vol. 24, 71, 2023, p. 1-57.


Harvard

Bartels, S, Boomsma, W, Frellsen, J & Garreau, D 2023, 'Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition', Journal of Machine Learning Research, vol. 24, 71, pp. 1-57. <https://www.jmlr.org/papers/v24/21-0781.html>

APA

Bartels, S., Boomsma, W., Frellsen, J., & Garreau, D. (2023). Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition. Journal of Machine Learning Research, 24, 1-57. [71]. https://www.jmlr.org/papers/v24/21-0781.html

Vancouver

Bartels S, Boomsma W, Frellsen J, Garreau D. Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition. Journal of Machine Learning Research. 2023;24:1-57. 71.

Author

Bartels, Simon ; Boomsma, Wouter ; Frellsen, Jes ; Garreau, Damien. / Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition. In: Journal of Machine Learning Research. 2023 ; Vol. 24. pp. 1-57.

Bibtex

@article{b0612f01f41944309562c2a03eaf6cef,
title = "Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition",
abstract = "Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while rarely exceeding an overhead of more than 5% when not stopping early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.",
author = "Simon Bartels and Wouter Boomsma and Jes Frellsen and Damien Garreau",
year = "2023",
language = "English",
volume = "24",
pages = "1--57",
journal = "Journal of Machine Learning Research",
issn = "1533-7928",
publisher = "MIT Press",
}
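The abstract above describes estimating the log-determinant of a kernel matrix from only a leading sub-matrix, by running the Cholesky decomposition row by row and stopping early. As a minimal illustrative sketch (not the paper's estimator or stopping rule): the exact log-determinant is the sum of `2*log(L[i, i])` over the Cholesky diagonal, and a naive linear extrapolation of the partial sum can serve as a placeholder stopping criterion; the paper instead derives probabilistic guarantees on the relative error.

```python
import numpy as np

def stopped_logdet_estimate(K, rel_tol=0.05, min_steps=10):
    """Hedged sketch of a stopped-Cholesky log-determinant estimate.

    Runs the Cholesky decomposition one row at a time. After row i,
    the exact partial sum of 2*log(L[i, i]) is linearly extrapolated
    to the full size n; we stop when consecutive extrapolations agree
    to within rel_tol. (The paper's stopping strategy is probabilistic
    and comes with relative-error guarantees; this criterion is only a
    stand-in for illustration.)
    """
    n = K.shape[0]
    L = np.zeros_like(K, dtype=float)
    partial = 0.0
    prev_est = None
    for i in range(n):
        # Standard Cholesky update for row i.
        for j in range(i + 1):
            s = K[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(s)
            else:
                L[i, j] = s / L[j, j]
        partial += 2.0 * np.log(L[i, i])
        est = partial * n / (i + 1)  # naive linear extrapolation
        if i + 1 >= min_steps and prev_est is not None:
            if abs(est - prev_est) <= rel_tol * abs(est):
                return est, i + 1  # stopped early after i + 1 rows
        prev_est = est
    return partial, n  # processed the whole matrix: exact log-det
```

When the diagonal contributions are constant (e.g. the identity matrix) the estimate stabilises immediately and the loop stops after `min_steps` rows; on an unstructured matrix the loop runs to completion and returns the exact log-determinant.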

RIS

TY - JOUR

T1 - Kernel-Matrix Determinant Estimates from stopped Cholesky Decomposition

AU - Bartels, Simon

AU - Boomsma, Wouter

AU - Frellsen, Jes

AU - Garreau, Damien

PY - 2023

Y1 - 2023

N2 - Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while rarely exceeding an overhead of more than 5% when not stopping early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.

AB - Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant from only a sub-matrix, with probabilistic guarantee on the relative error. We present an augmentation of the Cholesky decomposition that stops under certain conditions before processing the whole matrix. Experiments demonstrate that this can save a considerable amount of time while rarely exceeding an overhead of more than 5% when not stopping early. More generally, we present a probabilistic stopping strategy for the approximation of a sum of known length where addends are revealed sequentially. We do not assume independence between addends, only that they are bounded from below and decrease in conditional expectation.

M3 - Journal article

VL - 24

SP - 1

EP - 57

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1533-7928

M1 - 71

ER -
