Noise cleaning the precision matrix of short time series.
Miguel Ibáñez-Berganza (Networks Unit, IMT School for Advanced Studies Lucca and Istituto Italiano di Tecnologia, Napoli), Carlo Lucibello (AI Lab, Institute for Data Science and Analytics, Bocconi University, Milano), Francesca Santucci and Tommaso Gili (Networks Unit, IMT School for Advanced Studies Lucca), Andrea Gabrielli (Dipartimento di Ingegneria Civile, Informatica e delle Tecnologie Aeronautiche, Università degli Studi Roma Tre and Centro Ricerche Enrico Fermi, Rome)
We present a comparison between various algorithms for the inference of covariance and precision matrices from small data sets of real vectors, of the typical length and dimension of human brain activity time series retrieved by functional magnetic resonance imaging (fMRI). Assuming a Gaussian model underlying the neural activity, the problem is that of denoising the empirically observed matrices to obtain a better estimator of the (unknown) true precision and covariance matrices. We consider several standard noise-cleaning algorithms and compare them on two types of data sets. The first type consists of synthetic time series sampled from a generative Gaussian model, in which we vary the ratio of dimensions to samples, q, and the strength of the off-diagonal correlations. The second type consists of time series of fMRI brain activity of human subjects at rest. The reliability of each algorithm is assessed in terms of test-set likelihood and, in the case of synthetic data, of the distance from the true precision matrix. We observe that the so-called optimal rotationally invariant estimator, based on random matrix theory, leads to a significantly lower distance from the true precision matrix in synthetic data and to a higher test likelihood in natural fMRI data. We propose a variant of the optimal rotationally invariant estimator in which one of its parameters is optimised by cross-validation. It outperforms all the other estimators in the severe undersampling regime (large q) typical of fMRI series. We also propose a simple algorithm based on an iterative likelihood gradient ascent, leading to very accurate estimates on weakly correlated synthetic data sets.
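The cross-validation protocol described above can be illustrated with a minimal, self-contained sketch. The code below is not the paper's rotationally invariant estimator: as a simple stand-in cleaning scheme it uses linear shrinkage of the empirical covariance towards a scaled identity, with the shrinkage parameter chosen by maximising the Gaussian likelihood on held-out data, and then measures the Frobenius distance of the resulting precision matrix from the true one on synthetic Gaussian data. All dimensions, sample sizes, and the shrinkage family are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n dimensions, t samples, split in half (train q = n/(t/2) = 0.5).
n, t = 50, 200

# A "true" covariance with moderate off-diagonal correlations (Wishart-like construction).
A = rng.normal(size=(n, 4 * n))
C_true = A @ A.T / (4 * n)

# Synthetic time series sampled from the generative Gaussian model.
X = rng.multivariate_normal(np.zeros(n), C_true, size=t)
X_train, X_test = X[: t // 2], X[t // 2:]

def gauss_loglik(C, Y):
    """Average Gaussian log-likelihood of the rows of Y under N(0, C)."""
    _, logdet = np.linalg.slogdet(C)
    P = np.linalg.inv(C)
    quad = np.einsum("ij,jk,ik->i", Y, P, Y).mean()
    return -0.5 * (logdet + quad + Y.shape[1] * np.log(2 * np.pi))

E = np.cov(X_train, rowvar=False, bias=True)  # noisy empirical covariance

# Linear shrinkage towards the scaled identity; the cleaning parameter alpha
# is selected by cross-validation, i.e. by the test-set likelihood.
alphas = np.linspace(0.0, 1.0, 21)
target = np.trace(E) / n * np.eye(n)
scores = [gauss_loglik((1 - a) * E + a * target, X_test) for a in alphas]
a_best = alphas[int(np.argmax(scores))]
C_clean = (1 - a_best) * E + a_best * target

# Distance of the inferred precision matrix from the true one.
P_true = np.linalg.inv(C_true)
d_raw = np.linalg.norm(np.linalg.inv(E) - P_true)
d_clean = np.linalg.norm(np.linalg.inv(C_clean) - P_true)
print(f"alpha* = {a_best:.2f}, raw distance = {d_raw:.1f}, cleaned distance = {d_clean:.1f}")
```

In this undersampled regime the validation likelihood typically selects a nonzero shrinkage, and the cleaned precision matrix lands closer to the truth than the inverse of the raw empirical covariance, which is the qualitative effect the paper quantifies for its estimators.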
Read the full article in Physical Review E 108, 024313 (2023).