Teodor Bucht: Random matrices and motivations from high dimensional statistics

Time: Thu 2026-03-12 15.15 - 16.15

Location: 3721

Consider a vector of random signals for which we wish to know the covariance matrix. The classical solution to this problem is to sample the random vector independently and compute the sample covariance matrix. The law of large numbers says that the sample covariance matrix converges to the underlying unknown covariance matrix. This approach works well in the classical regime, where the number of samples is much larger than the number of signals. In the high-dimensional regime, the number of samples is proportional to the number of signals, and the law of large numbers fails. The simplest high-dimensional case to study is when the underlying covariance matrix is the identity; in this case the eigenvalue distribution of the sample covariance matrix converges to the Marchenko–Pastur law. I will sketch the proof of the convergence to the Marchenko–Pastur law and highlight some classic arguments used in random matrix theory. Another important application is detecting signals affected by random noise; in relation to this problem I will introduce a spiked random matrix model and discuss the BBP transition.
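The phenomena in the abstract can be seen numerically. The sketch below (not from the talk; the sample sizes, spike strength, and seed are illustrative assumptions) simulates the sample covariance matrix of standard Gaussian data, checks that its eigenvalues concentrate on the Marchenko–Pastur support, and then raises one eigenvalue of the true covariance above the BBP threshold so that a single sample eigenvalue detaches from the bulk.

```python
import numpy as np

# Illustrative sketch: Marchenko-Pastur bulk and the BBP transition.
# All parameters below are assumptions chosen for demonstration.
rng = np.random.default_rng(0)
n, p = 4000, 1000              # number of samples and number of signals
gamma = p / n                  # aspect ratio, held fixed in the high-dim regime

# --- Null case: true covariance is the identity ---
X = rng.standard_normal((n, p))
S = X.T @ X / n                # sample covariance matrix
eigs = np.linalg.eigvalsh(S)

# Marchenko-Pastur law: the spectrum fills
# [(1 - sqrt(gamma))^2, (1 + sqrt(gamma))^2] = [0.25, 2.25] for gamma = 0.25.
mp_lower = (1 - np.sqrt(gamma)) ** 2
mp_upper = (1 + np.sqrt(gamma)) ** 2

# --- Spiked case: one true eigenvalue ell exceeds the threshold 1 + sqrt(gamma) ---
ell = 5.0                      # spike strength (assumed value)
scales = np.ones(p)
scales[0] = np.sqrt(ell)       # true covariance = diag(ell, 1, ..., 1)
Xs = rng.standard_normal((n, p)) * scales
top = np.linalg.eigvalsh(Xs.T @ Xs / n)[-1]

# BBP: for ell > 1 + sqrt(gamma), the top sample eigenvalue separates from
# the bulk and converges to ell * (1 + gamma / (ell - 1)).
predicted_top = ell * (1 + gamma / (ell - 1))
print(f"bulk in [{eigs.min():.3f}, {eigs.max():.3f}], MP edges [{mp_lower}, {mp_upper}]")
print(f"top spiked eigenvalue {top:.3f}, BBP prediction {predicted_top:.4f}")
```

Below the threshold 1 + sqrt(gamma), the spiked eigenvalue would stay hidden inside the bulk, which is what makes signal detection in noise delicate in this regime.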