
Andreas Karlsson: Machine Learning in Kreĭn Spaces: An Exposition to the Main Representer Theorems

Bachelor Thesis presentation

Time: Thu 2025-12-11, 13:00–14:00

Location: Mittag-Lefflerrummet (meeting room 16), Albano, House 1, Floor 3

Respondent: Andreas Karlsson

Supervisor: Annemarie Luger (SU)


Abstract:

Kernel methods are powerful in machine learning because they enable nonlinear learning with linear-algebraic tools. Yet most of the theory is built around positive semidefinite kernels and Hilbert-space geometry. This expository thesis explains how learning with indefinite kernels can be made rigorous and tractable in reproducing kernel Kreĭn spaces (RKKS). It builds a concise pathway from the geometry of Kreĭn spaces (fundamental decompositions and fundamental symmetries, the strong topology via the associated Hilbert space) to reproducing kernels, for which point evaluations are continuous and the kernel-induced form coincides with the RKKS inner product. On this foundation we synthesize two RKKS formulations of regularized empirical risk minimization: (1) a stabilization setup with a weak representer theorem, where stationary solutions lie in the data span, and (2) a variance-constrained minimization setup with a strong representer theorem, where optimizers lie in the span under strong-topology Tikhonov regularization. In both cases the representer theorems collapse the infinite-dimensional problems to finite-dimensional Gram-matrix algebra, carrying the kernel trick over to the indefinite setting. A key subtlety is that, because the Kreĭn inner product is indefinite, naïve objectives can be unbounded below; the stabilization and strong-topology regularization address exactly this. The exposition thereby clarifies when indefinite kernels can be used reliably in concrete supervised learning problems.
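
To make the Gram-matrix collapse concrete, here is a minimal worked sketch in standard representer-theorem notation; the particular objective, the loss L, and the regularization weight \lambda are illustrative assumptions, not formulas quoted from the thesis. Given data (x_1, y_1), ..., (x_n, y_n), an indefinite kernel K on the RKKS \mathcal{K}, and Gram matrix G_{ij} = K(x_i, x_j), a representer theorem places the relevant solutions in the data span,

    f = \sum_{i=1}^{n} \alpha_i \, K(\cdot, x_i),

so a regularized empirical risk of the form

    \min_{f \in \mathcal{K}} \; \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda \, \langle f, f \rangle_{\mathcal{K}}

reduces to a finite-dimensional problem over \alpha \in \mathbb{R}^n:

    \min_{\alpha \in \mathbb{R}^n} \; \sum_{i=1}^{n} L\bigl(y_i, (G\alpha)_i\bigr) + \lambda \, \alpha^{\top} G \alpha.

Because G need not be positive semidefinite, \alpha^{\top} G \alpha can be negative and this naïve objective unbounded below; replacing minimization by stabilization (seeking stationary points) or regularizing with the strong-topology norm of the associated Hilbert space is exactly what the two setups above do.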