In this article, we show that the notion of Tikhonov well-posedness is
well suited to the study of supervised learning under a wide range of loss functions.
More precisely, supervised learning can be studied from the perspective of
variational systems, in which one deals with the stability properties of a family
of optimization problems. In particular, we prove that the problem
of consistency is related to the Attouch–Wets convergence of a sequence of
perturbed functionals. Our aim is to understand the potential benefits of
applying variational convergence methods to learning theory.
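For orientation, the following is a minimal sketch of the two standard notions invoked above; the metric space setting and the notation are illustrative assumptions, not taken from the paper itself.

% Sketch of the standard definitions (illustrative notation)
Let $(X,d)$ be a metric space and $f \colon X \to \mathbb{R} \cup \{+\infty\}$.
The problem $\min_{x \in X} f(x)$ is said to be \emph{Tikhonov well-posed} if it
admits a unique minimizer $x^{*}$ and every minimizing sequence converges to it:
\[
  f(x_{n}) \to \inf_{X} f \quad \Longrightarrow \quad x_{n} \to x^{*}.
\]
A sequence of functions $f_{n}$ converges to $f$ in the \emph{Attouch--Wets} sense
if the distance functions to their epigraphs converge uniformly on bounded sets:
\[
  \sup_{z \in B} \bigl| d\bigl(z, \operatorname{epi} f_{n}\bigr)
  - d\bigl(z, \operatorname{epi} f\bigr) \bigr| \;\to\; 0
  \qquad \text{for every bounded } B \subset X \times \mathbb{R}.
\]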