Learning from examples as an inverse problem
Year: 2005
Keywords: statistical learning, inverse problems, regularization theory, consistency
Authors: E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, F. Odone
Journal: Journal of Machine Learning Research
Volume: 6
Pages: 883-904
   
Abstract:
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless, the connection with inverse problems was considered only for the discrete (finite sample) problem, and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows us to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.
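As a minimal illustration of the regularized least-squares scheme the abstract refers to, the following sketch implements Tikhonov regularization in a reproducing kernel Hilbert space (kernel ridge regression): given a kernel matrix K over the sample, one solves (K + n*lambda*I)c = y and predicts with f(x) = sum_i c_i k(x_i, x). The Gaussian kernel, the toy data, and the hyperparameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gaussian kernel matrix between rows of X and rows of Z (an illustrative choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rls_fit(X, y, lam=1e-3, sigma=1.0):
    """Tikhonov-regularized least squares in an RKHS:
    solve (K + n*lam*I) c = y for the coefficient vector c."""
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def rls_predict(X_train, c, X_new, sigma=1.0):
    """Estimator f(x) = sum_i c_i k(x_i, x) evaluated at new points."""
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy regression problem: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

c = rls_fit(X, y)
pred = rls_predict(X, c, np.array([[0.0]]))
```

The factor n in front of the regularization parameter matches the common normalization in which the empirical risk is an average over the sample; increasing lam trades data fit for stability, which is exactly the regularization mechanism whose sample-to-population behavior the paper analyzes.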