M. Sugiyama (Germany and Japan)
Machine Learning, Supervised Learning, Test Error, Expected Test Error, Model Selection, Ridge Regression
In model selection procedures in supervised learning, a model is usually chosen so that the expected test error over all possible test input points is minimized. On the other hand, when the test input points (without output values) are available in advance, it is more effective to choose a model so that the test error only at the test input points at hand is minimized. In this paper, we follow this idea and derive an estimator of the test error at the given test input points for linear regression. Our estimator is proved to be unbiased for the test error at the given test input points under certain conditions. Through simulations with artificial and standard benchmark data sets, we show that the proposed method estimates the test error successfully and compares favorably with standard cross-validation and an empirical Bayesian method in ridge parameter selection.
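For context, the sketch below illustrates only the baseline setting the abstract refers to: ridge regression with the ridge parameter selected by standard cross-validation, evaluated at test input points that are assumed to be known in advance. It does not reproduce the paper's proposed test-error estimator; the data, candidate parameter grid, and sinc target are hypothetical choices for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Training data: noisy observations of a sinc target (illustrative only).
X_train = rng.uniform(-3, 3, size=(50, 1))
y_train = np.sinc(X_train).ravel() + 0.1 * rng.standard_normal(50)

# Test input points available in advance, without output values.
X_test = rng.uniform(-3, 3, size=(100, 1))

# Baseline: choose the ridge parameter by 5-fold cross-validation
# over a grid of candidate values.
alphas = np.logspace(-4, 2, 20)
cv_errors = [
    -cross_val_score(Ridge(alpha=a), X_train, y_train,
                     scoring="neg_mean_squared_error", cv=5).mean()
    for a in alphas
]
best_alpha = alphas[int(np.argmin(cv_errors))]

# Fit with the selected parameter and predict at the given test inputs.
model = Ridge(alpha=best_alpha).fit(X_train, y_train)
y_pred_at_test_inputs = model.predict(X_test)
print(f"alpha selected by 5-fold CV: {best_alpha:.4g}")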