Asymptotic optimality of generalized cross-validation for choosing the regularization parameter

Lukas, M.A. (1993) Asymptotic optimality of generalized cross-validation for choosing the regularization parameter. Numerische Mathematik, 66 (1). pp. 41-66.

Let f_nλ be the regularized solution of a general linear operator equation, Kf_0 = g, from discrete, noisy data y_i = g(x_i) + ε_i, i = 1, ..., n, where the ε_i are uncorrelated random errors. We consider the prominent method of generalized cross-validation (GCV) for choosing the crucial regularization parameter λ. The practical GCV estimate λ̂_V and its "expected" counterpart λ_V are defined as the minimizers of the GCV function V(λ) and of EV(λ), respectively, where E denotes expectation. We investigate the asymptotic performance of λ_V with respect to each of the following loss functions: the risk, an L2 norm on the output error Kf_nλ - g, and a whole class of stronger norms on the input error f_nλ - f_0. In the special cases of data smoothing and Fourier differentiation, it is known that, as n → ∞, λ_V is asymptotically optimal (ao) with respect to the risk criterion. We show this to be true in general, and also extend it to the L2 norm criterion. The asymptotic optimality is independent of the error variance, the ill-posedness of the problem and the smoothness index of the solution f_0. For the input error criterion, it is shown that λ_V is weakly ao for a certain class of f_0 if the smoothness of f_0 relative to the regularization space is not too high, but that otherwise λ_V is sub-optimal. This result is illustrated in the case of numerical differentiation.
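As an illustration of the method the abstract analyzes (not of the paper's general operator setting), the following minimal sketch computes the GCV function for finite-dimensional Tikhonov/ridge regularization and picks λ̂_V as its grid minimizer. The model, the helper names (`gcv_score`), and the synthetic data are our assumptions, chosen only to show the standard form V(λ) = (1/n)‖(I − A(λ))y‖² / [(1/n) tr(I − A(λ))]².

```python
import numpy as np

rng = np.random.default_rng(0)

def gcv_score(y, X, lam):
    """GCV function V(lambda) for ridge/Tikhonov regularization:
    V = (1/n)||(I - A)y||^2 / [(1/n) tr(I - A)]^2,
    where A(lambda) = X (X^T X + n*lam*I)^{-1} X^T is the influence matrix.
    (Illustrative finite-dimensional analogue, not the paper's setting.)"""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
    resid = y - A @ y
    denom = (np.trace(np.eye(n) - A) / n) ** 2
    return (resid @ resid / n) / denom

# Synthetic ill-conditioned problem: decaying singular values plus noise.
n, p = 100, 20
X = rng.normal(size=(n, p)) @ np.diag(1.0 / (1 + np.arange(p)) ** 2)
f0 = rng.normal(size=p)
y = X @ f0 + 0.1 * rng.normal(size=n)

# lambda_hat_V: the grid minimizer of V(lambda).
grid = np.logspace(-8, 1, 50)
scores = [gcv_score(y, X, lam) for lam in grid]
lam_hat = grid[int(np.argmin(scores))]
```

The hat-matrix trace in the denominator is what makes GCV rotation-invariant and free of the unknown error variance, which is consistent with the abstract's remark that the asymptotic optimality does not depend on the variance.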

Publication Type: Journal Article
Murdoch Affiliation: School of Mathematical and Physical Sciences
Publisher: Springer New York
Copyright: © 1993 Springer-Verlag