Boosted optimal weighted least-squares

12/15/2019
by Cécile Haberstich et al.

This paper is concerned with the approximation of a function u in a given approximation space V_m of dimension m from evaluations of the function at n suitably chosen points. The aim is to construct an approximation of u in V_m whose error is close to the best approximation error in V_m, while using as few evaluations as possible. Classical least-squares regression, which defines a projection in V_m from n random points, usually requires a large n to guarantee a stable approximation and an error close to the best approximation error. This is a major drawback for applications where u is expensive to evaluate. One remedy is to use a weighted least-squares projection based on n samples drawn from a properly selected distribution. In this paper, we introduce a boosted weighted least-squares method which ensures, almost surely, the stability of the weighted least-squares projection with a sample size close to the interpolation regime n = m. It consists in sampling according to a measure associated with the optimization of a stability criterion over a collection of independent n-samples, and resampling according to this measure until a stability condition is satisfied. A greedy method is then proposed to remove points from the obtained sample. Quasi-optimality properties are obtained for the weighted least-squares projection, with or without the greedy procedure. The proposed method is validated on numerical examples and compared to state-of-the-art interpolation and weighted least-squares methods.
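As an illustration of the ingredients named in the abstract, here is a minimal Python/NumPy sketch, not the authors' implementation: the optimal sampling measure dmu = (1/m) sum_j phi_j(x)^2 drho (taken here for orthonormal Legendre polynomials on [-1, 1] with rho uniform), the empirical stability criterion ||G - I|| for the weighted Gram matrix G, a boosted step that keeps the most stable of several independent n-samples and resamples until ||G - I|| <= delta, and a greedy removal of points that preserves the stability condition. The basis choice, the rejection sampler, the function names, and all parameters (delta, the candidate and resampling budgets) are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)

def phi(x, m):
    # First m Legendre polynomials, orthonormalized w.r.t. the uniform
    # measure rho on [-1, 1]: phi_j = sqrt(2j + 1) * P_j.
    V = leg.legvander(np.atleast_1d(x), m - 1)
    return V * np.sqrt(2 * np.arange(m) + 1)

def sample_optimal(n, m):
    # Rejection sampling from the optimal measure
    # dmu = (1/m) * sum_j phi_j(x)^2 drho(x),
    # using the bound sum_j phi_j(x)^2 <= m^2 for Legendre polynomials.
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0, size=4 * n)
        k = (phi(x, m) ** 2).sum(axis=1)          # inverse Christoffel function
        pts.extend(x[rng.uniform(size=x.size) < k / m**2])
    return np.array(pts[:n])

def stability(x, m):
    # Spectral-norm criterion ||G - I|| for the weighted empirical Gram
    # matrix G = (1/n) sum_i w(x_i) phi(x_i) phi(x_i)^T, w = m / sum_j phi_j^2.
    P = phi(x, m)
    w = m / (P ** 2).sum(axis=1)
    G = (P * w[:, None]).T @ P / len(x)
    return np.linalg.norm(G - np.eye(m), 2)

def boosted_sample(n, m, n_candidates=10, n_resample=100, delta=0.5):
    # Keep the most stable of n_candidates independent n-samples and
    # resample until the stability condition ||G - I|| <= delta holds.
    for _ in range(n_resample):
        best = min((sample_optimal(n, m) for _ in range(n_candidates)),
                   key=lambda x: stability(x, m))
        if stability(best, m) <= delta:
            return best
    raise RuntimeError("stability condition not met; increase n or delta")

def greedy_removal(x, m, delta=0.5):
    # Remove one point at a time, as long as the remaining sample
    # still satisfies the stability condition.
    x = np.asarray(x)
    while len(x) > m:
        s, i = min((stability(np.delete(x, i), m), i) for i in range(len(x)))
        if s > delta:
            break
        x = np.delete(x, i)
    return x

# Hypothetical usage: weighted least-squares projection of u(x) = exp(x)
# onto V_m, starting from n = 3m points and greedily shrinking the sample.
m = 10
x = greedy_removal(boosted_sample(3 * m, m), m)
P = phi(x, m)
sw = np.sqrt(m / (P ** 2).sum(axis=1))            # square roots of the weights
coeffs, *_ = np.linalg.lstsq(P * sw[:, None], sw * np.exp(x), rcond=None)
print(len(x), stability(x, m))
```

Resampling until ||G - I|| <= delta is what gives the almost-sure stability mentioned in the abstract; the choices delta = 0.5, ten candidate samples, and a budget of 100 resampling rounds in this sketch are arbitrary, not values from the paper.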
