Abstract
Suppose we are given a vector $f$ in a class $\mathcal{F} \subseteq \mathbb{R}^N$, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\varepsilon$ in the Euclidean ($\ell_2$) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the $n$th largest entry of the vector $|f|$ (or of its coefficients in a fixed basis) obeys $|f|_{(n)} \le R \cdot n^{-1/p}$, where $R > 0$ and $p > 0$. Suppose that we take measurements $y_k = \langle f, X_k \rangle$, $k = 1, \ldots, K$, where the $X_k$ are $N$-dimensional Gaussian vectors with independent standard normal entries. Then for each $f$ obeying the decay estimate above for some $0 < p < 1$ and with overwhelming probability, our reconstruction $f^\sharp$, defined as the minimizer of the $\ell_1$ norm subject to the constraints $y_k = \langle f^\sharp, X_k \rangle$, obeys $\|f - f^\sharp\|_{\ell_2} \le C_p \cdot R \cdot (K/\log N)^{-r}$, with $r = 1/p - 1/2$. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of $K$ measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of $f$. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
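For concreteness, here is a minimal numerical sketch of the recovery procedure described above: draw a Gaussian measurement ensemble, take $K$ measurements of a compressible vector whose sorted magnitudes obey the power-law decay, and recover it by $\ell_1$ minimization recast as a linear program. This is not the authors' code; the solver (scipy.optimize.linprog) and the values of $N$, $K$, $p$, and $R$ are illustrative assumptions, chosen only to mirror the "simple linear program" formulation in the abstract.

```python
# Sketch of l1-minimization recovery from Gaussian measurements.
# Illustrative parameters only; N, K, p, R are not values from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K = 128, 60                       # signal length, number of measurements
p, R = 0.5, 1.0                      # decay: |f|_(n) <= R * n**(-1/p)

# Compressible vector: sorted magnitudes follow the power-law decay estimate.
magnitudes = R * np.arange(1, N + 1) ** (-1.0 / p)
signs = rng.choice([-1.0, 1.0], size=N)
f = rng.permutation(magnitudes) * signs

# Gaussian measurement ensemble: each X_k has i.i.d. standard normal entries.
X = rng.standard_normal((K, N))
y = X @ f                            # y_k = <f, X_k>

# l1 minimization as a linear program over z = [g, t]:
#   minimize sum(t)  subject to  X g = y,  -t <= g <= t,  t >= 0.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])         #  g - t <= 0  and  -g - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])      #  X g = y
bounds = [(None, None)] * N + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
f_sharp = res.x[:N]

print("relative l2 error:", np.linalg.norm(f - f_sharp) / np.linalg.norm(f))
```

In practice one would typically use a dedicated $\ell_1$ or second-order cone solver rather than the explicit LP reformulation above; the reformulation is kept here only because it matches the linear-program description in the abstract.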