Colloquium




Abstract
 
The usual paradigm for signal processing is to model a signal as a bandlimited function and capture the signal by means of its time samples. The Shannon-Nyquist theorem says that the sampling rate must be at least twice the bandwidth. For broadband signals, such high sampling rates may be impossible to implement in circuitry.

Compressed Sensing is a new area of signal processing whose aim is to circumvent this dilemma by sampling signals closer to their information rate rather than their bandwidth. Rather than model the signal as bandlimited, Compressed Sensing assumes the signal can be represented or approximated by a few suitably chosen terms from a basis expansion. It also enlarges the concept of a sample to include the application of any linear functional to the signal.

We give a brief introduction to Compressed Sensing that centers on the effectiveness and implementation of random sampling. Working for simplicity with the Kronecker basis in R^N, this amounts to the problem of recovering a signal x ∈ R^N from a small number of measurements represented by a vector y = Φx ∈ R^n, where Φ is a randomly generated n × N matrix consisting, e.g., of independent Bernoulli or Gaussian entries. It is shown that for such matrices the orthogonal matching pursuit algorithm applied to the data y, using the columns of Φ as a dictionary, works in a nearly optimal way. This means that for a sufficiently large constant a, whenever n ≥ a k log N, the algorithm recovers an approximation x̂ to x such that, with overwhelming probability, ||x − x̂||_ℓ2 is within a fixed multiplicative constant of the error of best k-term approximation σ_k(x)_ℓ2.
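The setting above can be illustrated with a minimal NumPy sketch: a random Gaussian measurement matrix Φ, a k-sparse signal x in the standard (Kronecker) basis, and a greedy orthogonal matching pursuit loop that selects the column of Φ most correlated with the residual and re-fits by least squares. The sizes N, n, and k here are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension N, measurements n, sparsity k,
# chosen so that n comfortably exceeds k log N.
N, n, k = 256, 64, 5

# Random Gaussian measurement matrix with (approximately) unit-norm columns.
Phi = rng.standard_normal((n, N)) / np.sqrt(n)

# A k-sparse signal in the Kronecker (standard) basis.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = Phi @ x  # the n linear measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the current residual, then re-fit on the chosen
    columns by least squares."""
    residual = y.copy()
    S = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        S.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        residual = y - Phi[:, S] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[S] = coef
    return x_hat

x_hat = omp(Phi, y, k)
```

With these parameters, exact recovery of the support (and hence of x) succeeds with high probability, matching the "overwhelming probability" guarantee described above for exactly k-sparse signals.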

This is joint work with Albert Cohen and Ron DeVore.


For future talks or to be added to the mailing list: www.math.uh.edu/colloquium