One-bit quantization is the process of approximating continuous-valued signals by judiciously chosen bi-level (e.g., ±1) sequences so that only a small amount of error is incurred in a suitable subspace of interest (typically a low-frequency subspace). Digital halftoning algorithms used in printers and the analog-to-digital interface in many audio signal processing devices employ this type of encoding. Although the subject has a long and rich engineering history, the mathematical theory of one-bit quantization, especially from a rate-distortion point of view, is still incomplete.
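As a concrete illustration of the idea, the sketch below implements first-order sigma-delta modulation, a classic one-bit quantization scheme (this is a hypothetical example for intuition, not one of the schemes discussed in the talk): each sample is replaced by ±1 while a bounded internal state accumulates the quantization error, so that a low-pass filter recovers the signal with small error in the low-frequency band.

```python
import numpy as np

def sigma_delta_1bit(x):
    """Quantize samples x (with |x| <= 1) to a +/-1 sequence.

    First-order sigma-delta recursion:
        q_n = sign(u_{n-1} + x_n),   u_n = u_{n-1} + x_n - q_n,
    which keeps the state u_n bounded by 1 in absolute value.
    """
    q = np.empty_like(x)
    u = 0.0  # internal state: running quantization error
    for n, xn in enumerate(x):
        q[n] = 1.0 if u + xn >= 0 else -1.0
        u = u + xn - q[n]  # stays in [-1, 1] by induction
    return q

# Oversampled, slowly varying input signal.
N = 10_000
t = np.arange(N) / N
x = 0.5 * np.sin(2 * np.pi * 3 * t)

q = sigma_delta_1bit(x)

# Low-pass filter (simple moving average): the averaged error telescopes
# to (u_n - u_{n-W}) / W, hence is at most 2/W in absolute value.
window = 100
kernel = np.ones(window) / window
err = np.max(np.abs(np.convolve(x - q, kernel, mode="valid")))
print(err)  # small: the one-bit sequence tracks x in the low-frequency band
```

For this first-order scheme the filtered error decays only linearly in the oversampling rate; the exponential-accuracy constructions mentioned below require higher-order or more elaborate schemes.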
In this talk, we will survey recent progress in the approximation theory of one-bit quantization in various functional settings. Of particular concern will be the fundamental limits of resolution achievable by one-bit quantization, as well as the development and improvement of schemes that yield accuracy exponential in the oversampling rate for the class of bandlimited signals. We will also highlight some connections of this theory to other areas of mathematics.
---
Last modified: April 11 2016 - 18:14:43