Communications and Signal Processing Seminar

Data Compression at High Sampling Rates

David Neuhoff
Professor
University of Michigan - Department of EECS

When compressing (i.e., digitizing or source encoding) analog data (in time or space), the first thought is usually to sample at the lowest rate that enables reconstruction with the desired quality. However, whatever compression can be attained at a low sampling rate can also be attained at a higher sampling rate, with the potential advantages of simpler compression algorithms and robustness to sample loss. (Notice that adjacent samples become increasingly correlated as the sampling rate increases; that is, the data becomes increasingly redundant.)

Motivated originally by dense field-gathering sensor networks, this talk will describe the rate-distortion performance attainable by four source coding scenarios in the limit as the sampling rate increases. The first is uniform scalar quantization followed by lossless coding at the entropy rate, the second is ideal distributed lossy source coding, the third is transform coding with scalar quantization, and the last is coding with no constraints. In each case, the coding rate in bits per second needed to attain a given decoded quality is the product of an encoding rate in bits per sample (which goes to zero) and the sampling rate itself. It will be argued that in the limit of large sampling rate, the product attainable by the first system (the simplest) goes to infinity, while the products attainable by the others do not, and specific characterizations of their rate-distortion tradeoffs will be given for stationary Gaussian sources and mean-squared error distortion. Thus, while high-rate sampling with the simplicity of the first system is catastrophically bad, the other methods successfully mitigate the cost of a large sampling rate.
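
As a point of reference for the scenarios above, the following is a minimal sketch of two standard relations, assumed here rather than taken from the talk: the reverse water-filling characterization of the rate-distortion function of a stationary Gaussian source with power spectral density S(f) under mean-squared error, and the bits-per-second product noted in the abstract, where f_s denotes the sampling rate and r(f_s) an attainable encoding rate in bits per sample.

% A minimal sketch (standard results, not the talk's own characterizations):
% reverse water-filling for a stationary Gaussian source with power spectral
% density S(f) under mean-squared error, parameterized by a water level
% \theta > 0, plus the bits-per-second product from the abstract.
\begin{align*}
  R(\theta) &= \int_{-\infty}^{\infty}
      \max\!\Bigl(0,\ \tfrac{1}{2}\log_2\frac{S(f)}{\theta}\Bigr)\,df
      && \text{[bits per second]},\\
  D(\theta) &= \int_{-\infty}^{\infty} \min\bigl(\theta,\, S(f)\bigr)\,df
      && \text{[mean-squared error]},\\
  R_{\mathrm{bits/sec}}(f_s) &= f_s \cdot r(f_s)
      && \text{[sampling rate $\times$ bits per sample]}.
\end{align*}

The question the talk addresses can be read off the last line: whether the product f_s · r(f_s) stays bounded as f_s grows for each of the four coding systems.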

Sponsored by

University of Michigan, Department of Electrical Engineering & Computer Science