Electrical and Computer Engineering

Communications and Signal Processing Seminar

Capacity upper bounds for deletion-type channels

Mahdi Cheraghchi, Assistant Professor, University of Michigan, Department of Electrical Engineering and Computer Science
WHERE:
1005 EECS Building

Abstract

We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Beyond the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of the Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. We show the following:

  1. The capacity of the binary deletion channel with deletion probability $d$ is at most $(1-d)\log\varphi$ for $d > 1/2$, and, assuming the capacity function is convex, is at most $1-d\log(4/\varphi)$ for $d < 1/2$, where $\varphi=(1+\sqrt{5})/2$ is the golden ratio. This is the first nontrivial capacity upper bound for any value of $d$ outside the limiting case $d \to 0$ that is fully explicit and proved without computer assistance.
  2. We derive the first set of capacity upper bounds for the Poisson-repeat channel.
  3. We derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maxima of efficiently computable and concave univariate real functions over a bounded domain. In turn, we upper bound these functions in terms of explicit elementary and standard special functions, whose maxima can be found even more efficiently (and sometimes analytically, for example for $d=1/2$).
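The explicit bounds in item 1 can be evaluated directly. The sketch below is illustrative only (the function name is not from the talk), assumes capacity is measured in bits so that logarithms are base 2, and notes that the $d<1/2$ branch rests on the abstract's convexity assumption. A consistency check: the two branches agree at $d=1/2$, where both evaluate to $\frac{1}{2}\log_2\varphi \approx 0.347$ bits per channel use.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, phi = (1 + sqrt 5) / 2


def deletion_capacity_upper_bound(d: float) -> float:
    """Explicit upper bound (in bits/channel use) on the capacity of the
    binary deletion channel with deletion probability d, per the abstract.

    For d < 1/2 the bound assumes convexity of the capacity function.
    (Illustrative helper; base-2 logs are an assumption of this sketch.)
    """
    if not 0.0 <= d <= 1.0:
        raise ValueError("deletion probability must lie in [0, 1]")
    if d >= 0.5:
        return (1 - d) * math.log2(PHI)
    return 1 - d * math.log2(4 / PHI)


if __name__ == "__main__":
    for d in (0.25, 0.5, 0.75, 0.9):
        print(f"d = {d:.2f}: capacity <= {deletion_capacity_upper_bound(d):.4f} bits")
```

Note that for every $d$ the bound improves on the trivial erasure-channel bound of $1-d$, since $\log_2\varphi \approx 0.694 < 1$.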

Along the way, we develop several new techniques of potentially independent interest in information theory, probability, and mathematical analysis.

[Based on work published in ACM STOC 2018 and the Journal of the ACM]

 

Biography

Mahdi Cheraghchi is an Assistant Professor of EECS at the University of Michigan, Ann Arbor. Before joining U of M in 2019, he was on the faculty of Imperial College London, UK, and prior to that held post-doctoral appointments at UC Berkeley, MIT, CMU, UT Austin, as well as a visiting engineer position at Qualcomm. He obtained his Ph.D. in 2010 from EPFL, where he received the Patrick Denantes Memorial Prize for outstanding doctoral thesis. Mahdi is broadly interested in all theoretical aspects of computer science, especially the role of information and coding theory in cryptography, complexity, algorithms, and high-dimensional geometry. He is a senior member of the ACM and IEEE.

 

Sponsored by

ECE
KLA