Universal Gaussian Quantization With Side-Information Using Polar Lattices

Submitted by admin on Mon, 06/10/2024 - 05:00
We consider universal quantization with side information for Gaussian observations, where the side information is a noisy version of the sender's observation with noise variance unknown to the sender. In this paper, we propose a practical quantization scheme that is universally rate-optimal for all values of the unknown noise variance.

Strategic Successive Refinement With Interdependent Decoders Cost Functions

In decentralized and decision-oriented communication paradigms, autonomous devices strategically implement information compression policies. In this work, we study a strategic communication game between an encoder and two decoders. An i.i.d. information source, observed by the encoder, is transmitted to the decoders via two perfect links, one reaching the first decoder only and the other reaching both decoders, as in the successive refinement setup.

Compressing Multisets With Large Alphabets

Current methods that compress multisets at an optimal rate have computational complexity that scales linearly with the alphabet size, making them too slow to be practical in many real-world settings. We show how to convert a compression algorithm for sequences into one for multisets, in exchange for an additional complexity term that is quasi-linear in the sequence length. This allows us to compress multisets of exchangeable symbols at an optimal rate, with computational complexity decoupled from the alphabet size.
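The rate advantage of multiset coding over sequence coding comes from discarding ordering information: a multiset of $n$ symbols with multiplicities $m_{1},\ldots,m_{k}$ corresponds to $n!/(m_{1}!\cdots m_{k}!)$ distinct sequences, so an optimal multiset code saves the base-2 log of that count relative to an optimal sequence code. A minimal sketch of this saving (an illustration of the rate gap only, not the paper's conversion algorithm):

```python
import math
from collections import Counter

def multiset_rate_saving_bits(symbols):
    """Bits saved by encoding the multiset of `symbols` rather than the
    ordered sequence: log2(n! / prod over symbols of multiplicity!)."""
    counts = Counter(symbols)
    n = len(symbols)
    log_perms = math.lgamma(n + 1)          # ln(n!)
    for m in counts.values():
        log_perms -= math.lgamma(m + 1)     # subtract ln(m_k!)
    return log_perms / math.log(2)          # convert nats to bits

# The multiset {a, a, b, c} has 4!/2! = 12 orderings, so ~3.58 bits saved.
print(multiset_rate_saving_bits("aabc"))
```

This quantity is exactly the gap the paper closes while keeping complexity independent of the alphabet size.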

Tail Redundancy and its Characterization of Compression of Memoryless Sources

We formalize the tail redundancy of a collection $\mathcal{P}$ of distributions over a countably infinite alphabet, and show that this fundamental quantity characterizes the asymptotic per-symbol minimax redundancy of universally compressing sequences generated i.i.d. from $\mathcal{P}$. Contrary to the worst-case formulations of universal compression, a finite single-letter minimax (average-case) redundancy of $\mathcal{P}$ does not automatically imply that the expected minimax redundancy of describing length-$n$ strings sampled i.i.d.

Local Decoding in Distributed Compression

A recent result shows that the lossless compression of a single source $X^{n}$ is achievable with a strong locality property: any $X_{i}$ can be decoded from a constant number of compressed bits, with a probability of error that vanishes in $n$. By contrast, we show that for two separately encoded sources $(X^{n},Y^{n})$, lossless compression and strong locality are generally not possible. Specifically, we show that for the class of “confusable” sources, strong locality cannot be achieved whenever one of the sources is compressed below its entropy.

Theoretical Perspectives on Deep Learning Methods in Inverse Problems

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has predominantly been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical problems. In this paper, we survey some of the prominent theoretical developments in this line of work, focusing in particular on generative priors, untrained neural network priors, and unfolding algorithms.

A Universal Low Complexity Compression Algorithm for Sparse Marked Graphs

Many modern applications involve accessing and processing graphical data, i.e., data that is naturally indexed by graphs. Examples come from Internet graphs, social networks, genomics and proteomics, and other sources. The typically large size of such data motivates seeking efficient methods for its compression and decompression. Current compression methods are usually tailored to specific models or do not provide theoretical guarantees.

Information Leakage in Index Coding With Sensitive and Nonsensitive Messages

Index coding can be viewed as a compression problem with multiple decoders, each having side information. In such a setup, an encoder compresses a number of messages into a common codeword such that every decoder can decode its requested messages with the help of knowing some other messages as side information. In this paper, we study how much information is leaked to a guessing adversary observing the codeword in index coding, where some messages in the system are sensitive and others are not.

TurbuGAN: An Adversarial Learning Approach to Spatially-Varying Multiframe Blind Deconvolution With Applications to Imaging Through Turbulence

We present a self-supervised and self-calibrating multi-shot approach to imaging through atmospheric turbulence, called TurbuGAN. Our approach requires no paired training data, adapts itself to the distribution of the turbulence, leverages domain-specific data priors, and can generalize from tens to thousands of measurements. We achieve such functionality through an adversarial sensing framework adapted from CryoGAN (Gupta et al. 2021), which uses a discriminator network to match the distributions of captured and simulated measurements.

Efficient Representation of Large-Alphabet Probability Distributions

A number of engineering and scientific problems require representing and manipulating probability distributions over large alphabets, which we may think of as long vectors of reals summing to 1. In some cases it is required to represent such a vector with only $b$ bits per entry. A natural choice is to partition the interval $[0,1]$ into $2^{b}$ uniform bins and quantize each entry independently to its bin.
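As an illustration of that natural-but-naive choice (a sketch of the baseline scheme only, not the representation the paper proposes): each entry is independently mapped to one of the $2^{b}$ uniform bins covering $[0,1]$ and reconstructed as its bin's midpoint. Note that the reconstructed entries need not sum to 1, one of the shortcomings that motivates better representations.

```python
def quantize_uniform(p, b):
    """Map each probability in p to one of 2**b uniform bins on [0,1],
    returning bin indices; entries are quantized independently."""
    nbins = 1 << b
    # floor(p_i * 2^b), clamped so p_i == 1.0 lands in the top bin
    return [min(int(x * nbins), nbins - 1) for x in p]

def dequantize_uniform(idx, b):
    """Reconstruct each entry as the midpoint of its bin."""
    nbins = 1 << b
    return [(i + 0.5) / nbins for i in idx]

p = [0.5, 0.25, 0.125, 0.125]
idx = quantize_uniform(p, b=3)          # bins of width 1/8
p_hat = dequantize_uniform(idx, b=3)
print(idx, sum(p_hat))                  # reconstruction need not sum to 1
```

Here the input sums to 1 but the midpoint reconstruction sums to 1.25, since each entry incurs up to half a bin width of independent error.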