Neural Distributed Source Coding

Submitted by admin on Sat, 06/15/2024 - 08:33
We consider the Distributed Source Coding (DSC) problem, the task of encoding an input when correlated side information is available only to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. This seminal result was later extended to lossy compression of distributed sources by Wyner, Ziv, Berger, and Tung.
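The Slepian–Wolf theorem is usually stated as a rate region. For two correlated sources X and Y encoded separately at rates R_X and R_Y, lossless recovery at a joint decoder is possible if and only if

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y).
```

In particular, when Y is available at the decoder (e.g., encoded at rate R_Y = H(Y)), X can be compressed down to the conditional entropy H(X | Y), exactly the rate achievable if the encoder could see Y.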

Secure Source Coding Resilient Against Compromised Users via an Access Structure

Submitted by admin on Tue, 06/11/2024 - 10:37
Consider a source and multiple users who observe i.i.d. copies of correlated Gaussian random variables. The source wishes to compress its observations and store the result in a public database such that (i) authorized sets of users are able to reconstruct the source with a certain distortion level, and (ii) information leakage to non-authorized sets of colluding users is minimized. In other words, recovery of the source is restricted to a predefined access structure.

Information-Theoretic Tools to Understand Distributed Source Coding in Neuroscience

Submitted by admin on Tue, 06/11/2024 - 10:37
This paper brings together topics of two of Berger’s main contributions to information theory: distributed source coding, and living information theory. Our goal is to understand which information theory techniques can be helpful in understanding a distributed source coding strategy used by the natural world. Towards this goal, we study the example of the encoding of location of an animal by grid cells in its brain.

Fast Variational Inference for Joint Mixed Sparse Graphical Models

Submitted by admin on Mon, 06/10/2024 - 05:00
Mixed graphical models are widely used to capture interactions among different types of variables. To simultaneously learn the topology of multiple mixed graphical models and encourage common structure, a variational maximum-likelihood inference approach has been developed that takes advantage of the log-determinant relaxation. In this article, we further improve the computational efficiency of this method by exploiting the block-diagonal structure of the solution.

Generalized Autoregressive Linear Models for Discrete High-Dimensional Data

Submitted by admin on Mon, 06/10/2024 - 05:00
Fitting multivariate autoregressive (AR) models is fundamental for time-series data analysis in a wide range of applications. This article considers the problem of learning a p-lag multivariate AR model where each time step involves a linear combination of the past p states followed by a probabilistic, possibly nonlinear, mapping to the next state. The problem is to learn the linear connectivity tensor from observations of the states. We focus on the sparse setting, which arises in applications with a limited number of direct connections between variables.
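A p-lag sparse AR fit of this kind can be illustrated with a short sketch. The code below is not the estimator from the paper; it is a generic l1-penalized least-squares fit (ISTA) of the linear connectivity weights, with the function name and defaults chosen for illustration.

```python
import numpy as np

def fit_sparse_ar(X, p, lam=0.1, iters=200):
    """Fit a p-lag linear AR model with an l1 penalty via ISTA.

    X: (T, d) array of observed states. Returns W of shape (d, p*d)
    so that X[t] is approximated by W @ concat(X[t-1], ..., X[t-p]).
    Illustrative sketch only, not the paper's estimator.
    """
    T, d = X.shape
    # Design matrix: each row stacks the p previous states.
    Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])  # (T-p, p*d)
    Y = X[p:]                                                  # (T-p, d)
    W = np.zeros((d, p * d))
    step = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)           # 1/Lipschitz constant
    for _ in range(iters):
        grad = (W @ Z.T - Y.T) @ Z          # gradient of 0.5 * ||Y - Z W^T||_F^2
        W = W - step * grad
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0)  # soft-threshold
    return W
```

With a small penalty, the soft-thresholding step drives entries of W that are not supported by the data toward exactly zero, which is what makes the sparse setting (few direct connections) identifiable from limited observations.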

rTop-k: A Statistical Estimation Approach to Distributed SGD

Submitted by admin on Mon, 06/10/2024 - 05:00
The large communication cost for exchanging gradients between different nodes significantly limits the scalability of distributed training for large-scale learning models. Motivated by this observation, there has been significant recent interest in techniques that reduce the communication cost of distributed Stochastic Gradient Descent (SGD), with gradient sparsification techniques such as top-k and random-k shown to be particularly effective.
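The two baseline sparsifiers mentioned above are simple to state in code. The sketch below shows a generic top-k (keep the k largest-magnitude coordinates) and random-k (keep k random coordinates, rescaled to remain unbiased); it is a minimal illustration, not the rTop-k scheme itself.

```python
import numpy as np

def top_k_sparsify(g, k):
    """Keep the k largest-magnitude gradient coordinates, zero the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def random_k_sparsify(g, k, rng):
    """Keep k uniformly random coordinates, rescaled by d/k so that
    the sparsified gradient is unbiased: E[out] = g."""
    d = g.size
    out = np.zeros_like(g)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = g[idx] * (d / k)
    return out
```

Only the k retained (index, value) pairs need to be communicated per node, which is the source of the bandwidth savings in distributed SGD.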

A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting

Submitted by admin on Mon, 06/10/2024 - 05:00
We consider a finite-armed structured bandit problem in which mean rewards of different arms are known functions of a common hidden parameter θ*. Since we do not place any restrictions on these functions, the problem setting subsumes several previously studied frameworks that assume linear or invertible reward functions. We propose a novel approach to gradually estimate the hidden θ* and use the estimate together with the mean reward functions to substantially reduce exploration of sub-optimal arms.
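The estimate-then-exploit idea can be sketched in a few lines. The code below is a hypothetical, greedy illustration (not the paper's algorithm): fit θ by count-weighted least squares against the empirical arm means over a grid, then play the arm that is best under the current estimate. The function and argument names are assumptions for the sketch.

```python
import numpy as np

def structured_bandit(reward_fns, pull, theta_grid, horizon):
    """Greedy sketch of a structured bandit: estimate the hidden
    parameter theta* from empirical arm means, then play the arm
    that looks best under the estimate.

    reward_fns: list of callables f_i(theta) giving each arm's mean.
    pull(i): returns a noisy reward sample from arm i.
    """
    K = len(reward_fns)
    counts = np.zeros(K)
    sums = np.zeros(K)
    for t in range(horizon):
        if t < K:                      # pull every arm once to initialize
            arm = t
        else:
            mu_hat = sums / counts
            # theta estimate: count-weighted squared error over the grid
            errs = [np.sum(counts * (mu_hat -
                    np.array([f(th) for f in reward_fns])) ** 2)
                    for th in theta_grid]
            theta_hat = theta_grid[int(np.argmin(errs))]
            arm = int(np.argmax([f(theta_hat) for f in reward_fns]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return sums.sum()
```

Because every arm's mean constrains the same θ*, pulls of any arm sharpen the estimate for all arms at once; a purely greedy version like this can get stuck, which is why the paper pairs the estimate with classical bandit exploration rather than argmax alone.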

Minimax Estimation of Divergences Between Discrete Distributions

Submitted by admin on Mon, 06/10/2024 - 05:00
We study the minimax estimation of α-divergences between discrete distributions for integer α ≥ 1, which include the Kullback–Leibler divergence and the χ²-divergence as special cases. Dropping the usual theoretical tricks to acquire independence, we construct the first minimax rate-optimal estimator which does not require any Poissonization, sample splitting, or explicit construction of approximating polynomials.

Convex Parameter Recovery for Interacting Marked Processes

Submitted by admin on Mon, 06/10/2024 - 05:00
We introduce a new general modeling approach for multivariate discrete event data with categorical interacting marks, which we refer to as marked Bernoulli processes. In the proposed model, the probability of an event of a specific category occurring at a location may be influenced by past events at this and other locations. We do not restrict interactions to be positive or decaying over time, as is commonly assumed, allowing us to capture an arbitrary shape of influence from historical events, locations, and events of different categories.
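A process of this flavor is easy to simulate. The sketch below is a hypothetical one-step-memory variant (the paper allows arbitrary history dependence): each (location, category) pair fires with a probability given by a logistic link over a signed influence tensor, so excitatory and inhibitory interactions are both allowed.

```python
import numpy as np

def simulate_marked_bernoulli(W, b, T, rng):
    """Simulate a simplified marked Bernoulli process: at each step,
    each (location, category) pair fires independently with probability
    sigmoid(b + influence of the previous step's events).

    W[l, c, l2, c2]: influence of an event at (l2, c2) on (l, c);
    entries may be positive (excitatory) or negative (inhibitory).
    b: (L, C) baseline log-odds.  Illustrative sketch only.
    """
    L, C = b.shape
    events = np.zeros((T, L, C), dtype=int)
    for t in range(1, T):
        drive = b + np.tensordot(W, events[t - 1], axes=([2, 3], [0, 1]))
        prob = 1.0 / (1.0 + np.exp(-drive))
        events[t] = rng.random((L, C)) < prob
    return events
```

Because W is unconstrained in sign and shape, a fitted model of this form can represent the arbitrary (non-positive, non-decaying) influence patterns the abstract describes.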

Information-Theoretic Limits for the Matrix Tensor Product

Submitted by admin on Mon, 06/10/2024 - 05:00
This article studies a high-dimensional inference problem involving the matrix tensor product of random matrices. This problem generalizes a number of contemporary data science problems, including the spiked matrix models used in sparse principal component analysis and covariance estimation, and the stochastic block model used in network analysis.