Tensor Estimation With Structured Priors

Submitted by admin on Mon, 06/10/2024 - 05:00
We consider rank-one symmetric tensor estimation when the tensor is corrupted by Gaussian noise and the spike forming the tensor is a structured signal coming from a generalized linear model. The latter is a mathematically tractable model of non-trivial hidden lower-dimensional latent structure in a signal. We work in a large-dimensional regime with a fixed ratio between the signal dimension and the latent-space dimension.

Fast Robust Subspace Tracking via PCA in Sparse Data-Dependent Noise

This work studies the robust subspace tracking (ST) problem. Robust ST can be simply understood as a (slow) time-varying subspace extension of robust PCA. It assumes that the true data lies in a low-dimensional subspace that is either fixed or changes slowly with time. The goal is to track the changing subspaces over time in the presence of additive sparse outliers and to do this quickly (with a short delay). We introduce a “fast” mini-batch robust ST solution that is provably correct under mild assumptions.
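One simple way to realize a mini-batch robust update of this kind — illustrative only, not the paper's provably-correct algorithm — is to project the batch onto the complement of the current subspace estimate, flag large residual entries as sparse outliers, subtract them, and re-estimate the subspace by SVD. All dimensions, the outlier model, the threshold, and the initialization below are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, batch = 50, 2, 400
U_true = np.linalg.qr(rng.normal(size=(n, r)))[0]     # true subspace basis

# Mini-batch of data: low-rank component plus sparse additive outliers.
L = U_true @ rng.normal(size=(r, batch))
support = rng.random((n, batch)) < 0.02
S = np.where(support, 10.0 * rng.normal(size=(n, batch)), 0.0)
Y = L + S

# One illustrative update step: given a rough current estimate U_hat,
# project out the estimated subspace, hard-threshold the residual to
# locate outliers, subtract them, and re-estimate by SVD.
U_hat = np.linalg.qr(U_true + 0.3 * rng.normal(size=(n, r)))[0]
resid = Y - U_hat @ (U_hat.T @ Y)
outliers = np.where(np.abs(resid) > 1.5, resid, 0.0)
U_new = np.linalg.svd(Y - outliers, full_matrices=False)[0][:, :r]

def subspace_dist(A, B):
    # sin of the largest principal angle between the column spaces
    return np.linalg.norm(B - A @ (A.T @ B), 2)

err_before = subspace_dist(U_hat, U_true)
err_after = subspace_dist(U_new, U_true)
```

On this synthetic batch the cleaned SVD estimate is substantially closer to the true subspace than the rough initialization.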

Successive Refinement of Privacy

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)? A motivating scenario is providing multiple levels of privacy to multiple analysts, either for distribution or for heavy hitter estimation, using the same (randomized) output. We call this setting successive refinement of privacy, as it provides hierarchical access to the raw data with different privacy levels.
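The canonical randomness-consuming primitive behind such schemes is binary randomized response. A minimal sketch of privatization and debiasing at a single privacy level (the privacy parameter, population size, and true frequency below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1.0                                  # privacy level (illustrative)
n = 200_000
true_freq = 0.3
bits = rng.random(n) < true_freq           # raw sensitive bits

# eps-LDP randomized response: report the true bit with probability
# p = e^eps / (1 + e^eps), otherwise report its flip.  Any two inputs
# then produce any output with likelihood ratio at most e^eps.
p = np.exp(eps) / (1.0 + np.exp(eps))
truthful = rng.random(n) < p
reports = np.where(truthful, bits, ~bits)

# Debias: E[report] = p*f + (1-p)*(1-f), so invert the affine map.
freq_hat = (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)
```

The successive-refinement question asks how the random coins consumed here can be structured so that the same output grants different analysts different effective values of `eps`.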

Exact Asymptotics for Learning Tree-Structured Graphical Models With Side Information: Noiseless and Noisy Samples

Given side information that an Ising tree-structured graphical model is homogeneous and has no external field, we derive the exact asymptotics of learning its structure from independently drawn samples. Our results, which leverage the use of probabilistic tools from the theory of strong large deviations, refine the large deviation (error exponents) results of Tan et al. (2011) and strictly improve those of Bresler and Karzand (2020). In addition, we extend our results to the scenario in which the samples are observed in random noise.
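The structure-learning procedure analyzed in this line of work is the Chow–Liu algorithm, which for zero-field binary models reduces to a maximum-weight spanning tree on empirical absolute correlations. A sketch on a small synthetic instance (the 7-node tree, edge strength, and sample size are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(4)
theta = 0.8                        # common (homogeneous) edge strength
parent = [-1, 0, 0, 1, 1, 2, 2]    # a 7-node tree given by its parent list
d = len(parent)
p_agree = np.exp(theta) / (np.exp(theta) + np.exp(-theta))

# Sample the zero-field homogeneous Ising tree: the root spin is uniform,
# and each child copies its parent's spin with probability p_agree.
n_samples = 5000
X = np.empty((n_samples, d))
X[:, 0] = rng.choice([-1.0, 1.0], size=n_samples)
for v in range(1, d):
    agree = rng.random(n_samples) < p_agree
    X[:, v] = np.where(agree, X[:, parent[v]], -X[:, parent[v]])

# Chow-Liu: mutual information is monotone in |correlation| here, so run
# Kruskal's maximum-weight spanning tree on empirical |correlations|.
C = np.abs(X.T @ X) / n_samples
edges = sorted(((C[i, j], i, j) for i in range(d) for j in range(i + 1, d)),
               reverse=True)
root = list(range(d))
def find(a):                       # union-find with path compression
    while root[a] != a:
        root[a] = root[root[a]]
        a = root[a]
    return a
tree = set()
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        root[ri] = rj
        tree.add((i, j))

true_edges = {(parent[v], v) for v in range(1, d)}
```

With this many samples the spanning tree coincides with the true edge set; the cited results quantify exactly how fast the failure probability of this estimator decays.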

The Limiting Poisson Law of Massive MIMO Detection With Box Relaxation

Estimating a binary vector from noisy linear measurements is a prototypical problem for MIMO systems. A popular algorithm, called the box-relaxation decoder, estimates the target signal by solving a least squares problem with convex constraints. This article shows that the performance of the algorithm, measured by the number of incorrectly decoded bits, has a limiting Poisson law. This occurs when the sampling ratio and noise variance, two key parameters of the problem, follow certain scalings as the system dimension grows.
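The box-relaxation decoder itself is easy to state: solve the least squares problem over the box [-1, 1]^n and round to signs. A sketch using projected gradient descent as a simple stand-in for any convex QP solver (dimensions and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 60                               # signal dimension, measurements
x_true = rng.choice([-1.0, 1.0], size=n)    # binary (BPSK-style) signal
A = rng.normal(size=(m, n)) / np.sqrt(m)    # measurement matrix
y = A @ x_true + 0.05 * rng.normal(size=m)  # noisy linear measurements

# Box relaxation: minimize (1/2)||y - A x||^2 subject to -1 <= x <= 1,
# solved by projected gradient descent.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = np.clip(x - step * grad, -1.0, 1.0)  # gradient step, then project

x_hat = np.sign(x)                           # round to the binary alphabet
bit_errors = int(np.sum(x_hat != x_true))
```

The article's result concerns the distribution of `bit_errors` in the regime where errors are rare but the dimension is large, so their count converges to a Poisson random variable.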

Lower Bounds and a Near-Optimal Shrinkage Estimator for Least Squares Using Random Projections

We consider optimization using random projections as a statistical estimation problem, where the squared distance between the predictions from the estimator and the true solution is the error metric. In approximately solving a large-scale least squares problem using Gaussian sketches, we show that the sketched solution has a conditional Gaussian distribution with the true solution as its mean.
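A sketch of Gaussian-sketched least squares, and of what the conditional-Gaussian characterization implies: since the sketched solution is centered at the true solution, averaging independent sketches drives the error down. Problem sizes and the number of repetitions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 1000, 20, 100          # rows, columns, sketch size
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]        # exact LS solution

# One Gaussian sketch: compress the n-row problem to m rows and solve
# the much smaller least squares problem.
S = rng.normal(size=(m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
rel_err = np.linalg.norm(x_sketch - x_star) / np.linalg.norm(x_star)

# Centered at x_star: the average of many independent sketched solutions
# is much closer to x_star than any single one.
sols = []
for _ in range(100):
    S = rng.normal(size=(m, n)) / np.sqrt(m)
    sols.append(np.linalg.lstsq(S @ A, S @ b, rcond=None)[0])
avg = np.mean(sols, axis=0)
avg_err = np.linalg.norm(avg - x_star) / np.linalg.norm(x_star)
```

The shrinkage estimator studied in the paper exploits this same structure to beat the plain sketched solution without repeated sketching.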

Distributed Hypothesis Testing With Variable-Length Coding

The problem of distributed testing against independence with variable-length coding is considered when the average, rather than the maximum, communication load is constrained, in contrast with previous works. The paper characterizes the optimum type-II error exponent of a single-sensor single-decision-center system given a maximum allowed type-I error probability E, when communication is either over a noise-free rate-R link or over a noisy discrete memoryless channel (DMC) with stop-feedback.

On the All-or-Nothing Behavior of Bernoulli Group Testing

In this article, we study the problem of non-adaptive group testing, in which one seeks to identify which items are defective given a set of suitably designed tests whose outcomes indicate whether or not at least one defective item was included in the test. The most widespread recovery criterion seeks to exactly recover the entire defective set, and relaxed criteria such as approximate recovery and list decoding have also been considered.
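A Bernoulli test design and the simple COMP decoder can be simulated in a few lines; the parameters below, and the choice of COMP rather than a more powerful decoder, are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, T = 500, 10, 200             # items, defectives, tests
p = np.log(2) / k                  # standard Bernoulli design density

defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, size=k, replace=False)] = True

# Bernoulli design: each item joins each test independently with prob. p.
X = rng.random((T, n)) < p
# A test is positive iff it includes at least one defective item.
positive = (X.astype(int) @ defective.astype(int)) > 0

# COMP decoder: any item appearing in a negative test is non-defective;
# declare everything else defective.  It never misses a true defective.
in_negative = X[~positive].any(axis=0)
estimate = ~in_negative
```

The all-or-nothing phenomenon studied in the article concerns how, as the number of tests crosses a sharp threshold, recovery under such designs jumps from essentially impossible to essentially perfect.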

Fisher Information Under Local Differential Privacy

We develop data processing inequalities that describe how Fisher information from statistical samples can scale with the privacy parameter $\varepsilon$ under local differential privacy constraints. These bounds are valid under general conditions on the distribution of the score of the statistical model, and they elucidate under which conditions the dependence on $\varepsilon$ is linear, quadratic, or exponential.
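The binary randomized-response special case makes the phenomenon concrete: a closed-form comparison of the Fisher information before and after privatization exhibits the quadratic small-$\varepsilon$ regime. This worked example is an illustration, not the paper's general bound:

```python
import numpy as np

def fisher_bernoulli(theta):
    # Fisher information of one raw Bernoulli(theta) sample.
    return 1.0 / (theta * (1.0 - theta))

def fisher_randomized_response(theta, eps):
    # Fisher information carried by an eps-LDP randomized-response report:
    # the output is Bernoulli(q) with q = p*theta + (1-p)*(1-theta) and
    # p = e^eps / (1 + e^eps), so I(theta) = (2p - 1)^2 / (q (1 - q)).
    p = np.exp(eps) / (1.0 + np.exp(eps))
    q = p * theta + (1.0 - p) * (1.0 - theta)
    return (2.0 * p - 1.0) ** 2 / (q * (1.0 - q))

# Privatization can only lose information (a data processing inequality),
# and the loss is quadratic for small eps: at theta = 1/2 the private
# information equals 4 * tanh(eps/2)^2, which is ~ eps^2 as eps -> 0.
```

For every $\varepsilon$, `fisher_randomized_response` is strictly below `fisher_bernoulli`, and dividing it by $\varepsilon^2$ at small $\varepsilon$ gives a constant, matching the quadratic regime the abstract mentions.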

Structured Alternating Minimization for Union of Nested Low-Rank Subspaces Data Completion

In this article, we consider a particular data structure consisting of a union of several nested low-rank subspaces with missing data entries. Given the rank of each subspace, we address the data completion problem, i.e., estimating the missing entries. Starting from the case of two-dimensional data, i.e., matrices, we show that the union of nested subspaces data structure leads to a structured decomposition U = XY where the factor Y has blocks of zeros that are determined by the rank values.
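The alternating-minimization backbone of such a method can be sketched on a plain low-rank completion instance; this sketch omits the paper's zero-block constraint on Y and uses illustrative dimensions, sampling rate, and iteration count:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 40, 30, 3
U_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # rank-r data
mask = rng.random((m, n)) < 0.6                             # observed entries

# Plain alternating minimization: with Y fixed, each row of X solves a
# small least squares problem over that row's observed entries; then the
# symmetric update for each column of Y.  (The structured variant would
# additionally enforce the zero blocks in Y; this sketch does not.)
X = rng.normal(size=(m, r))
Y = rng.normal(size=(r, n))
for _ in range(100):
    for i in range(m):
        obs = mask[i]
        X[i] = np.linalg.lstsq(Y[:, obs].T, U_true[i, obs], rcond=None)[0]
    for j in range(n):
        obs = mask[:, j]
        Y[:, j] = np.linalg.lstsq(X[obs], U_true[obs, j], rcond=None)[0]

rel_err = np.linalg.norm(X @ Y - U_true) / np.linalg.norm(U_true)
```

Enforcing the zero-block pattern in Y at each Y-update would restrict each column's least squares problem to the rows its block structure allows, which is the structural ingredient the article develops.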