A number of engineering and scientific problems require representing and manipulating probability distributions over large alphabets, which we may think of as long vectors of reals summing to 1. In some cases it is required to represent such a vector with only $b$ bits per entry. A natural choice is to partition the interval $[0,1]$ into $2^{b}$ uniform bins and quantize each entry to its bin independently. We show that a minor modification of this procedure, applying an entrywise non-linear function (compander) $f(x)$ prior to quantization, yields an extremely effective quantization method. For example, for $b=8$ ($16$) and $10^{5}$-sized alphabets, the quality of representation improves from a loss (under KL divergence) of $0.5$ ($0.1$) bits/entry to $10^{-4}$ ($10^{-9}$) bits/entry. Compared to floating-point representations, our compander method improves the loss from $10^{-1}$ ($10^{-6}$) to $10^{-4}$ ($10^{-9}$) bits/entry. These numbers hold both for real-world data (word frequencies in books and DNA $k$-mer counts) and for synthetic randomly generated distributions. Theoretically, we analyze a minimax optimality criterion and show that the closed-form compander $f(x) \propto \mathrm{ArcSinh}\bigl(\sqrt{c_{K}\,(K \log K)\, x}\bigr)$ is optimal (asymptotically as $b\to\infty$) for quantizing probability distributions over a $K$-letter alphabet. Non-asymptotically, such a compander (with $c_{K}$ replaced by $1/2$ for simplicity) has KL-quantization loss bounded by $8\cdot 2^{-2b}\log^{2} K$. Interestingly, a similar minimax criterion for the quadratic loss on the hypercube shows optimality of the standard uniform quantizer. This suggests that the ArcSinh quantizer is as fundamental for KL-distortion as the uniform quantizer is for quadratic distortion.
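To make the scheme concrete, the following is a minimal sketch in Python/NumPy of companded quantization as described above. It assumes the normalized form $f(x) = \mathrm{ArcSinh}(\sqrt{c\,x})/\mathrm{ArcSinh}(\sqrt{c})$ with $c = \tfrac{1}{2} K \log K$ (i.e., $c_K = 1/2$), midpoint decoding within each uniform bin, and renormalization of the decoded vector; the paper's exact encoding/decoding rules may differ in these details.

```python
import numpy as np

def compander(x, K):
    # f(x) = ArcSinh(sqrt(c*x)) / ArcSinh(sqrt(c)), normalized to map [0,1] -> [0,1],
    # with c = (1/2) * K * log(K), i.e., c_K = 1/2 as in the non-asymptotic bound.
    c = 0.5 * K * np.log(K)
    return np.arcsinh(np.sqrt(c * x)) / np.arcsinh(np.sqrt(c))

def inverse_compander(y, K):
    # Inverse of f: x = sinh(y * ArcSinh(sqrt(c)))^2 / c.
    c = 0.5 * K * np.log(K)
    return np.sinh(y * np.arcsinh(np.sqrt(c))) ** 2 / c

def quantize(p, b, K, companded=True):
    # Quantize each entry independently to the midpoint of one of 2^b uniform bins,
    # optionally in the companded domain, then renormalize to sum to 1.
    n = 2 ** b
    x = compander(p, K) if companded else p
    idx = np.minimum(np.floor(x * n).astype(int), n - 1)  # bin index in {0,...,n-1}
    y = (idx + 0.5) / n                                   # bin midpoint (always > 0)
    q = inverse_compander(y, K) if companded else y
    return q / q.sum()

def kl_bits(p, q):
    # KL divergence D(p || q) in bits; q > 0 is guaranteed by midpoint decoding.
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, b = 10 ** 5, 8
    p = rng.dirichlet(np.ones(K))  # synthetic random distribution
    for companded in (False, True):
        q = quantize(p, b, K, companded)
        print(f"companded={companded}: KL loss = {kl_bits(p, q):.2e} bits")
```

Running this comparison on a uniform-Dirichlet sample illustrates the gap reported above: the uniform quantizer wastes most of its bins on the (rare) large entries, while the ArcSinh compander allocates resolution near 0, where almost all of the $K$ entries of a probability vector live.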