Modern rate-distortion theory (RDT) stands at the intersection of information theory, signal processing, and machine learning, providing a principled framework for understanding the tradeoff between data compression and reconstruction fidelity. This special issue presents the latest advances in RDT, ranging from theoretical developments to practical applications across diverse domains. Topics include novel formulations of the rate-distortion tradeoff, deep learning approaches to rate-distortion optimization, applications in image and video compression, and extensions to non-standard data sources. By bringing together cutting-edge research and innovative methodologies, the issue aims to shape the future landscape of RDT and its applications.
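As background only (a standard textbook statement, not part of the issue's contributions), the classical Shannon rate-distortion function formalizes this tradeoff in single-letter form: for a source $X$ and a distortion measure $d$,

$$R(D) = \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),$$

the minimum achievable rate at which the source can be described while keeping the expected distortion at most $D$. The novel formulations solicited above generalize or depart from this baseline.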