Recent Issues

Blog posts
Blogs
by Elsa
Exploring the Symbiotic Relationship Between Information Theory and Machine Learning

In the vast realm of artificial intelligence, two pillars stand prominently: Information Theory and Machine Learning. At first glance, they might seem like distinct fields with little in common, but upon closer inspection, their connection runs deep, forming a symbiotic relationship that underpins many modern AI advancements.

Understanding Information Theory
Information Theory, pioneered by Claude Shannon in the 1940s, is the study of quantifying information and its transmission. At its core, it deals with the fundamental limits of compressing, transmitting, and storing data. Key concepts like entropy, mutual information, and channel capacity provide a rigorous framework for analyzing communication systems.

Unraveling Machine Learning
On the other hand, Machine Learning (ML) focuses on developing algorithms that enable computers to learn from data and make predictions or decisions without being explicitly programmed. From recommendation systems to autonomous vehicles, ML algorithms permeate various aspects of our lives, continuously improving through experience.

The Marriage of Concepts
So, how do these seemingly disparate fields intertwine? The answer lies in their shared principles and mutual benefits:
1. Information as a Metric: Information theory provides a solid foundation for measuring uncertainty and complexity in data. In ML, this translates into quantifying the amount of information contained in features, helping algorithms discern meaningful patterns from noise.
2. Compression and Generalization: At its core, learning is about generalization: extracting regularities from data to make predictions on unseen instances. Information theory's insights into compression shed light on how to distill essential features from raw data, facilitating better generalization in ML models.
3. Learning as Optimization: Machine learning often boils down to optimization: tweaking model parameters to minimize prediction errors. Information theory offers tools like variational principles and rate-distortion theory, guiding the optimization process towards efficient representation and decision-making.
4. Channel Coding and Error Correction: Just as communication channels face noise and distortion, ML models encounter data imperfections and uncertainties. Techniques from information theory, such as error-correcting codes, inspire robust learning algorithms capable of handling noisy or incomplete data.
5. Mutual Information for Feature Selection: Mutual information, a concept from information theory, quantifies the amount of information shared between two variables. In ML, it serves as a powerful tool for feature selection, aiding in identifying the most informative attributes for predictive modeling (a short sketch below makes this concrete).

Future Directions
As both fields continue to evolve, their synergy opens doors to exciting possibilities:
Interpretable AI: Leveraging information-theoretic principles can lead to more interpretable ML models, shedding light on the decision-making process behind AI predictions.
Privacy-Preserving Learning: Information theory offers robust frameworks for quantifying and preserving privacy in data-driven systems, crucial for building trust in AI technologies.
Neuroscience and AI: Drawing parallels between information processing in neural systems and ML algorithms can deepen our understanding of both domains, fostering biologically inspired AI architectures.
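To ground point 5 above in something runnable, here is a minimal Python sketch. Everything in it (the helper name, the synthetic features, the noise level) is illustrative rather than taken from the post; it simply estimates mutual information between a discrete feature and a label from empirical counts and uses it to rank features.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X;Y) in bits for two discrete 1-D arrays via empirical counts."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    joint /= joint.sum()                      # empirical joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)     # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = joint > 0                            # skip zero cells (0 * log 0 = 0 by convention)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy example: feature_a is informative about the label, feature_b is pure noise.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, size=5000)
feature_a = (label + (rng.random(5000) < 0.1)) % 2   # label flipped 10% of the time
feature_b = rng.integers(0, 2, size=5000)            # independent of the label

for name, feat in [("feature_a", feature_a), ("feature_b", feature_b)]:
    print(name, round(mutual_information(feat, label), 3), "bits")
# feature_a scores near 1 - H(0.1) ≈ 0.53 bits, feature_b near 0 bits,
# so ranking features by mutual information keeps the informative one.
```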
In essence, the marriage of information theory and machine learning exemplifies the interdisciplinary nature of modern AI research. By bridging theoretical insights with practical applications, this symbiotic relationship continues to drive innovation, shaping the future of artificial intelligence. As we delve deeper into the intricacies of both fields, the boundaries between them blur, revealing new avenues for exploration and discovery.
Blogs
by Petar Popovski
The January 2023 issue of IEEE JSAC is a special issue on “Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications”. The phrase “semantic communications” has started to claim significant real estate in the overall discussion of future wireless systems, yet its objective and scope sometimes remain fuzzy. The tutorial article written by the Guest Editors is an excellently written piece that brings clarity to the discourse on semantic communications.

Among the technical articles published in this issue, the one selected to be featured in this blog is: Z. Meng, C. She, G. Zhao and D. De Martini, "Sampling, Communication, and Prediction Co-Design for Synchronizing the Real-World Device and Digital Model in Metaverse," in IEEE Journal on Selected Areas in Communications, vol. 41, no. 1, pp. 288-300, January 2023.

The process of fusing the physical and the digital world has been going on for decades, but it is believed that this fusion will be brought to a new qualitative level with the emergence of the metaverse, digital twins, and similar concepts. Synchronization of events in the physical and the digital world is of central importance in the metaverse in order to maintain consistency and proper causal relationships between events. To characterize the synchronization performance, one important measure is Motion-To-Photon (MTP) latency, measured as the time between a user’s action and the corresponding effect displayed in the virtual world. The other two measures are standard measures of communication performance: data rate and packet loss rate. In practice, it is hard to meet all three performance measures simultaneously, and one needs to examine the tradeoffs.

Figure 1: Co-design framework for synchronizing the real device with its virtual copy.

The main premise of this paper is that, rather than looking only at the communication, one has to consider, in an integrated way, three operations: sampling, communication, and prediction. This is a plausible way to improve the performance indicators of synchronization. A key observation is that a good prediction decreases the amount of data that needs to be communicated. The authors introduce a co-design framework spanning sampling, communication, and prediction, built around an algorithm that combines Deep Reinforcement Learning (DRL) techniques with expert knowledge on sampling, communication, and prediction. The approach is evaluated on an actual prototype of a robotic arm, which lends additional credibility to the proposed approach.

JSAC: In the paper you make certain assumptions in the communication model about synchronization, slotted structure, or similar. Which practical scenarios would challenge this model? How would the model change if you assume bandwidth-intensive transmissions, such as video streaming along the trajectory?

Authors: In this paper, we took a standard orthogonal frequency division multiplexing (OFDM) communication model to conduct the cross-system design among sampling, communication, and prediction. An ideal assumption in our work is that the communication delay is bounded and known by the system. In practical scenarios, the upper bound of the communication delay may not be available to the system. The wireless link is part of the whole communication system, and the latency and jitter in the backhaul and core networks depend heavily on the specific scenario.
Although we showed in the paper that our system can be easily extended to some other communication models, we need to fine-tune the pre-trained deep reinforcement learning algorithm in a new scenario. For bandwidth-intensive transmissions, our approach can reduce the communication load and thus save bandwidth if the state of the system is predictable. Let’s take virtual reality (VR) as an example. In VR video streaming, a head-mounted device (e.g., VR glasses) needs to transmit the trajectory of the human user to the access point, and the access point sends the video within the field of view requested by the user. Since the trajectory of the user is predictable, our framework can be used to predict the future trajectory and the corresponding field of view. In this way, we can reduce the required bandwidth for VR streaming.

JSAC: What was criticized by the reviewers, and how did you address it?

Authors: The reviewers’ comments were generally positive. The reviewers paid particular attention to how the expert knowledge is adopted to assist the constrained deep reinforcement learning algorithm. Thanks to this comment, we were able to better sort out the role of expert knowledge as an aid for training. To address this comment, we discussed the impact of different expert knowledge on the training performance of the proposed knowledge-assisted constrained twin-delayed deep deterministic algorithm (KT-TD3). In addition, we showed that the proposed strategy with full expert knowledge has the best performance in terms of stability and can meet the average tracking error constraint.

JSAC: What are the main communication challenges, not necessarily wireless, that you see on the way towards enabling the fusion of the digital and the physical world, such as in the Metaverse?

Authors: There are several communication challenges in enabling the fusion of the digital and physical worlds. One of them is to define new key performance indicators (KPIs), e.g., motion-to-photon (MTP) latency in human-avatar interaction, for Metaverse applications. Existing communication KPIs are not sufficient to support emerging applications in the Metaverse. Scalability is another issue for both wireless and wired communications. Since communication, computing, and storage resources are limited, it is difficult to support a large number of users or devices with diverse applications and human-computer interfaces in one communication network. The integration of our sampling, communication, and prediction framework with different types of human-computer interfaces or devices is challenging. Another challenge is the design approach. Most of the existing communication design and optimization methods are independent of sensing, computing, and robotics systems. Without the domain knowledge and dynamics of these applications, we can only obtain sub-optimal solutions with high communication and computing overhead. Thus, task-oriented cross-system design would be the approach to tailor the communication system for specific use cases and applications.

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.
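As a small editorial footnote to the idea highlighted above, that good prediction reduces how much data must be communicated, here is a minimal, self-contained Python sketch. It is not the authors' KT-TD3 co-design; all names, parameters, and the toy trajectory are invented for illustration. A sender transmits a sample only when the receiver's simple constant-velocity extrapolation would drift beyond a tolerance, so the more predictable the motion, the fewer packets are needed.

```python
import numpy as np

def simulate(tolerance, noise_std, steps=2000, dt=0.01, seed=1):
    """Event-triggered updates: send a state only when the receiver's
    constant-velocity prediction would exceed `tolerance`."""
    rng = np.random.default_rng(seed)
    t = np.arange(steps) * dt
    true_pos = np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, noise_std, steps)  # toy trajectory

    sent = 0
    last_pos, last_vel = true_pos[0], 0.0   # state known to both sender and receiver
    steps_since_update = 0
    for k in range(1, steps):
        steps_since_update += 1
        predicted = last_pos + last_vel * steps_since_update * dt   # receiver-side prediction
        err = abs(true_pos[k] - predicted)
        if err > tolerance:                 # sender mirrors the prediction and transmits only on drift
            last_vel = (true_pos[k] - last_pos) / (steps_since_update * dt)
            last_pos = true_pos[k]
            steps_since_update = 0
            sent += 1
    return sent / steps

for noise in (0.0, 0.02, 0.1):
    rate = simulate(tolerance=0.05, noise_std=noise)
    print(f"noise_std={noise:<4} -> fraction of samples transmitted: {rate:.2f}")
# A smooth, predictable trajectory needs only a small fraction of samples to be sent;
# as the motion becomes noisier, more transmissions are required to keep the digital
# copy within the same tolerance.
```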
Calls for papers
Modern Rate-Distortion Theory (RDT) stands at the intersection of information theory, signal processing, and machine learning, offering a profound understanding of the tradeoff between data compression and reconstruction fidelity. This special issue aims to present the latest advancements in RDT, ranging from theoretical developments to practical applications across diverse domains. Topics include novel formulations of the rate-distortion tradeoff, deep learning approaches for optimization, applications in image and video compression, and extensions to non-standard data sources. By bringing together cutting-edge research and innovative methodologies, this special issue aims to shape the future landscape of RDT and its applications.
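As a concrete reminder of the basic object of study, the following sketch (illustrative only; the function names and parameters are ours, not from any submission to the issue) runs the classical Blahut–Arimoto algorithm to trace points on the rate-distortion curve of a uniform binary source under Hamming distortion, where the closed form R(D) = 1 - H(D) is known.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, iters=500):
    """Compute one point on the rate-distortion curve for source p_x and
    distortion matrix dist[x, x_hat] at Lagrange parameter beta > 0.
    Returns (rate in bits, expected distortion)."""
    n, m = dist.shape
    q = np.full(m, 1.0 / m)                         # reproduction marginal q(x_hat)
    for _ in range(iters):
        w = q * np.exp(-beta * dist)                # unnormalized p(x_hat | x)
        cond = w / w.sum(axis=1, keepdims=True)     # normalize over x_hat
        q = p_x @ cond                              # q(x_hat) = sum_x p(x) p(x_hat | x)
    joint = p_x[:, None] * cond                     # p(x, x_hat)
    D = float(np.sum(joint * dist))                 # expected distortion
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, cond / q, 1.0)
    R = float(np.sum(joint * np.log2(ratio)))       # I(X; X_hat) in bits
    return R, D

# Uniform binary source with Hamming distortion: the known curve is R(D) = 1 - H(D).
p_x = np.array([0.5, 0.5])
hamming = 1.0 - np.eye(2)
for beta in (1.0, 2.0, 4.0, 8.0):
    R, D = blahut_arimoto(p_x, hamming, beta)
    closed_form = 1 + D * np.log2(D) + (1 - D) * np.log2(1 - D)
    print(f"beta={beta:>4}:  D={D:.3f}  R={R:.3f}  (1 - H(D) = {closed_form:.3f})")
```

Sweeping the parameter beta traces out the whole tradeoff curve, which is the baseline against which learned compressors in this area are typically compared.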
CFP abstract: This call explains the basics of DNA storage systems. A DNA storage system is a bio-based storage system. Thanks to tremendous research, we can now manipulate DNA sequences; more precisely, it is possible to synthesize nucleotides freely and to read them back. Reading nucleotides is called sequencing, and making DNA is called synthesis. DNA storage systems use these techniques, and DNA can be seen as a medium that retains information, that is, as storage. Nucleotides are composed of four types of nucleobases: adenine (A), cytosine (C), guanine (G), and thymine (T). A DNA storage system retains information as sequences of nucleotides. This four-letter alphabet is one of the biggest differences from other storage systems, which use binary messages.
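To make the four-letter alphabet concrete, here is a minimal Python sketch (purely illustrative; practical DNA storage pipelines add error-correcting codes and sequence constraints such as GC balance and homopolymer limits, which this sketch omits) mapping a binary message onto nucleotides at two bits per base and back.

```python
# Illustrative 2-bits-per-base mapping; real systems layer coding constraints on top.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Map a bit string (padded to even length) to a nucleotide sequence."""
    if len(bits) % 2:
        bits += "0"                      # simple padding; a real scheme records the pad
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Map a nucleotide sequence back to the bit string."""
    return "".join(BASE_TO_BITS[base] for base in strand)

message = "0100100001101001"             # the ASCII bits of "Hi"
strand = encode(message)
print(strand)                            # CAGACGGC
assert decode(strand) == message
```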
Deadline:
Aug 1, 2024 (Extended)
This special issue of the IEEE Journal on Selected Areas in Information Theory is dedicated to the memory of Toby Berger, one of the most important information theorists of our time, who passed away in 2022 at the age of 81. He made foundational contributions to a wide range of areas in information theory, including rate-distortion theory, network information theory, quantum information theory, and bio-information theory. He also left a deep imprint on diverse fields in applied mathematics and theoretical engineering, such as Markov random fields, group testing, multiple access theory, and detection and estimation. Well known for his technical brilliance, he tackled many challenging problems, but above all, it is his pursuit of elegance in research and writing that shines throughout his work. The goal of this special issue is to celebrate Toby Berger’s lasting legacy and his impact on information theory and beyond. Original research papers on topics within the realm of his scientific investigations and their “offspring”, as well as expository articles that survey his pioneering contributions and their modern developments, are invited.
Over the past decade, machine learning (ML), the process of enabling computing systems to take in data and produce decisions, has enabled tremendously exciting technologies. Such technologies can assist humans in making a variety of decisions by processing complex data to identify patterns, detect anomalies, and make inferences. At the same time, these automated decision-making systems raise questions about the security and privacy of the user data that drives ML, the fairness of the decisions, and the reliability of automated systems that make complex decisions affecting humans in significant ways. In short, how can ML models be deployed in a responsible and trustworthy manner that ensures fair and reliable decision-making? This requires ensuring that the entire ML pipeline assures security, reliability, robustness, fairness, and privacy. Information theory can shed light on each of these challenges by providing a rigorous framework to not only quantify these desiderata but also evaluate them and provide assurances. From its beginnings, information theory has been devoted to a theoretical understanding of the limits of engineered systems. As such, it is a vital tool in guiding machine learning advances. We invite previously unpublished papers that contribute to the fundamentals, as well as the applications, of information- and learning-theoretic methods for secure, robust, reliable, fair, private, and trustworthy machine learning. Application of such techniques to practical systems is also relevant.
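As one small example of how information theory quantifies such desiderata, the following Python sketch (a textbook randomized-response calculation, not drawn from any particular submission) expresses the privacy leakage of a bit-flipping mechanism as a mutual information.

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits, with clipping to avoid log(0)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Randomized response: report the true sensitive bit X with probability 1 - flip,
# else its complement. For a uniform bit, the leakage to the collector is
# I(X; Z) = 1 - H(flip), since the mechanism acts as a binary symmetric channel.
for flip in (0.0, 0.1, 0.25, 0.4, 0.5):
    leakage_bits = 1.0 - binary_entropy(flip)
    print(f"flip prob {flip:>4}: leakage I(X;Z) = {leakage_bits:.3f} bits")
# flip = 0.0 leaks the full bit; flip = 0.5 leaks nothing, giving a quantitative
# dial between utility and privacy of the kind this call invites contributors to study.
```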