Friday, September 6, 2024
10:20 – 11:10 a.m. (CST)
ETB 1020
Dr. Srinivas Shakkottai
Professor, Electrical & Computer Engineering, Texas A&M University
Title: “Structured Reinforcement Learning in NextG Cellular Networks”
Abstract
NextG cellular networks face increasing demands for intelligent control, especially with the advent of softwarized Open Radio Access Networks (O-RAN) and diverse user applications. We present EdgeRIC, a real-time RAN Intelligent Controller (RIC) co-located with the Distributed Unit (DU) in the O-RAN architecture, enabling sub-millisecond AI-optimized decision-making. We propose a constrained reinforcement learning (CRL) approach for developing such real-time strategies, showing that these algorithms can be trained with only a logarithmic increase in complexity compared to traditional RL. We introduce structured learning using threshold and Whittle index-based policies, which provides low-complexity learning and scalable, real-time inference for optimizing resource allocation and enhancing user experience. For media streaming, we prove the optimality of a threshold policy and develop a soft-threshold natural policy gradient (NPG) algorithm that prioritizes clients based on video buffer length, achieving inference times of about 10 μs and improving user quality of experience by over 30%. Additionally, we leverage Whittle indexability to simplify resource allocation, ensuring service guarantees such as ultra-low latency or high throughput by training neural networks to compute constrained Whittle indices. Our Whittle index approach, implemented on EdgeRIC, achieves allocation decisions within 20 μs per user and enhances service guarantees across standardized 3GPP service classes, making a case for structured, scalable reinforcement learning for real-time control of NextG networks.
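
To make the index-policy idea concrete, the short Python sketch below illustrates the kind of low-complexity inference step the abstract describes: each user's state is mapped to a Whittle-style priority index and the available resource blocks go to the highest-index users. This is a minimal illustration, not EdgeRIC code; the linear index function, the weights, and all names here are assumptions standing in for the trained constrained-Whittle-index networks mentioned in the talk.

import numpy as np

def whittle_index(states: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    # Toy index: a linear function of each user's state features
    # (e.g., queue length, channel quality). In the talk's setting this
    # would be a trained neural network computing constrained Whittle indices.
    return states @ w + b

def allocate(states: np.ndarray, num_rbs: int, w: np.ndarray, b: float) -> np.ndarray:
    # Index policy: rank users by their indices and assign the available
    # resource blocks to the highest-index users.
    idx = whittle_index(states, w, b)
    order = np.argsort(-idx)              # highest index first
    alloc = np.zeros(len(states), dtype=int)
    alloc[order[:num_rbs]] = 1
    return alloc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.uniform(size=(8, 2))     # 8 users, 2 hypothetical state features each
    w = np.array([1.0, 0.5])              # illustrative index weights
    print(allocate(states, num_rbs=3, w=w, b=0.0))

Because the per-decision work is just an index evaluation and a ranking, inference of this form can run in microseconds per user, which is what makes such structured policies attractive for real-time RAN control.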
Biography
Srinivas Shakkottai received his PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2007, after which he was a postdoctoral scholar in Management Science and Engineering at Stanford University. He joined Texas A&M University in 2008, where he is currently a professor in the Department of Electrical and Computer Engineering and, by courtesy, in the Department of Computer Science and Engineering. His research interests include multi-agent learning and game theory, reinforcement learning, communication and information networks, networked markets, and data collection and analytics. He co-directs the Learning and Emerging Network Systems (LENS) Laboratory and the RELLIS Spectrum Innovation Laboratory (RSIL). He has served as an Associate Editor of IEEE/ACM Transactions on Networking and the IEEE Transactions on Wireless Communications. Srinivas is the recipient of the Defense Threat Reduction Agency (DTRA) Young Investigator Award and the NSF CAREER Award, as well as research awards from Cisco and Google. His work has received honors at venues such as ACM MobiHoc, ACM e-Energy, and the International Conference on Learning Representations. He has also received an Outstanding Professor Award, the Select Young Faculty Fellowship, and the Engineering Genesis Award (twice) at Texas A&M University.
For more on Dr. Shakkottai and his work, visit his faculty page, Srinivas Shakkottai (tamu.edu).
Please join us on Friday, 9/6/24 at 10:20 a.m. in ETB 1020.