
Department of Electrical and Computer Engineering

The Computer Engineering and Systems Group

Texas A&M University College of Engineering


CESG Seminar: Vasudev Gohil

Posted on April 19, 2023 by Vickie Winston

Friday, April 28, 2023
3:50 – 4:50 p.m. (CST)
ETB 1020 or Zoom (see syllabus or email list for link)

Vasudev Gohil
CE PhD Student
Dept. of Electrical and Computer Engineering; Computer Engineering
Texas A&M University

Title: “Reinforcement Learning for Hardware Security”

Talking Points

  • Security threats such as hardware Trojans due to a globalized integrated circuit supply chain
  • Using reinforcement learning to detect hardware Trojans efficiently and effectively
  • Using reinforcement learning to evaluate hardware Trojan detection techniques accurately

Abstract
Reinforcement learning (RL) has shown great promise in solving problems in novel domains, e.g., marketing, chip placement, and matrix multiplication. In this talk, I will discuss another area that has just begun to reap the powers of RL: hardware security. In particular, I will discuss two of our recent works that use RL to address the threat of hardware Trojans (HTs) in integrated circuits. HTs are malicious logic added by adversaries to harm integrated circuits. They pose a significant threat to critical infrastructures and have been the focus of much research.

In the first part of the talk, I will present a reinforcement learning (RL) agent that returns a minimal set of patterns most likely to detect HTs. Our experimental results demonstrate the efficacy and scalability of our RL agent, which significantly reduces the number of test patterns while maintaining or improving coverage compared to state-of-the-art techniques. In the second part of the talk, I will discuss how we play the role of a realistic adversary and question the efficacy of existing HT detection techniques by developing an automated, scalable, and practical attack framework. Our framework uses RL to evade eight detection techniques across two HT detection categories, demonstrating that it is agnostic to the underlying detection technique.

Using the example of HTs, our work highlights the potential of RL in solving hardware security problems. The talk will conclude with a discussion of future directions for research in this area.

Biography

Vasudev Gohil is pursuing a Ph.D. in Computer Engineering at Texas A&M University in College Station, Texas. His research interests lie at the intersection of machine learning and hardware security. He is keenly interested in examining and developing IP protection techniques and applying reinforcement learning techniques for security. Before his doctoral studies, Vasudev received a Bachelor of Technology degree in Electrical Engineering with minors in Computer Science from the Indian Institute of Technology Gandhinagar.

More on Vasudev Gohil: https://gohilvasudev.wixsite.com/website

More on CESG Seminars: HERE

Please join on Friday, 4/28/23 at 3:50 p.m. in ETB 1020 or via Zoom.
Zoom option: Links and PW in syllabus or found in email announcement.

Filed Under: Seminars

CESG Seminar: Archana Bura

Posted on March 29, 2023 by Vickie Winston

Friday, March 31, 2023
3:50 – 4:50 p.m. (CST)
Zoom (see syllabus or email list for link)

Archana Bura 
PhD Candidate, Spring 2023
Dept. of Electrical and Computer Engineering
Texas A&M University

Title: “Constrained Reinforcement Learning for Wireless Networks”

Talking Points

  • Challenges in learning for real-world problems with constraints
  • Developing CRL algorithms for two real-world resource allocation problems under constraints
  • Safe exploration in learning a generic Constrained MDP

Abstract
In this talk, I will discuss how we efficiently apply reinforcement learning methods to real-world problems. I consider two motivating real-world problems: resource allocation for media streaming at the wireless edge, and resource block allocation in an Open RAN system. Under throughput, latency and resource constraints, these systems can be modeled as Constrained Markov Decision Processes (CMDPs). Since these systems have complex dynamics, a constrained reinforcement learning (CRL) approach is attractive for determining an optimal control policy. Applying off-the-shelf RL algorithms yields better results than naive solutions, but these algorithms require many samples to train or have high complexity. We overcome these issues by providing CRL methods that efficiently exploit the structure of the problem. Motivated by these results, we study the fundamental “safe exploration” problem in a generic CRL setting, and propose a safe RL method that, with high probability, does not violate constraints during the learning process.
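CMDPs of this kind are commonly handled with a primal-dual (Lagrangian) approach. As an illustrative sketch only, not the speaker's algorithm, the sketch below shows just the dual step: the multiplier is raised when the observed cost exceeds the budget and lowered (but kept nonnegative) otherwise. All function names and numbers here are hypothetical:

```python
# Illustrative primal-dual update for a constrained MDP (CMDP):
# maximize E[reward] subject to E[cost] <= budget.
# The Lagrangian L = reward - lam * (cost - budget) turns the
# constrained problem into an unconstrained one; the dual variable
# lam grows while the constraint is violated and shrinks otherwise.

def dual_update(lam, avg_cost, budget, lr=0.1):
    """Projected gradient ascent on the dual variable (lam >= 0)."""
    return max(0.0, lam + lr * (avg_cost - budget))

lam = 0.0
budget = 1.0
for avg_cost in [2.0, 1.5, 1.2, 0.9]:  # costs observed over training
    lam = dual_update(lam, avg_cost, budget)
```

In a full algorithm, an inner policy-optimization step would maximize reward minus `lam` times cost between dual updates; the sketch isolates only the dual step.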

Biography
Archana Bura is a PhD candidate in Texas A&M University’s Department of Electrical and Computer Engineering, where she specializes in constrained reinforcement learning. Her research centers on applying RL to real-world problems where safety is critical, by developing algorithms and theory for constrained reinforcement learning that leverage structure to enhance the learning process.

More on Archana Bura: HERE

More on CESG Seminars: HERE

Please join on Friday, 3/31/23 at 3:50 p.m. via Zoom.
Zoom option: Links and PW in syllabus or found in email announcement.

Filed Under: Seminars

CESG Seminar: Peipei Zhou

Posted on March 1, 2023 by Vickie Winston

Friday, April 21, 2023
3:50 – 4:50 p.m. (CST)
ZOOM

Peipei Zhou
Assistant Professor
Dept. Electrical and Computer Engineering
University of Pittsburgh

Title: “CHARM: Composing Heterogeneous AcceleRators for Matrix Multiply on Versal ACAP Architecture”

Talking Points

  • Which platform beats 7nm GPU A100 in energy efficiency? AMD Versal ACAP (FPGA+AI Chip)!
  • How to program AMD Versal ACAP, i.e., FPGA + AI Chip within the same chip die for deep learning applications in 10 lines of code? Use CHARM!

Abstract
Dense matrix multiply (MM) serves as one of the most heavily used kernels in deep learning applications. To cope with the high computation demands of these applications, heterogeneous architectures featuring both FPGA and dedicated ASIC accelerators have emerged as promising platforms. For example, the AMD/Xilinx Versal ACAP architecture combines general-purpose CPU cores and programmable logic (PL) with AI Engine processors (AIE) optimized for AI/ML. An array of 400 AI Engine processors executing at 1 GHz can theoretically provide up to 6.4 TFLOPs performance for 32-bit floating-point (fp32) data. However, machine learning models often contain both large and small MM operations. While large MM operations can be parallelized efficiently across many cores, small MM operations typically cannot. In our investigation, we observe that executing some small MM layers from the BERT natural language processing model on a large, monolithic MM accelerator in Versal ACAP achieves less than 5% of the theoretical peak performance. Therefore, one key question arises: How can we design accelerators to fully use the abundant computation resources under limited communication bandwidth for end-to-end applications with multiple MM layers of diverse sizes? In this talk, we will discuss the CHARM framework, which composes multiple diverse MM accelerator architectures that work concurrently on different layers within one application. CHARM includes analytical models which guide design space exploration to determine accelerator partitions and layer scheduling. To facilitate the system designs, CHARM automatically generates code, enabling thorough onboard design verification. We deploy the CHARM framework for four different deep learning applications, including BERT, ViT, NCF, and MLP, on the AMD/Xilinx Versal ACAP VCK190 evaluation board.
Our experiments show that we achieve 1.46 TFLOPs, 1.61 TFLOPs, 1.74 TFLOPs, and 2.94 TFLOPs inference throughput for BERT, ViT, NCF, MLP, respectively, which obtain 5.40x, 32.51x, 1.00x and 1.00x throughput gains compared to one monolithic accelerator.
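The peak figure quoted in the abstract can be sanity-checked with quick arithmetic: 6.4 TFLOPs across 400 cores at 1 GHz implies 16 fp32 operations per core per cycle. Reading those 16 ops as 8 fused multiply-adds is our interpretation of the numbers, not a claim from the talk:

```python
# Sanity check of the peak-throughput figure quoted above:
# 400 AI Engine cores at 1 GHz reaching 6.4 TFLOPs fp32 implies
# 16 floating-point ops per core per cycle (8 multiply-adds).
cores = 400
freq_hz = 1e9
flops_per_core_per_cycle = 16
peak_tflops = cores * freq_hz * flops_per_core_per_cycle / 1e12  # 6.4

# Fraction of that peak achieved by the reported end-to-end results.
achieved = {"BERT": 1.46, "ViT": 1.61, "NCF": 1.74, "MLP": 2.94}
utilization = {k: v / peak_tflops for k, v in achieved.items()}
```

This puts the best reported result (MLP, 2.94 TFLOPs) at under half of peak, which underscores the communication-bandwidth limits the abstract highlights.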

Biography
Peipei Zhou is an assistant professor in the Electrical and Computer Engineering (ECE) Department at the University of Pittsburgh. She has over 10 years of experience in hardware and software co-design. She has published 20+ papers in top-tier IEEE/ACM computer system and design automation conferences and journals, including FPGA, FCCM, DAC, ICCAD, ISPASS, TCAD, TECS, TODAES, IEEE Micro, etc. The algorithm and tool proposed in her FCCM’18 paper have been realized in the commercial Vitis HLS (high-level synthesis) compiler from Xilinx (acquired by AMD in Feb 2022). Her work in FPGA acceleration for deep learning won the 2019 Donald O. Pederson Best Paper Award from the IEEE Council on Electronic Design Automation (CEDA). Her work in cloud-based application optimization was a Best Paper Nominee at the 2018 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), and her work in FPGA acceleration for computer vision was a Best Paper Nominee at the 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). Before joining Pitt, she worked as a full-time staff software engineer at a start-up company, where she led a team of 6 to develop CNN and MM kernels in the deep learning libraries for two generations of AI training application-specific integrated circuit (ASIC) chip products.

More on Dr. Zhou:
Homepage: https://peipeizhou-eecs.github.io/
Google Scholar: https://scholar.google.com/citations?user=px_jwFgAAAAJ&hl=en

More on CESG Seminars: HERE

Please join on Friday, 4/21/23 at 3:50 p.m. via Zoom (see emails or syllabus for link and password).

Filed Under: Seminars

CESG Seminar: Desik Rengarajan

Posted on February 21, 2023 by Vickie Winston

Friday, March 24, 2023
3:50 – 4:50 p.m. (CST)
Zoom (see syllabus or email list for link)

Desik Rengarajan 
PhD Candidate, Spring 2023
Dept. of Electrical and Computer Engineering
Texas A&M University

Title: “Enhancing Reinforcement Learning Using Data and Structure”

Talking Points

  • Challenges in learning in sparse reward environments
  • Developing RL algorithms that take advantage of sub-optimal demonstration data to learn in sparse reward environments
  • Developing meta-RL algorithms that take advantage of sub-optimal demonstration data and structure to learn in sparse reward environments

Abstract
In reinforcement learning, reward functions serve as an indirect method of defining the goal of the algorithm. Designing a reward function that accurately captures the task at hand while effectively guiding the learning process can be a difficult challenge, requiring expert domain knowledge and manual fine-tuning. To overcome this, it is often easier to rely on sparse rewards that merely indicate partial or complete task completion. However, the lack of fine-grained feedback then prevents RL algorithms from learning an optimal policy in a timely manner. During this talk, I will delve into the impact of sparse rewards on reinforcement learning and meta-reinforcement learning, and present algorithms that leverage sub-optimal demonstration data to overcome these challenges.
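A toy example (ours, not the speaker's) makes the sparse-versus-dense distinction concrete: in a corridor of length 10, a sparse reward pays only at the goal, so an agent receives no learning signal until it first reaches the goal, whereas a shaped reward grades every step:

```python
# Toy illustration of the sparse-reward problem described above
# (not the speaker's algorithm). States are positions 0..10 in a
# corridor; the goal is state 10.

GOAL = 10

def sparse_reward(state):
    """Pays only on task completion; zero feedback everywhere else."""
    return 1.0 if state == GOAL else 0.0

def dense_reward(state, next_state):
    """Shaped reward: fine-grained feedback on progress each step."""
    return (next_state - state) / GOAL

trajectory = list(range(11))  # a walk from 0 straight to the goal
sparse_total = sum(sparse_reward(s) for s in trajectory[1:])
dense_total = sum(dense_reward(s, t) for s, t in zip(trajectory, trajectory[1:]))
```

Both totals are 1, but the sparse signal arrives only at the final step; a randomly exploring agent may never see it, which is why demonstration data helps.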

Biography
Desik Rengarajan is a PhD candidate at Texas A&M University’s Department of Electrical and Computer Engineering, where he specializes in reinforcement learning. His research centers on the development of reinforcement learning algorithms that take advantage of side information, such as demonstration data and structure, to enhance the learning process and overcome challenges that arise when implementing RL in real-world situations.

More on Desik Rengarajan: HERE

More on CESG Seminars: HERE

Please join on Friday, 3/24/23 at 3:50 p.m. via Zoom.
Zoom option: Links and PW in syllabus or email announcement.

Filed Under: Seminars

CESG Seminar: Manoranjan Majji

Posted on February 13, 2023 by Vickie Winston

Friday, February 24, 2023
3:50 – 4:50 p.m. (CST)
ETB 1020 

Dr. Manoranjan Majji
Associate Professor
Dept. of Aerospace Engineering
Texas A&M University

Title: “Advances in Computer Engineering: Impact on Aerospace Applications”

Talking Points

  • Revolutions in computing continue to advance a wide variety of aerospace vehicle navigation and control problems. Three broad applications are discussed to demonstrate this tangible impact.
  • Recent research advances in space manufacturing and assembly automation at LASR lab.
  • Novel velocimeter LIDAR and interferometric rate sensing technologies developed by Prof. Majji and his students are discussed.
  • New embedded processing pipelines developed by Prof. Majji’s students to estimate the forces sensed by optomechanical accelerometers developed by Prof. Guzman are elaborated.

Abstract

Recent advances in aerospace vehicle guidance, navigation and control furthered by emerging computer engineering technologies are elaborated in this lecture. Novel approaches for relative navigation using Doppler sensing technologies are outlined, with applications to terrain-relative navigation and ship landing. Approaches to automate space systems and manufacture elements of swarm satellites in space are demonstrated using proximity operation emulation robots developed at the Land, Air and Space Robotics (LASR) laboratory. Embedded compute elements that process sensor data to realize an advanced optomechanical accelerometer are described to showcase advances in space avionics. The new accelerometer technology, developed in collaboration with Prof. Felipe Guzman, is discussed and is found to enable spacecraft autonomy.

Biography
Dr. Manoranjan Majji is an Associate Professor of Aerospace Engineering and the Director of the Land, Air and Space Robotics (LASR) Laboratory at Texas A&M University. He has a diverse background in several aspects of the dynamics and control of aerospace vehicles, with expertise spanning the whole spectrum of analysis, modeling, computation and experiments. In the areas of astrodynamics, estimation and system identification, he has made fundamental contributions documented in over 170 publications (including 45 journal articles) in the areas of guidance, navigation and control. Working with a team of 20 graduate students and 6 undergraduate researchers at the LASR lab, he works on a variety of research projects sponsored by NGA, NASA, JPL, AFRL, AFOSR, ONR, DARPA, JHTO, and the IC, in addition to various industrial partners, including BlackSky Geospatial, Dezyne Technologies, and VectorNav Technologies. His 10 PhD graduates are making valuable contributions in academic, national laboratory, and industrial research establishments. In addition to being a scholar, Majji has a great deal of engineering experience developing software systems and embedded systems from OEM products. He holds a provisional patent on a simultaneous localization and mapping software suite and was awarded a patent for a novel omnidirectional robot. He has disclosed various sensor inventions in the past decade. Manoranjan is the recipient of the 2021 Dean of Engineering Excellence Award at Texas A&M and the 2021 Texas A&M Institute of Data Science Career Initiation Fellowship. He is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and a Fellow of the American Astronautical Society (AAS).


More on CESG Seminars: HERE

Please join on Friday, 2/24/23 at 3:50 p.m. in ETB 1020.
Zoom option: Links and PW in syllabus or email announcement.

Filed Under: Seminars

CESG Seminar: Jiang Hu

Posted on February 2, 2023 by Vickie Winston

Friday, February 10, 2023
3:50 – 4:50 p.m. (CST)
ETB 1020 

Dr. Jiang Hu
Professor
Dept. of Electrical and Computer Engineering
Affiliate of Computer Science and Engineering
Texas A&M University

Title: “Machine Learning for EDA and EDA for Machine Learning”

Talking Points

  • A stochastic approach to handling noisy labels in machine learning models for chip design automation
  • An analytical approach to co-optimization of CNN hardware and dataflow mapping

Abstract
The wave of machine learning splashes to almost every corner of the world due to its unprecedented success. The first part of this talk will be focused on how to leverage machine learning for EDA (Electronic Design Automation). Specifically, a machine learning-based early routability prediction technique will be introduced. This technique provides a stochastic approach to handling non-deterministic data labels, which may exist in other machine learning applications. In the second part, an EDA technique for ML hardware acceleration will be presented. This is the first analytical approach to CNN hardware and dataflow co-optimization, and outperforms state-of-the-art methods in terms of both solution quality and computation runtime.

Biography
Dr. Jiang Hu is a professor in the Department of Electrical and Computer Engineering at Texas A&M University. His research interests include design automation of VLSI circuits and systems, computer architecture optimization and hardware security. He has published over 240 technical papers. He received best paper awards at DAC 2001, ICCAD 2011, MICRO 2021 and ASPDAC 2023. He was the technical program chair and the general chair of the ACM International Symposium on Physical Design (ISPD) in 2011 and 2012, respectively. He was named an IEEE fellow in 2016. He will serve as the program co-chair for ACM/IEEE Workshop on Machine Learning for CAD 2023.

More at https://cesg.tamu.edu/people-2/faculty/jiang-hu/

More on CESG Seminars: HERE

Please join on Friday, 2/10/23 at 4:10 p.m. in ETB 1020.
Zoom option: Links and PW in syllabus or email announcement.

Filed Under: Seminars

CESG Seminar: Sabit Ekin

Posted on January 24, 2023 by Vickie Winston

Friday, February 3, 2023
3:50 – 4:50 p.m. (CST)
ETB 1020  (Zoom option; Links and PW in syllabus or email)

Dr. Sabit Ekin
Associate Professor
Affiliate of Electrical and Computer Engineering
Department of Engineering Technology & Industrial Distribution
Texas A&M University

Title: “An Overview of Wireless Communication, Sensing and IoT Research Projects at Texas Wireless Lab (TWL)”

Talking Points

  • mmWave/Terahertz wireless communication systems for 5G, 6G and Beyond technologies
  • Hybrid RF/Optical communication system design
  • UAV-assisted wireless communications
  • Satellite and space communications

Abstract
Wireless communication and sensing constitute two of the most critical technological advances that broadly impact myriad aspects of the evolving digital society and support the burgeoning era of smart & connected communities and the Internet of Things (IoT). In this talk, I will provide an overview of our state-of-the-art research projects, which tackle new fundamental scientific questions and address the challenges in three main synergistic research thrusts: (i) Wireless Communication, (ii) Wireless Sensing, and (iii) Wireless IoT. Example wireless communication technologies and applications include mmWave/Terahertz wireless communication systems for 5G, 6G and Beyond technologies to support the ever-increasing demand for higher data rates, UAV-assisted wireless communications, and satellite and space communications. Wireless sensing projects include gesture recognition for human-computer interaction (HCI) applications and vital-signs monitoring, such as respiration, heart rate, and glucose level, for healthcare applications. Finally, the projects on wireless IoT applications include remote control and monitoring applications such as livestock monitoring, soil monitoring, and localization.

Biography
Dr. Sabit Ekin is a wireless system design researcher and engineer. He received his Ph.D. in Electrical and Computer Engineering from Texas A&M University (TAMU) in 2012. In January 2023, he joined TAMU as an Associate Professor of Engineering Technology & Industrial Distribution, and Electrical & Computer Engineering (affiliated faculty). He has an 11+ year (post-Ph.D.) track record, including 4 years of industry experience as a Wireless System Engineer at Qualcomm Inc., a world leader in wireless technologies, where he received numerous awards for his work on cellular modem designs for Apple, Samsung, Google, Nokia, etc. Prior to joining TAMU, he was an Associate Professor of ECE at Oklahoma State University, where he worked for 6 years. He was the Director and Co-founder of the Oklahoma CubeSat Initiative (OKSat), the first CubeSat program in the state of Oklahoma. He received the Department of Energy (DOE) 2022 Early Career Award, one of 83 scientists selected from across the nation. He was named the OSU PSO/Albrecht Naeter Endowed Professor of ECE (2022) and a Jack H. Graham Endowed Fellow of Engineering (2021). His research focuses on the design and analysis of mmWave/Terahertz wireless communication systems for 5G, 6G and Beyond technologies and wireless sensing systems. His research is sponsored by major agencies, including NSF (5), NASA (2), DOE-CAREER (1), DOD (4), DOT (2), the Qatar Foundation (1), and U.S. corporations (2).

More at www.sabitekin.com

More on CESG Seminars: HERE

Please join on Friday, 2/3/23 at 4:10 p.m. in ETB 1020.
Zoom option: Links and PW in syllabus or email announcement.

Filed Under: Seminars

Dr. Jiang Hu: New Publication

Posted on January 13, 2023 by Vickie Winston

CESG’s Jiang Hu has a new publication, Machine Learning Applications in Electronic Design Automation, with Dr. Haoxing Ren.

This book covers a wide range of the latest research on ML applications in electronic design automation (EDA), including analysis and optimization of digital design, analysis and optimization of analog design, as well as functional verification, FPGA and system level designs, design for manufacturing, and design space exploration. The ML techniques covered in this book include classical ML, deep learning models such as convolutional neural networks, graph neural networks, generative adversarial networks and optimization methods such as reinforcement learning and Bayesian optimization.

More information at https://www.barnesandnoble.com/w/machine-learning-applications-in-electronic-design-automation-haoxing-ren/1141727406?ean=9783031130748

Filed Under: News

CESG Seminar: Sanjay Shakkottai

Posted on November 2, 2022 by Vickie Winston

Friday, November 18, 2022
10:20 – 11:10 a.m. (CST)
Zoom (Links and PW in syllabus or email)

Dr. Sanjay Shakkottai
Professor, Department of Electrical and Computer Engineering
University of Texas at Austin

Title: “The Power of Adaptivity in Representation Learning: from Meta-Learning to Federated Learning”

Talking Points

  • Algorithms for multi-task learning that learn representations
  • Understanding the training dynamics of meta-learning and federated averaging with fine tuning

Abstract
A central problem in machine learning is as follows: How should we train models using data generated from a collection of clients/environments, if we know that these models will be deployed in a new and unseen environment? In the setting of few-shot learning, two prominent approaches are: (a) develop a modeling framework that is “primed” to adapt, such as Model-Agnostic Meta-Learning (MAML), or (b) develop a common model using federated learning (such as FedAvg), and then fine-tune the model for the deployment environment. We study both of these approaches in the multi-task linear representation setting. We show that the reason models trained through either approach generalize to new environments is that the dynamics of training induce the models to evolve toward the common data representation among the clients’ tasks. In both cases, the structure of the bi-level update at each iteration (an inner and outer update with MAML, and a local and global update with FedAvg) holds the key: the diversity among client data distributions is exploited via the inner/local updates, which induce the outer/global updates to bring the representation closer to the ground truth. In both settings, these are the first results that formally show representation learning and derive exponentially fast convergence to the ground-truth representation. Based on joint work with Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sewoong Oh. Papers: https://arxiv.org/abs/2202.03483 , https://arxiv.org/abs/2205.13692

Biography
Dr. Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is a Professor in the Department of Electrical and Computer Engineering, and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award in 2004 and was elected as an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021. He is currently the Editor in Chief of IEEE/ACM Transactions on Networking. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.

Webpage to learn more about Dr. Shakkottai: HERE

More on CESG Seminars: HERE

Please join on Friday, 11/18/22 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Mohammad Ghavamzadeh

Posted on October 28, 2022 by Vickie Winston

Friday, November 11, 2022
10:20 – 11:10 a.m. (CST)
Virtual via Zoom: https://tamu.zoom.us/j/93347193479 (password in emails or syllabus)

Dr. Mohammad Ghavamzadeh
Senior Staff Research Scientist
Google

Title: “Mitigating the Risk Associated with Epistemic and Aleatory Uncertainties in MDPs”

Abstract
Prior work on safe reinforcement learning (RL) has studied risk-aversion to randomness in dynamics (aleatory) and to model uncertainty (epistemic) in isolation. We propose and analyze a new framework to jointly model the risk associated with epistemic and aleatory uncertainties in finite-horizon and discounted infinite-horizon MDPs. We call this framework, which combines risk-averse and soft-robust methods, RASR. We show that when the risk-aversion is defined using either the entropic value-at-risk (EVaR) or the entropic risk measure (ERM), the optimal policy in RASR can be computed efficiently using a new dynamic program formulation with a time-dependent risk level. As a result, the optimal risk-averse policies are deterministic but time-dependent, even in the infinite-horizon discounted setting. We also show that particular RASR objectives reduce to risk-averse RL with mean posterior transition probabilities. Our empirical results show that our new algorithms consistently mitigate uncertainty as measured by EVaR and other standard risk measures.
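For readers unfamiliar with the entropic risk measure (ERM) mentioned above: for a random return X it is defined as ERM_beta(X) = -(1/beta) log E[exp(-beta X)], approaching the mean as beta goes to 0 and penalizing low-return outcomes as beta grows. A minimal numerical sketch, with hypothetical example values:

```python
import math

# Entropic risk measure of a discrete return distribution:
# ERM_beta(X) = -(1/beta) * log E[exp(-beta * X)].
def erm(returns, probs, beta):
    expectation = sum(p * math.exp(-beta * x) for x, p in zip(returns, probs))
    return -math.log(expectation) / beta

returns = [10.0, 0.0]   # two equally likely outcomes (hypothetical)
probs = [0.5, 0.5]
risk_neutral = sum(p * x for x, p in zip(returns, probs))  # the mean, 5.0
risk_averse = erm(returns, probs, beta=1.0)                # well below the mean
```

With beta = 1 the measure is dominated by the worst outcome (0.0), so the risk-averse value falls far below the risk-neutral mean, which is exactly the behavior a risk-averse policy optimizes against.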

Biography 
Dr. Mohammad Ghavamzadeh received a Ph.D. degree from UMass Amherst in 2005. He was a postdoctoral fellow at UAlberta from 2005 to 2008 and a permanent researcher at INRIA from 2008 to 2013. He was the recipient of the “INRIA award for scientific excellence” in 2011, and obtained his Habilitation in 2014. Since 2013, he has held senior research positions at Adobe and FAIR, and he is now a senior staff research scientist at Google. He has published over 100 refereed papers in major machine learning, AI, and control journals and conferences, and has co-chaired more than 10 workshops and tutorials at NeurIPS, ICML, and AAAI. His research has been mainly focused on reinforcement learning, bandit algorithms, and recommendation systems.

More information on Dr. Ghavamzadeh can be found at
https://mohammadghavamzadeh.github.io/
https://scholar.google.ca/citations?user=Bo-wyrkAAAAJ&hl=en

More info. on past and future CESG Seminars at CESG Seminars (tamu.edu)

* Friday, 11/11/22 at 10:20 a.m. via Zoom *

Filed Under: Seminars


© 2016–2025 Department of Electrical and Computer Engineering
