Department of Electrical and Computer Engineering

The Computer Engineering and Systems Group

Texas A&M University College of Engineering

CESG Seminar: Azalia Mirhoseini

Posted on November 14, 2023 by Vickie Winston

Friday, December 1, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Dr. Azalia Mirhoseini
Assistant Professor, Department of Computer Science
Stanford University

Title: “Pushing the Limits of Scaling Laws in the Age of Large Language Models”

Abstract 
The recent success of large language models has been characterized by scaling laws – the power law relationship between performance and training dataset size, model parameter size, and training compute. In this talk, we will discuss ways to push the scaling laws even further by innovating across data, models, software and hardware. This includes reinforcement learning from human and AI feedback to improve learning efficiency, sparse and dynamic mixture-of-experts neural architectures for better performance, an automated framework for co-designing custom AI accelerators, and a deep RL method for chip floorplanning used in multiple generations of Google AI’s accelerator chips (TPU). Through these cutting-edge examples, we will outline a full-stack approach that leverages AI to overcome the next set of scaling challenges.
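
As background for the scaling-law framing above (not necessarily the formulation used in the talk), one widely cited compute-optimal fit, the Chinchilla law of Hoffmann et al. (2022), models pretraining loss as a power law in parameter count and training tokens:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022).
% L is pretraining loss, N the number of parameters, D the number of training
% tokens; E is the irreducible loss and A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```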

Biography
Dr. Azalia Mirhoseini is an assistant professor of computer science at Stanford University and a senior staff research scientist at DeepMind. Her research interest is developing capable and efficient AI systems that can solve high-impact, real-world problems. Before joining Stanford, Prof. Mirhoseini spent several years in industry, working on frontier generative AI and deep reinforcement learning projects at Anthropic and Google Brain. She has led a diverse portfolio of AI and Systems projects, with publications in Nature, ICML, ICLR, NeurIPS, UAI, ASPLOS, SIGMETRICS, DAC, DATE, and ICCAD. She has received a number of awards, including the MIT Technology Review 35 Under 35, the Best Ph.D. Thesis at Rice University’s ECE Department, and a Gold Medal in the National Math Olympiad in Iran. Her work has been covered in various media outlets, including MIT Technology Review, IEEE Spectrum, The Verge, Times of London, ZDNet, VentureBeat, and WIRED.

More on Azalia Mirhoseini: http://azaliamirhoseini.com/

More on CESG Seminars: HERE

Please join on Friday, 12/1/23 at 10:20 a.m. in ETB 1020.

Filed Under: News

NEWS! – Dr. Dileep Kalathil and Dr. Moble Benedict receive Office of Naval Research (ONR) grant

Posted on November 6, 2023 by Nandu Giri

Dr. Dileep Kalathil and Dr. Moble Benedict received an Office of Naval Research (ONR) grant to study Autonomous VTOL Aircraft Ship Landing. The team aims to develop the next generation of fully autonomous vertical takeoff and landing (VTOL) aircraft that can land on ships in rough conditions by combining an optimal aircraft design with a robust reinforcement learning control algorithm. The controller is precise enough that, even when the vehicle is changing course or flying in heavy winds, it can still track the ship’s horizon bar, a green, lighted, gyro-stabilized strip that provides the pilot an artificial horizon.

Dr. Benedict and Dr. Kalathil have proven success in using reinforcement learning to track and safely land an unmanned aerial system (UAS) in various conditions, including moderate horizontal winds, foggy visibility and changes in course and speed. Now, they’re merging their respective disciplines of aerospace engineering and electrical and computer engineering to build on these advancements.

More on this HERE

Filed Under: Uncategorized

NEWS! – Dr. Srinivas Shakkottai and Dr. Dileep Kalathil receive National Science Foundation (NSF) grant

Posted on October 13, 2023 by Nandu Giri

Principal investigator Dr. Srinivas Shakkottai and co-principal investigator Dr. Dileep Kalathil recently received a National Science Foundation (NSF) grant to research EdgeRIC: Real-time radio access network intelligent control for the next generation of cellular networks. In their lab, Shakkottai and Kalathil are conducting experiments to show how EdgeRIC operates and the significance of real-time control happening every millisecond. Joining the project is Dr. Dinesh Bharadia, an associate professor from the University of California San Diego and an adjunct professor at Texas A&M.

More on this at https://engineering.tamu.edu/news/2023/09/engineering-researchers-to-study-wireless-communication-and-machine-learning-with-nsf-grant.html

Filed Under: Uncategorized

CESG Seminar: Abu Sebastian

Posted on October 10, 2023 by Vickie Winston

Friday, October 20, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Abu Sebastian
Distinguished Research Scientist, IBM Research – Europe

Title: “Analog In-Memory Computing for Deep Learning Inference”

Talking Points: 

  • What is analog in-memory computing (AIMC)?
  • How mature is AIMC?
  • What are the key open research topics and what’s next for AIMC?

Abstract
I will introduce analog in-memory computing based on non-volatile memory technology, with a focus on the key concepts and the associated terminology. Subsequently, a multi-tile mixed-signal AIMC chip for deep learning inference will be presented. This chip, fabricated in 14nm CMOS technology, comprises 64 AIMC cores/tiles based on phase-change memory. It will serve as the basis to delve deeper into the device, circuit, architectural, and algorithmic aspects of AIMC. Of particular focus will be achieving floating-point-equivalent classification accuracy while performing the bulk of the computation in the analog domain at relatively low precision. I will also present an architectural vision for a next-generation AIMC chip for DNN inference and conclude with an outlook for the future.
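
To make the accuracy question above concrete, here is a minimal Python sketch (all values are assumptions: roughly 4-bit conductance resolution, Gaussian read noise, random weights) that models an analog in-memory matrix-vector multiply as quantization plus noise and compares it to the floating-point result:

```python
import numpy as np

# Toy model of one AIMC tile: weights stored as quantized "conductances" with
# additive read noise; the MVM itself happens at this reduced analog precision.
rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256))   # hypothetical layer weights
x = rng.standard_normal(256)          # input activations

levels = 2 ** 4                       # assume ~4-bit conductance resolution
w_max = np.abs(W).max()
G = np.round(W / w_max * (levels // 2)) / (levels // 2) * w_max   # quantized weights
noise = 0.02 * w_max * rng.standard_normal(W.shape)               # read/programming noise

y_fp = W @ x                          # digital floating-point reference
y_aimc = (G + noise) @ x              # analog-domain approximation
rel_err = np.linalg.norm(y_fp - y_aimc) / np.linalg.norm(y_fp)
print(f"relative MVM error ~{rel_err:.3f}")
```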

Two papers that may be of interest:

  1. “Y2023_legallo_NatureElectronics.pdf”
  2. “Y2020_sebastian_NatureNano.pdf”

Biography
Abu Sebastian is one of the technical leaders of IBM’s research efforts towards next-generation AI hardware and manages the in-memory computing group at IBM Research – Zurich. He is the author of over 200 publications in peer-reviewed journals and conference proceedings and holds over 90 US patents. In 2015 he was awarded the European Research Council (ERC) Consolidator Grant, and in 2020 he was awarded an ERC Proof-of-Concept Grant. He was an IBM Master Inventor and was named Principal and Distinguished Research Staff Member in 2018 and 2020, respectively. In 2019, he received the Ovshinsky Lectureship Award for his contributions to “Phase-change materials for cognitive computing.” In 2023, he was conferred the title of Visiting Professor in Materials by the University of Oxford. He is an IEEE Distinguished Lecturer and an IEEE Fellow.

More on Abu Sebastian: https://research.ibm.com/people/abu-sebastian

More on CESG Seminars: HERE

Please join on Friday, 10/20/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Arif Merchant

Posted on October 3, 2023 by Vickie Winston

Friday, October 13, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Arif Merchant
Research Scientist @ Google

Title: “Research Directions in Google Storage”

Talking Points: 

  • How does Google Storage work?
  • How do you lay out data across tens of thousands of disks and dozens of device types, with wildly varying failure rates and capacities, while keeping them all well utilized and the data safe?
  • What are the open research questions in Cloud Storage?

Abstract
Google’s Cloud is a very large consumer of storage, and its storage systems are geographically distributed, multi-layered, heterogeneous, and complex. We will present a high-level overview of Google’s storage systems and the common challenges they face, and explore several research directions for managing and optimizing the resources they use. We will touch upon topics in layout, encoding, and new storage technologies, and conclude with a discussion of some open questions.
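
As a toy illustration of the layout question in the talking points (this is not Google’s placement policy, just a capacity-weighted sketch with invented disk sizes), replicas of each data chunk can be spread across distinct disks chosen in proportion to their free space:

```python
import random

# Hypothetical pool of disks with made-up free capacities (GB).
disks = {f"disk{i}": random.choice([4, 8, 16]) * 1000 for i in range(20)}

def place(chunk_gb, replicas=3):
    """Pick `replicas` distinct disks, weighting the choice by free capacity."""
    chosen = []
    for _ in range(replicas):
        candidates = [d for d in disks if d not in chosen and disks[d] >= chunk_gb]
        weights = [disks[d] for d in candidates]
        d = random.choices(candidates, weights=weights)[0]  # capacity-weighted pick
        disks[d] -= chunk_gb                                # account for used space
        chosen.append(d)
    return chosen

print(place(64))  # e.g. ['disk3', 'disk17', 'disk8']
```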

Biography
Arif Merchant is a Research Scientist at Google and leads the Storage Analytics group, which studies interactions between components of the storage stack. His interests include distributed storage systems, storage management, and stochastic modeling. He holds the B.Tech. degree from IIT Bombay and the Ph.D. in Computer Science from Stanford University. He is an ACM Distinguished Scientist.


More on CESG Seminars: HERE

Please join on Friday, 10/13/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

Best Student Paper Award

Posted on September 21, 2023 by Vickie Winston

Congratulations to Dr. Jiang Hu, Ph.D. student Yishuang Lin and former Ph.D. student Yaguang Li!

Their paper “MMM: Machine Learning-Based Macro-Modeling for Linear Analog ICs and ADC/DACs” won the Best Student Paper Award at the 5th ACM/IEEE Workshop on Machine Learning for CAD (MLCAD 2023).

This work introduces macro-model-level machine learning techniques to address the problems of high model-construction cost and low model reusability for linear analog ICs and ADC/DACs.

Kudos!

Filed Under: News

CESG Seminar – Bingzhe Li

Posted on September 8, 2023 by Caroline Jurecka

Friday, October 6, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Bingzhe Li
Assistant Professor, Dept. of Computer Science
University of Texas at Dallas

Title: “Next-Generation Storage Systems for Big Data”

Talking Points: 

  • How to efficiently manage current emerging storage devices
  • How to design a new DNA storage system
  • How to build a high-performance storage system for big-data applications

Abstract
Tremendous technology developments have been witnessed in the areas of computing, networking, and storage systems. A huge amount of digital data has been generated over the past decades with the rapid growth of new technologies such as Internet of Things (IoT) devices, edge devices, sensors, and 5G. Such vast amounts of digital data are being generated and made available to new applications. It has become a critical and growing challenge to manage this huge amount of data and to locate the information we need at any time, from anywhere.

In this talk, I will focus on emerging storage systems for big data from two perspectives. First, from the capacity perspective, two emerging storage devices/systems with large areal densities, shingled magnetic recording (SMR) and DNA storage, are introduced, including their management and utilization. For SMR, based on its unique properties, we introduce a machine learning (ML)-based scheme to improve the performance of SMR storage systems. Moreover, for image-based applications, we will show how to efficiently store images in DNA storage with higher reliability and capacity. Second, from the performance perspective, I will present a high-performance system to accelerate graph neural networks (GNNs). The system is co-designed with the storage systems and algorithms used in GNNs and can significantly speed up GNN training.
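
For readers new to DNA storage, the sketch below shows the most basic idea of mapping digital data onto nucleotide sequences: a plain 2-bits-per-base code. The scheme discussed in the talk, like real DNA codecs generally, layers additional constraints (GC balance, homopolymer limits) and error correction on top of such a mapping.

```python
# Minimal 2-bits-per-base encoding sketch (illustrative only).
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides (2 bits per base)."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

payload = b"image chunk"
strand = bytes_to_dna(payload)
assert dna_to_bytes(strand) == payload
print(strand[:24])
```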

Biography
Dr. Bingzhe Li is currently an assistant professor of Computer Science at the University of Texas at Dallas. He received his PhD degree in Electrical and Computer Engineering from the University of Minnesota, Twin Cities in 2018 after which he worked as a postdoctoral associate in the Department of Computer Science and Engineering, University of Minnesota, Twin Cities.

His research interests focus on memory and storage systems, computer architecture, and low-cost computing architectures. He has served on conference organization committees and technical program committees, and as a reviewer for several major conferences and journals in computer systems, storage, and computer architecture. In recognition of his research, he received a Best Paper Nomination at ICCD’21 and the Featured Paper of the Month in IEEE Transactions on Computers in March 2021.


More on Bingzhe Li: Bingzhe Li (tamu.edu)

More on CESG Seminars: HERE

Please join on Friday, 10/6/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

2022 Awards: Dr. Reddy & Dr. Khatri

Posted on June 13, 2023 by Vickie Winston

Congratulations Dr. Khatri on your 2022 Engineering Faculty Award! Well done!

Congratulations Dr. Reddy on your Engineering Honoree Award!

Filed Under: Uncategorized

Congratulations Dr. Hu!

Posted on January 13, 2023 by Vickie Winston

CESG’s Jiang Hu has a new publication, Machine Learning Applications in Electronic Design Automation, co-authored with Dr. Haoxing Ren.

This book covers a wide range of the latest research on ML applications in electronic design automation (EDA), including analysis and optimization of digital design, analysis and optimization of analog design, as well as functional verification, FPGA and system level designs, design for manufacturing, and design space exploration. The ML techniques covered in this book include classical ML, deep learning models such as convolutional neural networks, graph neural networks, generative adversarial networks and optimization methods such as reinforcement learning and Bayesian optimization.

More information at https://www.barnesandnoble.com/w/machine-learning-applications-in-electronic-design-automation-haoxing-ren/1141727406?ean=9783031130748

Filed Under: Front Page, News, People, Uncategorized

Dr. P.R. Kumar – IEEE Alexander Graham Bell Medal

Posted on February 17, 2022 by Vickie Winston

Dr. Kumar is the 2022 recipient of one of the Institute of Electrical and Electronics Engineers’ (IEEE) most prestigious honors — the IEEE Alexander Graham Bell Medal. It is the highest award by IEEE in communications and networking. Kumar was recognized for his seminal contributions to the modeling, analysis and design of wireless networks.

For more, go to https://engineering.tamu.edu/news/2021/12/kumar-awarded-institute-of-electrical-and-electronics-engineers-medal.html.

Congratulations Dr. Kumar!

Filed Under: Faculty, News

Dr. JV Rajendran – 2022 Young Investigator Award Recipient

Posted on February 17, 2022 by Vickie Winston

Dr. JV Rajendran has won the 2022 Young Investigator Award from the Office of Naval Research Science & Technology!

His research project, titled “Steel Wool: Next-Generation Hardware Fuzzers,” addresses the area of Cyber Security and Complex Software Systems.

Congratulations JV!

Filed Under: Faculty, News, Uncategorized

Best Paper Award – IEEE: Drs. Yasin and Rajendran

Posted on February 17, 2022 by Vickie Winston

Congratulations to former CESG Post-Doc Dr. Muhammad Yasin and Dr. JV Rajendran!  Their 2020 paper “Removal Attacks on Logic Locking and Camouflaging Techniques” won a Best Paper Award from the Computer Society Publications Board and IEEE Transactions on Emerging Topics in Computing.

 

Filed Under: Faculty, News, Uncategorized

Congratulations Dr. Karan Watson!

Posted on September 7, 2021 by Vickie Winston

Dr. Karan Watson, Regents Professor, was awarded the 2021 American Society for Engineering Education (ASEE) Lifetime Achievement Award in Engineering Education. Dr. Watson was recognized for her pioneering leadership and sustained contributions to education in the fields of engineering and engineering technology.

For the full article or a more in-depth look at her work, please visit: Texas A&M Engineering News and Dr. Watson’s Google Scholar Profile

Past Recipients
2012 Richard M. Felder
2014 James E. Stice
2015 Karl A. Smith
2016 Russ Pimmel
2018 James L. Melsa
2019 K.L. DeVries
2020 Don P. Giddens
2021  Karan L. Watson

Filed Under: Uncategorized

CESG Former Student Shiyan Hu Elected to European Academy of Sciences and Arts

Posted on June 7, 2021 by Paul Gratz

CESG former student Shiyan Hu, who received his Ph.D. in Computer Engineering in 2008, has been elected a Member of the European Academy of Sciences and Arts for his significant contributions to Design, Optimization, and Security of Cyber-Physical Systems. The European Academy of Sciences and Arts currently has about 2,000 members, including 34 Nobel Prize laureates, who are world-leading scientists, artists, and practitioners of governance, with expertise ranging from the natural sciences, medicine, and technical & environmental sciences to the humanities and social sciences. Academy members, who are dedicated to innovative research, international collaboration, and the exchange and dissemination of knowledge, are elected based on their outstanding achievements.

Shiyan Hu is a professor, the Chair in Cyber-Physical System Security, and Director of the Cyber Security Academy at the University of Southampton. He has published more than 150 refereed papers in the areas of cyber-physical systems, cyber-physical system security, and VLSI computer-aided design, with most of his journal articles appearing in IEEE/ACM Transactions. He is an ACM Distinguished Speaker, an IEEE Systems Council Distinguished Lecturer, a recipient of the 2017 IEEE Computer Society TCSC Middle Career Researcher Award, and a recipient of the 2014 U.S. National Science Foundation CAREER Award. His publications have received distinctions such as the 2018 IEEE Systems Journal Best Paper Award, the 2017 Keynote Paper in IEEE Transactions on Computer-Aided Design, the front cover paper in IEEE Transactions on NanoBioscience in March 2014, and multiple Thomson Reuters ESI Highly Cited Papers/Hot Papers. His ultra-fast slew buffering technique has been widely deployed in industry for designing over 50 microprocessor and ASIC chips, including IBM’s flagship POWER7 and POWER8 chips.

He is a well-recognized international leader in his field. He chairs the IEEE Technical Committee on Cyber-Physical Systems, leads IET Cyber-Physical Systems: Theory & Applications, and chaired the 2020 Editor-in-Chief Search Committee for ACM TODAES. He has served as an Associate Editor for five IEEE/ACM Transactions, including IEEE TCAD, IEEE TII, and ACM TCPS, and as a Guest Editor for various IEEE/ACM journals, including Proceedings of the IEEE and IEEE Transactions on Computers. He is an Elected Member of the European Academy of Sciences and Arts, a Fellow of the IET, and a Fellow of the British Computer Society.

Shiyan Hu says: “I am delighted to be elected as a Member of European Academy of Sciences and Arts. It is a unique honor in recognition of my research accomplishments and international leadership in my research fields. After many years following my graduation, I still feel very grateful to the education I received from Texas A&M’s Computer Engineering Group and research experience with my Ph.D. advisor Professor Jiang Hu. These were pivotally helpful for me to contribute significantly to my fields.”

Filed Under: News

Agricultural Blue Legacy Award

Posted on March 26, 2021 by Paul Gratz

Congratulations to Dr. Jiang Hu and team for receiving the Agricultural Blue Legacy Award this March.

They developed a center pivot automation and control system known as CPACS, which contributes to water conservation in agriculture. To learn more, go to http://www.hpwd.org/newswire/2021/3/18/amarillo-water-management-team-honored.

The team is referred to as the “Amarillo Water Management Team” and includes:
Dr. Hongxin Kong, CEEN, PhD Graduate
Jianfeng Song, CEEN, PhD Candidate
Dr. Justin Sun, CEEN, PhD Graduate
Dr. Yanxiang Yang, CEEN, PhD Graduate
Dr. Jiang Hu, co-director of graduate programs in the Texas A&M Department of Electrical and Computer Engineering at College Station;
Dr. Gary Marek, U.S. Department of Agriculture-Agricultural Research Service agricultural engineer at Bushland;
Thomas Marek, AgriLife Research senior research engineer at Amarillo;
Dr. Dana Porter, Texas A&M AgriLife Extension Service program leader in the Department of Biological and Agricultural Engineering at Lubbock; and
Dr. Qingwu Xue, AgriLife Research crop stress physiologist at Amarillo.

Thank you Amarillo Water Management Team for improving our world with your projects!

 

Pic 1: Dr. Hongxin Kong
Pic 2: Dr. Jiang Hu & Dr. Yanxiang Yang
Pic 3: Dr. Hongxin Kong
Feature Pic: Yanxiang Yang, Thomas Marek & Justin Sun

Filed Under: News

CESG Seminar – Dr. Joshua Peeples

Posted on August 24, 2022 by Vickie Winston

Friday, September 2, 2022
10:20 – 11:10 a.m. (CST)
ETB 1020 – In person (or by Zoom for those receiving emails)

Dr. Joshua Peeples
ACES Faculty Fellow & Visiting Assistant Professor, Texas A&M University, Electrical & Computer Engineering

Title: “Statistical Texture Feature Learning for Image Analysis”

Talking Points:

  • Convolutional neural networks are biased towards structural textures
  • Histogram layer(s) provide statistical context within deep learning models to improve performance

Abstract

Feature engineering often plays a vital role in the fields of computer vision and machine learning. A few common examples of engineered features include histogram of oriented gradients (HOG), local binary patterns (LBP), and edge histogram descriptors (EHD). Features such as pixel gradient directions and magnitudes for HOG, encoded pixel differences for LBP, and edge orientations for EHD are aggregated through histograms to extract texture information. However, the process of designing handcrafted features can be difficult and time consuming. Artificial neural networks (ANNs) such as convolutional neural networks (CNNs) have performed well in various applications such as facial recognition, semantic segmentation, object detection, and image classification through automated feature learning.

A new histogram layer is proposed to learn features and maximize the performance of ANNs for statistical texture analysis. Current approaches using ANNs or handcrafted features do not perform well for some texture applications due to inherent problems within texture datasets (e.g., high intrinsic dimensionality, large intra-class variations) and limitations in methods that use handcrafted and/or deep learning features. The proposed approach is a novel method to synthesize both neural and traditional features into a single pipeline. The histogram layer can estimate bin centers and widths through the backpropagation of errors to aggregate the features from the data while also maintaining spatial information. The improved performance of each network with the addition of histogram layer(s) demonstrates the potential for the use of this new element within ANNs.
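
A minimal PyTorch sketch of the soft-binning idea is shown below. The bin centers and widths are learnable parameters updated by backpropagation; everything else (global pooling, parameter initializations, names) is an illustrative assumption rather than the exact histogram layer proposed in this work, which bins features over local windows to retain spatial information.

```python
import torch
import torch.nn as nn

class SoftHistogram(nn.Module):
    """Differentiable histogram pooling with learnable bin centers and widths
    (RBF-style soft binning). Illustrative sketch, not the proposed layer."""
    def __init__(self, num_bins: int):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, num_bins))
        self.widths = nn.Parameter(torch.full((num_bins,), 10.0))  # inverse bandwidths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) feature maps from a CNN backbone
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w, 1)                  # (B, C, HW, 1)
        diff = flat - self.centers.view(1, 1, 1, -1)      # distance to each bin center
        weights = torch.exp(-(self.widths.view(1, 1, 1, -1) * diff) ** 2)
        return weights.mean(dim=2)                        # (B, C, num_bins) soft counts

feats = torch.randn(2, 64, 8, 8)       # dummy feature maps
hist = SoftHistogram(num_bins=16)
print(hist(feats).shape)               # torch.Size([2, 64, 16])
```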

Biography

Dr. Joshua Peeples is an ACES Faculty Fellow and Visiting Assistant Professor in the Department of Electrical and Computer Engineering at Texas A&M University. Dr. Peeples received his Bachelor of Science degree in electrical engineering with a minor in mathematics from the University of Alabama at Birmingham. He earned his Ph.D. in the Department of Electrical and Computer Engineering at the University of Florida with Dr. Alina Zare. During his Ph.D. studies, Dr. Peeples developed and refined novel deep learning methods for texture characterization, segmentation, and classification. Dr. Peeples’ current research seeks to extend his dissertation work and explore new aspects such as developing algorithms for explainable AI and various real-world applications in other domains (e.g., biomedical, agriculture). These methods can then be applied toward automated image understanding, object detection, and classification. Dr. Peeples has been recognized with several awards, including the Florida Education Fund’s McKnight Doctoral Fellowship and National Science Foundation Graduate Research Fellowship. In addition to research and teaching, Dr. Peeples is dedicated to service and advocacy for students at the university and in the community.

More information on Dr. Peeples at https://engineering.tamu.edu/electrical/profiles/peeples-joshua.html 

Please join on Friday, 9/2/22 at 10:20 a.m. in ETB 1020.

 

Filed Under: Front Page, News, Uncategorized

CESG Seminar: Dr. Bo Yuan

Posted on January 25, 2022 by Vickie Winston

Friday, January 25, 2021
4:10 – 5:00 p.m.
via Zoom (link below)
 
Dr. Bo Yuan
Asst. Professor, Dept. of Electrical & Computer Engineering, Rutgers University

Title: “Algorithm and Hardware Co-Design for Efficient Deep Learning: Sparse and Low-rank Perspective”

Talking Points

  • Algorithm and hardware co-design for structured and unstructured deep neural networks
  • Algorithm and hardware co-design for high-order tensor decomposition-based deep neural networks

Abstract
In the emerging artificial intelligence era, deep neural networks (DNNs), a.k.a. deep learning, have achieved unprecedented success in various applications. However, DNNs are usually storage-intensive, computation-intensive, and energy-consuming, which poses severe challenges to their wide deployment in many application scenarios, especially resource-constrained, low-power IoT applications and embedded systems. In this talk, I will introduce my group’s algorithm/hardware co-design work for energy-efficient DNNs, from both the sparse and low-rank perspectives. First, I will show the benefit of using structured and unstructured sparsity of DNNs for designing low-latency, low-power DNN hardware accelerators. In the second part of my talk, I will present an algorithm/hardware co-design framework that leverages low tensor rank toward energy-efficient, high-accuracy DNN models and accelerators.
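
The low-rank side of this co-design can be illustrated with a short NumPy sketch (random weights and an arbitrarily chosen rank, purely for illustration): factoring a dense weight matrix via truncated SVD replaces one large matrix-vector product with two much smaller ones, trading approximation error for fewer parameters and operations. Trained DNN weight matrices are typically far closer to low rank than the random matrix used here, so in practice the error can be small.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024))   # hypothetical fully connected weights

# Truncated SVD: keep the top-r singular directions.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 64                                 # chosen rank (assumption)
A = U[:, :r] * s[:r]                   # (512, r)
B = Vt[:r, :]                          # (r, 1024)

# Two skinny matmuls replace one big one: y = W x  ->  y ~= A (B x)
x = rng.standard_normal(1024)
rel_err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(f"relative error ~{rel_err:.3f}, params {W.size} -> {A.size + B.size}")
```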

Biography
Dr. Bo Yuan is currently an assistant professor in the Department of Electrical and Computer Engineering at Rutgers University. Before that, he was with the City University of New York from 2015 to 2018. He received his bachelor’s and master’s degrees from Nanjing University, China, in 2007 and 2010, respectively, and his Ph.D. from the University of Minnesota, Twin Cities, in 2015. His research interests include algorithm and hardware co-design and implementation for machine learning and signal processing systems, error-resilient low-cost computing techniques for embedded and IoT systems, and machine learning for domain-specific applications. He is the recipient of the Global Research Competition Finalist Award from Broadcom Corporation. Dr. Yuan serves as a technical committee track chair and technical committee member for several IEEE/ACM conferences, and is an associate editor of the Springer Journal of Signal Processing Systems.

Zoom Link: https://tamu.zoom.us/j/96343481647; Zoom ID: 963 4348 1647

Filed Under: Front Page, Seminars

CESG Seminar: Dr. Mayank Parasar

Posted on January 14, 2022 by Vickie Winston

Friday, March 25, 2022
4:10 – 5:00 p.m.
ETB 1020 – *In-person* (Emerging Technologies Building)
Dr. Mayank Parasar
Samsung Austin R&D Center (SARC) in Austin, TX

Title: “Subactive Techniques for Guaranteeing Routing and Protocol Deadlock Freedom in Interconnection Networks”

Talking Points:

    • Correctness is of paramount concern in interconnection networks, and (routing and protocol) deadlock freedom is a cornerstone of correctness.
    • Prior solutions either over-provision the network or incur a performance penalty to provide deadlock freedom.
    • We propose a new set of unified techniques to resolve routing and protocol deadlocks.

Abstract
Interconnection networks are the communication backbone of any system. They occur at various scales: on-chip networks (for example, 2.5D/chiplet networks) connecting processing cores, networks connecting compute nodes in supercomputers, and networks connecting high-end servers in data centers. One of the most fundamental challenges in an interconnection network is deadlock. Deadlocks can be of two types: routing level deadlocks and protocol level deadlocks. Routing level deadlocks occur because of a cyclic dependency between packets trying to acquire buffers, whereas protocol level deadlocks occur because a response message is stuck indefinitely behind a queue of request messages. Both kinds of deadlock render the forward movement of packets impossible, leading to complete system failure.

Prior work either restricts the path that packets take in the network or provisions an extra set of buffers to resolve routing level deadlocks. For protocol level deadlocks, separate sets of buffers are reserved at every router for each message class. Naturally, proposed solutions either restrict the packet movement resulting in lower performance or require higher area and power.

We propose a new set of efficient techniques for providing both routing and protocol level deadlock freedom. Our techniques provide periodic forced movement to the packets in the network, which breaks any cyclic dependency between packets. Breaking this cyclic dependency resolves routing level deadlocks. Moreover, because of the periodic forced movement, a response message is never stuck indefinitely behind a queue of request messages; therefore, our techniques also resolve protocol level deadlocks. We use the term ‘subactive’ for this new class of techniques.
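
The cyclic-dependency condition described above can be made concrete with a small sketch: routing deadlock becomes possible when the channel dependency graph (which channels a packet may hold while waiting on others) contains a cycle. The example below, a made-up four-channel ring, detects such a cycle with a depth-first search; it illustrates the condition the proposed subactive techniques break, not the techniques themselves.

```python
def has_cycle(dep_graph):
    """DFS cycle detection over a dict mapping channel -> set of next channels."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in dep_graph}

    def visit(c):
        color[c] = GRAY
        for nxt in dep_graph.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:
                return True                      # back edge: cyclic dependency
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[c] = BLACK
        return False

    return any(color[c] == WHITE and visit(c) for c in dep_graph)

# Four channels waiting on each other in a ring: the classic routing deadlock.
ring = {"c0": {"c1"}, "c1": {"c2"}, "c2": {"c3"}, "c3": {"c0"}}
print(has_cycle(ring))  # True
```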

Biography
Dr. Mayank Parasar works at the Samsung Austin R&D Center (SARC) in Austin, TX. He received his Ph.D. from the School of Electrical and Computer Engineering at the Georgia Institute of Technology, an M.S. in Electrical and Computer Engineering from Georgia Tech in 2017, and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Kharagpur in 2013.

He works in computer architecture, with a research focus on proposing breakthrough solutions in the fields of interconnection networks, memory systems, and system software/application-layer co-design. His dissertation, titled Subactive Techniques for Guaranteeing Routing and Protocol Deadlock Freedom in Interconnection Networks, formulates techniques that guarantee deadlock freedom with a significant reduction in both area and power budget.

He held the position of AMD Student Ambassador at Georgia Tech in the year 2018-19. He received the Otto & Jenny Krauss Fellow award in the year 2015-16.

In person at ETB 1020 at 4:10 p.m. on Friday, 3/25/22

Filed Under: Front Page, News, Seminars

© 2016–2023 Department of Electrical and Computer Engineering
