
Department of Electrical and Computer Engineering

The Computer Engineering and Systems Group

Texas A&M University College of Engineering

Seminars

CESG Seminar: Kevin Nowka

Posted on January 17, 2024 by Vickie Winston

Friday, January 19, 2024
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Dr. Kevin Nowka
Professor of Practice, Department of Electrical and Computer Engineering
Texas A&M University

Title: “Systems and Machine Learning Research for Application in Digital Agriculture”

Abstract 
Agricultural practices developed during the Green Revolution of the 1960s to 1980s are insufficient to deal with future global demands for food, especially with increasing natural resource controls. Digital Agriculture research is allowing for continued improvements in crop yields and crop quality through better management practices and more efficient resource utilization. This talk will cover how the use of large agricultural datasets and modern machine learning allows agricultural researchers and producers to improve the predictability of crop health and crop yields in support of improved agricultural management practices. Recent research on the integration of ML with crop imaging from drones and satellites for wheat, cotton, and sorghum will be presented. Finally, the integration of learning systems into agriculture infrastructure will be described.
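As a flavor of the per-pixel features such pipelines feed into ML models, the sketch below computes NDVI, a standard vegetation index derived from the red and near-infrared bands of drone or satellite imagery. The health thresholds are hypothetical placeholders for illustration, not values from the talk.

```python
# Hedged sketch: NDVI from red and near-infrared reflectance, a standard
# per-pixel feature in crop-imaging ML pipelines. The health thresholds
# below are hypothetical placeholders, not values from the talk.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def health_label(v: float) -> str:
    """Toy thresholds; real systems learn decision rules from field data."""
    if v > 0.6:
        return "healthy"
    if v > 0.3:
        return "stressed"
    return "bare/poor"

if __name__ == "__main__":
    # A dense green canopy reflects strongly in NIR, so NDVI is high.
    print(health_label(ndvi(nir=0.5, red=0.1)))
```

In practice a model would consume many such indices per pixel over time; NDVI simply illustrates how raw imagery becomes a predictive feature.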

Biography
Dr. Kevin Nowka is a Professor of Practice in the Texas A&M University Department of Electrical and Computer Engineering. His research focuses on optimizing computer hardware and software and learning models for data-intensive, cognitive and AI applications.

He received a B.S. in Computer Engineering from Iowa State University in 1986 and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1986 and 1995, respectively.

Previously he was the Director of IBM Research – Austin, one of IBM’s 12 global research laboratories and was the IBM Senior State Executive for Texas. Prior to coming to IBM he was a Member of Technical Staff at AT&T Bell Labs.

Dr. Nowka has been granted 135 US Patents and has over 100 publications.

More on Kevin Nowka
https://www.linkedin.com/in/kevin-nowka-6587715b/ or https://www.researchgate.net/profile/Kevin-Nowka

More on CESG Seminars: HERE

Please join on Friday, 1/19/24 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Azalia Mirhoseini

Posted on November 14, 2023 by Vickie Winston

Friday, December 1, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Dr. Azalia Mirhoseini
Assistant Professor, Department of Computer Science
Stanford University

Title: “Pushing the Limits of Scaling Laws in the Age of Large Language Models”

Abstract 
The recent success of large language models has been characterized by scaling laws – the power law relationship between performance and training dataset size, model parameter size, and training compute. In this talk, we will discuss ways to push the scaling laws even further by innovating across data, models, software and hardware. This includes reinforcement learning from human and AI feedback to improve learning efficiency, sparse and dynamic mixture-of-experts neural architectures for better performance, an automated framework for co-designing custom AI accelerators, and a deep RL method for chip floorplanning used in multiple generations of Google AI’s accelerator chips (TPU). Through these cutting-edge examples, we will outline a full-stack approach that leverages AI to overcome the next set of scaling challenges.
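The power-law relationship the abstract refers to can be made concrete with a small sketch. The constants below are illustrative placeholders loosely in the range reported in the scaling-law literature, not numbers from the talk.

```python
# Toy illustration of a neural scaling law: held-out loss falls as a power
# law in parameter count, L(N) = (Nc / N) ** alpha. Constants are
# illustrative placeholders, not fitted to any particular model family.

def scaling_loss(n_params: float, n_c: float = 8.8e13,
                 alpha: float = 0.076) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

if __name__ == "__main__":
    # Each 10x in parameters multiplies the loss by the same factor 10**-alpha.
    for n in (1e8, 1e9, 1e10):
        print(f"{n:.0e} params -> loss {scaling_loss(n):.3f}")
```

The constant multiplicative improvement per 10x of scale is exactly why further gains get expensive, and why the talk looks for wins elsewhere in the stack.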

Biography
Dr. Azalia Mirhoseini is an assistant professor of computer science at Stanford University and a senior staff research scientist at DeepMind. Her research focuses on developing capable and efficient AI systems that can solve high-impact, real-world problems. Before joining Stanford, Prof. Mirhoseini spent several years in industry, working on frontier generative AI and deep reinforcement learning projects at Anthropic and Google Brain. She has led a diverse portfolio of AI and Systems projects, with publications in Nature, ICML, ICLR, NeurIPS, UAI, ASPLOS, SIGMETRICS, DAC, DATE, and ICCAD. She has received a number of awards, including the MIT Technology Review 35 Under 35, the Best Ph.D. Thesis award at Rice University’s ECE Department, and a Gold Medal in the National Math Olympiad in Iran. Her work has been covered in various media outlets, including MIT Technology Review, IEEE Spectrum, The Verge, Times of London, ZDNet, VentureBeat, and WIRED.

More on Azalia Mirhoseini: http://azaliamirhoseini.com/

More on CESG Seminars: HERE

Please join on Friday, 12/1/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Dr. Neena Imam

Posted on November 1, 2023 by Vickie Winston

Friday, November 10, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Dr. Neena Imam
Director of the O’Donnell Data Science and Research Computing Institute (DSRCI)
Southern Methodist University (SMU)

Title: “Future of Data Science: HPC+AI+Beyond-Moore”

Abstract 

The computing ecosystem is at an inflection point, with many disruptive technologies converging. With the recent debut of exascale supercomputers, the architecture of the next generation of HPC platforms is being discussed in the scientific community. The definition of HPC has changed: HPC is no longer just about floating-point operations, but also about the ability to ingest and process huge amounts of data. Traditional HPC applications and workloads are benefiting from the incorporation of AI and machine learning. Additionally, with the plateauing of Moore’s law, there is tremendous momentum in Beyond-Moore technologies, particularly in quantum computing. This talk will discuss the future of data science in this rapidly changing technology landscape.

Biography
Dr. Neena Imam is the inaugural Director of the O’Donnell Data Science and Research Computing Institute (DSRCI) at Southern Methodist University (SMU), a position key to the university’s commitment to data-focused education and next-gen computational research. The DSRCI also serves as the gateway to SMU’s HPC environment.

Before joining SMU, Neena Imam served as the Director of Strategic Researcher Engagement at NVIDIA Corporation, the industry leader in GPU computing and AI/ML research. In this role, Neena worked with academic researchers to enable GPU-accelerated and AI/ML applications development. Before NVIDIA, Neena served as a distinguished scientist and the Director of Research Collaboration in the Computing and Computational Sciences Directorate at Oak Ridge National Laboratory (ORNL). At ORNL, Neena performed research in HPC, as well as next-generation microelectronics and Post-Moore computing. Neena is the author/co-author of many scientific articles, has served as an invited speaker and panelist at many conferences, and is active in professional organizations to promote research and education in HPC and AI.

Neena holds a Doctoral degree in Electrical Engineering from Georgia Institute of Technology, with Master’s and Bachelor’s degrees in the same field from Case Western Reserve University and California Institute of Technology, respectively. Neena also served as the Science and Technology Fellow for Senator Lamar Alexander in Washington D.C. (2010-2012). Neena is a senior member of IEEE,  served as an IEEE officer for multiple years, and is the founding Chair of ACM SIGHPC ASCAN (Accelerated Scalable Computing and ANalytics) chapter.

More on CESG Seminars: HERE

Please join on Friday, 11/10/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Dr. JV Rajendran

Posted on October 30, 2023 by Nandu Giri

Friday, November 17, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Dr. JV Rajendran
Associate Professor | Department of Electrical and Computer Engineering | Texas A&M University

Title: “Hardware Fuzzing — Why? What? How?”

Talking Points:

  • Do you trust your Verilog code?
  • Do you want to learn how to hijack a chip?
  • What keeps CHIP designers up at night?

Abstract
Hardware is at the heart of computing systems. For decades, software was considered error-prone and vulnerable. Recent years, however, have seen increasing attacks exploiting hardware vulnerabilities, which even traditional software-based protections cannot prevent. In this talk, I will describe what hardware vulnerabilities look like in hardware “programming languages,” such as Verilog and VHDL. Then, I will explain a new and radical approach called hardware fuzzing for finding these vulnerabilities. Finally, I will detail how these new fuzzing techniques can be efficiently combined with existing functional verification and validation approaches.
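For readers new to fuzzing, the sketch below shows the coverage-guided loop that hardware fuzzers adapt to RTL. The “design under test” here is a stand-in Python function, not real Verilog, and the coverage points are invented for illustration.

```python
import random

# Software-only sketch of coverage-guided fuzzing, the technique hardware
# fuzzers adapt to RTL: inputs that light up new coverage points are kept
# in the corpus and mutated further. The DUT is a toy stand-in function.

def dut(x: int) -> set:
    """Toy DUT: returns the set of coverage points an input exercises."""
    cov = {"reset"}
    if x & 0x1:
        cov.add("lsb_set")
    if x > 1000:
        cov.add("big_value")
    return cov

def fuzz(rounds: int = 2000, seed: int = 0) -> set:
    """Mutate corpus inputs; keep any input that reaches new coverage."""
    rng = random.Random(seed)
    corpus = [0]
    seen = set()
    for _ in range(rounds):
        parent = rng.choice(corpus)
        child = parent ^ (1 << rng.randrange(16))   # single-bit mutation
        cov = dut(child)
        if not cov <= seen:                         # new coverage point hit
            seen |= cov
            corpus.append(child)
    return seen

if __name__ == "__main__":
    print(sorted(fuzz()))
```

Real hardware fuzzers replace `dut` with a simulated design and measure coverage over RTL states and branches, but the keep-what-explores feedback loop is the same.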

Biography
Dr. Jeyavijayan (JV) Rajendran is an Associate Professor and an ASCEND Fellow in the Department of Electrical and Computer Engineering at Texas A&M University. He obtained his Ph.D. degree from New York University in August 2015. His research interests include hardware security and computer security. His research has won the NSF CAREER Award in 2017, the ONR Young Investigator Award in 2022, the IEEE CEDA Ernest Kuh Early Career Award in 2021, the ACM SIGDA Outstanding Young Faculty Award in 2019, the Intel Academic Leadership Award, the ACM SIGDA Outstanding Ph.D. Dissertation Award in 2017, and the Alexander Hessel Award for the Best Ph.D. Dissertation in the Electrical and Computer Engineering Department at NYU in 2016, along with several best student paper awards. He co-founded and organizes Hack@DAC, a student security competition co-located with DAC, and SUSHI.

More Information on Dr. JV Rajendran: HERE

More on CESG Seminars: HERE

Please join on Friday, 11/17/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Neal Cardwell

Posted on October 20, 2023 by Nandu Giri

Friday, November 3, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Neal Cardwell
Principal Software Engineer | Google

Title: “Congestion Control in the Real World”

Talking Points:

  • Swift is a delay-based congestion control for datacenters that achieves low latency, high utilization, and near-zero loss.
  • Swift achieves roughly 50 microsecond tail latency while maintaining near 100% utilization even at 100Gbps line rates.
  • BBR is a model-based congestion control for wide-area networks that achieves low latency, high utilization, and robustness to a targeted level of random packet loss.
  • BBR avoids bufferbloat (maintaining bounded queues in buffers of any depth), while maintaining near 100% utilization even at the moderate loss rates characteristic of today’s high-speed, shallow-buffered WANs with bursty short flows.

Abstract
Designing and deploying new high-performance congestion control algorithms at scale in today’s high-speed, real-world datacenter and wide-area networks is challenging. This talk will discuss the challenges in these environments, and then focus on the high-performance congestion control algorithms we have created and deployed at global scale at Google: Swift for datacenter congestion control, and BBR for wide-area congestion control. The talk will close with thoughts on interesting research questions and potential future research directions for real-world congestion control.
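As a rough illustration of the delay-based idea behind Swift (this is a hedged sketch with invented parameters, not Google’s actual algorithm), a congestion window can grow additively while measured delay stays under a target and back off multiplicatively when it overshoots:

```python
# Minimal sketch of delay-based congestion control in the spirit of Swift
# (hypothetical parameters; not Google's actual algorithm): additive
# increase under the delay target, multiplicative decrease above it.

def update_cwnd(cwnd: float, delay_us: float, target_us: float = 50.0,
                ai: float = 1.0, beta: float = 0.8,
                min_cwnd: float = 1.0) -> float:
    """Return the new congestion window after one RTT's delay sample."""
    if delay_us <= target_us:
        return cwnd + ai               # additive increase: +ai packets/RTT
    return max(min_cwnd, cwnd * beta)  # multiplicative decrease on overshoot

if __name__ == "__main__":
    cwnd = 10.0
    for delay in (40.0, 45.0, 80.0, 40.0):   # delay samples in microseconds
        cwnd = update_cwnd(cwnd, delay)
        print(f"delay={delay:5.1f}us -> cwnd={cwnd:.1f}")
```

The interesting engineering in production systems lies in how the target is set (e.g., scaled with topology and flow count) and in measuring delay precisely; the sketch only captures the control-loop shape.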

Biography
Neal Cardwell is a Principal Software Engineer in Google’s NYC office. He entered the UC Berkeley PhD program in 1996 and then followed his advisor, Tom Anderson, to the University of Washington, where he completed an MS in 1999, with research in the area of TCP congestion control. He worked at Steve McCanne’s FastForward Networks from 1999 to 2002. He has worked at Google since 2002, on projects including GFE (the Google Front End proxying all traffic for google.com), Googlebot (Google’s web crawler), routing performance, the open source Packetdrill network stack testing tool, and Linux TCP congestion control and loss recovery. He is currently a member of the Congestion Control team at Google, and his recent focus has been on improving the BBR and Swift congestion control algorithms.

More Information on Neal Cardwell: https://research.google/people/NealCardwell/

More on CESG Seminars: HERE

Please join on Friday, 11/03/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Doug Burger

Posted on October 10, 2023 by Vickie Winston

Friday, October 27, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Doug Burger
Technical Fellow | Microsoft

Title: “The New AI Computing Stack”

Talking Points:

  • Emergent capabilities of large language models, built on top of deep learning
  • The intersection of traditional computing and the AI stack
  • The evolution of the AI stack, key challenges and problems
  • Future implications of the tandem working of the traditional and AI stacks

Abstract
We have entered a new era of computing. Large language models, built on top of deep learning, have shown surprising emergent capabilities. These capabilities, such as rich semantic understanding, the ability to generate content, and the ability to plan and reason, will change how people use computers and what computers can effectively be used for. This AI stack is effectively a second general class of computing that intersects with the traditional computing stack in surprising and compelling ways. In this talk I will discuss how the AI stack is evolving, some of the key challenges that we are currently facing, and the most important problems to be working on in this area. If we are successful, and these challenges are solved, these two computing stacks working in tandem will transform and up-level humanity’s capabilities.

Biography
Doug Burger is a Technical Fellow at Microsoft.  From 1999-2008, he served on the Computer Sciences faculty at UT-Austin, where he co-led the TRIPS project with Steve Keckler.

From 2008-2018, he was a researcher in Microsoft Research, where he led the Catapult and Brainwave projects, which both shipped at large scale in Microsoft’s datacenter infrastructure.

From 2018-2022 he served as a product executive in Azure’s new hardware group, leading teams architecting large-scale AI infrastructure.

In 2023, he returned to Microsoft Research to help drive advanced thinking in AI-based computing.  He is a Fellow of the ACM and the IEEE.

More on CESG Seminars: HERE

Please join on Friday, 10/27/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Abu Sebastian

Posted on October 10, 2023 by Vickie Winston

Friday, October 20, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Abu Sebastian
Distinguished Research Scientist, IBM Research – Europe

Title: “Analog In-Memory Computing for Deep Learning Inference”

Talking Points: 

  • What is analog in-memory computing (AIMC)?
  • How mature is AIMC?
  • What are the key open research topics and what’s next for AIMC?

Abstract
I will introduce analog in-memory computing (AIMC) based on non-volatile memory technology, with a focus on the key concepts and the associated terminology. Subsequently, a multi-tile mixed-signal AIMC chip for deep learning inference will be presented. This chip, fabricated in 14 nm CMOS technology, comprises 64 AIMC cores/tiles based on phase-change memory technology. It will serve as the basis to delve deeper into the device, circuit, architectural, and algorithmic aspects of AIMC. A particular focus will be achieving floating-point-equivalent classification accuracy while performing the bulk of computations in the analog domain at lower precision. I will also present an architectural vision for a next-generation AIMC chip for DNN inference, and conclude with an outlook for the future.
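The accuracy question can be illustrated with a toy model (ours, not IBM’s chip): an analog crossbar computes a matrix-vector product directly on stored conductances, so every result carries device noise, emulated here as Gaussian perturbation of the weights.

```python
import random

# Illustrative toy (not IBM's chip): an analog crossbar computes y = Wx on
# stored conductances, so results carry device noise. We emulate the analog
# pass by perturbing each weight and comparing to the exact digital product.

def mvm(w, x):
    """Exact (digital) matrix-vector multiply."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def analog_mvm(w, x, sigma=0.01, seed=0):
    """Noisy (analog) matrix-vector multiply with conductance variation."""
    rng = random.Random(seed)
    noisy = [[wij + rng.gauss(0.0, sigma) for wij in row] for row in w]
    return mvm(noisy, x)

if __name__ == "__main__":
    w = [[0.5, -0.25], [0.125, 1.0]]
    x = [1.0, 2.0]
    exact, approx = mvm(w, x), analog_mvm(w, x)
    print(max(abs(a - e) for a, e in zip(approx, exact)))  # small analog error
```

Reaching floating-point-equivalent accuracy despite such noise is precisely the device/circuit/algorithm co-design challenge the talk addresses.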

Two papers that may be of interest:

  1. “Y2023_legallo_NatureElectronics.pdf”
  2. “Y2020_sebastian_NatureNano.pdf”

Biography
Abu Sebastian is one of the technical leaders of IBM’s research efforts towards next generation AI Hardware and manages the in-memory computing group at IBM Research – Zurich. He is the author of over 200 publications in peer-reviewed journals/conference proceedings and holds over 90 US patents.  In 2015 he was awarded the European Research Council (ERC) consolidator grant and in 2020, he was awarded an ERC Proof-of-concept grant. He was an IBM Master Inventor and was named Principal and Distinguished Research Staff Member in 2018 and 2020, respectively. In 2019, he received the Ovshinsky Lectureship Award for his contributions to “Phase-change materials for cognitive computing”. In 2023, he was conferred the title of Visiting Professor in Materials by University of Oxford. He is a distinguished lecturer and fellow of the IEEE.

More on Abu Sebastian: https://research.ibm.com/people/abu-sebastian

More on CESG Seminars: HERE

Please join on Friday, 10/20/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Arif Merchant

Posted on October 3, 2023 by Vickie Winston

Friday, October 13, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020

Arif Merchant
Research Scientist @ Google

Title: “Research Directions in Google Storage”

Talking Points: 

  • How does Google Storage work?
  • How do you lay out data across tens of thousands of disks, dozens of device types, with wildly varying failure rates and capacities, while keeping them all well utilized, and the data safe?
  • What are the open research questions in Cloud Storage?

Abstract
Google’s Cloud is a very large consumer of storage, and these storage systems are geographically distributed, multi-layered, heterogeneous, and complex. We present a high-level overview of Google’s storage systems, the common challenges faced by storage at Google, and explore several research directions for managing and optimizing the resources used. We will touch upon topics in layout, encoding, and new storage technologies. We will conclude with a discussion of some open questions.
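One classic layout trade-off in such systems is replication versus erasure coding. The back-of-the-envelope sketch below uses generic r and (n, k) parameters for illustration, not Google’s actual redundancy schemes.

```python
# Back-of-the-envelope sketch of one layout trade-off: r-way replication
# versus (n, k) erasure coding. Parameters are generic illustrations,
# not Google's actual schemes.

def replication(r: int) -> dict:
    """r full copies: r-times storage, survives r - 1 device failures."""
    return {"overhead": float(r), "tolerated_failures": r - 1}

def erasure_code(n: int, k: int) -> dict:
    """k data + (n - k) parity chunks; any k of the n reconstruct the data."""
    return {"overhead": n / k, "tolerated_failures": n - k}

if __name__ == "__main__":
    print(replication(3))      # 3x storage, survives 2 failures
    print(erasure_code(9, 6))  # 1.5x storage, survives 3 failures
```

Erasure coding buys durability per byte at the cost of reconstruction reads, which is one reason layout across heterogeneous devices is an open optimization problem.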

Biography
Arif Merchant is a Research Scientist at Google and leads the Storage Analytics group, which studies interactions between components of the storage stack. His interests include distributed storage systems, storage management, and stochastic modeling. He holds the B.Tech. degree from IIT Bombay and the Ph.D. in Computer Science from Stanford University. He is an ACM Distinguished Scientist.


More on CESG Seminars: HERE

Please join on Friday, 10/13/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Anshumali Shrivastava

Posted on September 15, 2023 by Caroline Jurecka

Friday, September 22, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1020, In-Person Presentation Only

Anshumali Shrivastava
Associate Professor at the Department of Computer Science
Rice University

Title: “How We Pre-Trained GPT/LLM Models from Scratch on a CPU-Only Cluster: Democratizing the GenAI Ecosystem with Algorithms and Dynamic Sparsity”

Talking Points: 

  • You can now pre-train and fine-tune GPTs without any GPU.
  • AI/LLMs without GPUs are here.
  • AI farming on CPUs.

Abstract

The Neural Scaling Law informally states that an increase in model size and data automatically improves AI. However, we have reached a point where growth has tipped, making the cost and energy associated with AI prohibitive. The barrier to entry into AI is enormous and reserved for only a few with access to expensive GPUs. Unfortunately, there is a severe shortage of GPUs, and it is unlikely to improve in the near future. This talk will demonstrate how algorithms and software can eliminate the need for GPUs altogether, allowing us to build (pre-train, fine-tune, and deploy) some of the most sophisticated software using commodity CPUs that are widely available.

This talk will demonstrate the algorithmic progress that can exponentially reduce the compute and memory cost of pre-training, training, fine-tuning, as well as inference with LLMs. Our experiments with OPT models reveal that more than 99% of floating-point operations associated with large neural networks result in zeros. Unfortunately, modern AI software stacks relying on dense matrix multiplications are forced to spend almost all of their cycles and energy computing these zeros. In this talk, we will show how data structures can fundamentally leverage the inherent “dynamic sparsity” efficiently and effectively. In particular, we will argue how randomized hash tables can be used to design an efficient “associative memory” that reduces the number of multiplications associated with the training of neural networks by several orders of magnitude. The implementation of this algorithm, in the form of ThirdAI’s BOLT software, challenges the common knowledge prevailing in the community that specialized processors like GPUs are required for building GPT. We will demonstrate the world’s first GPT-2.5B, a generative model that was entirely pre-trained on standard CPU clusters and can be fine-tuned on a single commodity desktop. We will also show how we can build a CPU-only Retrieval Augmented Generation (RAG) ecosystem that does not require any vector database management and surpasses the accuracies of some of the most sophisticated foundational models with computations running on laptops and desktops.
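The hash-table idea can be sketched in miniature (in the spirit of SLIDE-style sparse training; the function names and structure are ours, not ThirdAI’s actual BOLT code): sign-random-projection LSH buckets neurons by their weight vectors, and a forward pass computes dot products only for the neurons whose bucket matches the input’s hash, skipping the near-zero rest.

```python
# Toy sketch of a hash-based "associative memory" for sparse forward passes
# (illustrative; not ThirdAI's BOLT code). Sign-random-projection LSH
# buckets neurons by weight vector; the forward pass touches only neurons
# in the input's bucket -- exactly those likely to have large activations.

def srp_hash(v, planes):
    """Bucket id: the sign pattern of projections onto the hyperplanes."""
    return tuple(1 if sum(p * x for p, x in zip(plane, v)) >= 0 else 0
                 for plane in planes)

def build_index(weights, planes):
    """Map each bucket id to the list of neuron indices hashed into it."""
    index = {}
    for i, w in enumerate(weights):
        index.setdefault(srp_hash(w, planes), []).append(i)
    return index

def sparse_forward(x, weights, planes, index):
    """Dot products for only the neurons sharing the input's bucket."""
    active = index.get(srp_hash(x, planes), [])
    return {i: sum(w * xi for w, xi in zip(weights[i], x)) for i in active}

if __name__ == "__main__":
    planes = [[1.0, 0.0], [0.0, 1.0]]   # fixed "random" planes for the demo
    weights = [[1.0, 0.2], [0.1, 1.0], [-1.0, -0.5]]
    index = build_index(weights, planes)
    print(sparse_forward([2.0, 0.1], weights, planes, index))
```

Because weight vectors pointing the same way as the input land in its bucket, the lookup approximates a top-activation search in constant time, which is where the claimed orders-of-magnitude reduction in multiplications comes from.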

Biography

Anshumali Shrivastava is an associate professor in the computer science department at Rice University. He is also the Founder and CEO of ThirdAI Corp, a company that is democratizing AI to commodity hardware through software innovations. His broad research interests include probabilistic algorithms for resource-frugal deep learning. In 2018, Science News named him one of the Top 10 scientists under 40 to watch. He is a recipient of the National Science Foundation CAREER Award, a Young Investigator Award from the Air Force Office of Scientific Research, a machine learning research award from Amazon, and a Data Science Research Award from Adobe. He has won numerous paper awards, including Best Paper Awards at NIPS 2014 and MLSys 2022, and the Most Reproducible Paper Award at SIGMOD 2019. His work on efficient machine learning technologies on CPUs has been covered by the popular press, including The Wall Street Journal, The New York Times, TechCrunch, and NDTV.


More on Anshumali Shrivastava: https://www.cs.rice.edu/~as143/

More on CESG Seminars: HERE

Please join on Friday, 9/22/23 at 10:20 a.m. in ETB 1020.

Filed Under: Seminars

CESG Seminar: Laxmikant (Sanjay) Kale

Posted on September 13, 2023 by Caroline Jurecka

Monday, September 25, 2023
10:20 a.m. – 11:10 a.m. (CST)
ETB 1034

Laxmikant (Sanjay) Kale
Director & Research Professor at the Parallel Programming Laboratory
Paul and Cynthia Saylor Professor Emeritus of Computer Science
University of Illinois Urbana-Champaign

Title: “The Migratable Objects Parallel Programming Model: Successes and Prospects”

Talking Points: 

  • Parallel Programming Model with Runtime Adaptivity
  • Automated Dynamic Load balancing and energy optimization
  • Highly Scalable Parallel Applications
  • Coronavirus simulation on supercomputers

Abstract
The Migratable Objects programming model (MOPM) represents an approach to parallel programming where the notion of a processor is virtualized and represented by an encapsulated object that can be migrated to any physical processor or host at will by an intelligent runtime system. Combined with over-decomposition, it separates concerns about how to partition data and what computations to do in parallel from where the data resides and which processor executes which actions. It thereby empowers highly adaptive runtime systems that support asynchronous task-based models and, uniquely and most consequentially, dynamic load balancing. It automatically overlaps communication with computation and enables efficient parallel composition of independent modules. MOPM also supports automatic power- and energy-related optimizations as well as fault tolerance.

I will review the basic ideas of the programming model, its baseline implementation in Charm++, and the successes it has notched. The well-known application NAMD, which was used in many highly scaled supercomputer simulations of the coronavirus in recent years, is one such success, along with applications in astronomy, fluid dynamics, and other domains. I will illustrate how these applications’ successes are based on features of the MOPM.

Charm++ provides a good foundation for development of higher-level languages and frameworks as demonstrated by Adaptive MPI, Charades (discrete event simulation framework), etc.

I will present my assessment of the success and failures of this model over the past two decades, future prospects for it and its software ecosystem, as well as research opportunities.
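The benefit of over-decomposition for load balancing can be sketched with a toy greedy strategy (illustrative only, not one of Charm++’s actual balancers): with many more migratable objects than processors, placing each object on the currently least-loaded processor evens out the work.

```python
import heapq

# Toy greedy load balancer (illustrative; not a Charm++ built-in strategy)
# showing what over-decomposition buys: with many migratable objects per
# processor, assigning each object to the least-loaded processor yields a
# well-balanced placement.

def greedy_balance(object_loads, n_procs):
    """Assign each object to the least-loaded processor; largest first."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, proc id)
    heapq.heapify(heap)
    assignment = []
    for load in sorted(object_loads, reverse=True):
        total, p = heapq.heappop(heap)          # least-loaded processor
        assignment.append((load, p))
        heapq.heappush(heap, (total + load, p))
    return assignment, sorted(total for total, _ in heap)

if __name__ == "__main__":
    plan, per_proc = greedy_balance([5, 3, 3, 2, 2, 1], n_procs=2)
    print(per_proc)   # per-processor totals after balancing
```

A real adaptive runtime does this continuously with measured object loads and communication affinities, migrating objects rather than computing a one-shot placement.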

Biography
Professor Laxmikant Kale is the director of the Parallel Programming Laboratory and Research Professor as well as the Paul and Cynthia Saylor Professor Emeritus of Computer Science at the University of Illinois at Urbana-Champaign.

Prof. Kale has been working on various aspects of parallel computing, with a focus on enhancing performance and productivity via adaptive runtime systems, and with the belief that only interdisciplinary research involving multiple CSE and other applications can bring back well-honed abstractions into Computer Science that will have a long-term impact on the state of the art.

His collaborations include the widely used Gordon-Bell award winning (SC 2002) biomolecular simulation program NAMD and other collaborations on computational cosmology, quantum chemistry, rocket simulation, space-time meshes, and other unstructured mesh applications.

He takes pride in his group’s success in distributing and supporting software embodying his research ideas, including Charm++, Adaptive MPI and Charm4Py. He and his team won the HPC Challenge award at Supercomputing 2011, for their entry based on Charm++.

Prof. Kale is a fellow of the ACM and IEEE, and a winner of the 2012 IEEE Sidney Fernbach award.

More on Laxmikant Kale: https://charm.cs.illinois.edu/~kale/

More on CESG Seminars: HERE

Please join on Monday, 9/25/23 at 10:20 a.m. in ETB 1034.

Filed Under: Seminars


© 2016–2026 Department of Electrical and Computer Engineering
