Three computer engineering professors named IEEE Fellow

Three professors in the Department of Electrical and Computer Engineering at Texas A&M University were named Fellows of the Institute of Electrical and Electronics Engineers (IEEE). Dr. Jiang Hu, Dr. Peng Li and Dr. Xi Zhang were named IEEE Fellows for their research contributions.

IEEE Fellow is the highest grade of membership and is recognized by the technical community as a prestigious honor and an important career achievement. The grade of Fellow is conferred by the IEEE Board of Directors upon a person with an outstanding record of accomplishments in any of the IEEE fields of interest. The total number selected in any one year cannot exceed one-tenth of one percent of the total voting membership.

Hu was elected for contributions to gate, interconnect and clock network optimization in VLSI circuits; Li for contributions to the analysis and modeling of integrated circuits and systems; and Zhang for contributions to quality of service (QoS) in mobile wireless networks.

Dr. Jiang Hu

Digital VLSI chips, such as microprocessors and video decoders, are composed mostly of logic gates, which are connected by interconnect wires and synchronized by clock networks. Hu’s research accomplishments encompass all three of these key elements. For gate optimization, he is a main contributor to state-of-the-art solutions that address industrial challenges in nanometer VLSI technologies, including competing design objectives, complex models, non-ideal effects and huge problem sizes. Interconnect, in turn, is a critical bottleneck for digital chip performance.

On interconnect optimization, Hu has had a large impact in both academia and industry. His research results have been applied in many industrial chip products, facilitating better chip performance, lower chip power and shorter design turnaround time, and resolving difficult design cases. Hu is also highly recognized for his research on VLSI clock network optimization. Among many contributions, he pioneered the concept of the cross-link, which greatly enhances clock network robustness with high energy efficiency and has inspired numerous follow-up research activities. Hu’s overall achievement has been instrumental in shaping the course of VLSI optimization research and helping the VLSI industry tackle real-world challenges.

Dr. Peng Li

Li obtained his Ph.D. in electrical and computer engineering from Carnegie Mellon University and joined the department in 2004. He has established expertise in electronic design automation, integrated circuits and systems, brain-inspired computing and aspects of computational neuroscience. In addition to his elevation to IEEE Fellow, his work has been recognized by various distinctions, including four best paper awards from prestigious VLSI and EDA conferences, an NSF CAREER Award, and four Inventor Recognition Awards from the Microelectronics Advanced Research Corporation and the Semiconductor Research Corporation. Li received the Best Paper Hat Trick Award, the Prolific Author Award and the Top 10 Author in Fifth Decade Award, all from the IEEE/ACM Design Automation Conference, the world’s premier design automation conference.

Li’s former associates have obtained faculty and research positions in academia and industrial labs (Michigan Tech, Cornell Medical College/Cornell University, Intel Strategic CAD Laboratories) and research and development positions in the United States high-tech industry. He has brought his work to the real world through technology transfer and consulting for major semiconductor firms and startups.

Dr. Xi Zhang

Zhang, director of the Networking and Information Systems Laboratory, joined the department in 2002. He received his Ph.D. in electrical engineering and computer science (electrical engineering – systems) from The University of Michigan. He was a research fellow with the School of Electrical Engineering, University of Technology, Sydney, Australia, and the Department of Electrical and Computer Engineering, James Cook University, Australia. He was with the Networks and Distributed Systems Research Department, AT&T Bell Laboratories, Murray Hill, New Jersey, and AT&T Laboratories Research, Florham Park, New Jersey.

Zhang has published more than 300 research papers, two books and multiple book chapters on mobile wireless networks, statistical delay-bounded QoS guarantee for multimedia wireless networks, 5G mobile wireless networks, wireless cognitive radio networks, wireless sensor networks, underwater wireless networks, network protocol design and modeling, statistical communications, random signal processing, information theory and control theory and systems. His publications have been extensively cited in the research community.

He received the National Science Foundation CAREER Award in 2004 for his research in mobile wireless and multicast networking and systems. He is an IEEE Distinguished Lecturer for the IEEE Communications Society and the IEEE Vehicular Technology Society. He received Best Paper Awards at IEEE GLOBECOM 2014, IEEE GLOBECOM 2009, IEEE GLOBECOM 2007 and IEEE WCNC 2010, and is the author of a journal paper selected for IEEE Best Readings (receiving the top citation rate). He also received a TEES Select Young Faculty Award for Excellence in Research Performance from the Dwight Look College of Engineering at Texas A&M in 2006.

He serves, or has served, as an editor for numerous IEEE transactions and journals, including IEEE Transactions on Communications, IEEE Transactions on Wireless Communications, IEEE Transactions on Vehicular Technology, IEEE Journal on Selected Areas in Communications, IEEE Communications Letters, IEEE Communications Magazine and IEEE Wireless Communications Magazine. He has served as technical program committee (TPC) chair for IEEE GLOBECOM 2011, TPC vice-chair for IEEE INFOCOM 2010, TPC area chair for IEEE INFOCOM 2012, panel/demo/poster chair for ACM MobiCom 2011 and general vice-chair for IEEE WCNC 2013, among other roles.

Duffield receives DARPA grant for research on network resilience

Dr. Nick Duffield, a professor in the Department of Electrical and Computer Engineering at Texas A&M University and a professor by courtesy in the Department of Computer Science and Engineering, is part of a group that was awarded a multimillion-dollar contract from the Defense Advanced Research Projects Agency (DARPA) to help develop new networking and security technologies at the Wide Area Network (WAN) edge.

The awards fall under DARPA’s Edge-Directed Cyber Technologies for Reliable Mission, or Edge-CT, program, which the agency says will combine real-time network analytics, holistic decision systems and dynamically configurable protocol stacks to mitigate WAN failures and attacks on the fly. Its objective is to bolster the resilience of communication over Internet Protocol networks solely by instantiating new capabilities in computing devices within user enclaves at the WAN edge.

The project is led by Applied Communication Sciences with partnership from Apogee Research, the Massachusetts Institute of Technology, the University of Pennsylvania and Texas A&M University, where Duffield is principal investigator. The partners propose to develop Distributed Enclave Defense Using Configurable Edges (DEDUCE). DEDUCE is a new architectural approach to edge-directed network adaptation that incorporates novel approaches to sensing, actuation and control, creating a robust and scalable system that exceeds Edge-CT goals and evolves in response to changes in the network.

Duffield’s involvement in the project stems from his research in network tomography, in which end-to-end performance measurements between network edges are correlated to identify common origins of performance degradation. In DEDUCE, this information will be used to inform strategies for alternate routing on an overlay network between enclaves. Duffield was a co-recipient of the ACM SIGMETRICS Test of Time Award in both 2012 and 2013 for work in network tomography.

Duffield received his bachelor’s degree in natural sciences in 1982 and a master’s degree in 1983 from the University of Cambridge, U.K. He received his Ph.D. in mathematical physics from the University of London, U.K., in 1987. His research focuses on data and network science, particularly applications of probability, statistics, algorithms and machine learning to the acquisition, management and analysis of large datasets in communications networks and beyond.

Before joining the department, Duffield worked at AT&T Labs-Research, Florham Park, New Jersey, where he held the position of distinguished member of technical staff and was an AT&T Fellow. He previously held post-doctoral and faculty positions in Dublin, Ireland and Heidelberg, Germany.

Duffield, the author of numerous papers and holder of many patents, is a co-inventor of the smart sampling technologies at the heart of AT&T’s scalable Traffic Analysis Service. He is specialty chief editor of the journal Frontiers in ICT and was charter chair of the IETF working group on packet sampling. Duffield is an IEEE Fellow and serves on the Board of Directors of ACM SIGMETRICS. He is an associate member of the Oxford-Man Institute of Quantitative Finance.

Ponniah and Kumar publish monograph on designing secure protocols for wireless ad-hoc networks

Jonathan Ponniah and P. R. Kumar of the Department of Electrical and Computer Engineering at Texas A&M University, together with co-author Yih-Chun Hu, have published a monograph on designing secure protocols, with provable security guarantees, for wireless ad-hoc networks infiltrated by adversarial nodes. The monograph is titled “A Clean Slate Approach to Secure Wireless Networking.”

The authors note that the current process of designing secure protocols amounts to an arms race between attacks and “patches” that provides no security guarantees. Motivated by this, they introduce a system-theoretic approach to designing secure protocols with provable security and optimality guarantees.

Ponniah is a postdoctoral researcher who completed his Ph.D. under the supervision of Kumar, a University Distinguished Professor.

CESG Fishbowl Tele-Seminar: Usage-Generated Applications

Last week, Dr. Qiong Wang gave a talk on Usage-Generated Applications and their role in preserving Best-Effort service on the Internet in the presence of Managed Service. The presentation discussed net neutrality, the marketing strategies of Internet service providers (ISPs) and quality-of-service (QoS) analysis.

Best-Effort service describes a network service in which the user is guaranteed neither that data is always delivered nor a certain QoS level or priority. The service can be unreliable, since delivery is not always guaranteed, but it is offered for a low subscription fee with free usage. Because of this, Best-Effort service has contributed to the growth of the Internet and the creation of many network applications. In the net neutrality debate, a concern has arisen about preserving the Internet as it is, since ISPs providing Managed Service may restrict the bandwidth and usage available to Best-Effort users.

Dr. Wang, along with Dr. Debasis Mitra, developed a model to analyze this scenario. The model features a monopoly ISP that offers both Best-Effort service for free and Managed Service, which guarantees QoS for a per-use fee. Customers make optimal choices about whether to subscribe to the network, which service to use and how much to use the chosen service, while the ISP sets fees and bandwidth for both services to maximize profit. The analysis shows the need for Usage-Generated Applications, which stabilize the offering of Best-Effort service, especially in the presence of ISPs that look to maximize profit through Managed Service.
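The flavor of such a model can be sketched numerically. The sketch below is my own toy setup, not Mitra and Wang's actual formulation: users value quality v in [0, 1], Best-Effort delivers an assumed fixed quality for free, Managed Service guarantees full quality for a per-use fee, and the ISP grid-searches the fee that maximizes its revenue.

```python
# Toy monopoly-ISP model: each user buys Managed Service only when the
# quality upgrade over free Best-Effort is worth the fee.
N = 1000
valuations = [i / N for i in range(N)]   # user i values quality at v = i/N
Q_MS, Q_BE = 1.0, 0.3                    # assumed quality of each service

def ms_subscribers(p):
    """Users for whom the surplus of Managed Service beats free Best-Effort."""
    return sum(1 for v in valuations if v * (Q_MS - Q_BE) > p)

def revenue(p):
    # Best-Effort is free, so ISP revenue comes only from Managed Service fees
    return p * ms_subscribers(p)

# The ISP picks the per-use fee that maximizes revenue.
best_p = max((p / 100 for p in range(1, 100)), key=revenue)
```

Even this caricature shows the tension the talk addressed: a higher fee drives users back to Best-Effort, so the profit-maximizing ISP balances fee against subscriber count, and the equilibrium depends on how attractive Best-Effort remains.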

Dr. Qiong Wang received his Ph.D. in engineering and public policy from Carnegie Mellon University. He has worked at Alcatel-Lucent Bell Labs as a Member of Technical Staff and is currently an associate professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign. His research focuses on stochastic control of manufacturing and network economics.

Cyber-Physical Systems: Applications and Challenges

On Tuesday, March 31st, a regional meeting of the National Academy of Engineering was held at Texas A&M University’s Annenberg Presidential Conference Center. The symposium featured speakers who discussed cyber-physical systems and addressed the potential benefits these systems can have for society, the economy and the environment.

A cyber-physical system (CPS) is a system of computing elements that control physical entities and interact heavily with them. Today, most CPS elements are built from embedded systems; embedded systems, however, focus more on the computational elements than on the link between those elements and the physical world. A CPS is typically designed as a network rather than as standalone devices, and such systems are closely tied to robotics and sensing. CPS can be found nearly anywhere, including medicine, automobiles, power grids, city infrastructure, manufacturing, aircraft and building systems, and they offer increased adaptability, autonomy, efficiency, functionality, reliability, safety and usability. The symposium focused primarily on these systems and their integration of computing, communication and control technologies.

Speakers featured at the symposium included Dr. P. R. Kumar, a professor in the Department of Electrical and Computer Engineering at Texas A&M; Dr. John Stankovic, BP America Professor in the Department of Computer Science at the University of Virginia; Dr. Vijay Kumar, UPS Foundation Professor at the University of Pennsylvania, working in mechanical engineering and applied mechanics, computer and information science, and electrical and systems engineering; and David Corman, a National Science Foundation program director in the Division of Computer and Network Systems.

CESG Seminar: Genomic analysis tools for familial and case-control sequencing studies

Earlier this week, on Tuesday, April 7th, Dr. Chad Huff gave a talk on genomic analysis tools. He explained how academic efforts are becoming more focused on data analysis and interpretation as genomic data becomes more commoditized, and he introduced tools his research group has developed to analyze high-throughput sequencing data.

Traditional genetic analysis tools are often inadequate in large studies because of problems with low power and poor scalability. The topics Dr. Huff discussed included relationship estimation, pedigree reconstruction, functional variant prediction and the analysis of rare variants. He described a method for detecting genetic relationships called Estimation of Recent Shared Ancestry, which can identify relatives as distant as fourth cousins, aiding the reconstruction of extended pedigrees. Another tool Dr. Huff presented is the Variant Annotation, Analysis, and Search Tool, a probabilistic disease-gene finder that combines amino acid substitution and allele frequency information. The tool has since been extended to find genetic diseases in pedigrees.

Dr. Chad Huff is an assistant professor at MD Anderson Cancer Center. His lab’s research focuses on human evolution and disease through statistical, computational and population genomics. His group is currently committed to finding new methods to analyze genomic data and applying them to identify the genetic basis of human diseases, cancer in particular.

Texas A&M Workshop on Software Defined Networks

Last week, a workshop on Software Defined Networking (SDN) was hosted at Texas A&M in the Emerging Technologies Building. It was organized by Dr. Alex Sprintson, a professor in the Texas A&M Department of Electrical and Computer Engineering, and Jasson Casey, a Texas A&M Ph.D. candidate and the founder and executive director of an SDN-focused organization.

The workshop consisted of talks covering several aspects of software defined networking, including SDN security, architecture, data planes, abstractions and research. The talks were primarily presented by members of that organization, a coalition of researchers and industry engineers focused on the OpenFlow protocol standard for SDN and looking to widen the breadth of SDN’s influence on industry. Its members aim to implement the OpenFlow protocol with a secure message layer, called the SDN stack. As a research organization, it also seeks to improve key aspects of SDN implementation and to bridge the gap between industry and academia.

The workshop also included a talk by Chip Howes, an industry veteran, about Internet startups. The talk focused on the startup experience, how startups rise and fall, and what it takes to make a startup successful. Mr. Howes has created and sold six successful startup companies and has held prominent positions at a wide range of industry leaders for over 30 years.

Special CESG Seminar – Maple: Simplifying SDN Programming Using Algorithmic Policies

Last week, Dr. Andreas Voellmy gave a talk about software defined networking (SDN). He gave an overview of SDN, presented several challenges with OpenFlow, a standard for SDN, and introduced Maple, which addresses these challenges.

A recent development in networking, Software Defined Networking allows a network to change its behavior through a central policy administered by a network controller. Whereas network architectures previously consisted of fixed, closed, vertically integrated network appliances, SDN implements a more general packet-processing approach, programmed through open control software executed on servers. This implementation is open and very flexible. One standard for SDN is OpenFlow, which defines rules and guidelines for how SDN should be implemented; many aspects of this implementation remain challenging, however.

To address these challenges, Dr. Voellmy presented Maple. Maple allows the user to create algorithmic policies: algorithms written in a general-purpose language and conceptually run on every packet that enters the network. These algorithmic policies remove the need for the SDN programmer to generate and maintain sets of rules on individual network switches. To implement the policies efficiently, Maple uses a tracing runtime system that discovers reusable forwarding decisions from the control program.
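The core mechanism can be sketched roughly as follows. This is my own Python caricature, not Maple's actual API (Maple is built on Haskell and records full trace trees rather than the flat field sets used here): a tracing wrapper observes which header fields the policy actually reads, and the runtime caches each decision as a rule over just those fields, so later packets that agree on them never reach the controller.

```python
class TracedPacket:
    """Wraps packet headers and records which fields the policy reads."""
    def __init__(self, headers):
        self._headers = headers
        self.reads = {}                 # field -> value actually inspected
    def __getitem__(self, field):
        self.reads[field] = self._headers[field]
        return self._headers[field]

def policy(pkt):
    """User's algorithmic policy: an ordinary function from packet to action."""
    if pkt["dst_port"] == 22:
        return "drop"                   # block SSH traffic
    return "forward"

flow_table = {}                         # (read fields) -> cached action

def process(headers):
    # check cached rules first: a rule matches if every field it read agrees
    for reads, action in flow_table.items():
        if all(headers.get(f) == v for f, v in reads):
            return action, "cached"
    # miss: run the policy under tracing and install a rule for what it read
    pkt = TracedPacket(headers)
    action = policy(pkt)
    flow_table[tuple(sorted(pkt.reads.items()))] = action
    return action, "traced"
```

Because the policy above only reads dst_port, the cached rule ignores the source field entirely, which is exactly the kind of generalization a tracing runtime can extract from an unmodified control program.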

Dr. Andreas Voellmy received his Ph.D. in computer science from Yale University. His research focuses on Software Defined Networking, drawing mainly on OpenFlow and the Glasgow Haskell Compiler.

CESG Seminar: Catapulting beyond Moore’s Law: Using FPGAs to Accelerate Data Centers

On March 13th, Dr. Derek Chiou gave a talk describing a joint project by Microsoft Research and Bing. This project studied the prospects of using field programmable gate arrays (FPGAs) to speed up cloud applications.

Field programmable gate arrays are integrated circuits designed to be configured by a customer or designer after manufacture, usually by means of a hardware description language. FPGAs contain an array of programmable logic blocks that can be wired together and reconfigured, implementing anything from simple logic operations to very complex functions.
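The reconfigurability rests on the lookup-table (LUT) idea at the heart of most FPGA logic blocks: a small table, filled in by the configuration bitstream, that can realize any function of its inputs. The sketch below is my own conceptual illustration, not how any real bitstream or vendor toolchain works.

```python
def make_lut(truth_table):
    """Build an n-input logic block from a table of 2**n output bits.
    truth_table[i] is the output when the input bits encode integer i."""
    def lut(*bits):
        index = 0
        for b in bits:
            index = (index << 1) | b    # pack input bits into a table index
        return truth_table[index]
    return lut

# The same 2-input block "becomes" AND or XOR purely by changing its table,
# which is what reprogramming an FPGA does at scale.
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])
```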

In this project, an FPGA card was developed to accelerate a large part of Bing’s search engine; the card plugs into a Microsoft cloud server. Since the cloud application cannot fit on one FPGA card, it was partitioned across multiple FPGA cards in multiple servers, with the cards connected by a network implemented in the FPGAs.

Dr. Derek Chiou received his Ph.D., S.M. and S.B. degrees in electrical engineering and computer science from MIT. He is currently an associate professor at the University of Texas at Austin, where he researches various areas of performance acceleration, and a Principal Architect at Microsoft, where he co-leads a team working on FPGAs for data center applications.

CESG Seminar: Hardware Implementation of Cascade Support Vector Machine

Last week, on Friday, March 6th, Ph.D. student Qian Wang presented a paper describing a parallel digital very-large-scale integration (VLSI) architecture for support vector machine (SVM) training and classification. The paper also presented a cascade SVM algorithm that was leveraged to develop the efficient parallel VLSI architecture.

Cascade SVM is a training algorithm that the paper leverages to improve the scalability of hardware-based SVM training and to develop the parallel VLSI architecture. The architecture achieves improved scalability by spreading its workload over many cascaded SVM processing units, and the hardware implementation of the cascade algorithm incurs low overhead while allowing SVM training over data sets of variable size. To exploit parallelism fully, the proposed architecture uses a multilayer system bus and multiple distributed memories. It can handle a wide range of uses and combines parallel processing with temporal reuse of resources, leading to good tradeoffs among throughput, overhead and power dissipation.
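The cascade's data flow can be sketched as follows. This is my own illustration of the general cascade SVM idea, not the paper's hardware design, and `train_svm` here is a stand-in that simply keeps the samples nearest an assumed decision boundary rather than a real solver.

```python
def train_svm(samples):
    """Placeholder trainer: treat the samples with the smallest |margin
    score| as the "support vectors" and discard the rest."""
    scored = sorted(samples, key=lambda s: abs(s["score"]))
    return scored[: max(1, len(scored) // 2)]

def cascade(samples, partitions=4):
    """Cascade data flow: partition the data, train a small SVM per
    partition in parallel, keep only each partition's support vectors,
    merge survivors pairwise and retrain until one SVM remains.
    `partitions` is assumed to be a power of two so pairs merge evenly."""
    chunks = [samples[i::partitions] for i in range(partitions)]
    while len(chunks) > 1:
        svs = [train_svm(c) for c in chunks]     # independent units, in parallel
        chunks = [svs[i] + svs[i + 1] for i in range(0, len(svs), 2)]
    return train_svm(chunks[0])                  # final, consolidated SVM
```

Each layer shrinks the working set to the support vectors that survive, which is what lets a hardware realization spread the workload over many small parallel units instead of one large solver.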

PhD student Qian Wang worked as a research assistant at University of Kansas, where he developed novel photonic devices and received his M.S. in Electrical Engineering in 2012. He received his B.S. in the same field from Harbin Institute of Technology in 2009. He is currently working on VLSI hardware implementation of machine learning algorithms as a PhD student at Texas A&M University.