Duffield receives DARPA grant for research on network resilience

Dr. Nick Duffield, a professor in the Department of Electrical and Computer Engineering at Texas A&M University and a professor by courtesy in the Department of Computer Science and Engineering, is part of a group that was awarded a multimillion-dollar contract from the Defense Advanced Research Projects Agency (DARPA) to help develop new networking and security technologies at the Wide Area Network (WAN) edge.

The awards fall under DARPA’s Edge-Directed Cyber Technologies for Reliable Mission Communication (Edge-CT) program, which the agency says will combine real-time network analytics, holistic decision systems and dynamically configurable protocol stacks to mitigate WAN failures and attacks on the fly. Its objective is to bolster the resilience of communication over Internet Protocol networks solely by instantiating new capabilities in computing devices within user enclaves at the WAN edge.

The project is led by Applied Communication Sciences in partnership with Apogee Research, the Massachusetts Institute of Technology, the University of Pennsylvania and Texas A&M University, where Duffield is the principal investigator. The partners propose to develop Distributed Enclave Defense Using Configurable Edges (DEDUCE), a new architectural approach to edge-directed network adaptation that incorporates novel approaches to sensing, actuation and control, creating a robust and scalable system that exceeds Edge-CT goals and evolves in response to changes in the network.

Duffield’s involvement in the project stems from his research in Network Tomography, in which end-to-end performance measurements between network edges are correlated to identify common origins of performance degradation. In DEDUCE, this information will be used to inform strategies for alternate routing on an overlay network between enclaves. Duffield was a co-recipient of the ACM SIGMETRICS Test of Time Award in both 2012 and 2013 for work in Network Tomography.
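As a rough illustration of the idea behind network tomography (a toy sketch in Python, not Duffield’s actual algorithms), suppose several edge-to-edge paths degrade at the same time; the links those paths share are the most likely common origin of the problem. The topology and link names below are hypothetical.

```python
# Toy sketch of binary network tomography: vote for links that appear on
# every degraded edge-to-edge path. Topology and link names are invented.
from collections import Counter

paths = {
    ("siteA", "siteC"): {"A-core1", "core1-core2", "core2-C"},
    ("siteA", "siteD"): {"A-core1", "core1-core2", "core2-D"},
    ("siteB", "siteC"): {"B-core1", "core1-core2", "core2-C"},
    ("siteB", "siteD"): {"B-core1", "core1-core2", "core2-D"},
}

# End-to-end measurements flag which paths are currently degraded.
degraded = [("siteA", "siteC"), ("siteA", "siteD"),
            ("siteB", "siteC"), ("siteB", "siteD")]

# Count how often each link appears on a degraded path; links shared by
# all degraded paths are the most likely common cause.
votes = Counter(link for p in degraded for link in paths[p])
suspects = [link for link, n in votes.items() if n == len(degraded)]
print("likely common cause:", suspects)   # -> ['core1-core2']
```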

Duffield received his bachelor’s degree in natural sciences in 1982 and a master’s degree in 1983 from the University of Cambridge, U.K., and his Ph.D. in mathematical physics from the University of London, U.K., in 1987. His research focuses on data and network science, particularly applications of probability, statistics, algorithms and machine learning to the acquisition, management and analysis of large datasets in communications networks and beyond.

Before joining the department, Duffield worked at AT&T Labs-Research in Florham Park, New Jersey, where he was a distinguished member of technical staff and an AT&T Fellow. He previously held post-doctoral and faculty positions in Dublin, Ireland, and Heidelberg, Germany.

Duffield, the author of numerous papers and the holder of many patents, is a co-inventor of the smart sampling technologies that lie at the heart of AT&T’s scalable Traffic Analysis Service. He is specialty editor-in-chief of the journal Frontiers in ICT and was charter chair of the IETF working group on packet sampling. Duffield is an IEEE Fellow and serves on the Board of Directors of ACM SIGMETRICS. He is an associate member of the Oxford-Man Institute of Quantitative Finance.

Ponniah and Kumar publish monograph on designing secure protocols for wireless ad-hoc networks

Jonathan Ponniah and P. R. Kumar of the Department of Electrical and Computer Engineering at Texas A&M University, together with co-author Yih-Chun Hu, have published a monograph on designing secure protocols with provable security guarantees for wireless ad-hoc networks infiltrated with adversarial nodes. The monograph is titled “A Clean Slate Approach to Secure Wireless Networking.”

The authors note that the current process of designing secure protocols amounts to an arms race between attacks and “patches” that provides no security guarantees. Motivated by this, they introduce a system-theoretic approach to the design of secure protocols with provable security as well as optimality guarantees.

Ponniah is a post-doctoral researcher who completed his Ph.D. under the supervision of Kumar, a university distinguished professor.

CESG Fishbowl Tele-Seminar: Usage-Generated Applications

Last week, Dr. Qiong Wang gave a talk on Usage-Generated Applications and their role in preserving Best-Effort service on the Internet in the presence of Managed Service. The presentation discussed net neutrality, the marketing strategies of Internet service providers (ISPs), and quality of service (QoS) analysis.

Best-Effort service is a network service in which the user is guaranteed neither that data will always be delivered nor any particular QoS level or priority. Because it has been offered for a low subscription fee with free usage, Best-Effort service has contributed to the growth of the Internet and the creation of many network applications. The net neutrality debate has raised the concern of preserving the Internet as it is, as ISPs offer Managed Service while restricting the bandwidth and usage available to Best-Effort users.

Dr. Wang, along with Dr. Debasis Mitra, developed a model to analyze this scenario. The model features a monopoly ISP that offers both Best-Effort service for free and Managed Service, which guarantees QoS for a per-use fee. Customers make optimal choices about whether to subscribe to the network, which service to use, and how much to use the chosen service, while the ISP sets fees and bandwidth for both services to maximize profit. The analysis shows the need for Usage-Generated Applications, which stabilize the offering of Best-Effort service, particularly in the presence of ISPs that seek to maximize profit through Managed Service.
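As a loose illustration of the kind of tradeoff such a model captures (a stylized toy in Python, not the Mitra-Wang model itself), consider a monopoly ISP choosing a per-use fee for Managed Service while Best-Effort stays free; users whose valuation for guaranteed QoS exceeds the fee pay it, and everyone else stays on Best-Effort. The valuations and fee grid below are invented for the sketch.

```python
# Stylized toy: a monopoly ISP picks the Managed Service fee p that
# maximizes revenue, given hypothetical user valuations for guaranteed QoS.
import numpy as np

rng = np.random.default_rng(0)
valuations = rng.uniform(0.0, 10.0, size=10_000)   # assumed willingness to pay

def revenue(p):
    # Users with valuation >= p buy Managed Service and pay p per use;
    # everyone else stays on the free Best-Effort service.
    return p * np.count_nonzero(valuations >= p)

fees = np.linspace(0.0, 10.0, 201)
best = max(fees, key=revenue)
print(f"profit-maximizing fee ~ {best:.2f}, "
      f"Best-Effort share ~ {np.mean(valuations < best):.0%}")
```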

Dr. Qiong Wang received his Ph.D. in Engineering and Public Policy from Carnegie Mellon University. He has worked at Alcatel-Lucent Bell Labs as a member of technical staff and is currently an associate professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign. His research focuses on stochastic control of manufacturing and on network economics.

Cyber-Physical Systems: Applications and Challenges

On Tuesday, March 31, a regional meeting of the National Academy of Engineering was held at Texas A&M University’s Annenberg Presidential Conference Center. The symposium featured speakers who discussed cyber-physical systems and addressed the potential benefits these systems can have for society, the economy, and the environment.

A cyber-physical system (CPS) is a system of computing elements and the physical entities they control, interacting closely with one another. Today, many CPS elements are described as embedded systems, but embedded systems focus more on the computational elements than on the link between those elements and the physical world. A CPS is typically designed as a network of interacting components rather than as standalone devices, and is closely tied to robotics and sensing. CPS can be found nearly everywhere, including in medicine, automobiles, power grids, city infrastructure, manufacturing, aircraft, and building systems, and they offer increased adaptability, autonomy, efficiency, functionality, reliability, safety, and usability. The symposium focused primarily on these systems and their integration of computing, communication, and control technologies.

Speakers featured at the symposium included Dr. P. R. Kumar, a professor in the Department of Electrical and Computer Engineering at Texas A&M; Dr. John Stankovic, BP America Professor in the Department of Computer Science at the University of Virginia; Dr. Vijay Kumar, UPS Foundation Professor at the University of Pennsylvania, with appointments in Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering; and Dr. David Corman, a National Science Foundation program director in the Division of Computer and Network Systems.

CESG Seminar: Genomic analysis tools for familial and case-control sequencing studies

Earlier this week, on Tuesday, April 7, Dr. Chad Huff gave a talk on genomic analysis tools. He explained how academic efforts are shifting toward data analysis and interpretation as genomic data becomes more commoditized, and he introduced tools his research group has developed to analyze high-throughput sequencing data.

Traditional genetic analysis tools often perform poorly in large studies because of problems with statistical power and scalability. The topics Dr. Huff discussed included relationship estimation, pedigree reconstruction, functional variant prediction, and the analysis of rare variants. He described a method for detecting genetic relationships called Estimation of Recent Shared Ancestry, which can identify relatives as distant as fourth cousins and thereby aids the reconstruction of extended pedigrees. Another tool Dr. Huff presented is the Variant Annotation, Analysis, and Search Tool, a probabilistic disease-gene finder that combines amino acid substitution and allele frequency information; this tool has since been extended to find disease genes in pedigrees.
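As a simplified illustration of relationship estimation (not the Estimation of Recent Shared Ancestry method itself, which models the number and lengths of shared genomic segments), the expected fraction of the genome shared identical-by-descent roughly halves with each additional degree of relationship, so an observed sharing fraction can be mapped to an approximate relationship degree:

```python
# Crude illustration: map an observed identical-by-descent (IBD) sharing
# fraction to the nearest relationship degree, assuming expected sharing
# of about 0.5 ** degree (1st degree ~ 50%, first cousins ~ 12.5%, ...).
import math

def estimated_degree(shared_fraction):
    if shared_fraction <= 0:
        return None  # effectively unrelated at this level of approximation
    return round(-math.log2(shared_fraction))

for frac, label in ((0.5, "parent/child or full sibling"),
                    (0.125, "about first cousins"),
                    (0.002, "about fourth cousins")):
    print(f"sharing {frac:.3f} -> roughly degree "
          f"{estimated_degree(frac)} ({label})")
```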

Dr. Chad Huff is an assistant professor at the MD Anderson Cancer Center. His lab studies human evolution and disease through statistical, computational, and population genomics. His group is currently developing new methods to analyze genomic data and applying them to identify the genetic basis of human diseases, particularly cancer.

Texas A&M Workshop on Software Defined Networks

Last week, a workshop on Software Defined Networking (SDN) was hosted at Texas A&M in the Emerging Technologies Building. It was organized by Dr. Alex Sprintson, a professor in the Department of Electrical and Computer Engineering at Texas A&M, and Jasson Casey, a Texas A&M Ph.D. candidate and the founder and executive director of Flowgrammable.org.

The workshop consisted of talks covering several aspects of software defined networking, including SDN security, architecture, data planes, abstractions, and research directions. The talks were primarily presented by members of Flowgrammable.org, an organization focused on the OpenFlow protocol standard for SDN. Flowgrammable.org is a coalition of researchers and industry engineers looking to widen SDN’s influence on industry; its members are implementing the OpenFlow protocol with a secure message layer, a project called the Flowgrammable.org SDN stack. As a research organization, Flowgrammable.org also seeks to improve key aspects of SDN implementation and to bridge the gap between industry and academia.

The workshop also included a talk by Chip Howes, an industry veteran, about Internet startups. The talk focused on the startup experience, how startups rise and fall, and what it takes to make a startup successful. Mr. Howes has founded and sold six successful startup companies and has held prominent positions at a wide range of industry leaders over more than 30 years.

Special CESG Seminar – Maple: Simplifying SDN Programming Using Algorithmic Policies

Last week, Dr. Andreas Voellmy gave a talk about software defined networking (SDN). Dr. Voellmy gave an overview of SDN, presented several challenges with OpenFlow, a standard for SDN, and introduced Maple, a system that addresses these challenges.

A recent development in networking, Software Defined Networking allows a network’s behavior to be changed through a central policy administered by a network controller. Whereas network architecture previously consisted of fixed, closed, vertically integrated appliances, SDN implements a more general packet-processing approach programmed through open control software running on servers, making the implementation open and highly flexible. One standard for SDN is OpenFlow, which defines rules and guidelines for how SDN should be implemented, but many aspects of that implementation remain challenging.

To address these challenges, Dr. Voellmy presented Maple. Maple allows the user to write algorithmic policies: algorithms expressed in a general-purpose language that are conceptually run on every packet entering the network. These policies replace the need to generate and maintain sets of rules on individual network switches. To implement them efficiently, Maple uses a tracing runtime system that discovers reusable forwarding decisions from the control program as it executes.
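A conceptual sketch of an algorithmic policy with trace-based caching is shown below in Python; Maple’s real programming interfaces are in general-purpose languages such as Java and Haskell, and its runtime builds trace trees that compile down to switch flow tables, so the hypothetical policy function and flat cache here are only illustrative.

```python
# Conceptual sketch: an algorithmic policy is just a function over packet
# fields; the runtime caches each decision keyed by the fields the policy
# actually read, so later packets with the same relevant fields reuse it.

def policy(pkt, read):
    """Hypothetical algorithmic policy: drop telnet, flood broadcasts,
    otherwise forward."""
    if read(pkt, "tcp_dst") == 23:
        return "drop"
    if read(pkt, "eth_dst") == "ff:ff:ff:ff:ff:ff":
        return "flood"
    return "forward"

flow_cache = {}   # maps ((field, value), ...) actually read -> decision

def handle_packet(pkt):
    # Reuse a cached decision whose observed fields all match this packet.
    for cached_key, decision in flow_cache.items():
        if all(pkt.get(f) == v for f, v in cached_key):
            return decision, "cache hit"
    # Otherwise run the policy, recording which fields it reads.
    read_fields = []
    def read(p, field):
        read_fields.append(field)
        return p[field]
    decision = policy(pkt, read)
    flow_cache[tuple((f, pkt[f]) for f in read_fields)] = decision
    return decision, "policy run"

print(handle_packet({"tcp_dst": 23, "eth_dst": "aa:bb:cc:dd:ee:ff"}))
print(handle_packet({"tcp_dst": 23, "eth_dst": "11:22:33:44:55:66"}))  # reuses decision
```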

Dr. Andreas Voellmy received his Ph.D. in Computer Science from Yale University. His research focuses on Software Defined Networking, including OpenFlow library implementations, and on the Glasgow Haskell Compiler.

CESG Seminar: Catapulting beyond Moore’s Law: Using FPGAs to Accelerate Data Centers

On March 13th, Dr. Derek Chiou gave a talk describing a joint project by Microsoft Research and Bing. This project studied the prospects of using field programmable gate arrays (FPGAs) to speed up cloud applications.

Field programmable gate arrays are integrated circuits designed to be configured by a customer or designer after manufacture, usually by means of a hardware description language. An FPGA contains an array of programmable logic blocks that can be wired together and reconfigured to implement anything from simple logic operations to very complex functions.

In this project, an FPGA card was developed to accelerate a large part of Bing’s search engine. The card plugs into a Microsoft cloud server; because the application cannot fit on a single FPGA card, it was partitioned across multiple cards in multiple servers, all connected by a network implemented in the FPGAs themselves.

Dr. Derek Chiou received his Ph.D., S.M. and S.B. degrees in Electrical Engineering and Computer Science from MIT. He is currently an associate professor at the University of Texas at Austin, where he researches various areas of performance acceleration, and a Principal Architect at Microsoft, where he co-leads a team working on FPGAs for data center applications.

CESG Seminar: Hardware Implementation of Cascade Support Vector Machine

Last week, on Friday, March 6, Ph.D. student Qian Wang presented a paper describing a parallel digital very-large-scale integration (VLSI) architecture for support vector machine (SVM) training and classification, along with the cascade SVM algorithm used to develop it.

Cascade SVM is a training algorithm that the paper leverages to improve the scalability of hardware-based SVM training: the workload is spread over many cascaded SVM processing stages. The hardware implementation of the cascade SVM algorithm achieves low overhead and allows SVM training over data sets of varying size. To fully exploit parallelism, the proposed architecture uses a multilayer system bus and multiple distributed memories. The architecture can handle a wide range of uses and combines parallel processing with temporal reuse of resources, leading to good tradeoffs among throughput, overhead, and power dissipation.
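A brief software sketch of the cascade idea follows (the paper’s contribution is a hardware architecture, not this Python code): the data set is partitioned, small sub-SVMs are trained on each partition, and only their support vectors are passed to the next layer, so the final model is trained on a much smaller set of “hard” points. The scikit-learn calls and synthetic data set here are just for illustration.

```python
# Software sketch of a two-layer cascade SVM on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

def support_subset(X_part, y_part):
    """Train a sub-SVM and keep only its support vectors."""
    clf = SVC(kernel="rbf", C=1.0).fit(X_part, y_part)
    return X_part[clf.support_], y_part[clf.support_]

# First cascade layer: four independent sub-problems (these would run on
# parallel hardware units in the proposed architecture).
parts = [support_subset(X[i::4], y[i::4]) for i in range(4)]

# Second layer: merge the surviving support vectors and train the final SVM.
X2 = np.vstack([p[0] for p in parts])
y2 = np.concatenate([p[1] for p in parts])
final = SVC(kernel="rbf", C=1.0).fit(X2, y2)
print(f"final model trained on {len(y2)} of {len(y)} samples")
```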

Qian Wang received his B.S. in Electrical Engineering from the Harbin Institute of Technology in 2009 and his M.S. in the same field from the University of Kansas in 2012, where he worked as a research assistant developing novel photonic devices. He is currently a Ph.D. student at Texas A&M University working on VLSI hardware implementations of machine learning algorithms.

CESG Fishbowl Tele-Seminar: What Are They Doing with Your Data?

Last week, on Thursday, March 5, Professor Augustin Chaintreau gave a talk about what companies such as Google and Facebook do with the data their users generate. He also proposed a response to the current lack of Web transparency: XRay, a system for tracking how personal data is used on the Web.

Prof. Chaintreau explained that companies such as Google use personal data in many ways, including targeting advertisements at specific groups and making recommendations based on a user’s activity on the Web. The average user currently has very little means of finding out what their personal data is being used for. To this end, Chaintreau proposed XRay, which lets a user see what activity of theirs is being monitored and what is used to decide which ads or product recommendations they see. XRay identifies which ‘clue’ (an email, a browsing history entry, or other user activity) triggers a given item the user sees on the Web.
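A toy sketch of a differential-correlation approach in the spirit of XRay is shown below (not the actual system); the emails, shadow accounts and ads are invented to show how presence/absence correlation across accounts holding different subsets of a user’s data can single out the triggering input.

```python
# Toy sketch: score each email by how well its presence across shadow
# accounts predicts the appearance of a given ad. All data is invented.

emails = ["flight to Paris", "mortgage refinance", "yoga class"]

# Hypothetical observations: (emails present in the account, ads shown to it).
shadow_accounts = [
    ({"flight to Paris", "mortgage refinance"}, {"travel deal", "loan offer"}),
    ({"flight to Paris", "yoga class"}, {"travel deal"}),
    ({"mortgage refinance", "yoga class"}, {"loan offer"}),
    ({"yoga class"}, set()),
]

def score(email, ad):
    """Fraction of accounts where the ad's presence matches the email's."""
    matches = sum((email in inputs) == (ad in ads)
                  for inputs, ads in shadow_accounts)
    return matches / len(shadow_accounts)

for ad in ("travel deal", "loan offer"):
    best = max(emails, key=lambda e: score(e, ad))
    print(f"ad '{ad}' is most likely triggered by email '{best}' "
          f"(score {score(best, ad):.2f})")
```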

Professor Augustin Chaintreau received his Ph.D. in mathematics and computer science in 2006 as a student of the Ecole Normale Superieure in Paris. During his Ph.D. he worked at IBM, where he designed and proved the first reliable, scalable and network-fair multicast architecture. As a member of the technical staff at Technicolor, and while working for Intel, he conducted the first measurement study of human mobility as a communication transport tool. Chaintreau is currently an assistant professor of Computer Science at Columbia University.