Past Events

May 2017

GENI Regional Workshop and Camp

May 22 @ 8:00 am - May 26 @ 1:00 pm
Emerging Technologies Bldg.,
101 Bizzell St. College Station, TX 77843

More info: tx.ag/GENI

Texas A&M University will host the National Science Foundation's (NSF) Global Environment for Network Innovations (GENI) regional workshop and camp, May 22-26, 2017. GENI provides a virtual laboratory for networking and distributed-systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services, and applications. Highlights of the event include 5G cellular networks and software-defined networking.

The workshop and the camp offer an opportunity to learn about GENI and how you can use it for your education and research needs. The GENI Regional Workshop (GRW) begins on Monday, May 22, and the camp continues through Friday, May 26. Please mark your calendar, and find details at tx.ag/GENI.

The GRW agenda includes keynotes by two prominent researchers: Professor Henning Schulzrinne, Columbia University, and Professor Lin Zhong, Rice University. Registration is free.

Organizers: Dr. Alex Sprintson, Dr. Walt Magnussen


CESG Fishbowl Seminar: “Latency Analysis for Distributed Storage”

May 11 @ 2:30 pm - 3:30 pm

Prof. Parimal Parag, Dept. of ECE, Indian Institute of Science

Abstract: Modern communication and computation systems consist of large networks of unreliable nodes. Yet it is well known that such systems can provide aggregate reliability via information redundancy, duplicating paths, or replicating computations. While redundancy may increase the load on a system, it can also lead to major performance improvements through the judicious management of additional system resources. Two important examples of this abstract paradigm are content access from multiple caches in content delivery networks and master/slave computations on compute clusters. Many recent works in the area have proposed bounds on the latency performance of redundant systems, characterizing the latency-redundancy trade-off under specific load profiles. Following a similar line of research, this work introduces new analytical bounds and approximation techniques for the latency-redundancy trade-off over a range of system loads and two popular redundancy schemes. The proposed framework allows for approximating the equilibrium latency distribution, from which various metrics can be derived, including the mean, variance, and tail decay of stationary distributions.

Bio: Parimal Parag is an assistant professor in the Department of Electrical Communication Engineering at the Indian Institute of Science, Bangalore. He was a senior systems engineer in R&D at ASSIA Inc. from October 2011 to November 2014. He received his B.Tech. and M.Tech. degrees from the Indian Institute of Technology Madras in fall 2004, and his Ph.D. from Texas A&M University in fall 2011. He was at Stanford University and Los Alamos National Laboratory in the autumn of 2010 and the summer of 2007, respectively. He conducts research in network theory, applied probability, optimization methods, and their applications to distributed systems. His previous work includes performance evaluation, monitoring, and control of large broadband communication systems and networks.

Free: Snacks & Drinks
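The core latency benefit of redundancy described in the abstract can be illustrated with a quick simulation. This is a minimal sketch, assuming a single request replicated to r independent servers with exponential service times and the fastest copy winning; the rate, trial count, and function name are illustrative, not from the talk:

```python
import random
import statistics

def replicated_latency(mu, r, trials=100_000, seed=0):
    """Mean latency when one request is sent to r independent servers
    with Exp(mu) service times and the fastest copy wins."""
    rng = random.Random(seed)
    return statistics.mean(
        min(rng.expovariate(mu) for _ in range(r)) for _ in range(trials)
    )
```

The minimum of r i.i.d. Exp(mu) variables is Exp(r*mu), so the simulated mean should approach 1/(r*mu): full replication cuts mean latency by roughly a factor of r in this idealized, load-free model. Accounting for the extra load redundancy places on the system is exactly where the analytical bounds in the talk come in.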


CESG Seminar: “Trustworthy Integrated Circuit Design”

May 5 @ 11:30 am - 12:30 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

“Trustworthy Integrated Circuit Design”

Abstract: Designers use third-party intellectual property (IP) cores and outsource various steps of their integrated circuit (IC) design and manufacturing flow. As a result, security vulnerabilities have been emerging, forcing IC designers and end users to reevaluate their trust in ICs. If an attacker gets hold of an unprotected IC, attacks such as reverse engineering and piracy are possible. Similarly, if an attacker gets hold of an unprotected design, insertion of malicious circuits and IP piracy are possible. To thwart these and similar attacks, we have developed three defenses: IC camouflaging, logic encryption, and split manufacturing. IC camouflaging modifies the layout of certain gates in the IC to deceive attackers into obtaining an incorrect netlist, thereby preventing reverse engineering by a malicious user. Logic encryption implements a built-in locking mechanism on ICs to prevent reverse engineering and IP piracy by a malicious foundry or user. Split manufacturing splits the layout and manufactures different metal layers in two separate foundries to prevent reverse engineering and piracy by a malicious foundry. We then describe how these techniques are enhanced with provably secure techniques, leading to trustworthy ICs.

Bio: Jeyavijayan (JV) Rajendran is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Dallas. He obtained his Ph.D. in Electrical and Computer Engineering from New York University in August 2015. His research interests include hardware security and emerging technologies. His research has won the NSF CAREER Award (2017), the ACM SIGDA Outstanding Ph.D. Dissertation Award (2017), and the Alexander Hessel Award for the Best Ph.D. Dissertation in the Electrical and Computer Engineering Department at NYU (2016). He has won three Student Paper Awards (ACM CCS 2013, IEEE DFTS 2013, and IEEE VLSI Design 2012); four ACM Student Research Competition Awards (DAC 2012, ICCAD 2013, DAC 2014, and the Grand Finals 2013); a Service Recognition Award from Intel; third place at the Kaspersky American Cup (2011); and the Myron M. Rosenthal Award for Best Academic Performance in the M.S. program at NYU (2011). He organizes the annual Embedded Security Challenge, a red-team/blue-team hardware security competition, and has co-founded Hack@DAC, a student security competition co-located with DAC. He is a member of IEEE and ACM.

FREE SNACKS
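Logic encryption, one of the three defenses named in the abstract, can be sketched at the gate level. This is a toy illustration with a hypothetical two-gate netlist and key; real schemes insert many key gates and must resist SAT-based key-recovery attacks:

```python
def original(a, b, c):
    # Hypothetical unprotected netlist: out = (a AND b) OR c
    return (a & b) | c

SECRET_KEY = (1, 0)  # hypothetical key, stored in tamper-proof memory on-chip

def locked(a, b, c, k1, k2):
    """The same netlist with two key gates inserted.
    Only the correct key restores the original function."""
    w = (a & b) ^ k1 ^ 1  # XNOR key gate on an internal wire: k1 = 1 is correct
    return (w | c) ^ k2   # XOR key gate on the output: k2 = 0 is correct
```

With the correct key the XNOR and XOR gates become transparent; any other key inverts an internal wire or the output, so a foundry or user without the key fabricates or runs a circuit that computes the wrong function on at least some inputs.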

April 2017

CESG Seminar: “Breaking the On-Chip Latency Barrier Using Single-Cycle Multi-Hop Networks”

April 28 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Dr. Tushar Krishna of Georgia Tech

Abstract: Compute systems are ubiquitous, with form factors ranging from smartphones at the edge to datacenters in the cloud. Chips in all these systems today comprise tens to hundreds of homogeneous or heterogeneous cores or processing elements. Ideally, any pair of these cores communicating with each other would have a dedicated link between them, but this design philosophy does not scale beyond a few cores; instead, chips use a shared interconnection network, with routers at cross points to multiplex links across message flows. These routers add multiple cycles of delay at every hop of the traversal. Conventional wisdom says that the latency of any multi-hop network traversal is directly proportional to the number of hops, which can profoundly limit scalability. In this talk, we challenge this conventional wisdom. We present a network-on-chip (NoC) design methodology called SMART* that enables messages to traverse multiple hops, potentially all the way from source to destination, within a single cycle over a NoC with shared links. SMART leverages repeated wires in the datapath, which can traverse 10+ mm at a GHz frequency. We present a reconfiguration methodology that allows different message flows to reserve multiple links (with turns) within one cycle and traverse them in the next. An O(n)-wire SMART provides 5-8X latency reduction across traffic patterns and approaches the performance of an “ideal” but impractical all-to-all connected O(n^2)-wire network. We also demonstrate two examples of microarchitectural optimizations enabled by SMART NoCs: a locality-oblivious cache organization and a recently demonstrated deep-learning accelerator chip called Eyeriss.

*Single-cycle Multi-hop Asynchronous Repeated Traversal

Bio: Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at Georgia Tech, with an adjunct appointment in the School of Computer Science. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014. Prior to that, he received an M.S.E. in Electrical Engineering from Princeton University in 2009 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2007. Before joining Georgia Tech in 2015, Dr. Krishna spent a year as a post-doctoral researcher in the VSSAD Group at Intel, Massachusetts, and a semester at the LEES IRG at the Singapore-MIT Alliance for Research and Technology. Dr. Krishna’s research spans the computing stack, from circuits/physical design to microarchitecture to system software. His key focus area is architecting the interconnection networks and communication protocols for efficient data movement within computer systems, both on-chip and in the cloud.

Tele-seminar in 236C @ 4:10 p.m. Host: Dr. Sprintson. FREE SNACKS
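The latency argument can be made concrete with a back-of-the-envelope model. This is a sketch with illustrative router and link delays and an assumed one-cycle path-setup cost; the actual SMART pipeline differs in detail:

```python
import math

def conventional_latency(hops, router_cycles=2, link_cycles=1):
    """Baseline NoC: every hop pays the full router pipeline plus the link."""
    return hops * (router_cycles + link_cycles)

def smart_latency(hops, hpc_max=8):
    """SMART-style bypass: up to hpc_max hops per cycle over repeated wires,
    plus one cycle (assumed here) to reserve the multi-hop path."""
    return 1 + math.ceil(hops / hpc_max)
```

For an 8-hop traversal the baseline model costs 24 cycles while the bypass path costs 2, which is the flavor of reduction the abstract reports; realized gains (5-8X) depend on traffic patterns and contention for the shared links.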


CESG Eminent Scholar Series: “CPU and Server System Architecture Opportunities for AI Application Optimization”

April 21 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

CESG Eminent Scholar Series: Balint Fleischer of Huawei’s Central Research Institute

“CPU and Server System Architecture Opportunities for AI Application Optimization”

Abstract: For the past 50 years, the computer industry has focused on improving transactional workloads. We are now seeing the emergence of a new class of “Narrow AI”-based applications playing an increasingly critical role in diverse use cases, from robotics, smart cities, expert systems, medical diagnostics, and financial systems to research and so forth. They perform assistive functions through speech recognition, face and image recognition, fraud detection, retrieval of complex data structures, and the integration of diverse information. AI applications are fundamentally different from classic applications. Classic applications are based on explicit programming using arithmetic and logic operations, while AI applications use trainable or self-learning algorithms to make predictions. AI applications consume heterogeneous streaming data, as opposed to classic applications, which use transactional and structured data. Classic CPU architectures are very inefficient for AI applications; they lack sufficient memory bandwidth for a diverse set of accelerators to emerge. However, end-to-end application “pipelines” are a hybrid, requiring the creation of a new server platform capable of efficiently supporting new use cases. This presentation will highlight some of the ongoing development in this area and possible future directions.

Bio: Balint Fleischer is Chief Scientist at Huawei’s Central Research Institute, where he is responsible for research into next-generation data center and server architectures. He was most recently CTO at the startup Parallel Machines, where he developed new architectures for advancing predictive analytics and machine learning. Previously he was General Manager and Director of Architecture development, including efforts related to 3D XPoint and Rack Scale Architecture. He also had a long residency at Sun Microsystems, including serving as VP/CTO of the Networked Storage Division, where he led the design of next-generation storage systems and storage virtualization platforms. While at Sun, he led the architecture development for many successful low-end and midrange server products and was responsible for the company’s InfiniBand effort focusing on enterprise clustering, I/O, and storage.

Free Snacks


CESG Seminar: “How Much Time, Energy, and Power Does an Algorithm Need?”

April 7 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Richard (Rich) Vuduc of Georgia Tech

Abstract: Given an algorithm and a computer system, can we estimate or bound the amount of physical energy (joules) or power (watts) it might require, in the same way that we do for time and storage? These physical measures of performance are relevant to nearly every class of computing device, from embedded mobile systems to power-constrained datacenters and supercomputers. Armed with models of such measures, we can try to answer many interesting questions. For instance, can algorithmic knobs be used to control energy or power as the algorithm runs? How might systems be better balanced in energy or power for certain classes of algorithms? This talk is about general ideas of what such analyses and models might look like, giving both theoretical predictions and early empirical validation of our algorithmic energy and power models on real software and systems.

Bio: Rich Vuduc is an Associate Professor at the Georgia Institute of Technology (“Georgia Tech”), in the School of Computational Science and Engineering, a department devoted to the study of computer-based modeling and simulation of natural and engineered systems. His research lab, the HPC Garage (@hpcgarage), is interested in high-performance computing, with an emphasis on performance analysis and performance engineering. He has received a DARPA Computer Science Study Group grant, an NSF CAREER award, a collaborative Gordon Bell Prize in 2010, Lockheed Martin’s Award for Excellence in Teaching (2013), and Best Paper Awards at the SIAM Conference on Data Mining (SDM, 2012) and the IEEE Parallel and Distributed Processing Symposium (IPDPS, 2015), among others. He also served as his department’s Associate Chair and Director of its graduate programs from 2013 to 2016. External to Georgia Tech, he was elected Vice President of the SIAM Activity Group on Supercomputing (2016-2018), co-chaired the Technical Papers Program of the Supercomputing (SC) Conference in 2016, and serves as an associate editor of both the International Journal of High-Performance Computing Applications (IJHPCA) and IEEE Transactions on Parallel and Distributed Systems (TPDS). He received his Ph.D. in Computer Science from the University of California, Berkeley, and was a postdoctoral scholar at Lawrence Livermore National Laboratory.
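A simple roofline-style time-and-energy model of the kind the talk alludes to might look as follows. This is a sketch: the coefficients e_flop, e_byte, and p_const are hypothetical machine parameters, not measurements from the talk:

```python
def roofline_time(W, Q, peak_flops, peak_bw):
    """Execution time when compute (W flops) and memory traffic (Q bytes)
    overlap perfectly: whichever resource saturates dominates."""
    return max(W / peak_flops, Q / peak_bw)

def energy(W, Q, T, e_flop, e_byte, p_const):
    """Energy = per-flop work + per-byte data movement + constant
    (leakage/idle) power integrated over the runtime T."""
    return e_flop * W + e_byte * Q + p_const * T
```

For example, W = 1e9 flops and Q = 1e8 bytes on a machine with 1e11 flop/s, 1e10 B/s, e_flop = 0.1 nJ, e_byte = 1 nJ, and p_const = 10 W gives T = 0.01 s and E = 0.3 J (average draw 30 W). An algorithmic knob that reduces Q lowers the movement term directly, illustrating how energy and time can be traded against each other.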


CESG Teleseminar: “Finite Blocklength Converses in Point-To-Point and Network Information Theory: A Convex Analytic Perspective”

April 6 @ 2:30 pm - 3:30 pm

Ankur Kulkarni, Assistant Professor, Systems and Control Engineering Group, Indian Institute of Technology Bombay (IITB)

Abstract: Finite blocklength converses in information theory have been discovered for several loss criteria using a variety of arguments. What is perhaps unsatisfactory is the absence of a common framework with which converses could be found for any loss criterion. We present a linear programming (LP) based framework for obtaining converses for finite blocklength lossy joint source-channel coding problems. The framework applies to any loss criterion, generalizes certain previously known converses, and also extends to multi-terminal settings. The finite blocklength problem is posed equivalently as a nonconvex optimization problem, and using a lift-and-project-like method, a close but tractable LP relaxation of this problem is derived. Lower bounds on the original problem are obtained by constructing feasible points for the dual of this LP relaxation. A particular application of this approach leads to new converses that improve on the converses of Kostina and Verdú for joint source-channel coding and lossy source coding, and imply the converse of Polyanskiy, Poor, and Verdú for channel coding. Another construction leads to a new general converse for finite blocklength joint source-channel coding for a class of source-channel pairs. Employing this converse shows that the LP is tight for all blocklengths in the “matched setting” of minimizing the expected average bit-wise Hamming distortion of a q-ary uniform source over a q-ary symmetric memoryless channel. In the multi-terminal setting, the above method yields improvements to converses of Han for Slepian-Wolf coding, a new converse for the multiple access channel, and an improvement to a converse of Zhou et al. for the successive refinement problem. Coincidentally, the recent past has seen a spurt of results on using duality to obtain outer bounds in combinatorial coding theory (including the author’s own nonasymptotic upper bounds for zero-error codes for the deletion channel). We speculate that these, and our results, hold the promise of a unified, duality-based theory of converses for problems in information theory. This is joint work with Ph.D. student Sharu Theresa Jose.

Bio: Ankur has been an Assistant Professor with the Systems and Control Engineering group at the Indian Institute of Technology Bombay (IITB) since 2013. He received his B.Tech. in Aerospace Engineering from IITB in 2006, and his M.S. in 2008 and Ph.D. in 2010, both from the University of Illinois at Urbana-Champaign (UIUC). From 2010 to 2012 he was a post-doctoral researcher at the Coordinated Science Laboratory at UIUC. His research interests include the role of information in stochastic control, game theory, information theory, combinatorial coding theory problems, optimization and variational inequalities, and operations research. He is an Associate (2015-2018) of the Indian Academy of Sciences, Bangalore, and a recipient of the INSPIRE Faculty Award of the Department of Science and Technology, Government of India (2013), the best paper award at the National Conference on Communications (2017), and the William A. Chittenden Award (2008) at UIUC. He is a consultant to the Securities and Exchange Board of India on some matters…

March 2017

CESG Seminar: “Prototyping Medium Access Control Protocols for Wireless Networks”

March 31 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Simon Yau, Texas A&M University

Abstract: Due to increasingly dense wireless deployments, increasing demand for multimedia applications, and plans to offload cellular traffic onto unlicensed bands, the efficiency of Medium Access Control (MAC) protocols has become critical to the performance of wireless networks. While many MAC protocols have been proposed, very few have been experimentally evaluated to establish realistic performance. Easy experimental evaluation of MAC protocols requires a flexible platform that is readily capable of implementing a wide range of protocols. MAC protocols have very strict timing requirements, which leads to tight coupling between the protocols and the underlying hardware. In this talk, we will present a platform for prototyping MAC protocols that uses a mechanism-versus-policy separation architecture, which allows researchers to rapidly prototype different classes of MAC protocols, and discuss some of the issues related to prototyping MAC protocols.

Bio: Simon Yau is a Computer Engineering Ph.D. candidate in the Department of Electrical and Computer Engineering at Texas A&M University. His research interests are in prototyping Medium Access Control protocols for current- and next-generation wireless networks. He has worked on several projects with National Instruments (NI), and some of his work is used in their 802.11 Application Framework. He currently leads the team developing WiMAC, a rapid prototyping platform for MAC protocols, and has given demonstrations of the platform at NI’s annual conference, NI Week, and at SIGCOMM ’15. He is also currently working on developing an Unmanned Traffic Management system for
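The mechanism-versus-policy separation the abstract describes can be illustrated with a toy slotted MAC simulator, where the slot loop and collision detection form the fixed mechanism and a transmit-probability rule forms a pluggable policy. This is an illustrative sketch, not the WiMAC design:

```python
import random

class SlottedMAC:
    """Toy mechanism/policy split: this class provides the mechanism (the
    slot loop and collision detection); the policy callable decides, per
    node and per slot, whether that node transmits."""

    def __init__(self, policy, n_nodes, seed=0):
        self.policy = policy
        self.n = n_nodes
        self.rng = random.Random(seed)

    def run(self, slots):
        successes = 0
        for _ in range(slots):
            senders = sum(1 for _ in range(self.n) if self.policy(self.rng))
            if senders == 1:  # exactly one transmitter: the slot succeeds
                successes += 1
        return successes / slots

# Policy: transmit with probability 1/n each slot (p-persistent ALOHA).
mac = SlottedMAC(policy=lambda rng: rng.random() < 1 / 8, n_nodes=8)
```

With n = 8 nodes and p = 1/8, throughput should approach n*p*(1-p)^(n-1), roughly 0.39 successful slots per slot; swapping in a different policy callable changes the protocol without touching the mechanism, which is the point of the separation.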


CESG TELESEMINAR: “Collaborative Road Freight Transport”

March 30 @ 2:30 pm - 4:00 pm

“Collaborative Road Freight Transport”
Karl H. Johansson, KTH Royal Institute of Technology

Abstract: Freight transportation is of utmost importance for our society. Road transport accounts for about 26% of all energy consumption and 18% of greenhouse gas emissions in the European Union. Goods transport in the EU amounts to 3.5 trillion ton-km per year, with 3 million people employed in this sector, whereas people transport amounts to 6.5 trillion passenger-km with 2 million employees. Despite the influence the transportation system has on our energy consumption and the environment, road goods transportation is mainly done by individual long-haulage trucks with no real-time coordination or global optimization. In this talk, we will discuss how modern information and communication technology supports a cyber-physical transportation system architecture, with an integrated logistics system coordinating fleets of trucks traveling together in vehicle platoons. Thanks to the reduced air drag, platooning trucks traveling close together can save more than 10% of their fuel consumption. Control and estimation challenges and solutions at various levels of this transportation system will be presented. It will be argued that a system architecture utilizing vehicle-to-vehicle and vehicle-to-infrastructure communication enables optimal and safe control of individual trucks as well as optimized vehicle fleet collaborations and new markets. Extensive experiments done on European highways will illustrate system performance and safety requirements. The presentation is based on joint work over the last ten years with collaborators at KTH and at the truck manufacturer Scania.

Bio: Karl H. Johansson is Director of the Stockholm Strategic Research Area ICT The Next Generation and Professor at the School of Electrical Engineering, KTH Royal Institute of Technology. He received M.Sc. and Ph.D. degrees in Electrical Engineering from Lund University. He has held visiting positions at UC Berkeley, Caltech, NTU, the HKUST Institute of Advanced Studies, and NTNU. His research interests are in networked control systems, cyber-physical systems, and applications in transportation, energy, and automation. He is a member of the IEEE Control Systems Society Board of Governors and the European Control Association Council. He has received several best paper awards and other distinctions, including a ten-year Wallenberg Scholar Grant, a Senior Researcher Position with the Swedish Research Council, and the Future Research Leader Award from the Swedish Foundation for Strategic Research. He is a Fellow of the IEEE and an IEEE Distinguished Lecturer.
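The ">10% fuel saving" claim follows from simple arithmetic once one assumes how large a share of a truck's highway fuel burn goes to aerodynamic drag. Both parameters below are illustrative assumptions, not figures from the talk:

```python
def platoon_fuel_saving(drag_share=0.35, drag_reduction=0.30):
    """Fraction of total fuel saved by a trailing truck if aerodynamic drag
    accounts for drag_share of its highway consumption and platooning cuts
    that drag by drag_reduction. Both defaults are illustrative guesses."""
    return drag_share * drag_reduction
```

Under these assumptions the trailing truck saves 0.35 x 0.30 = 10.5% of its fuel, consistent with the "more than 10%" figure; the real saving depends on speed, inter-vehicle gap, and vehicle shape, which is why the highway experiments matter.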


CESG Seminar: “Exo-Core: Software-Defined Hardware Security”

March 24 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Mohit Tiwari, University of Texas at Austin

Abstract: Confinement is a fundamental security primitive. The ability to put private data in a box and ship the box to run untrusted code in an untrusted data center can transform systems security and expand the use of cloud services to regulated data. However, untrusted applications are hard to confine: we show that, using only metadata about the computation, a malicious process can leak secrets at hundreds of kilobits per second on machines today. Closing such leaks has in the past followed a piecemeal approach of closing individual channels. In this talk, we propose that exposing the microarchitecture to software can enable flexible defenses against a large class of vulnerabilities, and show that software solutions can implement efficient and verifiable solutions to hardware-security problems.

Bio: Mohit Tiwari received his Ph.D. from UCSB (2011) and joined UT Austin as an Assistant Professor in fall 2013. His research enables privacy for end users through information-leak-free containers, which can be used to create trustworthy computing services using untrusted data centers and vulnerable applications, and through anomaly detection across the computing stack. Professor Tiwari’s research has received the NSF CAREER Award (2015), Best Paper Awards (ASPLOS ’15, PACT ’09), IEEE Micro Top Picks (2010; 2014 Honorable Mention), and industry research awards from Google and Qualcomm.
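The "hundreds of kilobits per second" figure is plausible from a simple capacity bound: a noiseless covert channel leaks at most log2(levels) bits per observable symbol. This sketch uses hypothetical symbol rates, not the talk's measured channel:

```python
import math

def leak_rate_bps(symbol_rate_hz, levels):
    """Upper bound on a noiseless covert channel: each observable symbol
    (e.g., a cache-occupancy or scheduling state visible in metadata)
    carries at most log2(levels) bits."""
    return symbol_rate_hz * math.log2(levels)
```

A binary signal observable 100,000 times per second already bounds the leak at 100 kb/s, the order of magnitude cited in the abstract; real channels are noisy, so measured rates sit below this bound, and defenses work by shrinking either the symbol rate or the number of distinguishable levels.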
