Events for February 2017

CESG Seminar: Hardware Trojans Begone: Inspiring Trust in Outsourced IC Fabrication

February 3 @ 4:10 pm - 5:10 pm

Room 222, Civil Engineering Building (CVE)
Dr. Siddharth Garg

Abstract: Only a few semiconductor foundries ("fabs") have the capability to fabricate integrated circuits (ICs) using state-of-the-art fabrication technology. Most chip design companies outsource fabrication to one of these advanced foundries, typically located off-shore. However, this comes at the expense of trust: how can the designer ensure the integrity of ICs fabricated by an untrusted foundry? Malicious modifications of an IC, so-called "hardware Trojans," have become a major source of concern for defense agencies and contractors as well as commercial IC design companies. In this talk, I will discuss a new approach for inspiring trust in outsourced IC fabrication. The idea, referred to as "split fabrication," is to leverage a second chip, fabricated at a low-end but trusted on-shore foundry, to guarantee the integrity of the high-end but untrusted chip. We will discuss two different (and orthogonal) ways in which split fabrication can be deployed to inspire trust in outsourced IC fabrication: to deter malicious modifications, or to detect modifications in the field. Compared to prior art, these approaches are the first to provide formal security guarantees, even for arbitrary Trojan misbehavior. [This talk covers work presented at the USENIX Security Symposium 2013 and IEEE S&P 2016.]

Bio: Siddharth Garg is an Assistant Professor of Electrical and Computer Engineering at New York University. His research interests are secure and reliable computing, with a focus on hardware security. For his work on hardware security, Siddharth was listed in Popular Science Magazine's "Brilliant Ten" researchers for 2016, received the NSF CAREER Award in 2015, and received paper awards at the IEEE Symposium on Security and Privacy (Oakland) 2016 and the USENIX Security Symposium 2013. Siddharth received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University in 2009, where he was recognized with the Angel G. Jordan Award for outstanding thesis contributions, an M.S. from Stanford University, and a B.Tech. from the Indian Institute of Technology Madras. From 2010 to 2014, he was an Assistant Professor at the University of Waterloo, Canada.

Host: Dr. Gratz
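
To give a flavor of the field-detection mode of split fabrication in the simplest possible terms, the sketch below has a slow but trusted on-shore chip re-execute a random sample of the transactions served by the fast, untrusted chip and flag any mismatch. This is a deliberately simplified illustration with hypothetical function names, not the constructions from the talk, which carry formal security guarantees that this sampling check does not.

```python
import random

def field_check(untrusted_chip, trusted_chip, inputs, sample_rate=0.05):
    """Sketch of field-time integrity checking under split fabrication.

    untrusted_chip: fast IC from the advanced, off-shore foundry.
    trusted_chip:   slow IC from the low-end, trusted on-shore foundry.
    A random fraction of transactions is re-executed on the trusted chip;
    any disagreement is reported as a possible Trojan activation.
    (Hypothetical interface; not the scheme presented in the talk.)
    """
    for x in inputs:
        result = untrusted_chip(x)
        if random.random() < sample_rate and trusted_chip(x) != result:
            return False   # integrity violation detected
    return True
```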

CESG Fishbowl Seminar: Stability of multi-dimensional Markov chains, with applications

February 9 @ 2:30 pm - 4:00 pm

Room 333, Wisenbaker Engineering Building (fishbowl)
Sergey Foss / Heriot-Watt University, Edinburgh, and Sobolev Institute of Mathematics and Novosibirsk State University

Abstract: I plan to talk about stability conditions for multi-component Markov chains in discrete time where one of the components is an "autonomous" Markov chain itself and is stable. I will consider two types of models, with applications in wireless networks and in monotone economies. The talk is based on joint works with V. Shneer (HWU, Edinburgh), A. Turlikov (SUAI, St. Petersburg), and J. Thomas and T. Worrall (UoE, Edinburgh).

Bio: Sergey Foss is a Professor of Applied Probability at Heriot-Watt University, Edinburgh. He worked in Novosibirsk (Institute of Mathematics and Novosibirsk State University) from 1977 to 2000 and moved to Edinburgh in December 2000; he still holds a part-time position in Novosibirsk. His present research interests are principally in stability, continuity, optimisation, and long-range dependence in stochastic processes, as well as exact simulation and tail asymptotics of steady-state distributions in Markovian models, with applications in (tele)communications, queueing, and risk. In recent years he has also worked (a) on a variety of problems related to spatial stochastic models, stochastic geometry, contact processes, non-linear renewal theory, and percolation theory, with applications to wired/wireless networks, seismology, and risk; and (b) on mathematical/stochastic problems in energy networks. He is a Fellow of the Royal Society of Edinburgh (since 2007), the Editor-in-Chief of "Queueing Systems" (since 2009), a member of the EURANDOM Advisory Board (since 2012), and the Scientific Advisor of the Applied Probability Lab at NSU (since 2014). He was the Principal Organiser of the six-month programme "Stochastic Processes in Communication Sciences" (January-July 2010) at the Isaac Newton Institute and has organised many research conferences in Edinburgh, Novosibirsk, and around the world.

Host: Dr. Kumar
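
One way to write down the setting described in the abstract, a multi-component chain in which one component is autonomous and stable, is the following; the notation is an illustrative formalization, not taken from the talk.

```latex
% A two-component discrete-time Markov chain (X_n, Y_n) in which the second
% component is autonomous: Y_{n+1} depends only on Y_n and fresh noise, so
% (Y_n) is a Markov chain in its own right and is assumed stable (positive
% recurrent).  The question is which conditions on f make the joint chain stable.
\[
  X_{n+1} = f\bigl(X_n,\, Y_n,\, \xi_{n+1}\bigr),
  \qquad
  Y_{n+1} = g\bigl(Y_n,\, \eta_{n+1}\bigr),
\]
where $\{\xi_n\}$ and $\{\eta_n\}$ are i.i.d. driving sequences.
```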

CESG Fishbowl Teleseminar: Efficient Fault-Tolerant Quantum Computing

February 16 @ 2:30 pm - 4:00 pm

Room 333, Wisenbaker Engineering Building (fishbowl)
Martin Suchara – AT&T Labs Research

Abstract: Quantum error correction presents some of the most significant and interesting challenges that must be resolved before building an efficient quantum computer. Quantum error correcting codes make it possible to run quantum algorithms successfully on unreliable quantum hardware. Quantum hardware suffers from errors such as decoherence, leakage, or qubit loss, and because these errors corrupt delicate quantum states rather than binary information, the known error correction techniques are complex and have a high overhead. In my talk, I first introduce the basics of quantum computing. Then I describe the two main families of quantum error correcting codes and quantify their overhead using specific examples of algorithms and hardware technologies. I describe several new techniques that I developed to reduce this overhead. For example, the maximum likelihood decoder (MLD) is an efficient algorithm that finds the recovery operation that maximizes the probability of a successful error correction given the observed error syndrome. Numerical simulations of the MLD algorithm for physical error rates around 10% showed a 100-fold reduction of the logical error probability compared to earlier techniques. I also show new designs of error correcting codes that are tailored to work more efficiently with the constraints of specific physical technologies.

Bio: Martin Suchara has been a Principal Inventive Scientist at AT&T Labs Research since 2015. Prior to joining AT&T he was a Postdoctoral Scholar in the quantum computing group at the IBM T. J. Watson Research Center. His work focuses on making computation with quantum computers more efficient and reliable, and he has developed new quantum error correcting codes that improve error decoding efficiency. Martin received his Ph.D. from the Computer Science department at Princeton University and postdoctoral training from UC Berkeley. Between 2011 and 2013 he coordinated the work of a small group of postdocs and students on the IARPA Quantum Computer Science Program and delivered the results to the Program Manager. Martin is the recipient of the Best Student Paper Award at ACM Sigmetrics 2011.
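
To give a flavor of maximum-likelihood decoding, the sketch below applies the same idea to the classical 3-bit repetition code (the bit-flip analogue of a tiny quantum code): group all error patterns consistent with a syndrome into logical classes, sum each class's probability, and correct according to the most likely class. The toy code, the i.i.d. flip probability, and the function names are illustrative assumptions, not the codes or decoder from the talk.

```python
from itertools import product

def syndrome(e):
    # Parity checks of the 3-bit repetition code: e0^e1 and e1^e2.
    return (e[0] ^ e[1], e[1] ^ e[2])

def ml_decode(s, p=0.1):
    """Maximum-likelihood decoding on a toy code (illustrative only).

    For the repetition code, two error patterns with the same syndrome lie in
    the same logical class exactly when they have the same parity, so the
    class label is simply the XOR of the bits.
    """
    classes = {0: [], 1: []}
    for e in product((0, 1), repeat=3):
        if syndrome(e) == s:
            classes[e[0] ^ e[1] ^ e[2]].append(e)

    def prob(e):
        w = sum(e)
        return p ** w * (1 - p) ** (3 - w)

    # Pick the logical class with the larger total probability ...
    best = max(classes, key=lambda c: sum(prob(e) for e in classes[c]))
    # ... and return any representative of it as the recovery operation.
    return min(classes[best], key=sum)

# Example: syndrome (1, 0) is most likely explained by a single flip of bit 0.
print(ml_decode((1, 0)))   # (1, 0, 0)
```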

CESG Seminar: Leaping Over the Memory Wall with Data Prefetching and Cache Replacement

February 17 @ 4:10 pm - 5:15 pm

Room 236C, Wisenbaker Engineering Building (WEB)
Jinchun Kim – Texas A&M University

Abstract: For decades, the primary tools for alleviating the "Memory Wall" have been large cache hierarchies and data prefetchers. Both approaches become more challenging in modern chip-multiprocessor (CMP) design. Increasing the last-level cache (LLC) size yields diminishing returns; given VLSI power scaling trends, this approach becomes hard to justify. These trends also constrain the hardware budgets for data prefetchers and LLC replacement modules. Moreover, in the context of CMPs running multiple concurrent processes, prefetching and replacement accuracy is critical to prevent cache pollution.

In this talk, I will discuss two novel on-chip memory management techniques: Signature Path Prefetching (SPP) and the Kill-the-PC (KPC) replacement algorithm. SPP is a data prefetcher that adaptively throttles itself on a per-prefetch-stream basis. We compress a series of memory accesses into a small signature and iteratively use the signature until the prefetching confidence falls below a certain threshold. Also, unlike other history-based algorithms, which miss many prefetching opportunities when address patterns transition between physical pages, SPP tracks complex patterns across physical page boundaries and continues prefetching as soon as they move to new pages. While SPP is a pure prefetching scheme, KPC bridges the gap between data prefetching and cache replacement. I will discuss how KPC can be used to eliminate the use of the program counter and improve the performance of the LLC replacement policy.

Bio: Jinchun Kim is a Ph.D. candidate in Electrical and Computer Engineering at Texas A&M University, advised by Dr. Paul V. Gratz. Jinchun's research interests are in computer architecture, with particular emphasis on future memory system design. He has been recognized with two best paper nominations, at MICRO 2014 and MICRO 2016. Jinchun is currently on the job market for academic/industry research positions.
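
The sketch below illustrates the signature-and-confidence idea described in the abstract: fold recent access deltas into a compact signature, look up the most likely next delta, and keep issuing prefetches along that path only while the compounded confidence stays above a threshold. The hashing scheme, table organization, lookahead bound, and threshold are illustrative simplifications, not the published SPP design.

```python
class SignaturePrefetcher:
    """Toy model of a signature-path prefetcher (illustrative, not SPP as published)."""

    def __init__(self, threshold=0.25, max_degree=8):
        self.table = {}            # signature -> {delta: observation count}
        self.last = {}             # page -> (last block offset, current signature)
        self.threshold = threshold
        self.max_degree = max_degree

    def _fold(self, sig, delta):
        # Fold a new delta into a 12-bit signature with a shift-and-xor hash.
        return ((sig << 3) ^ (delta & 0x3F)) & 0xFFF

    def access(self, page, block):
        # Learn which delta followed the previous signature on this page.
        if page in self.last:
            prev_block, sig = self.last[page]
            delta = block - prev_block
            counts = self.table.setdefault(sig, {})
            counts[delta] = counts.get(delta, 0) + 1
            sig = self._fold(sig, delta)
        else:
            sig = 0
        self.last[page] = (block, sig)

        # Speculate down the signature path while confidence remains high.
        prefetches, conf, cur_sig, cur_block = [], 1.0, sig, block
        for _ in range(self.max_degree):      # bound the lookahead depth
            if cur_sig not in self.table:
                break
            counts = self.table[cur_sig]
            best_delta = max(counts, key=counts.get)
            conf *= counts[best_delta] / sum(counts.values())
            if conf < self.threshold:
                break
            cur_block += best_delta
            prefetches.append((page, cur_block))
            cur_sig = self._fold(cur_sig, best_delta)
        return prefetches
```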

CESG Teleseminar: Securing Distributed Systems Against Adversarial Attacks

February 23 @ 2:30 pm - 4:00 pm

Room 333, Wisenbaker Engineering Building (fishbowl)
Lili Su – University of Illinois

Abstract: Distributed systems are ubiquitous in both industry and our daily life. For example, we use clusters and networked workstations to analyze large amounts of data, use the worldwide web for information and resource sharing, and use the Internet of Things (IoT) to access a much wider variety of resources. In distributed systems, components are more vulnerable to adversarial attacks. In this talk, we model distributed systems as multi-agent networks and consider the most general attack model, the Byzantine fault model. In particular, this talk will focus on the problem of distributed learning over multi-agent networks, where agents repeatedly collect partially informative observations (samples) about an unknown state of the world and try to collaboratively learn the true state. We focus on the impact of Byzantine agents on the performance of consensus-based non-Bayesian learning. Our goal is to design algorithms for the non-faulty agents to collaboratively learn the true state through local communication. At the end of this talk, I will also briefly mention our exploration of tolerating adversarial attacks in multi-agent optimization problems.

Bio: Lili Su is a Ph.D. candidate in the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign, working with Prof. Nitin Vaidya on distributed computing. She expects to receive her Ph.D. degree in May 2017. Her research intersects distributed computing, security, optimization, and learning. She was one of the three nominees for the 2016 International Symposium on Distributed Computing Best Student Paper Award. She received the 2015 International Symposium on Stabilization, Safety, and Security of Distributed Systems Best Student Paper Award. She also received the Sundaram Seshu International Student Fellowship for the 2016-2017 academic year, conferred by UIUC. In addition, she received the Outstanding Reviewer Award for her review service for IEEE Transactions on Communications in 2015.
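
As a concrete illustration of the kind of local rule such algorithms use, the sketch below pairs a standard Byzantine-resilient aggregation primitive (a trimmed mean over neighbors' log-beliefs) with a non-Bayesian likelihood update. This is a generic, textbook-style construction assuming at most f faulty neighbors, not necessarily the algorithm presented in the talk.

```python
def trimmed_mean(values, f):
    """Drop the f largest and f smallest reports, then average the rest.

    Assumes len(values) > 2 * f.  With at most f Byzantine reports, every
    kept value is bracketed by honest reports, limiting the adversary's sway.
    """
    kept = sorted(values)[f:len(values) - f]
    return sum(kept) / len(kept)

def non_bayesian_update(log_belief, neighbor_log_beliefs, log_likelihood, f):
    """One round of Byzantine-filtered consensus plus a local likelihood update.

    log_belief / neighbor_log_beliefs: per-hypothesis log scores;
    log_likelihood: log-probability of this round's observation under each hypothesis.
    """
    updated = []
    for h in range(len(log_belief)):
        reports = [nb[h] for nb in neighbor_log_beliefs] + [log_belief[h]]
        updated.append(trimmed_mean(reports, f) + log_likelihood[h])
    return updated
```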

CESG Teleseminar: Data-Driven Control and Optimization for Urban Infrastructures

February 24 @ 2:30 pm - 4:00 pm

Room 333, Wisenbaker Engineering Building (fishbowl)
Shuo Han – University of Pennsylvania

Abstract: Recent advances in sensing technology and autonomy have brought a myriad of new access points for sensing and control in urban infrastructures. This leads to the concept of "smart cities," in which urban infrastructures are operated at an increased level of autonomy with the aid of sensing and control. A key component of smart cities is algorithms that convert data collected from sensors into decisions used for city operation. In many applications, data are used to model certain stochastic phenomena (e.g., human demand in cities) upon which decisions are made. In order to provide rigorous performance guarantees in decision making, it is often desirable to obtain from data not only a nominal (probabilistic) model of the stochastic phenomenon but also the uncertainty in the model. In this talk, I will present an optimization-based framework that explicitly quantifies and handles probabilistic model uncertainty for decision making. A distinctive feature of the framework is that it models the unknown stochastic phenomenon by a set of probability distributions that are consistent with the data. For a large class of problems, including several planning and scheduling problems in smart cities, I will show that the resulting optimization problem can be reformulated as a convex optimization problem whose solution can be computed efficiently. Using examples from power systems and transportation, I will show that our framework offers several advantages over conventional ways of modeling uncertainty.

Bio: Shuo Han is a postdoctoral researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his Ph.D. in Electrical Engineering from the California Institute of Technology in 2014. His current research focuses on developing rigorous frameworks for data-driven decision making that enable reliable and efficient operation of networked systems such as power and transportation networks. He was a finalist for the Best Student Paper Award at the 2013 American Control Conference.
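
One common way to write down a decision problem over a set of data-consistent distributions, in the spirit of the framework described above, is the min-max formulation below; the notation and the particular ambiguity set are illustrative, not taken from the talk.

```latex
% Distributionally robust decision making: choose x to minimize the worst-case
% expected cost over every distribution P consistent with the observed data.
\[
  \min_{x \in \mathcal{X}} \;
  \max_{P \in \mathcal{P}(\text{data})} \;
  \mathbb{E}_{\xi \sim P}\bigl[ c(x, \xi) \bigr],
\]
% where, for example, the ambiguity set may be built from empirical moments:
\[
  \mathcal{P}(\text{data}) =
  \bigl\{ P : \| \mathbb{E}_P[\xi] - \hat{\mu} \| \le \varepsilon_1,\;
              \| \operatorname{Cov}_P[\xi] - \hat{\Sigma} \| \le \varepsilon_2 \bigr\}.
\]
```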

CESG Teleseminar: Making Wi-Fi Work in Multi-Hop Topologies: Automatic Negotiation and Allocation of Airtime

February 24 @ 4:10 pm - 5:10 pm

Violet R. Syrotiuk – Arizona State University

Abstract: We propose a solution for mitigating the performance impairments of CSMA/CA protocols in multi-hop topologies, based on dynamic adaptation of the contention process experienced by nodes in a wireless network. A distributed protocol is used to negotiate the channel airtime for a node as a function of the traffic requirements of its neighborhood, taking into account bandwidth reserved for control operations. A mechanism is provided for a node to tune its contention window depending on its allocated airtime. Unlike previous schemes, a node's contention window is fixed in size unless the traffic requirements of its neighborhood change. The scheme is implemented on legacy commercial 802.11 devices. Extensive experimental results, performed on the CREW European testbed, demonstrate the effectiveness of the approach.

Bio: Violet R. Syrotiuk earned her Ph.D. in Computer Science from the University of Waterloo in Canada. She is currently an Associate Professor of Computer Science and Engineering at Arizona State University. Her interests lie in dynamic adaptation to changing conditions in wireless networks, especially at the MAC layer. Her research has been supported by grants from NSF, ONR, and DSTO, and by contracts with LANL, Raytheon, and General Dynamics, among others. She serves on the editorial boards of Computer Networks and Computer Communications, as well as on the technical program and organizing committees of several major conferences sponsored by ACM and IEEE.
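
To make the contention-window tuning concrete, here is a rough sketch of how a negotiated airtime share might be mapped to a fixed 802.11-style contention window: a node's channel-access frequency scales roughly with the inverse of its contention window, so a larger share yields a smaller (but then fixed) window. The scaling constant and clamping bounds are illustrative assumptions, not the negotiation protocol from the talk.

```python
def contention_window(airtime_share, base_cw=64, cw_min=15, cw_max=1023):
    """Map a negotiated airtime share to a fixed contention window (illustrative).

    In CSMA/CA, a node with contention window CW transmits roughly in
    proportion to 1/CW, so granting a node the fraction `airtime_share` of
    its neighborhood's airtime suggests a window proportional to the inverse
    of that fraction.  The window then stays fixed until the neighborhood's
    traffic requirements are renegotiated.
    """
    share = max(airtime_share, 1e-3)          # guard against a zero share
    cw = base_cw / share
    return int(min(max(cw, cw_min), cw_max))  # clamp to the 802.11-style range

# Example: a node granted 40% of the neighborhood airtime gets CW = 160.
print(contention_window(0.4))
```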
