Events for April 2017

CESG Teleseminar: “Finite Blocklength Converses in Point-To-Point and Network Information Theory: A Convex Analytic Perspective”

April 6 @ 2:30 pm - 3:30 pm

Ankur Kulkarni, Assistant Professor, Systems and Control Engineering Group, Indian Institute of Technology Bombay (IITB)

Abstract: Finite blocklength converses in information theory have been discovered for several loss criteria using a variety of arguments. What is perhaps unsatisfactory is the absence of a common framework with which converses could be found for any loss criterion. We present a linear programming based framework for obtaining converses for finite blocklength lossy joint source-channel coding problems. The framework applies for any loss criterion, generalizes certain previously known converses, and also extends to multi-terminal settings. The finite blocklength problem is posed equivalently as a nonconvex optimization problem, and using a lift-and-project-like method, a close but tractable LP relaxation of this problem is derived. Lower bounds on the original problem are obtained by constructing feasible points for the dual of this LP relaxation. A particular application of this approach leads to new converses that improve on the converses of Kostina and Verdú for joint source-channel coding and lossy source coding, and imply the converse of Polyanskiy, Poor and Verdú for channel coding. Another construction leads to a new general converse for finite blocklength joint source-channel coding for a class of source-channel pairs. Employing this converse shows that the LP is tight for all blocklengths for the “matched setting” of minimization of the expected average bit-wise Hamming distortion of a q-ary uniform source over a q-ary symmetric memoryless channel. In the multi-terminal setting, the same method yields improvements to converses of Han for Slepian-Wolf coding, a new converse for the multiple access channel, and an improvement to a converse of Zhou et al. for the successive refinement problem. Coincidentally, the recent past has seen a spurt of results on using duality to obtain outer bounds in combinatorial coding theory (including the author’s own nonasymptotic upper bounds for zero-error codes for the deletion channel). We speculate that these, and our results, hold the promise of a unified, duality-based theory of converses for problems in information theory. This is joint work with Ph.D. student Sharu Theresa Jose.

Bio: Ankur is an Assistant Professor (since 2013) with the Systems and Control Engineering group at the Indian Institute of Technology Bombay (IITB). He received his B.Tech. in Aerospace Engineering from IITB in 2006 and his M.S. in 2008 and Ph.D. in 2010, both from the University of Illinois at Urbana-Champaign (UIUC). From 2010 to 2012 he was a post-doctoral researcher at the Coordinated Science Laboratory at UIUC. His research interests include the role of information in stochastic control, game theory, information theory, combinatorial coding theory problems, optimization and variational inequalities, and operations research. He is an Associate (2015–2018) of the Indian Academy of Sciences, Bangalore, a recipient of the INSPIRE Faculty Award of the Department of Science and Technology, Government of India (2013), the best paper award at the National Conference on Communications 2017, and the William A. Chittenden Award (2008) at UIUC. He is a consultant to the Securities and Exchange Board of India on some matters […]
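
The announcement does not reproduce the talk’s actual LP, but the shape of the relaxation-plus-duality argument it describes can be sketched generically. The sets and vectors below are placeholders, not the speaker’s notation:

```latex
% Illustrative only: a generic relaxation + weak-duality chain, not the talk's LP.
% \mathcal{F} is the (nonconvex) set of valid codes, \mathcal{F}_{\mathrm{LP}} \supseteq \mathcal{F}
% its LP relaxation, c the loss vector, (b, y) the dual data and a dual variable.
\[
\underbrace{\min_{x \in \mathcal{F}} c^{\top}x}_{\text{finite blocklength problem}}
\;\ge\;
\underbrace{\min_{x \in \mathcal{F}_{\mathrm{LP}}} c^{\top}x}_{\text{LP relaxation}}
\;\ge\; b^{\top}y
\qquad \text{for every dual-feasible } y,
\]
% so any explicitly constructed dual-feasible point certifies a converse
% (a lower bound on the achievable loss) at every blocklength.
```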

CESG Seminar: “How Much Time, Energy, and Power Does an Algorithm Need?”

April 7 @ 4:10 pm - 5:10 pm

Richard (Rich) Vuduc of Georgia Tech

Abstract: Given an algorithm and a computer system, can we estimate or bound the amount of physical energy (Joules) or power (Watts) it might require, in the same way that we do for time and storage? These physical measures of performance are relevant to nearly every class of computing device, from embedded mobile systems to power-constrained datacenters and supercomputers. Armed with models of such measures, we can try to answer many interesting questions. For instance, can algorithmic knobs be used to control energy or power as the algorithm runs? How might systems be better balanced in energy or power for certain classes of algorithms? This talk is about general ideas of what such analyses and models might look like, giving both theoretical predictions and early empirical validation of our algorithmic energy and power models on real software and systems.

Bio: Rich Vuduc is an Associate Professor at the Georgia Institute of Technology (“Georgia Tech”), in the School of Computational Science and Engineering, a department devoted to the study of computer-based modeling and simulation of natural and engineered systems. His research lab, the HPC Garage (@hpcgarage), is interested in high-performance computing, with an emphasis on performance analysis and performance engineering. He has received a DARPA Computer Science Study Group grant, an NSF CAREER award, a collaborative Gordon Bell Prize in 2010, Lockheed Martin’s Award for Excellence in Teaching (2013), and Best Paper Awards at the SIAM Conference on Data Mining (SDM, 2012) and the IEEE Parallel and Distributed Processing Symposium (IPDPS, 2015), among others. He also served as his department’s Associate Chair and Director of its graduate programs from 2013 to 2016. External to Georgia Tech, he was elected Vice President of the SIAM Activity Group on Supercomputing (2016–2018), co-chaired the Technical Papers Program of the “Supercomputing” (SC) Conference in 2016, and serves as an associate editor of both the International Journal of High-Performance Computing Applications (IJHPCA) and IEEE Transactions on Parallel and Distributed Systems (TPDS). He received his Ph.D. in Computer Science from the University of California, Berkeley, and was a postdoctoral scholar at Lawrence Livermore National Laboratory.
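
As a purely illustrative instance of the kind of model the abstract alludes to, one can charge each arithmetic operation and each word of memory traffic a fixed energy cost and bound time with a roofline-style maximum. The function name, constants, and cost structure below are assumptions for the sake of the sketch, not Vuduc’s model:

```python
# Illustrative first-order energy/time/power model for an algorithm that
# performs W arithmetic operations and moves Q words to/from memory.
# All per-operation energies and machine rates are assumed placeholder values.

def energy_time_power(W, Q,
                      eps_flop=1e-11,   # Joules per arithmetic op (assumed)
                      eps_mem=2e-9,     # Joules per word moved (assumed)
                      peak_flops=1e11,  # ops/second (assumed)
                      peak_bw=1e10):    # words/second (assumed)
    """Return (energy in J, time in s, average power in W)."""
    energy = W * eps_flop + Q * eps_mem      # additive energy model
    time = max(W / peak_flops, Q / peak_bw)  # roofline-style time bound
    power = energy / time                    # average power draw
    return energy, time, power

# Example: dense matrix-vector multiply on an n x n matrix,
# roughly 2*n^2 operations and about n^2 words of traffic.
n = 4096
print(energy_time_power(2 * n**2, n**2))
```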

CESG Eminent Scholar Series: “CPU and Server System Architecture Opportunities for AI Application Optimization”

April 21 @ 4:10 pm - 5:10 pm

Balint Fleischer, Huawei’s Central Research Institute

Abstract: For the past 50 years, the computer industry has been focusing on improving transactional workloads. We are now seeing the emergence of a new class of “Narrow AI”-based applications playing an increasingly critical role in diverse use cases, from robotics, smart cities, expert systems, medical diagnostics, and financial systems to research and so forth. They perform assistive functions through speech recognition, face and image recognition, fraud detection, retrieval of complex data structures, and the integration of diverse information. AI applications are fundamentally different from classic applications. Classic applications are based on explicit programming using arithmetic and logic operations, while AI applications use trainable or self-learning algorithms to make predictions. AI applications use heterogeneous streaming data, as opposed to classic applications, which use transactional and structured data. Classic CPU architectures are very inefficient for AI applications and lack sufficient memory bandwidth, which has led a diverse set of accelerators to emerge. However, end-to-end application “pipelines” are a hybrid, requiring the creation of a new server platform capable of efficiently supporting the new use cases. This presentation will highlight some of the ongoing development in this area and what the future direction could be.

Bio: Balint Fleischer is currently Chief Scientist at Huawei’s Central Research Institute, where he is responsible for research into next-generation data center and server architectures. He was most recently CTO at the startup Parallel Machines, where he developed new architectures for advancing predictive analytics and machine learning. Previously he was the General Manager and Director of Architecture development, including efforts related to 3D XPoint and Rack Scale Architecture. He also had a long residency at Sun Microsystems, including being VP/CTO of the Networked Storage Division, where he led the design of next-generation storage systems and storage virtualization platforms; while at Sun he led the company’s architecture development for many successful low-end and midrange server products and was responsible for the company’s InfiniBand effort focusing on enterprise clustering, I/O, and storage.

Free Snacks
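
To make the memory-bandwidth point concrete, here is a rough roofline-style back-of-the-envelope calculation. The hardware numbers are assumptions, not figures from the talk: a batch-1 matrix-vector layer, typical of inference, has such low arithmetic intensity that attainable throughput is capped by bandwidth rather than by peak compute:

```python
# Illustrative roofline estimate of why memory bandwidth, not peak compute,
# often limits AI inference on a general-purpose CPU. Hardware numbers assumed.

peak_flops = 1.0e12   # 1 TFLOP/s peak compute (assumed)
peak_bw    = 1.0e11   # 100 GB/s memory bandwidth (assumed)

def attainable_flops(arithmetic_intensity):
    """Roofline: throughput is capped either by compute or by bandwidth."""
    return min(peak_flops, peak_bw * arithmetic_intensity)

# Matrix-vector multiply (batch-1 inference layer): ~2*n^2 FLOPs over ~4*n^2
# bytes of float32 weight traffic, i.e. arithmetic intensity ~0.5 FLOP/byte.
print(attainable_flops(0.5))   # bandwidth-bound: ~5e10 FLOP/s, far below peak
```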

CESG Seminar: “Breaking the On-Chip Latency Barrier Using Single-Cycle Multi-Hop Networks”

April 28 @ 4:10 pm - 5:10 pm

Dr. Tushar Krishna of Georgia Tech

Abstract: Compute systems are ubiquitous, with form factors ranging from smartphones at the edge to datacenters in the cloud. Chips in all these systems today comprise tens to hundreds of homogeneous or heterogeneous cores or processing elements. Ideally, any pair of these cores communicating with each other should have a dedicated link between them, but this design philosophy is not scalable beyond a few cores; instead, chips use a shared interconnection network, with routers at cross points to facilitate the multiplexing of links across message flows. These routers add multiple cycles of delay at every hop of the traversal. Conventional wisdom says that the latency of any multi-hop network traversal is directly proportional to the number of hops, which can profoundly limit scalability. In this talk, we challenge this conventional wisdom. We present a network-on-chip (NoC) design methodology called SMART (Single-cycle Multi-hop Asynchronous Repeated Traversal) that enables messages to traverse multiple hops, potentially all the way from the source to the destination, within a single cycle over a NoC with shared links. SMART leverages repeated wires in the datapath, which can traverse 10+ mm at a GHz frequency. We present a reconfiguration methodology to allow different message flows to reserve multiple links (with turns) within one cycle and traverse them in the next. An O(n)-wire SMART provides a 5-8X latency reduction across traffic patterns and approaches the performance of an “ideal” but impractical all-to-all connected O(n^2)-wire network. We also demonstrate two examples of micro-architectural optimizations enabled by SMART NoCs: the first is a locality-oblivious cache organization, and the second is a recently demonstrated deep-learning accelerator chip called Eyeriss.

Bio: Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at Georgia Tech, with an adjunct appointment in the School of Computer Science. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014. Prior to that he received an M.S.E. in Electrical Engineering from Princeton University in 2009 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2007. Before joining Georgia Tech in 2015, Dr. Krishna spent a year as a post-doctoral researcher in the VSSAD Group at Intel, Massachusetts, and a semester at the LEES IRG at the Singapore-MIT Alliance for Research and Technology. Dr. Krishna’s research spans the computing stack, from circuits and physical design to microarchitecture to system software. His key focus area is architecting the interconnection networks and communication protocols for efficient data movement within computer systems, both on-chip and in the cloud.

Tele-seminar in 236C @ 4:10 p.m. Host: Dr. Sprintson. FREE SNACKS
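
A minimal sketch of the latency argument in the abstract, under stated assumptions: the per-hop cycle counts, the hpc_max bypass limit, and the single path-setup cycle below are placeholders for illustration, not SMART’s actual parameters.

```python
# Illustrative comparison (not the SMART implementation): a conventional NoC
# pays router + link cycles at every hop, so latency grows linearly with hop
# count, while a SMART-style NoC reserves a multi-hop path and then bypasses
# up to hpc_max hops per cycle over repeated wires (contention ignored).

def baseline_latency(hops, cycles_per_router=3, cycles_per_link=1):
    """Conventional NoC: latency proportional to the number of hops."""
    return hops * (cycles_per_router + cycles_per_link)

def smart_latency(hops, hpc_max=8, setup_cycles=1):
    """Idealized single-cycle multi-hop NoC: one setup cycle, then
    ceil(hops / hpc_max) traversal cycles."""
    return setup_cycles + -(-hops // hpc_max)   # ceiling division

for hops in (2, 8, 16):
    print(hops, baseline_latency(hops), smart_latency(hops))
```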
