
Past Events


June 2018
Free

CESG Fishbowl Seminar: “Systems of Unmanned Vehicles for Persistent Service: Concepts, Task Allocation, Design & Prototype Components”

June 18 @ 10:00 am - 11:00 am

Dr. James R. Morrison
Associate Professor, Department of Industrial and Systems Engineering (ISysE), KAIST, South Korea

Key points:
– Task allocation in multi-robot systems with logistics constraints for persistent operations
– Extended VRP models with solution methodologies ranging from formal branch and price to fast, accurate heuristics
– Stochastic control methods, including dynamic programming, heuristics and learning to plan
– System design approaches in both the deterministic and stochastic contexts
– Implementation efforts, including prototypes of system modules

Abstract: The capabilities of modern affordable unmanned aerial vehicles (UAVs) include vision-based navigation, GPS tracking and cargo delivery, among a host of other functions. Yet, despite these capabilities, their promise is limited by a finite energy source and a restricted payload. To overcome the limitations of a single vehicle, a system consisting of a fleet of vehicles and replenishment service stations can be used to accomplish larger-scale mission objectives. Persistent systems of unmanned vehicles can provide services including security escort, search and rescue, and border patrol. An essential function required for the operation of a system of unmanned vehicles is task allocation: if system resources are allocated wisely, more can be accomplished. For the task allocation problem, we will discuss both deterministic and stochastic centralized optimization approaches. In the deterministic context, we discuss column generation, branch and bound, receding horizon task allocation (RHTA) and custom heuristic approaches. In the stochastic context, we discuss dynamic programming, reinforcement learning, and heuristic and learning algorithms. Another key consideration is system design: how many resources are required, and where should they be deployed? We discuss how task allocation methods can be extended to include design. A more computationally tractable approach considers system design at a higher level of abstraction. We provide an overview of work to address the design problem for multiple service stations and numerous customer service sites. Heuristic solution methods based on the classic savings algorithm combined with a Voronoi decomposition are compared with complete enumeration. Finally, we discuss our efforts to develop a system of UAVs and service stations to serve as an automated security escort system at KAIST. Such a system consists of many components, including UAVs, service stations, a customer service request app, GPS tracking software, vision tracking software, a simple AI for each UAV and a central task allocation system. The progress on this system, its components and related issues will be reviewed.

Biography: Dr. James R. Morrison (james.morrison@kaist.edu, http://xS3D.kaist.edu) received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, USA. He is currently an Associate Professor in the Department of Industrial and Systems Engineering (ISysE) at KAIST, South Korea. Since 2016, he has served as the Director of KICEP at KAIST. His research interests include persistent UAV service, Industry 4.0 and education. He has published over 90 peer-reviewed journal and conference papers in these areas. He has received teaching awards including the KAIST Creative Teaching (Grand Prize) Award in 2012 and the KAIST ISysE…
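As an illustrative aside (a sketch for readers, not material from the talk): the classic savings heuristic mentioned in the abstract ranks pairs of service sites by how much travel is saved when they are served on one combined route instead of two separate round trips from a station. The distance function and site coordinates below are hypothetical.

```python
from itertools import combinations

def ranked_savings(depot, sites, dist):
    """Clarke-Wright savings: s(i, j) = d(depot, i) + d(depot, j) - d(i, j),
    the travel saved by serving sites i and j on one combined route
    instead of two separate out-and-back trips from the depot."""
    return sorted(
        ((dist(depot, i) + dist(depot, j) - dist(i, j), i, j)
         for i, j in combinations(sites, 2)),
        reverse=True)

# Toy example on a line: depot at 0, sites at 1, 2 and 10 (made-up data).
dist = lambda a, b: abs(a - b)
best = ranked_savings(0, [1, 2, 10], dist)[0]
# best == (4, 2, 10): merging sites 2 and 10 avoids the most backtracking.
```

The full heuristic would repeatedly merge the highest-savings pair subject to vehicle constraints; the talk combines this idea with a Voronoi decomposition for multi-station design.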

Free

CESG Fishbowl Seminar: Coded Caching and Ruzsa-Szemerédi Graphs

June 1 @ 11:30 am - 12:30 pm

Presenter: Karthikeyan Shanmugam, IBM Research AI, New York

Title: Coded Caching and Ruzsa-Szemerédi Graphs

Abstract: Coded caching is a problem where encoded broadcasts are used to satisfy users requesting popular files and having caching capabilities. Recent work by Maddah-Ali and Niesen showed that it is possible to satisfy a scaling number of users with only a constant number of broadcast transmissions by exploiting coding and caching. One of the outstanding issues is that the schemes known for this problem required the splitting of files into an exponential number of packets before the significant coding gains of caching appeared. The question of what can be achieved with polynomial subpacketization (in the number of users) has been a central open problem in this area. We resolve this problem and present the first coded caching scheme with polynomial (in fact, linear) subpacketization. We obtain a number of transmissions that is not constant, but can be any polynomial in the number of users with an exponent arbitrarily close to zero. Our central technical tool is a novel connection between Ruzsa-Szemerédi graphs and coded caching. However, this scheme requires the number of users to be very large. In recent years, the search for schemes that are efficient in terms of file size has gathered considerable interest. We show that many existing schemes that optimize the number of file sub-packets can also be cast in the framework of Ruzsa-Szemerédi graphs. We also discuss remaining open problems in tackling this important practical issue.

Biography: Karthikeyan Shanmugam is currently a Research Staff Member at IBM Research NY in the AI Science group. Previously, he was a Herman Goldstine Postdoctoral Fellow in the Math Sciences Division at IBM Research, NY. He obtained his Ph.D. in Electrical and Computer Engineering from UT Austin in 2016 under the supervision of Dr. Alex Dimakis. Prior to this, he obtained his MS degree from USC, and B.Tech and M.Tech degrees from IIT Madras. His research interests broadly lie in statistical machine learning, graph algorithms, coding theory and information theory. In machine learning, his current research focus is on causal inference, online learning and interpretability in ML. In information theory, he focuses on problems related to caching in wireless networks.
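To make the subpacketization issue concrete (an illustrative sketch, not code from the talk): in the Maddah-Ali–Niesen scheme with K users each caching a fraction M/N of the library, every file is split into C(K, t) subpackets with t = KM/N, which grows exponentially in K.

```python
from math import comb

def mn_subpacketization(K: int, M: int, N: int) -> int:
    """Subpacketization of the Maddah-Ali-Niesen coded caching scheme:
    each file splits into C(K, t) packets, where t = K*M/N counts the
    users whose caches jointly hold each packet."""
    t = K * M // N  # assumes K*M/N is an integer, the scheme's usual setting
    return comb(K, t)

# With 20 users each caching half the library, every file must be cut
# into comb(20, 10) = 184756 packets -- the blow-up that motivates the
# low-subpacketization schemes discussed in the talk.
print(mn_subpacketization(20, 1, 2))
```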

May 2018
Free

Distinguished Speaker Series: “Deep Learning: the surprise, the puzzles, and the promise…”

May 24 @ 12:00 pm - 1:00 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Abstract: The surprising success of learning with deep neural networks poses two fundamental challenges: understanding why these networks work so well, and what this success tells us about the nature of intelligence and our biological brain. Our recent Information Theory of Deep Learning shows that large deep networks achieve the optimal tradeoff between training size and accuracy, and that this optimality is achieved through the noise in the learning process. In this talk, I will mainly address the relevance of these findings to the nature of intelligence and the human brain.

Biography: Dr. Naftali Tishby is a professor of Computer Science and the incumbent of the Ruth and Stan Flinkman Chair for Brain Research at the Edmond and Lily Safra Center for Brain Science (ELSC) at the Hebrew University of Jerusalem. He is one of the leaders of machine learning research and computational neuroscience in Israel, and his numerous former students serve in key academic and industrial research positions all over the world. Tishby was the founding chair of the new computer-engineering program and a director of the Leibniz Center for Research in Computer Science at the Hebrew University. He received his PhD in theoretical physics from the Hebrew University in 1985 and was a research staff member at MIT and Bell Labs from 1985 to 1991. He has been a visiting professor at Princeton NECI, the University of Pennsylvania, UCSB, and IBM Research.

Free

Cognitive Systems: application studies and system architecture implications of AI and data analytics

May 23 @ 2:00 pm - 3:00 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Abstract: As cognitive AI systems have become increasingly capable, cloud-based services have enabled the integration of learning systems and human agents into AI solutions. Robotic agents are sophisticated mobile sensor platforms and compelling dialog agents. This talk will focus on the opportunities and challenges of integrating AI and data analytics into computer systems research; case studies from research on conversational robotic agents for the service industry and an aging-in-place lab; and system architecture, data services, and memory and compute optimizations for cognitive systems.

Biography: Dr. Kevin Nowka is the Director of IBM Research – Austin, one of IBM’s 12 global research laboratories. He leads a team of scientists and engineers working on optimized systems for big data and analytics, cognitive computing systems, cloud infrastructure, and energy-efficient systems and datacenters. He is also the IBM Senior State Executive for Texas, responsible for government, community, and university relations in Texas. He received a B.S. degree in Computer Engineering from Iowa State University, Ames, in 1986, and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1988 and 1995, respectively. He holds 79 issued US patents and has published over 70 technical papers on circuits, systems and processor design, and technology issues. He is an IBM Master Inventor and a member of the IBM Academy of Technology. Dr. Nowka is also an adjunct professor in the Electrical and Computer Engineering Department at Texas A&M University, a member of the Texas Science and Engineering Fair Advisory Board, and a member of the TEES Advisory Board.

April 2018
Free

CESG Seminar: “Topological Quantum Computation”

April 27 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Dr. Eric Rowell
Department of Mathematics, Texas A&M University

Topics to include:
– Quantum error correction
– Topological quantum computation used to overcome decoherence at the hardware level
– Decoherence as a roadblock to the construction of a scalable quantum computer

Abstract: Quantum computation refers to computational models that rely on the (theoretical) ability to create, manipulate and measure quantum states. While there are several quantum systems for which we possess these abilities, decoherence is a major practical roadblock to the construction of a scalable quantum computer. Certain kinds of local errors due to decoherence can sometimes be corrected at the software level using quantum error correction. An interesting alternative is topological quantum computation, which would overcome decoherence at the hardware level by encoding information globally. The price one pays is that the underlying quantum systems must be topological phases of matter, so that topologically non-trivial manipulations can be used as quantum gates. In this talk I will give an overview of topological quantum computation, exploring such topics as fault tolerance, universality and quantum supremacy.

Biography: Eric Rowell earned his PhD in Mathematics in 2003 from UC San Diego, studying with Hans Wenzl. After a postdoc at Indiana University with Zhenghan Wang, he came to Texas A&M in 2006 as an assistant professor, with promotions to associate and full professor in 2012 and 2017, respectively. His current research focuses on modeling topological phases of matter and analyzing their quantum computational utility. Rowell consults for Microsoft Research at Station Q, Santa Barbara, and holds a Distinguished Visiting Professorship at BICMR, Peking University.

Refreshments Provided

Free

CESG Seminar: “Algorithms, Architectures, and Testbeds for Advanced Wireless Communication Systems”

April 20 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Professor Joseph R. Cavallaro
Center for Multimedia Communication, Dept. of Electrical & Computer Engineering, Rice University

Topics to include:
– Design tools for high-level synthesis (HLS) to capture and express parallelism in wireless algorithms
– HLS applied to FPGA and ASIC synthesis, and the tradeoffs with flexibility and reuse of designs
– Discussion of computation testbeds from supercomputers through desktop GPUs to single-board systems

Abstract: Wireless communication system concepts for 5G and beyond include a variety of advanced physical-layer algorithms to provide high data rates and increased efficiency. Each algorithm presents different challenges for real-time performance based on the tradeoffs between computation, communication, and I/O bottlenecks, and area, time, and power complexity. In particular, massive MIMO systems can provide many benefits for both uplink detection and downlink beamforming as the number of base station antennas increases. Similarly, channel coding, such as LDPC, can support high data rates in many channel conditions. At the RF level, limited available spectrum is leading to noncontiguous channel allocations, where digital pre-distortion (DPD) can be used to improve power amplifier efficiency. Each of these schemes imposes complex system organization challenges in the interconnection of multiple RF transceivers with multiple memory and computation units with multiple data rates within the system. Parallel numerical methods can be applied with minimal effect on error-rate performance. Simulation acceleration environments can be used to provide thorough system performance analysis. This talk will focus on design tools for high-level synthesis (HLS) to capture and express parallelism in wireless algorithms. This also includes the mapping to GPU and multicore systems for high-speed simulation. HLS can also be applied to FPGA and ASIC synthesis; however, there exist tradeoffs in the area required versus the flexibility and reuse of designs. Heterogeneous system architectures, as expressed by systems-on-chip (SoC), attempt to address these system issues. We will conclude with a discussion of computation testbeds from supercomputers through desktop GPUs to single-board systems. The integration with radio testbeds from WARP and USRP to NI and Argos prototype massive MIMO systems will be explored.

Biography: Joseph R. Cavallaro received his B.S. from the University of Pennsylvania, M.S. from Princeton, and Ph.D. from Cornell University, all in electrical engineering. He was with AT&T Bell Laboratories. In 1988, he joined the faculty of Rice University, where he is currently a Professor of Electrical and Computer Engineering. His research interests include computer arithmetic, and DSP, GPU, FPGA, and VLSI architectures for applications in wireless communications. He served at the NSF as Director of the Prototyping Tools & Methodology Program. He was a Nokia Foundation Fellow and a Visiting Professor at the University of Oulu, Finland. He is the Director of the Center for Multimedia Communication at Rice, as well as an advisory board member of the IEEE SPS TC on Design and Implementation of Signal Processing Systems and the Chair-Elect of the IEEE CAS TC on Circuits & Systems for Communications. He is currently an Associate Editor of the IEEE Transactions on Signal…


CESG Seminar: Dr. Rei Safavi-Naini, “Long-term secure communication without computational assumptions”

April 13 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Dr. Rei Safavi-Naini
University of Calgary; Texas A&M Visiting Professor

Presentation points:
– How to provide long-term (future-proof) security for data communication
– From physical-layer models to concrete realizations of secure communication

Abstract: Security of today’s information-based societies relies on the cryptographic infrastructure of the Internet. This infrastructure is critically dependent on assumptions about the computational hardness of a number of long-studied mathematical problems, assumptions that have led to the discovery of elegant algorithms such as Diffie-Hellman key agreement and unexpected primitives such as the RSA digital signature. This infrastructure, however, will collapse if a quantum computer is built (motivating the recent surge of research into quantum-safe cryptography). A second, less talked-about problem with today’s computational solutions for secure communication is that they leave traces that make the system vulnerable to future off-line attacks. Building security on physical-layer assumptions overcomes both of these problems. In this talk, we motivate cryptography based on physical assumptions, look at a number of physical-layer assumptions that aim to capture adversaries’ power in communication systems, give cryptographic security definitions for these systems, and outline constructions with provable security. We also show the universality of these assumptions through their applications to channel and network communication as well as storage systems. Finally, we discuss the challenges of implementing these systems.

Biography: Rei Safavi-Naini is the AITF Strategic Research Chair in Information Security and the Director of the Institute for Security, Privacy and Information Assurance at the University of Calgary. She has co-authored over 350 articles in refereed journals and conferences and has given numerous keynote and invited talks, most recently at IEEE MASCOTS 2017 and the joint session of ICITS (International Conference on Information Theoretic Security) and CANS (Cryptology And Network Security) 2017. She has served as Associate Editor of ACM Transactions on Information and System Security (TISSEC), IEEE Transactions on Dependable and Secure Computing, and IEEE Transactions on Information Theory (currently for the second term), as well as IET Information Security and the Journal of Mathematical Cryptology. She has been Program Chair/Co-Chair of ACM CCSW 2014, Financial Cryptography 2014, ACNS 2013 and Crypto 2012. Her current research interests include cryptographic solutions for long-term secure communication and the emerging security challenges of highly distributed networked systems such as IoT and distributed ledgers. During Spring 2018, she is a Visiting Professor in the Department of Computer Science and Engineering at Texas A&M University, College Station.

Snacks Provided

Free

CESG Eminent Scholar Series: Dr. Sarma Vrudhula, “Embedding Threshold Logic into ASICs and FPGAs for Improving Performance, Power and Area”

April 9 @ 11:30 am - 12:30 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Dr. Sarma Vrudhula
School of Computing, Informatics and Decision Systems Engineering; Director, NSF Center for Embedded Systems; Arizona State University, Tempe, AZ

Presentation points:
– Embedding perceptrons into digital CMOS logic circuits
– Improving the power, performance and area of ASICs while following conventional, commercial design methodologies
– Threshold-logic-enhanced field programmable gate arrays for reducing power and area
– Non-volatile threshold logic with improved energy efficiency

Abstract: This talk will present a new approach to reduce the dynamic power, leakage, and area of application-specific integrated circuits without sacrificing performance. The approach is based on a design of configurable threshold logic gates (TLGs) (a.k.a. perceptrons) and their seamless integration with a conventional standard-cell design flow. The starting point is the design of a standard-cell library of configurable circuits for implementing threshold functions. The library consists of a small number of cells, each of which can compute a set of complex threshold functions that would otherwise require a multilevel network. The function realized by a given threshold gate is determined by how signals are mapped to its inputs. A simple method for mapping signals to the inputs of the threshold gate is presented that accounts for delay and power. Next is an algorithm that replaces a subset of flip-flops, and portions of their logic cones, in a conventional logic netlist with threshold gates from the library. The resulting circuits, with both conventional gates and TLGs (called hybrid circuits), are placed and routed using commercial tools. We demonstrate significant reductions (using post-layout simulations and silicon implementation) in the power, leakage, and area of the hybrid circuits when compared with conventional logic circuits. The talk will also briefly explore a number of alternate uses of threshold logic, including conventional FPGAs enhanced with TLGs, field programmable threshold logic arrays (FPTLAs), non-volatile logic (NVL) with threshold gates, and obfuscation of logic.

Biography: Sarma Vrudhula is a Professor of Computer Science and Engineering at Arizona State University, Tempe, AZ, and the director of the NSF I/UCRC Center for Embedded Systems. Prior to joining ASU, he held faculty positions in ECE at the University of Arizona and at the University of Southern California. He was the founding Director of the NSF Center for Low Power Electronics at the University of Arizona. His active research areas include energy-efficient design of neural networks, threshold-logic-based digital design, new circuit architectures with emerging technologies for non-volatile computation, energy management of mobile systems, and statistical methods for the analysis and optimization of process variations. He holds a Bachelor of Mathematics degree from the University of Waterloo, Ontario, Canada, and M.S.E.E. and Ph.D. degrees in Electrical and Computer Engineering from the University of Southern California.

Pizza Provided
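As a quick illustration of the core primitive (a sketch for readers unfamiliar with threshold logic, not code from the talk): a threshold logic gate outputs 1 exactly when the weighted sum of its binary inputs reaches a threshold, and a single configurable gate realizes different Boolean functions depending on how it is configured and how signals are mapped to its inputs.

```python
def tlg(weights, threshold, inputs):
    """Threshold logic gate (perceptron): 1 iff sum_i w_i * x_i >= T."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# The same unit-weight gate computes different functions as the
# threshold (configuration) changes:
and2 = lambda a, b: tlg([1, 1], 2, [a, b])            # AND: both inputs needed
or2 = lambda a, b: tlg([1, 1], 1, [a, b])             # OR: either input suffices
maj3 = lambda a, b, c: tlg([1, 1, 1], 2, [a, b, c])   # 3-input majority
```

A majority gate like `maj3` would otherwise need a multilevel AND/OR network, which is the source of the area and power savings the abstract describes.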


CESG Seminar: Dr. Andreas Gerstlauer, “Learning-Based Power and Performance Prediction for Heterogeneous System Design”

April 6 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

Dr. Andreas Gerstlauer
The University of Texas at Austin

Presentation points:
* Applying machine learning methods to computer system design
* Fast and accurate performance and power prediction of hardware/software systems
* Synthesizing learning-based proxy models that predict the performance/power of a target system from executions on a different host

Abstract: Next to performance, early power and energy estimation is a key challenge in the design of heterogeneous computer systems today. Traditional simulation-based methods are often too slow, while existing analytical models are often not sufficiently accurate. In this talk, I will present our work on bridging this gap by providing fast yet accurate alternatives for power and performance modeling of software and hardware. In the past, we have pioneered so-called source-level and host-compiled simulation techniques that are based on back-annotation of source code with estimated target metrics; the annotated code is then further wrapped into abstract, lightweight operating system and platform simulation models to be natively executed on a simulation host. More recently, however, we have studied alternative approaches in which we employ advanced machine learning techniques to synthesize analytical proxy models that can accurately predict the time-varying power and performance of an application running on a target platform, purely from data obtained while executing the application natively on a completely different host machine. We have developed such learning-based approaches for both hardware and software. On the hardware side, learning-based models for white-box and black-box IPs reach simulation speeds of 1 Mcycles/s at 97% accuracy. On the software side, depending on the granularity at which prediction is performed, cross-platform prediction can achieve more than 95% accuracy at more than 3 GIPS of equivalent simulation throughput.

Biography: Andreas Gerstlauer is an Associate Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level (ESL) design tools. Commercial derivatives of such tools are in use at the Japan Aerospace Exploration Agency (JAXA), NEC Toshiba Space Systems and others. Dr. Gerstlauer is a co-author of 3 books and more than 100 publications. His work was recognized with the 2016 DAC Best Research Paper Award, the 2015 SAMOS Best Paper Award, and as one of the most influential contributions in 10 years at DATE in 2008. He received a 2016-2017 Humboldt Research Fellowship and has presented numerous industry and conference tutorials. He currently serves as an Associate Editor for the ACM TECS and TODAES journals, and he has served as Topic, Track or Program Chair of major international conferences such as DAC, DATE, ICCAD and CODES+ISSS. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.

Refreshments Provided
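To make the proxy-model idea concrete (an illustrative sketch with made-up data, not the speaker's method or tooling): the simplest possible learning-based proxy fits a model mapping a feature observed on the host (say, an instruction count) to a metric of the target (say, power), then predicts target power for new host runs without ever simulating the target.

```python
def fit_linear_proxy(host_feature, target_metric):
    """Ordinary least squares for a one-feature proxy model:
    target ~ a * host_feature + b."""
    n = len(host_feature)
    mx = sum(host_feature) / n
    my = sum(target_metric) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(host_feature, target_metric))
         / sum((x - mx) ** 2 for x in host_feature))
    b = my - a * mx
    return a, b

# Hypothetical training data: host instruction counts vs. measured
# target power, following power = 2 * count + 1 exactly.
counts = [1.0, 2.0, 3.0, 4.0]
power = [3.0, 5.0, 7.0, 9.0]
a, b = fit_linear_proxy(counts, power)
predict = lambda c: a * c + b   # fast "proxy" for the slow target measurement
```

The approaches in the talk use far richer features and advanced learning techniques, but the structure is the same: train on paired host/target observations, then predict from host-only runs.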

March 2018

CESG Seminar: “Challenges and Opportunities in Advanced R&D at a Global Private Technology Company”

March 30 @ 4:10 pm - 5:10 pm
WEB, Room 236-C,
Wisenbaker Engineering Building

“Challenges and Opportunities in Advanced R&D at a Global Private Technology Company”
Dr. Jian Li, Research Director and Technology Strategist at Huawei Technologies

Abstract: In an era of continuing technology acceleration, how to survive is the question that keeps us awake at night. This is likely the very reason that Huawei has survived over decades and will continue to excel. I will introduce a few research topics of interest in Information and Communication Technology (ICT), with some recent updates. I will also compare the R&D endeavors of public and private technology companies. As always, we look forward to win-win collaborations with distinguished research universities and organizations such as TAMU.

Biography: Dr. Jian Li is a research director and technology strategist in charge of the North America region at Huawei Technologies. He was the chief architect of Huawei’s FusionInsight big data platform. He is currently leading strategy and R&D efforts in ICT technologies, working with global teams around the world. Before joining Huawei, he was an executive architect and a research scientist with IBM, where he worked on advanced R&D, multi-site product development and global customer engagements on computer systems and analytics solutions with significant revenue growth. A frequent presenter at major industry and academic conferences around the world, he holds over 30 patents and has published over 30 peer-reviewed papers. He earned a Ph.D. in electrical and computer engineering from Cornell University. He has also held adjunct or visiting scholar positions at Texas A&M University, the Chinese Academy of Sciences, and Tsinghua University, among other similar academic roles. In this capacity, he continues to collaborate with leading academic researchers and industry experts.
