Room 1034 ETB
Dr. Nick Duffield/TAMU
Abstract: One response to the proliferation of massive datasets in many fields has been to develop ingenious ways to throw resources at the problem, for example, using massive fault-tolerant storage architectures, supercomputing platforms, and parallel graph computation models. However, not all environments can support this scale of resources, and not all queries need an exact response. Massive and diverse operational datasets have been employed by large Internet Service Providers for a number of years, and mathematical methods have underpinned their response to the challenges of data scale, incompleteness, and complexity that are prevalent both in ISP data and in big data more generally. This talk reviews some recent progress in this direction and surveys new opportunities for developing methods for big data.
BIO: Nick Duffield is a Professor in the Department of Electrical and Computer Engineering at Texas A&M University. From 2013 until 2014, he was a Research Professor at DIMACS (the Center for Discrete Mathematics and Theoretical Computer Science) at Rutgers University, New Jersey, USA. From 1995 until 2013, he worked at AT&T Labs-Research, where he was a Distinguished Member of Technical Staff and an AT&T Fellow. He works on the acquisition, analysis, and applications of Big Data to communication networks and beyond. Dr. Duffield has twice received the ACM SIGMETRICS Test of Time Award (in 2012 and 2013), and he is a Fellow of the IEEE.