Speakers:

Tammy Kolda, Distinguished Member of Technical Staff, Sandia National Laboratories

Tamara Kolda is a Distinguished Member of Technical Staff at Sandia National Laboratories in Livermore, California, where she works on a broad range of problems including network modeling and analysis, multilinear algebra and tensor decompositions, data mining, and cybersecurity. She has also worked in optimization, nonlinear solvers, parallel computing, and the design of scientific software. She has authored numerous software packages, including the well-known Tensor Toolbox for MATLAB. Before joining Sandia, Kolda held the Householder Postdoctoral Fellowship in Scientific Computing at Oak Ridge National Laboratory. She has received several awards, including a 2003 Presidential Early Career Award for Scientists and Engineers (PECASE) and two best paper awards (ICDM’08 and SDM’13), and she is a Distinguished Member of the Association for Computing Machinery (ACM). She is an elected member of the Society for Industrial and Applied Mathematics (SIAM) Board of Trustees, Section Editor for the Software and High Performance Computing section of the SIAM Journal on Scientific Computing, and Associate Editor for the SIAM Journal on Matrix Analysis and Applications. She received her Ph.D. in applied mathematics from the University of Maryland at College Park in 1997.

Abstract Summary:

Doug Eck, Research Scientist, Google

Doug leads Magenta, a Google Brain project working to generate music, video, images, and text using deep learning and reinforcement learning. A main goal of Magenta is to better understand how AI can enable artists and musicians to express themselves in innovative new ways. Before Magenta, Doug led the Google Play Music search and recommendation team. From 2003 to 2010, Doug was a faculty member at the University of Montreal’s MILA machine learning lab, where he worked on expressive music performance and automatic tagging of music audio.

Abstract Summary:

Xavier Amatriain, VP of Engineering, Quora

Xavier Amatriain is VP of Engineering at Quora, where he leads the team building the best source of knowledge on the Internet. With over 50 publications in different fields, Xavier is best known for his work on machine learning in general and recommender systems in particular. Before Quora, he was Research/Engineering Director at Netflix, where he led the team building the famous Netflix recommendation algorithms. Previously, Xavier was a Research Scientist at Telefonica Research and a Research Director at UCSB. He has also lectured at different universities both in the US and Spain and is frequently invited to speak at conferences and companies.

Abstract Summary:

Ted Willke, Sr. Principal Engineer, Intel

Ted Willke leads a team that researches large-scale machine learning and data mining techniques in Intel Labs. His research interests include parallel and distributed systems, image processing, machine learning, graph analytics, and cognitive neuroscience. Ted is also a co-principal investigator in a multi-year grand challenge project on real-time brain decoding with the Princeton Neuroscience Institute. Previously, he founded an Intel venture focused on graph analytics for data science that is now an Intel-supported open source project. In 2014, he won Intel’s highest award for this effort. In 2015, he was appointed to the Science & Technology Advisory Committee of the US Department of Homeland Security. Ted holds a doctorate in electrical engineering from Columbia University, a master’s from the University of Wisconsin-Madison, and a bachelor’s from the University of Illinois.

Abstract Summary:


Anima Anandkumar, Principal Scientist, Amazon Web Services; Endowed Professor, Caltech

Anima Anandkumar is a principal scientist at Amazon Web Services and is currently on leave from UC Irvine, where she is an associate professor. Her research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards, including the Alfred P. Sloan Fellowship, the Microsoft Faculty Fellowship, a Google Research Award, ARO and AFOSR Young Investigator Awards, the NSF CAREER Award, the Early Career Excellence in Research Award at UCI, the Best Thesis Award from the ACM SIGMETRICS society, the IBM Fran Allen PhD Fellowship, and several best paper awards. She has been featured in a number of forums, such as the Quora ML session, Huffington Post, Forbes, O’Reilly Media, and others. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, an assistant professor at UC Irvine between 2010 and 2016, and a visiting researcher at Microsoft Research New England in 2012 and 2014.

Abstract Summary:

Josh Wills, Head of Data Engineering, Slack

Josh Wills is the head of data engineering at Slack. Prior to Slack, he built and led data science teams at Cloudera and Google. He is the founder of the Apache Crunch project, co-authored an O’Reilly book on advanced analytics with Apache Spark, and wrote a popular tweet about data scientists.

Abstract Summary:

Franziska Bell, Data Science Manager on the Platform Team, Uber

Franziska Bell is a Data Science Manager on the Platform Team at Uber, where she leads Applied Machine Learning, Forecasting Platform, Anomaly Detection, Customer Support Data Science, and Communications Platform Data Science.

Before Uber, Franziska was a postdoc at Caltech, where she developed a novel, highly accurate approximate quantum molecular dynamics theory to calculate chemical reactions for large, complex systems such as enzymes. Franziska earned her Ph.D. in theoretical chemistry from UC Berkeley, focusing on the development of highly accurate yet computationally efficient approaches, which helped unravel the mechanism of non-silicon-based solar cells and the properties of organic conductors.

Abstract Summary:

Dr. Steve Liu, Chief Scientist, Tinder

Dr. Steve Liu is chief scientist at Tinder. In his role, he leads research innovation and applies novel technologies to new product developments.

He is currently a professor and William Dawson Scholar at the McGill University School of Computer Science. He has also served as a visiting research scientist at HP Labs. Dr. Liu has published more than 280 research papers in peer-reviewed international journals and conference proceedings, and he has authored and co-authored several books. Over the course of his career, his research has focused on big data, machine learning/AI, computing systems and networking, the Internet of Things, and more. His research has been referenced in articles published by The New York Times, IDG/Computer World, The Register, Business Insider, Huffington Post, CBC, NewScientist, MIT Technology Review, the McGill Daily, and others. He is a recipient of the Outstanding Young Canadian Computer Science Researcher Prize from the Canadian Association of Computer Science and of the Tomlinson Scientist Award from McGill University.

He is serving or has served on the editorial boards of ACM Transactions on Cyber-Physical Systems (TCPS), IEEE/ACM Transactions on Networking (ToN), IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Vehicular Technology (TVT), and IEEE Communications Surveys and Tutorials (COMST). He has also served on the organizing committees of more than 38 major international conferences and workshops.

Dr. Liu received his Ph.D. in Computer Science with multiple honors from the University of Illinois at Urbana-Champaign. He received his Master’s degree in Automation and BSc degree in Mathematics from Tsinghua University.

Abstract Summary:

Dr. June Andrews, Principal Data Scientist, Wise/GE Digital

June Andrews is a Principal Data Scientist at Wise/GE Digital, working on a machine learning and data science platform for the Industrial Internet of Things, which includes aviation, trains, and power plants. Previously, she worked at Pinterest, spearheading the Data Trustworthiness and Signals Program to create a healthy data ecosystem for machine learning. She has also led efforts at LinkedIn on growth, engagement, and social network analysis to increase economic opportunity for professionals. June holds degrees in applied mathematics, computer science, and electrical engineering from UC Berkeley and Cornell.

Abstract Summary:

Counterintuitive Machine Learning for the Industrial Internet of Things:
The Industrial Internet of Things (IIoT) is the infrastructure and data flow built around the world’s most valuable things, like airplane engines, medical scanners, nuclear power plants, and oil pipelines. These machines and systems require far greater uptime, security, governance, and regulation than the IoT landscape built around consumer activity. In the IIoT, the cost of being wrong can be the catastrophic loss of life on a massive scale. Nevertheless, given the growing scale brought by the digitalization of industrial assets, there is clearly a growing role for machine learning to help augment and automate human decision making. It is against this backdrop that traditional machine learning techniques must be adapted and need-based innovations created. We see industrial machine learning as distinct from consumer machine learning, and in this talk we will cover the counterintuitive changes to featurization, metrics for model performance, and human-in-the-loop design required to use machine learning in an industrial environment.
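
As a purely illustrative sketch of how model-performance metrics shift in the IIoT setting (not material from the talk), the example below scores a hypothetical failure-detection model with an asymmetric cost function, where a missed failure is assumed to be orders of magnitude more expensive than a false alarm. The class name, costs, and data are assumptions made for the demonstration.

```java
// Hypothetical example: cost-weighted evaluation for industrial failure detection,
// where accuracy-style metrics are replaced by an asymmetric operational cost.
public final class AsymmetricCost {

    /**
     * Returns the total operational cost of a batch of predictions.
     * actual/predicted are boolean arrays where true means "failure".
     * The cost values are illustrative assumptions, not real figures.
     */
    static double totalCost(boolean[] actual, boolean[] predicted,
                            double falseAlarmCost, double missedFailureCost) {
        double cost = 0.0;
        for (int i = 0; i < actual.length; i++) {
            if (!actual[i] && predicted[i]) {
                cost += falseAlarmCost;       // unnecessary inspection
            } else if (actual[i] && !predicted[i]) {
                cost += missedFailureCost;    // undetected failure
            }
        }
        return cost;
    }

    public static void main(String[] args) {
        boolean[] actual    = {false, true, false, true};
        boolean[] predicted = {true,  true, false, false};
        // Assume a missed failure costs 1000x a false alarm.
        System.out.println(totalCost(actual, predicted, 1.0, 1000.0));
    }
}
```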

Daniel Shank, Data Scientist, Talla

Abstract Summary:

Neural Turing Machines (NTMs) are a landmark architecture in the field of machine learning. A differentiable version of a classic model of computation designed by Alan Turing, NTMs open up the possibility of using machine learning to learn algorithms that can access an external memory. However, more so than many other popular deep learning architectures, NTMs are notoriously difficult to implement effectively. This presentation will provide an overview of the NTM architecture as well as tips and tricks for implementing it using conventional machine learning frameworks. It will also describe how NTMs can be used for standard machine learning tasks and will touch on the Differentiable Neural Computer, the follow-up architecture recently published in Nature.
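
For readers unfamiliar with the mechanism, here is a minimal sketch (not from the talk) of the NTM's content-based addressing and read step in plain Java: cosine similarity between a key vector and each memory row, sharpened by a key strength, normalized with a softmax into read weights, and used to take a weighted read of memory. Real implementations operate on batched tensors inside a deep learning framework; the names and values below are illustrative.

```java
import java.util.Arrays;

/** Toy demonstration of NTM content-based addressing: every step is
 *  differentiable, which is what lets the controller be trained end to end. */
public final class NtmContentAddressing {

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-8);
    }

    /** Softmax over beta-scaled cosine similarities -> read weights. */
    static double[] readWeights(double[][] memory, double[] key, double beta) {
        double[] scores = new double[memory.length];
        for (int i = 0; i < memory.length; i++) {
            scores[i] = beta * cosine(memory[i], key);
        }
        double max = Arrays.stream(scores).max().orElse(0);
        double sum = 0;
        double[] w = new double[scores.length];
        for (int i = 0; i < scores.length; i++) {
            w[i] = Math.exp(scores[i] - max);
            sum += w[i];
        }
        for (int i = 0; i < w.length; i++) w[i] /= sum;
        return w;
    }

    /** Read vector: weighted sum of memory rows. */
    static double[] read(double[][] memory, double[] weights) {
        double[] r = new double[memory[0].length];
        for (int i = 0; i < memory.length; i++)
            for (int j = 0; j < r.length; j++)
                r[j] += weights[i] * memory[i][j];
        return r;
    }

    public static void main(String[] args) {
        double[][] memory = {{1, 0, 0}, {0, 1, 0}, {0.9, 0.1, 0}};
        double[] key = {1, 0, 0};
        double[] w = readWeights(memory, key, 5.0);
        System.out.println(Arrays.toString(w));
        System.out.println(Arrays.toString(read(memory, w)));
    }
}
```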

Suneel Marthi, Sr. Principal Engineer, Red Hat Inc.

Suneel is a Senior Principal Engineer in the Office of the CTO at Red Hat. He is a member of the Apache Software Foundation and a committer on several Apache projects, including Apache Mahout, Apache OpenNLP, Apache Flink, and Apache MXNet. He has previously presented at Flink Forward, Hadoop Summit Europe, Berlin Buzzwords, and Apache Big Data.

Abstract Summary:

Deriving Actionable Insights from High-Volume Media Streams:
Media analysts have to analyze high volumes of real-time news feeds and social media streams, which is often a tedious process because they need to write search profiles for entities. Python tools like NLTK do not scale to large production data sets and cannot be plugged into distributed, scalable frameworks like Apache Flink.

Apache Flink, being a streaming-first engine, is ideally suited for ingesting multiple streams of news feeds, social media, blogs, etc., and for performing streaming analytics on the various feeds. Natural Language Processing tools like Apache OpenNLP can be plugged into Flink streaming pipelines to perform common NLP tasks such as Named Entity Recognition (NER), chunking, and text classification.

In this talk, we’ll build a real-time media analyzer that performs Named Entity Recognition (NER) on the individual incoming streams, calculates the co-occurrences of the named entities, aggregates them across multiple streams, indexes the results into a search engine, and queries the results for actionable insights. We’ll also show how to handle multilingual documents when calculating co-occurrences.

NLP practitioners will come away from this talk with a better understanding of how the various Apache OpenNLP components can help in processing large streams of data feeds and can easily be plugged into a highly scalable and distributed framework like Apache Flink.
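
As a rough sketch of the pipeline shape described above (not the speaker's actual code), the example below runs OpenNLP's pre-trained person-name finder inside a Flink flatMap and counts entity co-occurrences per document with a keyed sum. The in-memory source, model path, and class names are assumptions for the demo; a production job would read from a feed connector such as Kafka and index the aggregated results into a search engine.

```java
import java.io.FileInputStream;
import java.util.Arrays;
import java.util.TreeSet;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

/** Sketch: OpenNLP NER inside a Flink streaming job, counting co-occurring names. */
public class EntityCooccurrenceJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a real news/social feed source (e.g., a Kafka connector).
        env.fromElements(
                "Angela Merkel met Emmanuel Macron in Berlin.",
                "Emmanuel Macron and Angela Merkel discussed trade.")
           .flatMap(new NerCooccurrence())
           .keyBy(0)      // key by the entity pair
           .sum(1)        // running co-occurrence count
           .print();

        env.execute("Entity co-occurrence over media streams");
    }

    /** Extracts person entities with OpenNLP and emits sorted entity pairs per document. */
    static class NerCooccurrence extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
        private transient NameFinderME nameFinder;

        @Override
        public void open(Configuration parameters) throws Exception {
            // "en-ner-person.bin" is a standard pre-trained OpenNLP model; local path is an assumption.
            nameFinder = new NameFinderME(
                    new TokenNameFinderModel(new FileInputStream("en-ner-person.bin")));
        }

        @Override
        public void flatMap(String document, Collector<Tuple2<String, Integer>> out) {
            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(document);
            Span[] spans = nameFinder.find(tokens);
            String[] entities = new TreeSet<>(
                    Arrays.asList(Span.spansToStrings(spans, tokens))).toArray(new String[0]);
            nameFinder.clearAdaptiveData();
            // Emit each unordered pair of distinct entities found in the same document.
            for (int i = 0; i < entities.length; i++) {
                for (int j = i + 1; j < entities.length; j++) {
                    out.collect(Tuple2.of(entities[i] + " & " + entities[j], 1));
                }
            }
        }
    }
}
```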
