Luna Dong, Principal Scientist, Amazon

Xin Luna Dong is a Principal Scientist at Amazon, leading the effort to construct the Amazon Product Graph. She was one of the major contributors to the Knowledge Vault project and led the Knowledge-Based Trust project, which the Washington Post dubbed the “Google Truth Machine.” She has won the VLDB Early Career Research Contribution Award for “advancing the state of the art of knowledge fusion” and the Best Demo award at SIGMOD 2005. She has co-authored the book “Big Data Integration,” published 65+ papers in top conferences and journals, and given 20+ keynotes, invited talks, and tutorials. She is the PC co-chair for SIGMOD 2018 and WAIM 2015, and has served as an area chair for SIGMOD 2017, CIKM 2017, SIGMOD 2015, ICDE 2013, and CIKM 2011.

Abstract summary

Leave No Valuable Data Behind: the Crazy Ideas and the Business:
With the mission “leave no valuable data behind,” we developed knowledge-fusion techniques to guarantee the correctness of the knowledge we collect. This talk starts by describing a few crazy ideas we have tested. The first, known as “Knowledge Vault,” used 15 extractors to automatically extract knowledge from 1B+ webpages, obtaining 3B+ distinct (subject, predicate, object) knowledge triples and predicting well-calibrated probabilities for the extracted triples. The second, known as “Knowledge-Based Trust,” estimated the trustworthiness of 119M webpages and 5.6M websites based on the correctness of their factual information. We then present how we bring these ideas to business by filling the gap between the knowledge in existing knowledge bases and the knowledge in the world.
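To make the abstract's fusion idea concrete, here is a minimal sketch (a hypothetical illustration, not the actual Knowledge Vault model) of combining several extractors' confidence scores for one (subject, predicate, object) triple into a single probability, using a simple noisy-OR rule:

```python
# Hypothetical sketch only: fuse per-extractor confidences for one
# knowledge triple with a noisy-OR combination. Knowledge Vault's real
# calibration model is more sophisticated; this just shows the shape
# of the problem.

def fuse_noisy_or(extractor_scores):
    """The triple is false only if every extractor's evidence is
    spurious, assuming extractors err independently."""
    p_false = 1.0
    for p in extractor_scores:
        p_false *= (1.0 - p)
    return 1.0 - p_false

# Illustrative data, not from the talk.
triple = ("Barack Obama", "born_in", "Honolulu")
scores = [0.6, 0.7, 0.5]  # confidences from three extractors
print(triple, round(fuse_noisy_or(scores), 3))  # 1 - 0.4*0.3*0.5 = 0.94
```

Note the design choice: noisy-OR assumes independent extractors, so correlated extraction errors would overstate the fused probability.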

Ryan Calo, Assistant Professor, University of Washington

Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at the Center for Internet and Society (CIS). A nationally recognized expert in law and emerging technology, Ryan has had his work appear in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. Ryan serves on several advisory committees, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.

Abstract summary

Alex Korbonits, Data Scientist, Remitly

Alex Korbonits is a Data Scientist at Remitly, Inc., where he works extensively on feature extraction and putting machine learning models into production. Outside of work, he loves Kaggle competitions, is diving deep into topological data analysis, and is exploring machine learning on GPUs. Alex is a graduate of the University of Chicago with degrees in Mathematics and Economics.

Abstract summary

Margaret Mitchell, Senior Research Scientist, Google’s Research & Machine Intelligence group

Margaret Mitchell is a Senior Research Scientist in Google’s Research & Machine Intelligence group, working on advancing artificial intelligence towards positive goals. She works on vision-language and grounded language generation, focusing on how to help computers communicate based on what they can process. Her work combines computer vision, natural language processing, social media, statistical methods, and insights from cognitive science. Before Google, she was a researcher at Microsoft Research and a founding member of its “Cognition” group, where her work focused on advancing artificial intelligence, with a specific focus on generating language from visual inputs. Before MSR, she was a postdoctoral researcher at the Johns Hopkins University Center of Excellence, where she mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme.

Margaret received her PhD in Computer Science from the University of Aberdeen, supervised by Kees van Deemter and Ehud Reiter, with external supervision from the University of Edinburgh (Ellen Bard) and Oregon Health & Science University (OHSU) (Brian Roark and Richard Sproat). She worked on referring expression generation from statistical, mathematical, and cognitive-science perspectives, and also prototyped assistive/augmentative technology for people with language generation difficulties. Her thesis work was on generating natural, human-like reference to visible, everyday objects. She spent a good chunk of 2008 earning a Master’s in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. Simultaneously (2005–2012), she worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. Her title changed over time (research assistant/associate/visiting scholar), but throughout, she worked under Brian Roark on technology that leverages syntactic and phonetic characteristics to aid people with neurological disorders.

She continues to balance her time between language generation, applications for clinical domains, and core AI research.

Abstract summary

Serena Yeung, PhD Student, Stanford

Serena is a Ph.D. student in the Stanford Vision Lab, advised by Prof. Fei-Fei Li. Her research interests are in computer vision, machine learning, and deep learning. She is particularly interested in the areas of video understanding, human action recognition, and healthcare applications. She interned at Facebook AI Research in Summer 2016.

Before starting her Ph.D., she received a B.S. in Electrical Engineering in 2010 and an M.S. in Electrical Engineering in 2013, both from Stanford. She also worked as a software engineer at Rockmelt (acquired by Yahoo) from 2009 to 2011.

Abstract summary

Tianqi Chen, Computer Science PhD Student, University of Washington

Tianqi holds a bachelor’s degree in Computer Science from Shanghai Jiao Tong University (SJTU), where he was a member of ACM Class, now part of Zhiyuan College. He earned his master’s degree at Shanghai Jiao Tong University with the Apex Data and Knowledge Management Lab before joining the University of Washington as a PhD student. He has held several prestigious internships and visiting-scholar positions: at Google on the Brain Team, at GraphLab (where he authored the boosted tree and neural net toolkits), at Microsoft Research Asia in the Machine Learning Group, and at the Digital Enterprise Research Institute in Galway, Ireland. What really excites Tianqi is what can be enabled when we bring advanced learning techniques and systems together. He pushes the envelope on deep learning, knowledge transfer, and lifelong learning. His PhD is supported by a Google PhD Fellowship.

Abstract summary

Hanie Sedghi, Research Scientist at Allen Institute for Artificial Intelligence

Hanie Sedghi is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Her research interests include large-scale machine learning, high-dimensional statistics, and probabilistic models. More recently, she has been working on inference and learning in latent variable models. She received her Ph.D. from the University of Southern California with a minor in Mathematics in 2015. During her Ph.D., she was also a visiting researcher at the University of California, Irvine, working with Professor Anima Anandkumar. She received her B.Sc. and M.Sc. degrees from Sharif University of Technology, Tehran, Iran.

Abstract summary

Beating Perils of Non-convexity: Guaranteed Training of Neural Networks using Tensor Methods:
Neural networks have revolutionized performance across multiple domains such as computer vision and speech recognition. However, training a neural network is a highly non-convex problem, and conventional stochastic gradient descent can get stuck in spurious local optima. We propose a computationally efficient method for training neural networks that also has guaranteed risk bounds. It is based on tensor decomposition, which is guaranteed to converge to the globally optimal solution under mild conditions. We explain how this framework can be leveraged to train feedforward and recurrent neural networks.
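The tensor-decomposition machinery the abstract refers to can be illustrated with its simplest special case (a sketch under simplifying assumptions, not the talk’s actual training algorithm): higher-order power iteration recovering the factors of an exactly rank-1 third-order tensor with NumPy.

```python
# Sketch only: recover the factors of a rank-1 tensor T = a (x) b (x) c
# via alternating higher-order power iteration. The guaranteed-training
# method in the talk decomposes moment tensors of data, not a toy
# tensor like this one.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(4) for _ in range(3))
T = np.einsum('i,j,k->ijk', a, b, c)  # rank-1 third-order tensor

# Random initialization of the factor estimates.
u, v, w = (rng.standard_normal(4) for _ in range(3))
for _ in range(30):
    # Contract T against two factors to update the third, then normalize.
    u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)

# The remaining scalar weight, then the reconstruction.
scale = np.einsum('ijk,i,j,k->', T, u, v, w)
T_hat = scale * np.einsum('i,j,k->ijk', u, v, w)
print(np.allclose(T, T_hat))  # True: factors recovered up to sign/scale
```

For an exactly rank-1 tensor a single sweep already aligns the estimates (up to sign) with the true factors; the conditions in the talk (“mild conditions”) are what make analogous guarantees hold for the noisy, higher-rank moment tensors arising from real data.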

Byron Galbraith, Chief Data Scientist, Talla

Byron Galbraith is the Chief Data Scientist and co-founder of Talla, where he works to translate the latest advancements in machine learning and natural language processing to build AI-powered conversational agents. Byron has a PhD in Cognitive and Neural Systems from Boston University and an MS in Bioinformatics from Marquette University. His research expertise includes brain-computer interfaces, neuromorphic robotics, spiking neural networks, high-performance computing, and natural language processing. Byron has also held several software engineering roles including back-end system engineer, full stack web developer, office automation consultant, and game engine developer at companies ranging in size from a two-person startup to a multi-national enterprise.

Abstract summary

Neural Information Retrieval and Conversational Question Answering: