Ryan Calo, Assistant Professor, University of Washington
Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at CIS. A nationally recognized expert in law and emerging technology, Ryan has had his work featured in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets. He serves on several advisory committees, including those of the Electronic Frontier Foundation, the Electronic Privacy Information Center, and the Future of Privacy Forum. He co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence and serves on the program committee of National Robotics Week.
Artificial Intelligence Law and Policy: Key Challenges:
These remarks are part of an ongoing project exploring key challenges for artificial intelligence law and policy.
Concerns over artificial intelligence are nothing new. In the nineteen-eighties, during the lead-up to the so-called AI Winter, when the field failed to deliver on its grander promises, headlines warned that robots would take our jobs (assuming Skynet didn’t kill us first). If policymakers were called upon to intervene, none did.
Luna Dong, Principal Scientist, Amazon
Xin Luna Dong is a Principal Scientist at Amazon, leading efforts to construct the Amazon Product Graph. She was one of the major contributors to the Knowledge Vault project, and led the Knowledge-Based Trust project, which the Washington Post called the “Google Truth Machine”. She has won the VLDB Early Career Research Contribution Award for “advancing the state of the art of knowledge fusion”, and the Best Demo Award at SIGMOD 2005. She co-authored the book “Big Data Integration”, has published 65+ papers in top conferences and journals, and has given 20+ keynotes, invited talks, and tutorials. She is PC co-chair for SIGMOD 2018 and WAIM 2015, and has served as an area chair for SIGMOD 2017, CIKM 2017, SIGMOD 2015, ICDE 2013, and CIKM 2011.
Leave No Valuable Data Behind: the Crazy Ideas and the Business:
With the mission “leave no valuable data behind”, we developed knowledge fusion techniques to guarantee the correctness of the knowledge we collect. This talk starts by describing a few crazy ideas we have tested. The first, known as “Knowledge Vault”, used 15 extractors to automatically extract knowledge from 1B+ webpages, obtaining 3B+ distinct (subject, predicate, object) knowledge triples and predicting well-calibrated probabilities for the extracted triples. The second, known as “Knowledge-Based Trust”, estimated the trustworthiness of 119M webpages and 5.6M websites based on the correctness of their factual information. We then present how we bring these ideas to business, filling the gap between the knowledge in existing knowledge bases and the knowledge in the world.
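As a toy illustration of the fusion step, here is a minimal sketch (not the actual Knowledge Vault code) of combining per-extractor confidence scores for a single (subject, predicate, object) triple with a simple noisy-or rule; the triple and scores below are invented:

```python
# Illustrative sketch: fusing confidence scores from multiple
# extractors for the same knowledge triple via noisy-or, which
# assumes each extractor is an independent noisy signal.

def noisy_or(confidences):
    """Probability the triple is true if each extractor
    independently supports it with the given confidence."""
    p_all_wrong = 1.0
    for c in confidences:
        p_all_wrong *= (1.0 - c)
    return 1.0 - p_all_wrong

# A (subject, predicate, object) triple seen by three extractors:
triple = ("Barack Obama", "born_in", "Honolulu")
scores = [0.6, 0.7, 0.5]  # per-extractor confidence

fused = noisy_or(scores)
print(f"{triple}: fused probability = {fused:.3f}")  # 0.940
```

A real system would additionally calibrate these probabilities against held-out labeled triples rather than trust raw extractor scores.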
Alex Korbonits, Data Scientist, Remitly
Alex Korbonits is a Data Scientist at Remitly, Inc., where he works extensively on feature extraction and putting machine learning models into production. Outside of work, he loves Kaggle competitions, is diving deep into topological data analysis, and is exploring machine learning on GPUs. Alex is a graduate of the University of Chicago with degrees in Mathematics and Economics.
Applications of machine learning and ensemble methods to risk rule optimization:
At Remitly, risk management involves a combination of manually created and curated risk rules as well as black-box inputs from machine learning models. Currently, domain experts manage risk rules in production using logical conjunctions of statements about input features. In order to scale this process, we’ve developed a tool and framework for risk rule optimization that generates risk rules from data and optimizes rule sets by ensembling rules from multiple models according to a particular objective function. In this talk, I will describe how we currently manage risk rules, how we learn rules from data, how we determine optimal rule sets, and the importance of smart input features extracted from complex machine learning models.
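To make the rule representation concrete, here is a hypothetical sketch of risk rules as logical conjunctions of predicates over input features; the feature names, thresholds, and rule names are invented for illustration and are not Remitly's actual rules:

```python
# Hypothetical sketch: each risk rule is a conjunction of
# predicates over transaction features; a rule fires only
# when ALL of its predicates hold.

rules = {
    "high_value_new_account": [
        lambda tx: tx["amount_usd"] > 1000,
        lambda tx: tx["account_age_days"] < 7,
    ],
    "velocity_spike": [
        lambda tx: tx["txns_last_24h"] >= 5,
        lambda tx: tx["amount_usd"] > 200,
    ],
}

def fired_rules(tx):
    """Return the names of all rules whose predicates all hold."""
    return [name for name, preds in rules.items()
            if all(p(tx) for p in preds)]

tx = {"amount_usd": 1500, "account_age_days": 3, "txns_last_24h": 1}
print(fired_rules(tx))  # ['high_value_new_account']
```

The optimization described in the talk would then learn such rules from data and select an ensemble of them against an objective function, rather than relying on hand-curation alone.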
Margaret Mitchell, Senior Research Scientist, Google’s Research & Machine Intelligence group
Margaret Mitchell is a Senior Research Scientist in Google’s Research & Machine Intelligence group, working on advancing artificial intelligence towards positive goals. She works on vision-language and grounded language generation, focusing on how to help computers communicate based on what they can process. Her work combines computer vision, natural language processing, social media, statistical methods, and insights from cognitive science. Before Google, she was a researcher at Microsoft Research and a founding member of its “Cognition” group. Her work there focused on advancing artificial intelligence, with a specific focus on generating language from visual inputs. Before MSR, she was a postdoctoral researcher at the Johns Hopkins University Center of Excellence, where she mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme.
Margaret received her PhD in Computer Science from the University of Aberdeen, with supervision from Kees van Deemter and Ehud Reiter, and external supervision from the University of Edinburgh (with Ellen Bard) and Oregon Health & Science University (OHSU) (with Brian Roark and Richard Sproat). She worked on referring expression generation from statistical and mathematical perspectives as well as a cognitive science perspective, and also worked on prototyping assistive/augmentative technology that people with language generation difficulties can use. Her thesis work was on generating natural, human-like reference to visible, everyday objects. She spent a good chunk of 2008 getting a Master’s in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. Simultaneously (2005–2012), she worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. Her title changed with time (research assistant/associate/visiting scholar), but throughout, she worked under Brian Roark on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders.
She continues to balance her time between language generation, applications for clinical domains, and core AI research.
The Seen and Unseen Factors Influencing Machine Perception of Images and Language:
The success of machine learning has recently surged, with similar algorithmic approaches effectively solving a variety of human-defined tasks. Tasks testing how well machines can perceive images and communicate about them have begun to show a lot of promise, and at the same time, have exposed strong effects of different types of bias, such as overgeneralization. In this talk, I will detail how machines are learning about the visual world, and discuss how machine learning and different kinds of bias interact.
Serena Yeung, PhD Student, Stanford
Serena is a Ph.D. student in the Stanford Vision Lab, advised by Prof. Fei-Fei Li. Her research interests are in computer vision, machine learning, and deep learning. She is particularly interested in the areas of video understanding, human action recognition, and healthcare applications. She interned at Facebook AI Research in Summer 2016.
Before starting her Ph.D., she received a B.S. in Electrical Engineering in 2010, and an M.S. in Electrical Engineering in 2013, both from Stanford. She also worked as a software engineer at Rockmelt (acquired by Yahoo) from 2009 to 2011.
Towards Scaling Video Understanding:
The quantity of video data is vast, yet our capabilities for visual recognition and understanding in videos lag significantly behind those for images. In this talk, I will first discuss some of the challenges of scale in labeling, modeling, and inference behind this gap. I will then present some of our recent work towards addressing these challenges, in particular using reinforcement learning-based formulations to tackle efficient inference in videos and learning classifiers from noisy web search results. Finally, I will conclude with a discussion of promising future directions towards scaling video understanding.
Tianqi Chen, Computer Science PhD Student, University of Washington
Tianqi holds a bachelor’s degree in Computer Science from Shanghai Jiao Tong University, where he was a member of ACM Class, now part of Zhiyuan College in SJTU. He did his master’s degree at Shanghai Jiao Tong University in the Apex Data and Knowledge Management group before joining the University of Washington as a PhD student. He has held several prestigious internships and visiting scholar positions, including at Google on the Brain team, at GraphLab, where he authored the boosted tree and neural net toolkits, at Microsoft Research Asia in the Machine Learning group, and at the Digital Enterprise Research Institute in Galway, Ireland. What really excites Tianqi is what becomes possible when we bring advanced learning techniques and systems together. He pushes the envelope on deep learning, knowledge transfer, and lifelong learning. His PhD is supported by a Google PhD Fellowship.
Build Scalable and Modular Learning Systems:
Machine learning and data-driven approaches are becoming very important in many areas. One factor driving these successful applications is scalable learning systems that learn the model of interest from large datasets. Just as importantly, these systems need to be designed in a modular way, so that they work with the existing ecosystem and improve users’ productivity. In this talk, I will present XGBoost and MXNet, two scalable and portable learning systems that I built. I will discuss how we apply distributed computing, asynchronous scheduling, and hardware acceleration to improve these systems, as well as how they fit into the broader open-source machine learning ecosystem.
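As a conceptual illustration of the boosted trees at the heart of XGBoost, here is a stdlib-only sketch of gradient boosting with depth-1 regression stumps; XGBoost adds regularized objectives, sparsity-aware splits, and distributed execution on top of this basic loop, so this is only the core idea, not the system itself:

```python
# Conceptual sketch of gradient boosting: fit a sequence of
# depth-1 regression trees (stumps), each trained on the
# residuals of the ensemble built so far.

def fit_stump(xs, residuals):
    """Best single-threshold split minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, rounds=20, lr=0.5):
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Fit a step function: y = 1 for x >= 5, else 0.
xs = list(range(10))
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
model = boost(xs, ys)
print(round(model(2)), round(model(8)))  # 0 1
```

Each round shrinks the remaining residual geometrically (by the learning rate), which is why the ensemble converges to the target step function.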
Shiva Amiri, CEO, BioSymetrics
As the CEO of BioSymetrics Inc., Shiva is working on delivering a unique real-time machine learning technology for the analysis of massive data in the biomedical space. Prior to BioSymetrics, she was Chief Product Officer of Real Time Data Solutions Inc. Before RTDS, she led the Informatics and Analytics team at the Ontario Brain Institute, where they developed Brain-CODE, a large-scale neuroinformatics platform for the management, processing, and analytics of big data in neuroscience across the province of Ontario. Shiva is also the President and CEO of Modecular Inc., a computational biochemistry start-up developing next-generation drug screening methodologies.
She previously led the British High Commission’s Science and Innovation team in Canada, where she facilitated research, innovation, and commercialization between the UK and Canada. Shiva completed her D.Phil. (Ph.D.) in Computational Biochemistry at the University of Oxford and her undergraduate degree in Computer Science and Human Biology at the University of Toronto. She is involved with several organisations, including Let’s Talk Science and Shabeh Jomeh International.
Distributed Analytics and Machine Learning for Large-Scale Medical Image Processing:
The scale of data being generated in medicine and research can easily overwhelm typical analytic capabilities. This is particularly true with MRI/fMRI scanning, where: 1) large file sizes often preclude studies of the magnitude needed for overcoming the inherent noise, 2) currently no gold-standard protocol exists for extraction of standardized characteristics from MRI/fMRI files, and, 3) traditional methods for group-wise comparison can often result in spurious findings.
Here we have addressed these challenges by generating an easily deployable, scalable image processing pipeline capable of quickly permuting multiple options for fMRI/MRI processing, determining the optimal set of parameters for each study. Uniquely, our approach leverages the rapid model building capabilities of our real time machine learning software to iterate through normalization parameters for each disease class. Our optimized pipeline exceeded classification accuracy seen with previous analyses of comparable scope and allowed easy integration with other medical data types (genome sequence, phenotypic, and metabolic data) allowing generation of more comprehensive disease classification models.
The ability to standardize and pre-process imaging data for machine learning, no matter the source and type, and effectively combine it with other data types is a powerful capability and holds promise for the future of diagnostics and precision medicine.
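As a schematic of the parameter-permutation idea described above, here is a sketch (not the BioSymetrics pipeline) that enumerates combinations of preprocessing options and keeps the best-scoring set; the option names and the stand-in scoring function are invented for illustration:

```python
import itertools

# Illustrative sketch: permute preprocessing options for an
# imaging pipeline and keep the best-performing parameter set.
# A real pipeline would run the full preprocess-train-validate
# loop inside cross_val_score; here it is a toy stand-in.

options = {
    "smoothing_fwhm_mm": [4, 6, 8],
    "normalization": ["affine", "nonlinear"],
    "motion_correction": [True, False],
}

def cross_val_score(params):
    """Toy stand-in for evaluating a classifier trained on data
    preprocessed with these parameters."""
    score = 0.70
    score += 0.02 * (params["smoothing_fwhm_mm"] == 6)
    score += 0.05 * (params["normalization"] == "nonlinear")
    score += 0.03 * params["motion_correction"]
    return score

keys = list(options)
best_params, best_score = None, -1.0
for values in itertools.product(*(options[k] for k in keys)):
    params = dict(zip(keys, values))
    score = cross_val_score(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, round(best_score, 2))
```

With many options the full product grows combinatorially, which is why the abstract emphasizes fast model building: each candidate parameter set costs one model fit.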
Hanie Sedghi, Research Scientist at Allen Institute for Artificial Intelligence
Hanie Sedghi is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Her research interests include large-scale machine learning, high-dimensional statistics, and probabilistic models. More recently, she has been working on inference and learning in latent variable models. She received her Ph.D. from the University of Southern California, with a minor in Mathematics, in 2015. During her Ph.D. she was also a visiting researcher at the University of California, Irvine, working with Professor Anandkumar. She received her B.Sc. and M.Sc. degrees from Sharif University of Technology, Tehran, Iran.
Beating the Perils of Non-convexity: Guaranteed Training of Neural Networks using Tensor Methods:
Neural networks have revolutionized performance across multiple domains such as computer vision and speech recognition. However, training a neural network is a highly non-convex problem and the conventional stochastic gradient descent can get stuck in spurious local optima. We propose a computationally efficient method for training neural networks that also has guaranteed risk bounds. It is based on tensor decomposition which is guaranteed to converge to the globally optimal solution under mild conditions. We explain how this framework can be leveraged to train feedforward and recurrent neural networks.
Scott Clark, Co-Founder & CEO, SigOpt
Scott is co-founder and CEO of SigOpt, a YC- and a16z-backed “Optimization as a Service” startup in San Francisco. Scott has been applying optimal learning techniques in industry and academia for years, from bioinformatics to production advertising systems. Before SigOpt, Scott worked on the Ad Targeting team at Yelp, leading the charge on academic research and outreach with projects like the Yelp Dataset Challenge and the open sourcing of MOE. Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell University and BS degrees in Mathematics, Physics, and Computational Physics from Oregon State University. Scott was chosen as one of Forbes’ 30 Under 30 in 2016.
Bayesian Global Optimization: Using Optimal Learning to Tune Deep Learning Models:
In this talk we introduce Bayesian Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different configurations is time-consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model.
We will motivate the problem by giving several example applications using open source deep learning frameworks and open datasets. We’ll compare the results of Bayesian Optimization to standard techniques like grid search, random search, and expert tuning.
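To make the approach concrete, here is a minimal 1-D Bayesian Optimization loop (an illustrative sketch, not SigOpt's implementation): a Gaussian process is fit to the points observed so far, and the next point to evaluate is chosen by maximizing expected improvement over a candidate grid. The quadratic objective below is an invented stand-in for an expensive training run:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def objective(x):
    # Stand-in for validation accuracy of an expensive model;
    # maximized at x = 0.62.
    return -(x - 0.62) ** 2

def rbf(a, b, scale=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / scale) ** 2)

def gp_posterior(x_obs, y_obs, x_cand, jitter=1e-6):
    # Standard GP regression posterior mean and std at candidates.
    K = rbf(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_cand)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    ei = np.zeros_like(mu)
    for i, (m, s) in enumerate(zip(mu, sigma)):
        if s < 1e-9:
            continue
        z = (m - best) / s
        cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
        pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
        ei[i] = (m - best) * cdf + s * pdf
    return ei

x_obs = np.array([0.1, 0.5, 0.9])
y_obs = objective(x_obs)
candidates = np.linspace(0.0, 1.0, 201)

for _ in range(10):  # each iteration is one (expensive) evaluation
    mu, sigma = gp_posterior(x_obs, y_obs, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmax(y_obs)]
print(f"best x = {best_x:.3f}")
```

Note how few objective evaluations are spent (13 total) relative to exhaustively scanning the grid, which is the whole appeal versus grid search when each evaluation means training a model.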
Byron Galbraith, Chief Data Scientist, Talla
Byron Galbraith is the Chief Data Scientist and co-founder of Talla, where he works to translate the latest advancements in machine learning and natural language processing to build AI-powered conversational agents. Byron has a PhD in Cognitive and Neural Systems from Boston University and an MS in Bioinformatics from Marquette University. His research expertise includes brain-computer interfaces, neuromorphic robotics, spiking neural networks, high-performance computing, and natural language processing. Byron has also held several software engineering roles including back-end system engineer, full stack web developer, office automation consultant, and game engine developer at companies ranging in size from a two-person startup to a multi-national enterprise.
Neural Information Retrieval and Conversational Question Answering:
One of the main affordances of conversational UIs is the ability to use natural language to succinctly convey to a bot what you want. An area where this interface excels is question answering (Q&A). Research into Q&A systems often falls at the intersection of natural language processing (NLP) and information retrieval (IR), and while NLP has been getting a lot of attention from deep learning for several years now, it is largely only within the last year or so that the field of IR has seen an equivalent explosion of interest in employing these techniques. In this presentation, I will touch on challenges facing conversational bots, provide a high-level overview of the emerging field of Neural Information Retrieval, discuss how these methods can be used in a Q&A context, and then highlight some lessons learned while attempting to design and deploy a conversational Q&A agent-based product.
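For context, here is the classical IR baseline that neural retrieval methods build on: ranking candidate answers by TF-IDF cosine similarity to the question. Neural IR replaces these sparse hand-built vectors with learned dense representations; the toy corpus below is invented:

```python
import math
from collections import Counter

# Classical IR sketch: rank candidate answers by TF-IDF cosine
# similarity to the question.

docs = [
    "you can reset your password from the account settings page",
    "our office is open monday through friday",
    "invoices are emailed at the start of each month",
]

def tf_idf_vectors(texts):
    tokenized = [t.split() for t in texts]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    vecs = [{w: c * idf[w] for w, c in Counter(toks).items()}
            for toks in tokenized]
    return vecs, idf

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

doc_vecs, idf = tf_idf_vectors(docs)

def answer(question):
    toks = Counter(question.lower().split())
    q_vec = {w: c * idf.get(w, 0.0) for w, c in toks.items()}
    scores = [cosine(q_vec, d) for d in doc_vecs]
    return docs[max(range(len(docs)), key=scores.__getitem__)]

print(answer("how do i reset my password"))
```

The weakness this exposes is exact-term matching: a question phrased with synonyms the corpus never uses scores zero everywhere, which is precisely the gap learned dense representations aim to close.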
Ashrith Barthur, Security Scientist, H2O.ai
Ashrith Barthur is a Security Scientist at H2O, currently working on algorithms that detect anomalous behaviour in user activities, network traffic, attacks, financial fraud, and global money movement. He has a PhD in information security from Purdue University, specializing in anomalous behaviour in the DNS protocol.
ML (Machine Learning) in AML (Anti-Money Laundering):
AML, or anti-money laundering, has been a consistent bane of governments and banks. Strong efforts by countries to curb illegal money movement have resulted in only a very small fraction of money laundering being identified – a success rate of about 2% on average. The more global a bank’s footprint, the lower the accuracy of its money laundering investigations. In the current mechanism, investigators analyse each money laundering alert and provide a subjective opinion on the case. Unfortunately this takes time, and still has a return rate of about 2% on average and 10% at the highest. We design AI algorithms that work on features tracking the monetary behaviour of every account. These features are essentially time-bound, making time a fundamental aspect of the algorithm design. The algorithms can improve identification to close to 70%, and certain exclusive features that are a function of time improve it much further.
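As a simple illustration of what time-bound behavioural features might look like, here is a hypothetical sketch that aggregates an account's transactions inside a trailing window; the field names, window length, and values are invented and do not reflect the actual H2O algorithms:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: time-bound features summarizing an
# account's monetary behaviour inside a trailing 30-day window.

def window_features(txns, now, window_days=30):
    """Aggregate an account's transactions newer than the cutoff."""
    cutoff = now - timedelta(days=window_days)
    recent = [t for t in txns if t["ts"] >= cutoff]
    total = sum(t["amount"] for t in recent)
    return {
        "txn_count_30d": len(recent),
        "total_volume_30d": total,
        "avg_amount_30d": total / len(recent) if recent else 0.0,
        "distinct_counterparties_30d": len({t["to"] for t in recent}),
    }

now = datetime(2017, 5, 1)
txns = [
    {"ts": datetime(2017, 4, 28), "amount": 9500.0, "to": "acct_A"},
    {"ts": datetime(2017, 4, 29), "amount": 9500.0, "to": "acct_B"},
    {"ts": datetime(2017, 1, 10), "amount": 120.0, "to": "acct_A"},
]
print(window_features(txns, now))
```

Because each feature is anchored to a window, the same account yields different feature values as time advances, which is what makes time a first-class part of the algorithm design.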
Mukund Narasimhan, Engineer, Pinterest
Mukund Narasimhan is an engineer at Pinterest where he works on content modeling and recommendations. Prior to Pinterest, he has worked at Google, Facebook, Microsoft and Intel. He has a Ph.D. in Electrical Engineering from the University of Washington and a M.S in Mathematics from Louisiana State University.
Knowledge at Pinterest:
Pinterest is building the world’s largest catalog of ideas. These ideas are embodied in billions of Pins and well over 100 million Users and Boards which, together, form a complex picture of our users and their interests. The goal of the Knowledge team is to leverage our unstructured data (images, videos, and text), and structured data (user curated data, partner generated content, and engagement signals) to model users and their interests so we can help them discover fresh, diverse, and personalized new ideas. In this talk, we go into some detail on the progress we’ve made so far.
Andrew Musselman, Committer and PMC Member, Apache Mahout
Andrew recently joined Lucidworks to head up their Advisory practice, and is a Committer and PMC member on the Apache Mahout project.
Apache Mahout: Distributed Matrix Math for Machine Learning:
Machine learning and statistics tools like R and Scikit-learn are declarative, flexible, and extensible, but they scale poorly. “Big Data” tools such as Apache Spark, Apache Flink, and H2O distribute well, but have rudimentary functionality for machine learning and are not easily extensible. In this talk we present Apache Mahout, which provides a Scala-based, R-like DSL for doing linear algebra on distributed systems, letting practitioners quickly implement algorithms on distributed matrices. We will highlight new features in version 0.13 including the hybrid CPU/GPU-optimized engine, and a new framework for user-contributed methods and algorithms similar to R’s CRAN.
We will cover some history of Mahout, introduce the R-Like Scala DSL, provide an overview of how Mahout is able to operate on matrices distributed across multiple computers, and how it takes advantage of GPUs on each computer in a cluster creating a hybrid distributed/GPU-accelerated environment; then demonstrate the kinds of normally complex or unfeasible problems users can easily solve with Mahout; show an integration which allows Mahout to leverage the visualization packages of projects such as R, Python, and D3; and lastly explain how to develop algorithms and submit them to the Mahout project for other users to use.
Garrett Goh, Scientist (Pauling Fellow), Pacific Northwest National Lab (PNNL)
Garrett Goh is a Scientist at the Pacific Northwest National Lab (PNNL), in the Advanced Computing, Mathematics & Data Division. He was previously awarded a Howard Hughes Medical Institute fellowship, which supported his PhD in Computational Chemistry at the University of Michigan. At PNNL, he was awarded the Pauling Fellowship, which supports his research initiative of combining deep learning and artificial intelligence with traditional chemistry applications. His current interest is in AI-assisted computational chemistry: the application of deep learning to predict chemical properties and discover new chemical insights while using minimal expert knowledge.
A Deep Learning Computational Chemistry AI: Making chemical predictions with minimal expert knowledge:
Using deep learning and virtually no expert knowledge, we construct computational chemistry models that compare favorably to existing state-of-the-art models developed by expert practitioners, whose models rely on knowledge gained from decades of academic research. Our findings demonstrate the potential of AI assistance to accelerate the scientific discovery process, and we envision future applications not just in chemistry but in affiliated fields such as biotechnology, pharmaceuticals, and consumer goods, and perhaps other domains as well.
Sara Hooker, Executive Director, Delta Analytics
Sara is the Executive Director at Delta Analytics, a 501(c)3 Bay Area nonprofit. Nominally Irish, she spent her childhood in Africa, growing up in South Africa, Swaziland, Mozambique, Lesotho and Kenya. Her work is dedicated to empowering nonprofits to use data for good. She regularly teaches machine learning courses to the wider community, and will be in Nairobi, Kenya this summer training the next generation of data scientists.
Data Science for Good: Stopping Illegal Deforestation Using Deep Learning:
Interested in using your data skills to give back? Delta Analytics is a Bay Area non-profit that provides free data science consulting to grant recipients all over the world. Rainforest Connection, a Delta grant recipient, worked with Delta fellows to detect illegal deforestation by applying deep learning to audio streamed from rainforests in Peru, Ecuador and Brazil. We will share insights from our work with Rainforest Connection, discuss our fellowship and partnership process, and suggest some best practices for skill based volunteering.
John Maxwell, Data Scientist, Nordstrom
John Maxwell, a data scientist at Nordstrom, did his graduate work in international development economics, focusing on field experiments. He has since led research projects in Indonesia and Ethiopia related to microenterprise, developed large mathematical simulation models used for investment decisions by WSDOT, built dynamic pricing algorithms at Thriftbooks.com, and led the development of Nordstrom’s open source A/B testing service, Elwin. He currently focuses on contextual multi-armed bandit problems and machine learning infrastructure at Nordstrom.
Solving the Contextual Multi-Armed Bandit Problem at Nordstrom:
The contextual multi-armed bandit problem, also known as associative reinforcement learning or bandits with side information, is a useful formulation of the multi-armed bandit problem that takes into account information about arms and users when deciding which arm to pull. The barrier to entry for both understanding and implementing contextual multi-armed bandits in production is high. The literature in this field pulls from disparate sources including (but not limited to) classical statistics, reinforcement learning, and information theory. Because of this, finding material that fills the gap between very basic explanations and academic journal articles is challenging. The goal of this talk is to provide that missing intermediate material, as well as an example implementation. Specifically, I will explain key findings from some of the more cited papers in the contextual bandit literature, discuss the minimum requirements for implementation, and give an overview of a production system for solving contextual multi-armed bandit problems.
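As a taste of the intermediate material, here is a minimal sketch of LinUCB, one of the more cited contextual bandit algorithms (illustrative only, not Nordstrom's production system): each arm maintains a ridge-regression estimate of reward given context, and the arm with the highest upper confidence bound is pulled. The toy reward structure is invented:

```python
import numpy as np

# Minimal LinUCB sketch: per-arm ridge regression plus an
# optimism bonus that shrinks as the arm gathers data.

class LinUCB:
    def __init__(self, n_arms, dim, alpha=0.5):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T y per arm

    def choose(self, context):
        ucb = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                    # ridge estimate
            width = self.alpha * np.sqrt(context @ A_inv @ context)
            ucb.append(theta @ context + width)  # optimism bonus
        return int(np.argmax(ucb))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Toy simulation: arm 1 is the right choice when x > 0.5.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=2, dim=2)
rewards = []
for _ in range(500):
    x = rng.random()
    ctx = np.array([1.0, x])  # intercept feature + one raw feature
    arm = bandit.choose(ctx)
    reward = 1.0 if arm == int(x > 0.5) else 0.0
    bandit.update(arm, ctx, reward)
    rewards.append(reward)

print(f"mean reward, last 100 rounds: {np.mean(rewards[-100:]):.2f}")
```

The confidence width is what handles the explore/exploit trade-off: contexts an arm has rarely seen get a large bonus, so the algorithm keeps sampling them until its estimate is trustworthy.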
Sean McPherson, Data Science Fellow, Delta Analytics
Sean is passionate about using data science to provide meaningful insights towards positive social change. As a Data Science Fellow with Delta Analytics, he is working to improve the detection of illegal logging for the Rainforest Connection. Sean is excited to work on this project as it combines two of his fields of interest, audio engineering and statistical signal processing, which he studied in undergraduate and graduate school, respectively.
Michael McCormick, Principal, Comet Labs
Mike is a Principal at Comet Labs, a venture capital firm focused on AI and machine intelligence in traditional industries. Mike focuses on investments in verticals including healthcare, biotech, transportation, aerospace and more. Prior to Comet, Mike co-founded Rubicon Venture Capital, an early-stage VC firm based in San Francisco. Mike’s passionate about helping founders bring audacious visions to life, and spends much of his time thinking about how to wield technology and compassion to address humanity’s most consequential collective challenges.