Franziska Bell, Senior Data Science Manager on the Platform Team, Uber

Franziska Bell is a Senior Data Science Manager on the Platform Team at Uber, where she founded the Anomaly Detection, Forecasting Platform and Natural Language Platform teams. In addition, she leads Applied Machine Learning, Behavioral Science, and Customer Support Data Science.

Before Uber, Franziska was a Postdoc at Caltech where she developed a novel, highly accurate approximate quantum molecular dynamics theory to calculate chemical reactions for large, complex systems, such as enzymes. Franziska earned her Ph.D. in theoretical chemistry from UC Berkeley focusing on developing highly accurate, yet computationally efficient approaches which helped unravel the mechanism of non-silicon-based solar cells and properties of organic conductors.

Abstract Summary:

NLP Use Cases at Uber

Yevgeniy Vahlis, Head of Applied Machine Learning, Borealis AI

Yevgeniy Vahlis is the head of the Applied Machine Learning group at Borealis AI. Prior to joining Borealis AI, Yevgeniy led an applied research team at Georgian Partners, a late-stage venture capital fund, and worked at a number of tech companies including Amazon and Nymi. Yevgeniy kicked off his career at AT&T Labs in New York as a research scientist after completing his PhD in Computer Science at the University of Toronto and a year of postdoctoral studies at Columbia University.

Abstract Summary:

Differential Privacy in the Real World:
Differential privacy is making headlines thanks to the pioneering work of companies like Apple and Google, and it is now being used by companies of all sizes to provide data privacy guarantees. It is no secret that machine learning models can memorize (overfit) training data, and that carefully crafted adversarial inputs can subvert them. Combine these facts with a model that aggregates data from a multitude of customers and you have an AI-driven disaster waiting to happen. In this talk, Yevgeniy will cover a defensive measure called “differential privacy” that is a potential solution to such threats: he will explain its core concepts and share a behind-the-scenes look at how companies are successfully implementing differential privacy in their products.
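As a concrete illustration of the core idea (not the specific deployments discussed in the talk), the Laplace mechanism below releases a numeric query result with an epsilon-differential-privacy guarantee; the function name and all numbers are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release a count query over a customer dataset.
# Adding or removing one customer changes a count by at most 1,
# so the query's sensitivity is 1.
true_count = 1307
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the released value is unbiased, so repeated queries average out toward the truth, which is exactly why a privacy budget must cap the number of releases.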

Dr. Yi Li, Machine Learning Research Scientist, Baidu Silicon Valley AI Lab

Dr. Yi Li is a machine learning research scientist at Baidu Silicon Valley AI Lab. His research interests lie in ML/AI technologies for health care, and he is currently leading research in this direction at Baidu Research.

Abstract Summary:

Cancer Metastasis Detection With Neural Conditional Random Field:
Breast cancer diagnosis often requires accurate detection of metastasis in lymph nodes through whole-slide images (WSIs). Recent advances in deep convolutional neural networks (CNNs) have shown significant success in medical image analysis, and particularly in computational histopathology. Because of the extremely large size of WSIs, most methods divide a slide into many small image patches and classify each patch independently. However, neighboring patches often share spatial correlations, and ignoring these correlations may produce inconsistent predictions. In this talk, we introduce a neural conditional random field (NCRF) deep learning framework to detect cancer metastasis in WSIs. NCRF considers the spatial correlations between neighboring patches through a fully connected CRF incorporated directly on top of a CNN feature extractor. The whole deep network can be trained end-to-end with the standard back-propagation algorithm, with only minor computational overhead from the CRF component. The CNN feature extractor can also benefit from considering spatial correlations via the CRF component. Compared to a baseline method that ignores spatial correlations, we show that the proposed NCRF framework obtains probability maps of patch predictions with better visual quality. We also demonstrate that our method outperforms the baseline in cancer metastasis detection on the Camelyon16 dataset, achieving an average FROC score of 0.8096 on the test set. NCRF is open sourced at https://github.com/baidu-research/NCRF.
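The actual NCRF implementation is open sourced at the link above; the sketch below only illustrates the underlying intuition, that a pairwise agreement term over the patch grid can correct isolated, inconsistent patch predictions. It is a simplified mean-field-style update over per-patch logits, not the talk's model:

```python
import numpy as np

def mean_field_smooth(logits, n_iters=5, pairwise_weight=1.0):
    """Illustrative smoothing of per-patch tumor logits on a 2D grid:
    each patch's belief is nudged toward agreement with its 4-neighbors."""
    q = 1.0 / (1.0 + np.exp(-logits))  # independent per-patch probabilities
    for _ in range(n_iters):
        # average neighbor probability (4-connectivity, edge padding)
        padded = np.pad(q, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # combine the unary logit with a pairwise agreement term in [-w, w]
        q = 1.0 / (1.0 + np.exp(-(logits + pairwise_weight * (2 * neigh - 1))))
    return q
```

A lone patch whose logit disagrees with a confident neighborhood gets pulled toward its neighbors, which is the qualitative effect the CRF component adds on top of independent patch classification.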

Ilke Demir, Postdoctoral Research Scientist, Facebook

Ilke Demir is a postdoctoral research scientist at Facebook. Her research interests include inverse procedural modeling, geometry processing and shape analysis, and deep learning for satellite image understanding. She received her Ph.D. degree in Computer Science from Purdue University in 2016, her M.S. degree also from Purdue, and her B.S. degree in Computer Engineering from Middle East Technical University with a minor in Electrical Engineering. She worked at Pixar Animation Studios as an intern in 2015-2016. She has interned at KOVAN Robotics Lab and Havelsan Inc. on graphics and simulation projects, and served as the student system admin at METU. Ilke has received numerous awards, including the Bilsland Dissertation Fellowship, a GHC Scholarship, and best poster, paper, and reviewer awards. She has also been actively involved in women-in-science organizations for the past 10 years, always being an advocate for women and underrepresented minorities.

Abstract Summary:

Geospatial Machine Learning for Urban Development:
The collective mission of mapping the world is never complete: we need to discover and classify roads, settlements, land types, landmarks, and addresses. The recent proliferation of remote sensing data (overhead imagery, LiDAR, sensors) has enabled automatic extraction of such structures to better understand our world. In this talk, we will first present the motivation and results of the DeepGlobe Satellite Image Challenge[1][2] for road extraction, building detection, and land cover classification. Then we will go into the details of an example approach[3] that proposes a complete system using deep learning to generate street addresses for unmapped developing countries. The approach applies deep learning to extract road vectors from satellite images, then processes the street network to output linear and hierarchical street addresses by labeling regions, roads, and blocks, based on addressing schemes around the world and coherent with the human cognitive system. We will share and demonstrate the motivation and the algorithm behind the scenes, compare them to current open and industrial solutions, and walk through our open source code[4] to generate the addresses for a given bounding box.

Michael Lindon, Senior Data Scientist, Tesla

Michael Lindon is a Senior Data Scientist on the Tesla Supply Chain Automation team. He received a combined BSc-MPhys degree from the University of Warwick in 2012, and MSc and PhD degrees from Duke University in 2015 and 2018 respectively, where he specialized in Bayesian methods, decision theory, and optimization.

Abstract Summary:

When you’re going through hell, keep going: Using machine learning to deliver Tesla Model 3s:
In August 2018, the Tesla Model 3 outsold all BMW passenger cars combined in the United States. How did Tesla scale vehicle delivery to customers as it more than tripled the number of cars produced and delivered throughout 2018? In this applied talk, Michael will describe some of his recent work at Tesla developing predictive models for ETAs. Forecasting accurate ETAs, along with quantifying their uncertainty, in a complex logistics network is imperative to the smooth running of operations and to making optimal decisions. This is a challenging problem due to the sheer complexity of outbound logistics: transit lead times and dwells are influenced by a plethora of factors and further complicated by rich dependency structures across the network. This talk walks through a complete Bayesian analysis, from modeling to inference to decision theory.
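The talk's models are far richer, but a minimal conjugate Normal-Normal update conveys the flavor of the modeling-inference-decision pipeline for ETAs with quantified uncertainty; every number and variable name here is hypothetical, not Tesla data:

```python
import numpy as np

# Prior on a shipping lane's mean transit time (days); numbers are made up.
mu0, tau0 = 5.0, 2.0   # prior mean and prior std
sigma = 1.0            # assumed-known std of individual transit times

observed = np.array([6.2, 5.8, 6.5, 6.1])  # hypothetical recent transits

# Inference: conjugate Normal-Normal posterior for the lane's mean.
n = len(observed)
post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + observed.sum() / sigma**2) / post_prec
post_std = post_prec ** -0.5

# Posterior predictive std for one future transit (parameter + noise).
pred_std = (post_std**2 + sigma**2) ** 0.5

# Decision step: quote the 90th percentile as the customer-facing ETA to
# hedge against lateness (1.2816 is the standard normal 90% quantile).
eta_p90 = post_mean + 1.2816 * pred_std
```

The decision-theoretic point is that the quoted ETA is not the point forecast: an asymmetric cost of lateness versus earliness pushes the quote to an upper quantile of the predictive distribution.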

Leslie Smith, Senior Research Scientist, US Naval Research Laboratory

Leslie N. Smith received combined BS and MS degrees in Physical Chemistry from the University of Connecticut in 1976 and a Ph.D. degree in chemical physics from the University of Illinois in 1979.

He is currently at the US Naval Research Laboratory in the Navy Center for Applied Research in Artificial Intelligence. His research focuses on deep learning, machine learning, computer vision, sparse modeling, and compressive sensing. Additionally, his interests include super resolution, non-linear dimensionality reduction, function approximation, and feature selection.

Abstract Summary:

Competition Winning Learning Rates:
It is well known that learning rates are the most important hyper-parameter to tune for training deep neural networks.  Surprisingly, training with dynamic learning rates can lead to an order of magnitude speedup in training time.  This talk will discuss my path from static learning rates to dynamic cyclical learning rates and finally to fast training with very large learning rates (I named this technique “super-convergence”).  In particular, I will show that very large learning rates are the preferred method for regularizing the training because they provide the twin benefits of training speed and good generalization.  The super-convergence method was integrated into the fast.ai library and the Fastai team used it to win the DAWNBench and Kaggle’s iMaterialist challenges.
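The triangular cyclical learning rate policy from Smith's work can be written in a few lines; the hyper-parameter values in the example are illustrative, and real training typically sets step_size to a few epochs' worth of iterations:

```python
import math

def cyclical_lr(step, step_size, base_lr, max_lr):
    """Triangular cyclical learning rate: the LR ramps linearly from
    base_lr up to max_lr and back, with a period of 2 * step_size steps."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# Illustrative schedule: LR sweeps 0.001 -> 0.01 -> 0.001 every 200 steps.
schedule = [cyclical_lr(s, step_size=100, base_lr=0.001, max_lr=0.01)
            for s in range(400)]
```

Super-convergence takes this one step further with a single cycle whose max_lr is set very large, which acts as a regularizer while drastically cutting training time.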

Been Kim, Research Scientist, Google Brain

Been Kim is a research scientist at Google Brain. Her research focuses on building interpretable machine learning to make humans empowered by machine learning, not overwhelmed by it. She gave an ICML tutorial on the topic in 2017 and a CVPR tutorial in 2018. She has served as an area chair and/or program committee member at various conferences, including NIPS, ICML, ICLR, and FAT*. Before joining Brain, she was a research scientist at the Allen Institute for Artificial Intelligence (AI2) and an affiliate faculty member in the Department of Computer Science & Engineering at the University of Washington. She received her PhD from MIT.

Abstract Summary:

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV):
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net’s internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result; for example, how sensitive a prediction of “zebra” is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
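A toy sketch of the TCAV idea follows. TCAV proper trains a linear classifier on a real network's layer activations; this sketch substitutes a mean-difference direction for the classifier and a hand-built linear "logit" for the network, so every value and name is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "internal activations" at some layer: concept examples (e.g. stripe
# textures) versus random counterexamples, in a 3-d activation space.
concept_acts = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(100, 3))
random_acts = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(100, 3))

# Simplified CAV: normalized direction separating the two activation sets.
cav = concept_acts.mean(0) - random_acts.mean(0)
cav /= np.linalg.norm(cav)

# Toy class logit as a function of the layer activations: this fake "zebra"
# class responds strongly to the first activation dimension.
def zebra_logit(h):
    return 3.0 * h[0] + 0.1 * h[1]

# Conceptual sensitivity: directional derivative of the logit along the CAV.
def concept_sensitivity(h, eps=1e-4):
    return (zebra_logit(h + eps * cav) - zebra_logit(h - eps * cav)) / (2 * eps)

# TCAV score: fraction of inputs whose sensitivity to the concept is positive.
inputs = rng.normal(size=(50, 3))
tcav_score = np.mean([concept_sensitivity(h) > 0 for h in inputs])
```

Here the zebra logit is built to depend on the concept direction, so the score comes out high; on a real network, the same computation tests whether a human-defined concept actually drives the class prediction.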

Irina Rish, Researcher, AI Science, IBM T.J. Watson Research Center

Irina Rish is a researcher in the AI Science department of the IBM T.J. Watson Research Center. She received an MS in Applied Mathematics from the Moscow Gubkin Institute, Russia, and a PhD in Computer Science from the University of California, Irvine. Her areas of expertise include artificial intelligence and machine learning, with a particular focus on probabilistic graphical models, sparsity and compressed sensing, active learning, and their applications to various domains, ranging from diagnosis and performance management of distributed computer systems (“autonomic computing”) to predictive modeling and statistical biomarker discovery in neuroimaging and other biological data. Irina has published over 70 research papers, several book chapters, two edited books, and a monograph on Sparse Modeling, taught several tutorials, and organized multiple workshops at machine-learning conferences, including NIPS, ICML, and ECML. She holds 26 patents and several IBM awards. Irina currently serves on the editorial board of the Artificial Intelligence Journal (AIJ). As an adjunct professor in the EE Department of Columbia University, she has taught several advanced graduate courses on statistical learning and sparse signal modeling.

Gopal Erinjippurath, Manager of the Imagery Analytics Engineering Team, Planet Inc.

Gopal manages the Imagery Analytics Engineering team at Planet Inc. His background is in the development of foundational technology that powers products in the imaging and machine learning space.

He is known for agile engineering execution from concept to scalable, high-quality products.

Gopal’s recent experience includes taking industry-leading analytics products from early concept demonstrations to multiple customers at Captricity and Harvesting, where he advises the CEO. Previously, he led the algorithm engineering development of Dolby’s first imaging display product, the Emmy Award winning Dolby Professional Reference Monitor, and technologies for high dynamic range video reproduction in Dolby Vision, now in the iPhone X/8/8s.

Gopal holds an MS in Electrical Engineering from the University of Southern California and completed the Ignite Program, connecting technologists with new commercial ventures, at the Stanford Graduate School of Business.

Abstract Summary:

Large-Scale Datasets for Analytics on Satellite Imagery:
By imaging the entirety of Earth’s landmass every day at 3.7m resolution, and enabling on-demand follow-up imagery at 80cm resolution, Planet offers a uniquely valuable source for creating datasets for imagery analytics over varied contexts. We introduce Planet imagery for creating large-scale datasets for object detection and localization and the associated analytics. We describe workflows for data collection and aggregation, and demonstrate the results of experiments with baseline, state-of-the-art deep-learning-based object detection models (Faster R-CNN and SSD). We also describe a few early experiments on the transferability of object localization between datasets from different satellite constellations. These approaches are then applied to localizing objects relevant to federal disaster response and emergency management efforts. We showcase examples of our imagery, objects of relevance, and detections from the baseline model in disaster regions.

Forough Poursabzi, Researcher, Microsoft Research

Forough is a post-doctoral researcher at Microsoft Research New York City. She works in the interdisciplinary area of interpretable and interactive machine learning. Forough collaborates with psychologists to study human behavior when interacting with machine learning models. She uses these insights to design machine learning models that humans can use effectively. She is also interested in several aspects of fairness, accountability, and transparency in machine learning and their effect on users’ decision-making process. Forough holds a BE in computer engineering from the University of Tehran and a PhD in computer science from the University of Colorado at Boulder.

Abstract Summary:

Design and Empirical Evaluation of Interactive and Interpretable Machine Learning:
Machine learning is ubiquitous in making predictions that affect people’s decisions. While most of the research in machine learning focuses on improving the performance of models on held-out datasets, this is not enough to convince end-users that these models are trustworthy or reliable in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods and helping end-users make informed decisions. Despite the growing body of research in developing interpretable models, there is still no consensus on the definition and quantification of interpretability. In this talk, I argue that to understand interpretability, we need to bring humans in the loop and run human-subject experiments to understand the effect of interpretability on human behavior. I approach the problem of interpretability from an interdisciplinary perspective which builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust. I will talk about a set of controlled user experiments, where we manipulate various design factors in supervised models that are commonly thought to make models more or less interpretable and measure their influence on user behavior, performance, and trust. Additionally, I will talk about interpretable and interactive machine learning based systems that exploit unsupervised machine learning models to bring humans in the loop and help them complete real-world tasks. By bringing humans and machines together, we can empower humans to understand and organize large document collections better and faster. The findings and insights from these experiments can guide the development of next-generation machine learning models that can be used effectively and trusted by humans.

Michael Simpson, Senior Data Scientist and Data Science Practice Lead, Very

As a Senior Data Scientist and Data Science Practice Lead at Very, Michael Simpson collaborates with clients and other team members to solve difficult problems. Michael has leveraged machine learning to examine social communication networks, detect fake news, analyze legal documents, and authenticate users with facial recognition. Michael contributed to a study, “Separation of Distinct Photoexcitation Species in Femtosecond Transient Absorption Microscopy,” and is currently researching how cognitive diversity improves lab partner performance. Michael holds a Master’s in Computer Science: Machine Learning from Georgia Tech.

Abstract Summary:

Adopting Software Design Practices for Better Machine Learning:
Software engineering and design have developed patterns and practices over the last 50 years that allow them to quickly validate and implement ideas. In contrast, data science is often characterized by slow feedback loops with long periods of analysis and discovery followed by implementation. This workflow makes it more difficult to iterate and leads to problems that software engineering best practices were developed to address. However, it can be unclear how to apply these practices to data science. This talk will explain how Very adapts practices from software engineering and design to our data science projects to develop and deploy models with agility.
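One generic illustration of such a practice (not Very's specific workflow, and all names are hypothetical): isolating a data transformation in a small pure function so that, like any other code, it gets a fast, repeatable unit test instead of a slow notebook-driven feedback loop:

```python
def normalize_minmax(values):
    """Scale a list of numbers to the range [0, 1].
    A constant input has no spread, so it maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize_minmax():
    # Ordinary case: endpoints map to 0 and 1, midpoint to 0.5.
    assert normalize_minmax([0, 5, 10]) == [0.0, 0.5, 1.0]
    # Edge case that often breaks ad-hoc notebook code: constant input.
    assert normalize_minmax([3, 3, 3]) == [0.0, 0.0, 0.0]
```

Because the transform has no hidden state or I/O, the test runs in milliseconds under any test runner, which is the short feedback loop software engineering practices are designed to create.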

Joan Xiao, Lead Machine Learning Scientist, Figure Eight

Joan Xiao is a Lead Machine Learning Scientist at Figure Eight, a human-in-the-loop machine learning and artificial intelligence company. In her role, she leads research innovation and applies novel technologies to a broad range of real-world problems. Previously, she led the data science team at H5, a data search and analytics service company in the e-discovery industry. Prior to that, she led a Big Data Analytics team at HP. Joan received her Ph.D. in Mathematics and an MS in Computer Science from the University of Pennsylvania.


Abstract Summary:

Deep Learning for Product Title Summarization:
Online marketplaces often have millions of products, and product titles are typically made intentionally long so that search engines can find them. With voice shopping on the verge of taking off (it is estimated to hit $40+ billion across the U.S. and U.K. by 2022), short versions (summaries) of product titles are desired to improve the voice-shopping user experience.
In this talk, we’ll present two different approaches to solving this problem using Natural Language Processing and Deep Learning. We’ll give a historical overview of the technology advancements behind these approaches and compare their evaluation results on a real-world dataset.