Speakers:

Been Kim, Senior Research Scientist, Google Brain

Been Kim is a senior research scientist at Google Brain. Her research focuses on building interpretable machine learning – making ML understandable by humans.

The vision of her research is to make humans empowered by machine learning, not overwhelmed by it. She gave a tutorial on the topic at ICML in 2017, and at CVPR and MLSS at the University of Toronto in 2018. She is a workshop chair for ICLR 2019, and has served as an area chair and program committee member at NIPS, ICML, and FAT* conferences. In 2018, she gave a talk at the G20 digital economy summit in Argentina. Before joining Brain, she was a research scientist at the Allen Institute for Artificial Intelligence (AI2) and an affiliate faculty member in the Department of Computer Science & Engineering at the University of Washington. She received her PhD from MIT.

Abstract Summary:

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV):
I will share some recent results on improving interpretability when you already have a model (post-training interpretability), and our work on ways to test interpretability methods. Among them, I will take a deeper dive into one of my recent works – testing with concept activation vectors (TCAV) – a post-training interpretability method for complex models, such as neural networks. This method provides an interpretation of a neural net’s internal state in terms of human-friendly, high-level concepts instead of low-level input features. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use concept activation vectors (CAVs) as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result – for example, how sensitive a prediction of “zebra” is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
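To make the directional-derivative idea concrete, here is a minimal sketch of the TCAV recipe in Python (illustrative only, not Google's implementation): a CAV is the normal to a linear boundary separating a concept's activations from random activations, and the TCAV score is the fraction of class examples whose logit gradient points along that direction. The arrays `acts_concept`, `acts_random`, and `grads` are hypothetical stand-ins for layer activations and gradients pulled from a network.

```python
# A minimal sketch of the TCAV idea, not Google's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(acts_concept, acts_random):
    """Learn a Concept Activation Vector: the normal to a linear
    boundary separating concept activations from random ones."""
    X = np.vstack([acts_concept, acts_random])
    y = np.r_[np.ones(len(acts_concept)), np.zeros(len(acts_random))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(grads, cav):
    """Fraction of class examples whose logit increases when the layer
    activation moves in the concept direction (gradient . CAV > 0)."""
    return float(np.mean(grads @ cav > 0))

# Toy usage with random stand-in activations and gradients.
rng = np.random.default_rng(0)
cav = compute_cav(rng.normal(1, 1, (50, 8)), rng.normal(0, 1, (50, 8)))
print(tcav_score(rng.normal(0.5, 1, (100, 8)), cav))  # > 0.5 = sensitive
```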

Gopal Erinjippurath, Senior Director, Analytics Engineering, Planet Inc.

Gopal manages the Imagery Analytics Engineering team at Planet Inc. His background is in the development of foundational technology that powers products in the imaging and machine learning space.

He is known for agile engineering execution, from concept to scalable, high-quality products.

Gopal’s recent experience is in industry-leading analytics products, taken from early concept demonstrations to multiple customers, at Captricity and at Harvesting, where he advises the CEO. Previously, he led the algorithm engineering development of Dolby’s first imaging display product, the Emmy Award-winning Dolby Professional Reference Monitor, as well as technologies for high-dynamic-range video reproduction in Dolby Vision, now in the iPhone X/8/8s.

Gopal holds an MS in Electrical Engineering from the University of Southern California and completed the Ignite Program, which connects technologists with new commercial ventures, at the Stanford Graduate School of Business.

Abstract Summary:

Large Scale Datasets for Analytics on Satellite Imagery:
By imaging the entirety of Earth’s landmass every day at 3.7m resolution and enabling on-demand follow-up imagery at 80cm resolution, Planet offers a uniquely valuable source for creating datasets for imagery analytics over varied contexts. We introduce Planet imagery as a basis for creating large-scale datasets for object detection and localization and associated analytics. We describe workflows for data collection and aggregation, and demonstrate the results of experiments with baseline state-of-the-art deep-learning-based object detection models (Faster R-CNN and SSD) for object detection and localization. We also describe a few early experiments on the transferability of object localization between datasets from different satellite constellations. These approaches are then applied to localizing objects relevant to federal disaster response and emergency management efforts. We showcase examples of our imagery, objects of relevance, and detections from the baseline model in disaster regions.
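As a rough illustration of what such a detection baseline looks like in code, the sketch below runs a pretrained Faster R-CNN from torchvision on a single image chip; the file name is a placeholder, and Planet's actual training and evaluation pipeline is more involved.

```python
# A hedged sketch of a detection baseline on a satellite image chip,
# using torchvision's Faster R-CNN (not Planet's internal pipeline).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

chip = to_tensor(Image.open("chip.png").convert("RGB"))  # hypothetical file
with torch.no_grad():
    pred = model([chip])[0]  # dict with boxes, labels, scores

keep = pred["scores"] > 0.5  # simple confidence threshold
print(pred["boxes"][keep], pred["labels"][keep])
```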

Ilke Demir, Postdoctoral Research Scientist, Facebook

Ilke Demir is a postdoctoral research scientist at Facebook. Her research interests include inverse procedural modeling, geometry processing and shape analysis, and deep learning for satellite image understanding. She received her Ph.D. degree in Computer Science from Purdue University in 2016, her M.S. degree also from Purdue, and her B.S. degree in Computer Engineering from Middle East Technical University with a minor in Electrical Engineering. She worked at Pixar Animation Studios as an intern in 2015-2016. She has interned at KOVAN Robotics Lab and Havelsan Inc. on graphics and simulation projects, and served as the student system admin at METU. Ilke has received numerous awards, including the Bilsland Dissertation Fellowship, a GHC Scholarship, and best poster, paper, and reviewer awards. She has also been actively involved in women-in-science organizations for the past 10 years, always being an advocate for women and underrepresented minorities.

Abstract Summary:

Geospatial Machine Learning for Urban Development:
The collective mission of mapping the world is never complete: we need to discover and classify roads, settlements, land types, landmarks, and addresses. The recent proliferation of remote sensing data (overhead images, LiDAR, sensors) has enabled automatic extraction of such structures to better understand our world. In this talk, we will first cover the motivation and results of the DeepGlobe Satellite Image Challenge[1][2] for road extraction, building detection, and land cover classification. Then we will go into the details of an example approach[3] that proposes a complete system using deep learning to generate street addresses for unmapped developing countries. The approach applies deep learning to extract road vectors from satellite images, then processes the street network to output linear and hierarchical street addresses by labeling regions, roads, and blocks; the scheme is based on addressing schemes around the world and is coherent with the human cognitive system. We will share and demonstrate the motivation and the algorithm behind the scenes, compare them to current open and industrial solutions, and walk through our open source code[4] to generate the addresses for a given bounding box.
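To give a flavor of linear addressing along an extracted road vector, here is a toy sketch (the scheme, road name, and the 10-meter increment are hypothetical; see the open source code[4] for the real generator): a point's address is its distance along the nearest road, bucketed into increments.

```python
# A toy illustration of linear addressing along an extracted road
# vector (scheme and names are hypothetical, not the paper's code).
import math

def linear_address(road_id, road_points, x, y, meters_per_unit=1.0):
    """Assign an address as the distance (in ~10 m increments) from
    the road's start to the projection of (x, y) onto the road."""
    best, along, acc = float("inf"), 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(road_points, road_points[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg = math.hypot(dx, dy) or 1e-9
        t = max(0.0, min(1.0, ((x - x0) * dx + (y - y0) * dy) / seg**2))
        px, py = x0 + t * dx, y0 + t * dy
        d = math.hypot(x - px, y - py)          # distance to this segment
        if d < best:
            best, along = d, acc + t * seg       # arc length to projection
        acc += seg
    number = int(along * meters_per_unit // 10)  # one unit per ~10 m
    return f"{number} {road_id}"

print(linear_address("RD-7", [(0, 0), (100, 0)], 42.0, 3.0))  # "4 RD-7"
```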

Edo Liberty, Director of Research at AWS, head of Amazon AI Labs

Edo Liberty is a Director of Research at AWS and the head of Amazon AI Labs. Edo received his B.Sc. in Physics and Computer Science from Tel Aviv University and his PhD in Computer Science from Yale University, where he was also a postdoctoral fellow in Applied Mathematics. Edo then co-founded and ran a New York-based startup. Later, Edo joined Yahoo Research in Israel and taught Data Mining at Tel Aviv University for three years. Before joining Amazon, Edo led Yahoo’s independent Research in New York and Yahoo’s Scalable Machine Learning group. At Amazon, his lab publishes cutting-edge results, contributes to open source projects, and builds algorithms and machine learning solutions for SageMaker, QuickSight, Kinesis, ElasticSearch, Rekognition, and other AWS services. His research interests include data mining, optimization, streaming and online algorithms, machine learning, and numerical linear algebra. He is the author of more than thirty academic papers on these topics, including award-winning works on streaming matrix approximation and fast random projections. Edo is a frequent keynote speaker, tutorial presenter, and committee member at international conferences.

Abstract

Amazon SageMaker: Infinitely Scalable Machine Learning Algorithms

At AWS, we continue to strive to enable builders to build cutting-edge technologies faster in a secure, reliable, and scalable fashion. Machine learning is one such transformational technology that is top of mind not only for CIOs and CEOs, but also for developers and data scientists. In November 2017, we launched Amazon SageMaker to make the problem of authoring, training, and hosting ML models easier, faster, and more reliable. Now, thousands of customers are using Amazon SageMaker and building ML models on top of their data lakes in AWS.

While building Amazon SageMaker and applying it to large-scale machine learning problems, we realized that scalability is a key aspect we need to focus on. So, when designing Amazon SageMaker, we took on a challenge: to build machine learning algorithms that can handle an infinite amount of data. There are many other challenges as well. For example, machine learning models are often trained tens or hundreds of times. During development, many different versions of the eventual training job are run. Then, to choose the best hyperparameters, many training jobs are run simultaneously with slightly different configurations. Finally, re-training is performed at regular intervals (minutes, hours, or days) to keep the models updated with new data. Training, therefore, must be both fast and cost-effective.

To that end, Amazon SageMaker offers machine learning algorithms that train on indistinguishable-from-infinite amounts of data both quickly and cheaply. This sounds like a pipe dream. Nevertheless, this is exactly what we set out to do. This talk lifts the veil on some of the scientific, system design, and engineering decisions we made along the way.
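The phrase "indistinguishable-from-infinite amounts of data" implies single-pass, fixed-memory training. Below is a minimal sketch of that style of algorithm (illustrative only, not SageMaker code): the learner's state stays O(d) no matter how long the stream runs.

```python
# A minimal sketch of single-pass, fixed-memory training of the kind
# "infinite data" implies (illustrative only; not SageMaker code).
import numpy as np

class StreamingLinearLearner:
    """Linear regression trained by SGD in one pass over a stream:
    memory is O(d) no matter how many examples arrive."""
    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def partial_fit(self, x, y):
        err = self.w @ x - y
        self.w -= self.lr * err * x  # one gradient step per example

model = StreamingLinearLearner(dim=3)
rng = np.random.default_rng(0)
for _ in range(100_000):            # stand-in for an unbounded stream
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5])
    model.partial_fit(x, y)
print(model.w)  # approaches [1.0, -2.0, 0.5]
```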

Srividya Kannan Ramachandran, Marketing Science, Facebook

Srividya Kannan Ramachandran currently works in Marketing Science at Facebook. Her prior roles involved leading a data science team at Digitas and building the data science practice within marketing at Level 3 Communications. Srividya is one of the pioneers of the synthetic art movement and is represented by two art galleries in the Chelsea art district in New York. In 2016, she led a 900-member-strong Women’s Employee Resource Group at Level 3, aimed at creating a culture that inspires women to develop leadership abilities. Srividya is the author of a book on telecom policy. She studied at Columbia University and the University of Colorado, Boulder.

On Display at MLconf, Q&A During AM Break

Aesthetics in Synthetic Art

There has been an explosion of interest in generating art by effecting style transfer through generative adversarial networks. I have been showing AI-generated art in New York art galleries, and in this talk I want to share how I explore the aesthetic effects produced by different network configurations, as determined by the choice of hyperparameters, the depth of intermediate layers, or the strength of the transfer effect.

My training set is a mixed corpus drawn both from my own work (abstract photography and oil paintings) that I have shown publicly in the last year, and from representative samples of older prominent art movements, including Cubism, Impressionism, and Abstract Expressionism.

In this Q&A, I step away from the mechanistic aspect of generation and focus instead on the aesthetic implications of design choices made in the training phase.
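For readers curious where those design choices live in code, below is a hedged sketch of a generic Gatys-style Gram-matrix style loss (not the artist's exact setup): the choice of layers and the per-layer weights are exactly the "depth" and "strength" dials discussed above.

```python
# A hedged sketch of the aesthetic dials: style-transfer strength and
# layer depth via Gram-matrix losses (a generic recipe, not the
# artist's exact configuration).
import torch

def gram(feat):                       # feat: (C, H, W) feature map
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)    # normalized channel correlations

def style_loss(gen_feats, style_feats, layer_weights):
    """Deeper layers and larger weights push the result further
    toward the style source; these are the aesthetic dials."""
    return sum(w * torch.nn.functional.mse_loss(gram(g), gram(s))
               for g, s, w in zip(gen_feats, style_feats, layer_weights))

# Toy usage with random stand-in feature maps from two layers.
feats_g = [torch.randn(64, 32, 32), torch.randn(128, 16, 16)]
feats_s = [torch.randn(64, 32, 32), torch.randn(128, 16, 16)]
print(style_loss(feats_g, feats_s, layer_weights=[1.0, 10.0]))
```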

Franziska Bell, Senior Data Science Manager on the Platform Team, Uber

Franziska Bell is a Senior Data Science Manager on the Platform Team at Uber, where she founded the Anomaly Detection, Forecasting Platform and Natural Language Platform teams. In addition, she leads Applied Machine Learning, Behavioral Science, and Customer Support Data Science.

Before Uber, Franziska was a postdoc at Caltech, where she developed a novel, highly accurate approximate quantum molecular dynamics theory to calculate chemical reactions for large, complex systems such as enzymes. Franziska earned her Ph.D. in theoretical chemistry from UC Berkeley, focusing on the development of highly accurate yet computationally efficient approaches that helped unravel the mechanism of non-silicon-based solar cells and the properties of organic conductors.

Abstract Summary:

NLP Use Cases at Uber

At Uber, we are using natural language processing and conversational AI to improve the user experience. In my talk, I will delve into two use cases. In the first application, we use natural language processing and machine learning to improve our customer care. The other use case is the recent launch of a smart in-app reply system that allows driver-partners to respond to incoming rider messages at the click of a button. Finally, we will cover the components common to most conversational AI systems.
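One common recipe for a one-tap reply system, sketched below purely as an illustration (not Uber's implementation), is retrieval-based: encode the incoming message and rank a whitelist of safe canned replies by similarity. The `vocab_vecs` dictionary and the 50-dimensional vectors are hypothetical stand-ins for a trained encoder.

```python
# A generic retrieval-based smart-reply sketch (not Uber's system).
import numpy as np

def embed(text, vocab_vecs, dim=50):
    """Average word vectors as a cheap sentence encoder (a stand-in
    for a learned encoder; vectors here are 50-d by assumption)."""
    vecs = [vocab_vecs[w] for w in text.lower().split() if w in vocab_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_replies(message, candidate_replies, vocab_vecs, top_k=3):
    """Score each whitelisted reply against the incoming message."""
    m = embed(message, vocab_vecs)
    scored = [(r, float(embed(r, vocab_vecs) @ m)) for r in candidate_replies]
    return sorted(scored, key=lambda t: -t[1])[:top_k]

# Toy usage with random stand-in word vectors.
vv = {w: np.random.randn(50) for w in "where are you my is late driver on way".split()}
print(rank_replies("where are you", ["My driver is late", "On my way"], vv, top_k=1))
```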

Joan Xiao, Lead Machine Learning Scientist, Figure Eight

Joan Xiao is a Lead Machine Learning Scientist at Figure Eight, a human-in-the-loop machine learning and artificial intelligence company. In her role, she leads research innovation and applies novel technologies to a broad range of real-world problems. Previously, she led the data science team at H5, a data search and analytics service company in the e-discovery industry. Prior to that, she led a Big Data Analytics team at HP. Joan received her Ph.D. in Mathematics and her MS in Computer Science from the University of Pennsylvania.

Abstract

Deep Learning for Product Title Summarization
Online marketplaces often have millions of products, and product titles are typically made intentionally long so that products can be found by search engines. With voice shopping on the verge of taking off (it is estimated to hit $40+ billion across the U.S. and U.K. by 2022), short versions (summaries) of product titles are desirable to improve the user experience of voice shopping.

In this talk, we’ll present two different approaches to solving this problem using natural language processing and deep learning. We’ll give a historical overview of the technological advancements behind these approaches and compare their evaluation results on a real-world dataset.
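As a hedged illustration of the simpler end of this spectrum, the sketch below shortens titles extractively by keeping each title's most informative tokens under a TF-IDF score; the talk's actual models and dataset are not shown here.

```python
# An extractive title-shortening baseline (an illustration, not the
# talk's models): keep each title's most informative tokens.
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "Stainless Steel Insulated Water Bottle 32 oz Leak Proof BPA Free",
    "Wireless Bluetooth Over Ear Headphones with Microphone 40h Battery",
]
vec = TfidfVectorizer()
X = vec.fit_transform(titles)
vocab = vec.get_feature_names_out()

def shorten(title, row, k=4):
    """Keep the k highest-TF-IDF words, preserving original order."""
    scores = {vocab[j]: row[0, j] for j in row.nonzero()[1]}
    words = title.split()
    keep = sorted(words, key=lambda w: -scores.get(w.lower(), 0))[:k]
    return " ".join(w for w in words if w in keep)

for i, t in enumerate(titles):
    print(shorten(t, X[i]))
```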

Mike Tamir, Head of Data Science at Uber ATG, UC Berkeley Data Science faculty, and head of Phronesis ML Labs

Mike serves as Head of Data Science at Uber ATG, UC Berkeley Data Science faculty member, and head of Phronesis ML Labs. He has led teams of data scientists in the Bay Area as Chief Data Scientist for InterTrust and Takt, Director of Data Sciences for MetaScale/Sears, and CSO for Galvanize, where he founded the accredited galvanizeU-UNH Master of Science in Data Science degree and oversaw the company’s transformation from co-working space to data science organization.

Abstract

How to use Deep Learning to solve the “Fake News Problem”
In this talk we explore real-world applications of automated “fake news” evaluation using contemporary deep learning article vectorization and tagging. We begin with the use case and an evaluation of the contexts in which various deep learning techniques are appropriate for fake news evaluation. The technical material reviews several methodologies for article vectorization with classification pipelines, ranging from traditional to advanced deep-architecture techniques. We close with a discussion of troubleshooting and performance optimization when consolidating and evaluating these techniques on active datasets.
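A minimal sketch of the "traditional" end of that range, using TF-IDF article vectorization feeding a classification pipeline (the articles and labels below are toy data, not a real corpus):

```python
# A traditional article-vectorization + classification pipeline
# (toy data; a sketch of the baseline approach, not the talk's models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = ["senate passes budget bill after lengthy debate",
            "miracle pill cures all diseases overnight, doctors stunned"]
labels = [0, 1]  # 0 = credible, 1 = fake (illustrative labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(articles, labels)
print(clf.predict_proba(["scientists stunned by miracle cure"]))
```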

Prasanth Anbalagan, Senior Software Engineer (QE and Analysis) on the Artificial Intelligence Center of Excellence Team at Red Hat

Prasanth Anbalagan is a Senior Software Engineer (QE and Analysis) on the Artificial Intelligence Center of Excellence Team at Red Hat. Prasanth earned his M.S. and Ph.D. in Computer Science from North Carolina State University, focusing on software reliability engineering, predictive modeling, and automated software engineering. As a member of the AI team at Red Hat, Prasanth focuses on developing services and tools to analyze, manipulate, and visualize data and to execute automated operations as part of an analytics, machine learning, and AI platform.

Abstract

AI-Library: An Open Source Machine Learning Framework
Machine learning is widely used in software engineering to improve tasks like software development, testing, and maintenance. For continued improvement, machine learning techniques need to be integrated into the existing infrastructure that supports the different stages of the software life cycle. Such integration often presents challenges, such as implementing the machine learning models and selecting and deploying the right infrastructure to experiment with them, in addition to requiring data science background and skills. In this talk, we present AI-Library, an open source machine learning framework that includes machine learning algorithms as well as machine learning solutions to several software engineering use cases. AI-Library enables users to work with machine learning models without worrying about infrastructure issues, model complexity, or data science expertise. We share the lessons learned from our machine learning experiments over software artifacts at Red Hat and demonstrate usage of the framework with an example.

Grishma Jena, Cognitive Software Engineer with the Data Science for Marketing Team, IBM Watson 

Grishma is a Cognitive Software Engineer with the Data Science for Marketing team at IBM Watson. She earned her Master’s in Computer Science at the University of Pennsylvania. Her research interests are in machine learning and natural language processing. She was recently a mentor for the non-profit AI4ALL’s AI Project Fellowship, where she guided a group of high school students in using AI to prioritize 911 EMS calls. She is passionate about encouraging women and young people in technology. In her free time, she writes, cooks, and enjoys conducting workshops and delivering talks.

On Display at MLconf, Q&A During Lunch

Enterprise to Computer: Star Trek Chatbot

Personality and emotions play a vital role in defining human interactions. There has been a recent shift toward making conversational agents and chatbots appear more human-like. Adding a persona to a chatbot is essential to this goal and contributes to a better and more engaging user experience. In this work, we propose a design for a chatbot that captures the style of Star Trek by incorporating references from the show along with the peculiar tones of the fictional characters therein. The Enterprise to Computer bot (E2Cbot) treats Star Trek dialog style and general dialog style differently by using two recurrent neural network encoder-decoder models. The Star Trek dialog style uses sequence-to-sequence (SEQ2SEQ) models (Sutskever et al., 2014; Bahdanau et al., 2014) trained on Star Trek dialogs. The general dialog style uses a word graph to shift the response of the SEQ2SEQ model into the Star Trek domain. To evaluate the bot, we use perplexity and word overlap with the Star Trek vocabulary, and we perform further evaluation using human scores. This work is a joint project by Grishma Jena, Mansi Vashisht, João Sedoc and Abheek Basu under the guidance of Professor Lyle Ungar.
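For reference, here is a minimal SEQ2SEQ encoder-decoder skeleton of the kind cited (Sutskever et al., 2014), with placeholder vocabulary and layer sizes; it is a sketch of the architecture family, not the E2Cbot code.

```python
# A minimal SEQ2SEQ (encoder-decoder) skeleton; sizes are placeholders.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab=5000, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))       # compress the prompt
        dec, _ = self.decoder(self.embed(tgt), h)  # condition the reply
        return self.out(dec)                       # next-token logits

model = Seq2Seq()
src = torch.randint(0, 5000, (2, 10))  # a batch of tokenized prompts
tgt = torch.randint(0, 5000, (2, 12))  # shifted target replies
print(model(src, tgt).shape)           # (2, 12, 5000)
```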

Fedor Borisyuk, Technical Leader in the Domain of Computer Vision at Facebook

Fedor Borisyuk is a technical leader in the domain of computer vision at Facebook. He leads a team of machine learning engineers and researchers working on Photo Search and other exciting problems related to visual understanding. His interests lie in using large amounts of weakly/noisily annotated image data to build visual representations, modeling the relationship between the textual and image domains, visually similar recommendations, and building scalable user-facing visual understanding systems. Prior to joining Facebook he worked at Microsoft and LinkedIn, and he received his PhD from Lobachevsky Nizhny Novgorod State University.

Abstract

Image Search at Facebook: Making sense of one of the largest image databases in the world

Sharing photos has become one of the primary ways people communicate. Billions of photos are uploaded to Facebook every single day. Such a large scale makes it challenging to sift through and find the photos a person is interested in. To help people find the photos they’re looking for more easily, it is crucial to match their search queries with the right photos to satisfy search intent. This requires a deep understanding of both the photos and the queries, in multiple languages.

In this talk, we will present an overview of the infrastructure as well as machine learning models powering Facebook’s image search system.
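While the talk covers Facebook's actual infrastructure and models, a generic sketch of the underlying matching step is useful context: embed queries and photos into a shared space and rank photos by similarity. Everything below (dimensions, data) is illustrative, not Facebook's system.

```python
# A generic query-photo matching sketch: rank unit-normalized photo
# embeddings against a query embedding (illustrative only).
import numpy as np

def search(query_vec, photo_vecs, top_k=5):
    """photo_vecs: (N, d) unit-normalized photo embeddings;
    query_vec: unit-normalized query embedding."""
    scores = photo_vecs @ query_vec          # cosine similarity
    top = np.argsort(-scores)[:top_k]
    return top, scores[top]

# Toy usage: a query vector near photo 42 should retrieve it first.
photos = np.random.randn(1000, 128)
photos /= np.linalg.norm(photos, axis=1, keepdims=True)
q = photos[42] + 0.1 * np.random.randn(128)
q /= np.linalg.norm(q)
print(search(q, photos))
```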

Irina Rish, Researcher, AI Science, IBM T.J. Watson Research Center

Irina Rish is a researcher at the AI Foundations department of the IBM T.J. Watson Research Center. She received her MS in Applied Mathematics from the Moscow Gubkin Institute, Russia, and her PhD in Computer Science from the University of California, Irvine. Her areas of expertise include artificial intelligence and machine learning, with a particular focus on probabilistic graphical models, sparsity and compressed sensing, active learning, and their applications to various domains, ranging from diagnosis and performance management of distributed computer systems (“autonomic computing”) to predictive modeling and statistical biomarker discovery in neuroimaging and other biological data. Irina has published over 70 research papers, several book chapters, two edited books, and a monograph on Sparse Modeling, and has taught several tutorials and organized multiple workshops at machine-learning conferences, including NIPS, ICML, and ECML. She holds over 26 patents and several IBM awards. Irina currently serves on the editorial board of the Artificial Intelligence Journal (AIJ). As an adjunct professor in the EE Department of Columbia University, she has taught several advanced graduate courses on statistical learning and sparse signal modeling.

Abstract Summary:

AI for Neuroscience & Neuroscience for AI
AI and neuroscience share the same age-old goal: to understand the essence of intelligence. Thus, despite the different tools used and different questions explored by these disciplines, both have a lot to learn from each other. In this talk, I will summarize some of our recent projects that explore both directions: AI for neuro and neuro for AI. AI for neuro involves using machine learning to recognize mental states and identify statistical biomarkers of various mental disorders from heterogeneous data (neuroimaging, wearables, speech), as well as applications of our recently proposed hashing-based representation learning to dialog generation in depression therapy. Neuro for AI means drawing inspiration from neuroscience to develop better machine learning algorithms. In particular, I will focus on the continual (lifelong) learning objective and discuss several examples of neuro-inspired approaches, including (1) neurogenetic online model adaptation in nonstationary environments; (2) more biologically plausible alternatives to backpropagation, e.g., local optimization for neural net learning via alternating minimization with auxiliary activation variables, and co-activation memory; (3) modeling reward-driven attention and attention-driven reward in the contextual bandit setting; and (4) modeling and forecasting the behavior of coupled nonlinear dynamical systems such as the brain (from calcium imaging and fMRI) using a combination of the analytical van der Pol model with LSTMs, especially in small-data regimes, where such a hybrid approach outperforms both of its components used separately.

Dr. Yi Li, Machine Learning Research Scientist, Baidu Silicon Valley AI Lab

Dr. Yi Li is a machine learning research scientist at Baidu Silicon Valley AI Lab. Dr. Li’s research interests lie in ML/AI technologies for health care, and he is currently leading research in this direction at Baidu Research.

Abstract Summary:

Cancer Metastasis Detection With Neural Conditional Random Field:
Breast cancer diagnosis often requires accurate detection of metastasis in lymph nodes through whole-slide images (WSIs). Recent advances in deep convolutional neural networks (CNNs) have shown significant successes in medical image analysis, particularly in computational histopathology. Because of the extremely large size of WSIs, most methods divide a slide into many small image patches and classify each patch independently. However, neighboring patches often share spatial correlations, and ignoring these correlations may produce inconsistent predictions. In this talk, we introduce a neural conditional random field (NCRF) deep learning framework to detect cancer metastasis in WSIs. NCRF considers the spatial correlations between neighboring patches through a fully connected CRF that is directly incorporated on top of a CNN feature extractor. The whole deep network can be trained end-to-end with the standard back-propagation algorithm, with minor computational overhead from the CRF component. The CNN feature extractor can also benefit from considering spatial correlations via the CRF component. Compared to a baseline method that ignores spatial correlations, the proposed NCRF framework obtains probability maps of patch predictions with better visual quality. We also demonstrate that our method outperforms the baseline in cancer metastasis detection on the Camelyon16 dataset, achieving an average FROC score of 0.8096 on the test set. NCRF is open sourced at https://github.com/baidu-research/NCRF.
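The full implementation is in the linked repository; as a conceptual sketch only, the snippet below shows the spirit of the CRF component: each patch's logit is iteratively mixed with its 3x3 grid neighborhood, encouraging spatially consistent predictions. The mean-field-style update and weights here are simplifications, not the paper's exact formulation.

```python
# A conceptual sketch of coupling patch predictions to their grid
# neighbors (see https://github.com/baidu-research/NCRF for the
# actual NCRF code; this simplified update is illustrative only).
import torch
import torch.nn.functional as F

def crf_smooth(logits, pairwise_weight=0.5, iters=3):
    """logits: (1, 1, H, W) grid of patch tumor logits. Each update
    mixes in the average of the 3x3 neighborhood, encouraging
    spatially consistent predictions."""
    kernel = torch.ones(1, 1, 3, 3) / 9.0
    q = logits
    for _ in range(iters):
        neighbor = F.conv2d(q, kernel, padding=1)  # neighborhood mean
        q = logits + pairwise_weight * neighbor    # unary + pairwise term
    return torch.sigmoid(q)

grid = torch.randn(1, 1, 8, 8)  # toy patch logits from a CNN
print(crf_smooth(grid).shape)
```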

Forough Poursabzi, Researcher, Microsoft Research

Forough is a post-doctoral researcher at Microsoft Research New York City. She works in the interdisciplinary area of interpretable and interactive machine learning. Forough collaborates with psychologists to study human behavior when interacting with machine learning models. She uses these insights to design machine learning models that humans can use effectively. She is also interested in several aspects of fairness, accountability, and transparency in machine learning and their effect on users’ decision-making process. Forough holds a BE in computer engineering from the University of Tehran and a PhD in computer science from the University of Colorado at Boulder.

Abstract Summary:

Manipulating and Measuring Model Interpretability:
Machine learning is increasingly used to make decisions that affect people’s lives in critical domains like criminal justice, fair lending, and medicine. While most of the research in machine learning focuses on improving the performance of models on held-out datasets, this is seldom enough to convince end-users that these models are trustworthy and reliable in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods and helping end-users make informed decisions. Despite the growing body of work in developing interpretable models, there is still no consensus on the definition and quantification of interpretability. In this talk, I will argue that to understand interpretability, we need to bring humans in the loop and run human-subject experiments. I approach the problem of interpretability from an interdisciplinary perspective which builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust. I will talk about a set of controlled user experiments, where we manipulated various design factors in models that are commonly thought to make them more or less interpretable and measured their influence on users’ behavior. Our findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end-users.

Jerry Talton, Director of Data & Machine Learning, Carta

Jerry Talton is the Director of Data & Machine Learning at Carta, where he leads the company’s efforts to operationalize the ownership graph. Prior to joining Carta, he managed the Machine Learning Services team in Slack’s Search, Learning, & Intelligence group in Manhattan, and was the founder and CEO of Apropose, a data-driven design startup backed by NEA and Andreessen Horowitz. He holds a PhD in Computer Science from Stanford University, BS and MS degrees from the University of Illinois at Urbana-Champaign, and previously worked at Intel, Adobe, and Nvidia.

Title: Equity, Inequity, and Machine Learning
Carta was founded in 2012 with a mission to help private companies, public companies, investors, and employees manage their cap tables, valuations, portfolio investments, and equity plans. As a result, Carta is mapping one of the world’s most valuable proprietary datasets: the ownership graph. In this talk, I’ll present recent work analyzing how startups distribute ownership across their employees, and discuss the implications for companies looking to deploy more just and equitable compensation strategies. Then, I’ll sketch some fundamental questions machine learning can answer about ownership, and outline strategies for tackling them at scale.

Ivan Yamshchikov, Researcher, Max Planck Institute for Mathematics in the Sciences

Ivan Yamshchikov is a researcher at the Max Planck Institute for Mathematics in the Sciences.

On Display at MLconf, Q&A During Afternoon Break

Neurona: How We Generated Four New Kurt Cobain Songs with an RNN

We created an EP of four songs whose lyrics were written by a neural network trained to resemble Kurt Cobain (who would have turned 50 in 2017). We generated the lyrics and recorded the music, and Rob Carrol (an independent musician from New York) sang the generated lyrics. During training we used concatenated embeddings that, in addition to the standard word2vec representation of each word, also included its transcription (so that the network could learn phonetics), the author of the document, and other meta-information. You can read more about it here: https://medium.com/@creaited/model-by-neurona-b6208e2693d1
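A sketch of what such a concatenated input embedding might look like (the dimensions, the bag-of-phonemes encoding, and the author one-hot are assumptions for illustration, not the project's exact features):

```python
# A sketch of a concatenated embedding: word vector + phonetic
# encoding + author metadata (shapes are illustrative).
import numpy as np

def input_vector(word_vec, phoneme_ids, author_id,
                 n_phonemes=40, n_authors=10):
    phon = np.zeros(n_phonemes)
    phon[phoneme_ids] = 1.0            # bag of phonemes for the word
    author = np.zeros(n_authors)
    author[author_id] = 1.0            # one-hot author / style flag
    return np.concatenate([word_vec, phon, author])

w2v = np.random.randn(300)             # stand-in for a word2vec vector
x = input_vector(w2v, phoneme_ids=[3, 17, 22], author_id=4)
print(x.shape)  # (350,)
```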

Leslie Smith, Senior Research Scientist, US Naval Research Laboratory

Leslie N. Smith received combined BS and MS degrees in Physical Chemistry from the University of Connecticut in 1976 and a Ph.D. degree in chemical physics from the University of Illinois in 1979.

He is currently at the US Naval Research Laboratory in the Navy Center for Applied Research in Artificial Intelligence. His research focuses on deep learning, machine learning, computer vision, sparse modeling, and compressive sensing. Additionally, his interests include super-resolution, non-linear dimensionality reduction, function approximation, and feature selection.

Abstract Summary:

Competition Winning Learning Rates:
It is well known that the learning rate is the most important hyper-parameter to tune when training deep neural networks. Surprisingly, training with dynamic learning rates can lead to an order-of-magnitude speedup in training time. This talk will discuss my path from static learning rates to dynamic cyclical learning rates, and finally to fast training with very large learning rates (I named this technique “super-convergence”). In particular, I will show that very large learning rates are a preferred method for regularizing training because they provide the twin benefits of training speed and good generalization. The super-convergence method was integrated into the fast.ai library, and the fast.ai team used it to win the DAWNBench and Kaggle’s iMaterialist challenges.
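PyTorch ships a one-cycle scheduler inspired by this line of work, so a hedged minimal example of training with briefly very large learning rates looks like the following (fast.ai's own implementation differs in details, and the model and data here are toys):

```python
# A minimal one-cycle / super-convergence-style schedule using
# PyTorch's built-in scheduler (toy model and data).
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=0.5, total_steps=1000)  # briefly visit very large LRs

for step in range(1000):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # LR rises toward max_lr, then anneals, each batch
```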

Yevgeniy Vahlis, Head of Applied Machine Learning, Borealis AI

Yevgeniy Vahlis is the head of the Applied Machine Learning group at Borealis AI. Prior to joining Borealis AI, Yevgeniy led an applied research team at Georgian Partners, a late-stage venture capital fund, and worked at a number of tech companies including Amazon and Nymi. Yevgeniy kicked off his career at AT&T Labs in New York as a research scientist after completing his PhD in Computer Science at the University of Toronto and a year of postdoctoral studies at Columbia University.

Abstract Summary:

Differential Privacy in the Real World:
Differential privacy is making headlines thanks to the pioneering work of companies like Apple and Google, and it is now being used by companies of all sizes to provide data privacy guarantees. It is no secret that machine learning models can memorize (overfit) training data, and that through carefully crafted adversarial inputs machine learning models can be subverted by an attacker. Combine these facts with a model that aggregates data from a multitude of customers, and you have an AI-driven disaster waiting to happen. Yevgeniy will explain the core concepts of differential privacy, a defensive measure that is a potential solution to such threats, and share a behind-the-scenes look at how companies are successfully implementing differential privacy in their products.
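To ground the core concept, here is a minimal sketch of the classic Laplace mechanism, the building block behind many differential privacy deployments: noise is calibrated to the query's sensitivity and the privacy budget epsilon. The data and bounds below are illustrative.

```python
# The Laplace mechanism: release a mean with epsilon-DP by adding
# noise scaled to the query's sensitivity (toy data).
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """For bounded values, the sensitivity of the mean is
    (upper - lower) / n; Laplace noise with scale sensitivity/epsilon
    gives an epsilon-differentially-private release."""
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return values.mean() + noise

ages = np.array([31, 45, 27, 52, 38])
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))
```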

Michael Lindon, Senior Data Scientist, Tesla

Michael Lindon is a Senior Data Scientist on the Tesla Supply Chain Automation team. He received a combined BSc-MPhys degree from the University of Warwick in 2012, and MSc and PhD degrees from Duke University in 2015 and 2018 respectively, where he specialized in Bayesian methods, decision theory, and optimization.

Abstract Summary:

When you’re going through hell, keep going: Using machine learning to deliver Tesla Model 3s:
In August 2018, the Tesla Model 3 outsold all BMW passenger cars combined in the United States. How did Tesla scale vehicle delivery to customers as it more than tripled the number of cars produced, and delivered, throughout 2018? In this applied talk, Michael will describe some of his recent work at Tesla developing predictive models for ETAs. Forecasting accurate ETAs, along with quantifying their uncertainty, in a complex logistics network is imperative to the smooth running of operations and the making of optimal decisions. This is a challenging problem due to the sheer complexity of outbound logistics: transit lead times and dwells are influenced by a plethora of factors and further complicated by rich dependency structures across the network. This talk walks through a complete Bayesian analysis, from modeling to inference to decision theory.
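As a toy illustration of the Bayesian flavor described (not Tesla's model), the sketch below updates a normal prior over a route's transit time with observed lead times and reports a predictive interval rather than a point ETA; all numbers are hypothetical.

```python
# A toy conjugate normal update for a route's transit time, yielding
# a predictive interval instead of a point ETA (illustrative only).
import numpy as np

def posterior_eta(observed_days, prior_mean=7.0, prior_var=4.0, noise_var=1.0):
    """Normal-normal update with known observation noise: combine a
    prior over transit time with observed lead times."""
    n = len(observed_days)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var +
                            np.sum(observed_days) / noise_var)
    pred_sd = np.sqrt(post_var + noise_var)   # predictive uncertainty
    return post_mean, (post_mean - 2 * pred_sd, post_mean + 2 * pred_sd)

mean, interval = posterior_eta([6.5, 8.0, 7.2])
print(f"ETA ~ {mean:.1f} days, ~95% interval {interval}")
```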

Jeff McGehee, Senior Data Scientist and IoT Practice Lead, Very

As a Senior Data Scientist and IoT Practice Lead at Very, Jeff McGehee works with clients to build data-driven products that interact with the physical world. Jeff is naturally drawn to problems that most people consider “unsolvable,” and he enjoys solving those kinds of problems at Very. Jeff brings his applied mathematics and machine learning knowledge to a vast array of problems and projects involving images, natural language, social graphs, temporal data, and geospatial data.
In addition to holding a patent for his work on a computer-implemented intelligent alignment method for color-sensing devices, Jeff has published research on optimal torque control of an integrated starter–generator using genetic algorithms.
Jeff holds a BS in Mechanical Engineering from Tennessee Tech University, an MS in Mechanical Engineering from Tennessee Tech University, and an MS in Computer Science with a focus in Machine Learning from Georgia Institute of Technology.

Event Emcees:

Chris Knorowski, CTO/Founder of SensiML

Chris Knorowski is an experienced software architect and data scientist who is currently the CTO/Founder of SensiML. At SensiML he is focused on developing a distributed machine learning framework that extends analytics to the edge by allowing developers to rapidly create smart sensor algorithms for resource-constrained embedded devices. Prior to working at SensiML he worked as a Data Scientist/Software Engineer for Intel Corporation, DuPont Pioneer, and Strongbox Data Solutions. He played a key role in the development of the Intel Knowledge Builder toolkit, developing a scalable cloud server infrastructure for data management and machine learning algorithms for the Intel Curie. He holds a bachelor’s degree in Physics from Virginia Tech and a PhD in Physics from Iowa State University.

Event Emcee- MLconf 2018

Chloe Liu, Lead for the Data Science Team, Lumos Labs

Chloe leads the data science team at Lumos Labs. She has more than 10 years of experience working as a data scientist and data science team leader across many different industries. Her current passion is creating business value from data and building beautiful data products.

Event Emcee- MLconf 2018

Roger Pincombe, Lead Software Engineer, Capital One

Roger is a lead software engineer at Capital One in San Francisco, with diverse technical and business experience. He has won over 15 hackathons and took his own startup through Techstars Boulder. His work often involves web scraping, machine learning, and integrating many disparate systems. Some of his more interesting projects include the AllThePeople.net social media data mining system and a teleprompter for Google Glass.

Event Emcee- MLconf 2018

Alex Korbonits, Machine Learning Engineer, Textio

Alex is a machine learning engineer at augmented writing platform Textio, where he ships machine learning models that drive the platform’s predictions and guidance. Formerly, Alex was Remitly’s first data scientist where he worked extensively on feature extraction and shipping machine learning models. Outside of work, he is an avid sailor and burgeoning writer. Alex is a graduate of the University of Chicago with degrees in Mathematics and Economics.

Event Emcee- MLconf 2018

Sarah Mead, Founder, Tradeink

Sarah Mead is a native San Franciscan who received her BA at The Colorado College in Colorado Springs, CO, where she studied a pre-law track with a minor in film/data in a highly innovative undergraduate educational program. She worked as a producer in film and has since drawn on her inspiration from travel and from learning at her alma mater to start Tradeink, a technology platform for higher learning whose mission is to increase access to technical, scientific, and humanitarian learning at a level of excellence. The Urban Campus exists for exploration and learning.

Event Emcee- MLconf 2018