Anna Choromanska, Department of Electrical and Computer Engineering at NYU Tandon School of Engineering

Professor Anna Choromanska completed her postdoctoral studies in the Computer Science Department at the Courant Institute of Mathematical Sciences at NYU and joined the Department of Electrical and Computer Engineering at NYU Tandon School of Engineering in Spring 2017 as an Assistant Professor. She is affiliated with the NYU Center for Data Science.
Prof. Choromanska’s research interests focus on machine learning, both theoretical and applied to a variety of real-life phenomena. Her main research projects currently focus on numerical optimization, deep learning, large-scale data analysis, and learning from data streams. Prof. Choromanska also works on machine learning for robotics and autonomous systems, and collaborates with NVIDIA (New Jersey lab) on autonomous car driving.
Prof. Choromanska was a recipient of The Fu Foundation School of Engineering and Applied Science Presidential Fellowship at Columbia University in the City of New York. She has co-authored several international conference papers and refereed journal publications, as well as book chapters. The results of her work are used in production by Facebook (training production vision systems and an entry to the COCO competition) and Baidu, and in product development by NVIDIA. She is also a contributor to the open-source fast out-of-core learning system Vowpal Wabbit (aka VW). Prof. Choromanska has given over 50 invited and conference talks and serves as a book editor (MIT Press volume), an organizer of top machine learning events (such as workshops at the International Conference on Neural Information Processing Systems), and a reviewer and area chair for several top machine learning conferences and journals.

Abstract Summary:

Data-driven challenges in AI: scale, information selection, and safety:

The talk will focus on data-driven challenges in AI. First, it will address scaling algorithms to massive data sets, in particular multi-class and multi-label classification problems where the number of classes (k) is extremely large, with the goal of obtaining train and test time complexity logarithmic in the number of classes. A reduction of these problems to a set of binary classification problems organized in a tree structure will be discussed, along with extensions to deep learning. Second, the talk will consider the problem of information selection for efficient inference in the context of autonomous platforms equipped with multiple perception sensors. Third, it will address safety issues in modern AI systems and present a GAN-based online monitoring framework for continuous real-time safety and security in learning-based control systems dedicated to autonomous vehicles.
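The tree-structured reduction behind the logarithmic complexity can be illustrated with a small sketch (all names here are illustrative, not from the talk): a balanced binary tree over the label set, where each internal node would hold a trained binary classifier, so test-time prediction makes one binary decision per level, i.e., about log2(k) decisions for k classes.

```python
import math

def build_tree(classes):
    # Recursively split the label set in half; leaves hold single classes.
    # In a trained system, each internal node would hold a learned binary
    # classifier deciding between the left and right halves of the label set.
    if len(classes) == 1:
        return classes[0]
    mid = len(classes) // 2
    return (build_tree(classes[:mid]), build_tree(classes[mid:]))

def predict(tree, node_decision, x):
    # Descend from the root, making one binary decision per level.
    decisions = 0
    node = tree
    while isinstance(node, tuple):
        left, right = node
        node = left if node_decision(left, right, x) else right
        decisions += 1
    return node, decisions

k = 1024
tree = build_tree(list(range(k)))
# Stand-in node "classifier" that always routes left; a real one would use x.
label, cost = predict(tree, lambda left, right, x: True, x=None)
# cost is ceil(log2(k)) = 10 decisions for k = 1024 classes.
```

The same routing structure applies at training time: each example generates one binary training example per node on its path, which is what keeps train-time cost logarithmic as well.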

Neel Sundaresan, Partner Director, Microsoft Cloud and AI Division

Neel Sundaresan is a Partner Director in the Microsoft Cloud and AI Division, where he leads projects on infusing AI into software development. Prior to Microsoft, he was the head of eBay Research Labs and eBay Data Labs. He was also a research manager at IBM Research and a founding CTO of a CRM company. He has a PhD in Computer Science from Indiana University Bloomington and degrees in computer science and mathematics from IIT Mumbai. He has over 100 technical publications and over 160 issued patents, and is a frequent speaker at national and international conferences.

Abstract Summary:

Teaching a Machine to Code:

There has been extensive work in machine learning for speech, vision, text, and machine translation. One new area gaining a lot of interest is program synthesis. With the availability of vast amounts of open-source code in a wide variety of languages from sources like GitHub, along with associated textual information from platforms like Stack Overflow, the field is ripe for applying the latest advances in machine learning to automation in software engineering in general and program synthesis in particular. While programs are highly structured, obeying the syntactic and semantic rules imposed by compilers, the intent behind the code is decipherable only by humans. By using associated comments and PR reviews along with signatures from code structures, machine learning techniques can be applied to implement code completion and to assist in other software development processes such as automated testing, risk analysis, and review. We will discuss how we have used machine learning and, in some cases, deep learning methods to assist in this automation. More specifically, I will talk about deep learning techniques for understanding programs beyond their semantics, identifying intent and idioms and matching them with comments, so as to automate and create assistive tools for the software developer during the editing, building, PR, and CI/CD workflows of software deployment. Mundane tasks during software development can be automated, while more complex tasks can be alleviated with assistive AI solutions. We will walk through examples and share experiences on what worked and what didn’t in deploying such models to production.
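As a heavily simplified toy of the comment-to-code matching idea (not Microsoft's actual method, which uses deep learning), even a bag-of-tokens similarity captures some of the vocabulary shared between code identifiers and natural-language comments:

```python
import re

def tokens(text):
    # Lowercase word tokens from code identifiers or prose; splitting on
    # non-letters also breaks snake_case identifiers into words.
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a, b):
    # Set-overlap similarity between two token sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def best_comment(code_snippet, comments):
    # Rank candidate comments by token overlap with the code.
    code_toks = tokens(code_snippet)
    return max(comments, key=lambda c: jaccard(code_toks, tokens(c)))

snippet = "def sort_list(items): return sorted(items)"
candidates = [
    "open a file for reading",
    "sort a list of items",
    "parse a json document",
]
match = best_comment(snippet, candidates)
# The identifier tokens "sort", "list", "items" overlap with the second
# comment, so it is selected.
```

Learned embeddings of code and text play the same matching role in the actual systems, but with far richer notions of similarity than token overlap.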

Emily Pitler, Staff Research Scientist, Google AI

Emily is a Staff Research Scientist at Google AI in NYC. Her research focus is natural language understanding. She leads a team of researchers and engineers (spread across NYC, Mountain View, and San Francisco) working on making models for language understanding more robust. Emily received her PhD in Computer and Information Science from the University of Pennsylvania in 2013 and her BS in Computer Science from Yale in 2007.

Esperanza López Aguilera, Machine Learning Engineer, Kx

Esperanza López Aguilera works in the Kx machine learning team on ML projects that use kdb+ for time-series analytics including for streaming IoT data, precision manufacturing, space analytics, and telecommunications. She has a Bachelor’s Degree in Mathematics from the Universidad de Granada, Spain and a Master’s Degree in Big Data Analytics from the Universidad Carlos III, Madrid.

Abstract Summary:

Using a Bayesian Neural Network in the Detection of Exoplanets:

This talk will describe a NASA Frontier Development Lab research project to analyze satellite data (TESS) for the discovery of exoplanets. The project used time-series TCE data, which posed a challenge in the design of our ML application due to the high volume and added complexity associated with time-series data analytics. This issue was addressed by using the programming language kdb+/q. Given the complexity of the data, different models were trained and tested to compare performance. Ultimately a Bayesian Neural Network was chosen, and we obtained 91% accuracy and 83% precision.
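The defining feature of a Bayesian neural network, a distribution over weights rather than point estimates, can be sketched with a single logistic unit (a toy, not the project's actual model): sampling weights from an approximate posterior yields both a mean prediction and an uncertainty estimate.

```python
import math
import random

random.seed(0)

def predict_with_uncertainty(x, w_mean, w_std, n_samples=2000):
    # Monte Carlo estimate: sample the weight from its (approximate)
    # posterior, run the one-unit "network", then aggregate the sampled
    # predictions into a mean and a variance.
    preds = []
    for _ in range(n_samples):
        w = random.gauss(w_mean, w_std)
        preds.append(1.0 / (1.0 + math.exp(-w * x)))
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return mean, var

# A wider posterior over the weight yields a more uncertain prediction.
m_narrow, v_narrow = predict_with_uncertainty(1.0, w_mean=2.0, w_std=0.1)
m_wide, v_wide = predict_with_uncertainty(1.0, w_mean=2.0, w_std=1.5)
```

This per-prediction uncertainty is what makes Bayesian networks attractive for candidate vetting: low-confidence detections can be flagged for follow-up rather than accepted or rejected outright.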

Liliana Cruz-Lopez, Columbia University

Liliana is a graduate student in the Data Science Institute at Columbia University. Her areas of interest are applied machine learning, including reinforcement learning and deep learning, for a variety of industry domains including healthcare, smarter cities, and financial engineering. She is passionate about exploring how such data-driven analysis could make the world a better place. Liliana has worked in a variety of industry domains, including a healthcare startup where she had the opportunity to build a real-time AI-driven connected-health analytics platform for analyzing patient data. She has developed various analytic models that provide deeper insights into diabetic, cardiac, and sleep disorders from medical and sensor data.

Liliana is president of Diversity in Graduate Engineering (DGE) at Columbia University. She is a regular speaker at and organizer of events focused on addressing and promoting causes and challenges for under-represented students in computing and technology. Given her own experience growing up, this cause is very personal and important to her. Liliana promotes awareness of and campaigns for this cause through organized networking events and workshops at Columbia University.

Abstract Summary:

Deep Reinforcement Learning based Insulin Controller for Effective Type-1 Diabetic Care:

According to a 2017 CDC report, more than 100 million U.S. adults are now living with diabetes or prediabetes. The disease results from high blood glucose (blood sugar) due to an inability to properly derive energy from food, primarily in the form of glucose. Effective diabetic care for insulin-dependent Type-1 patients requires that a healthy glucose level is maintained throughout the day with minimal fluctuations in either direction. The goal of insulin-dependent diabetic care is to administer the appropriate amount of insulin at the appropriate time so that the glucose level stays near the target level without becoming hypo- or hyperglycemic.

This research work proposes and explores the effectiveness of Deep Reinforcement Learning models as the insulin controller for Type-1 diabetes. Given the nature of the insulin-glucose dynamics, Reinforcement Learning based approaches seem more suitable than the model-driven controllers typically employed for diabetic care. Specifically, we adapt the Deep Deterministic Policy Gradient (DDPG) algorithm, a model-free, off-policy actor-critic algorithm that uses deep function approximators to learn policies in high-dimensional, continuous action spaces, and we propose and analyze a DDPG-based insulin controller to study its efficacy in achieving better glucose control.

We implement our approach in the SimGlucose environment, a recently proposed Reinforcement Learning software platform that supports the OpenAI Gym interface, to evaluate our proposed controller. We compare the performance of the DDPG-based insulin controller with that of the widely used Padova model-based insulin controller. Our simulation-driven evaluations indicate that DDPG-based controllers react to and control blood sugar fluctuations better than model-driven controllers. While further evaluations with real datasets are required, the preliminary results indicate that Deep Reinforcement Learning based insulin controllers are promising candidates for better glucose control in Type-1 diabetic care.
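One concrete ingredient of DDPG is the soft update of the target actor and critic networks, which stabilizes learning by moving target parameters only a small step toward the learned parameters each iteration. A minimal sketch, with network parameters represented as plain dicts of floats rather than actual neural networks (illustrative, not the controller code from this work):

```python
def soft_update(target_params, source_params, tau=0.005):
    # DDPG soft target update: theta_target <- tau * theta + (1 - tau) * theta_target.
    # Small tau means the target network trails the learned network slowly,
    # which keeps the critic's bootstrapped targets stable.
    for name in target_params:
        target_params[name] = (
            tau * source_params[name] + (1.0 - tau) * target_params[name]
        )

critic = {"w": 1.0}
target_critic = {"w": 0.0}
soft_update(target_critic, critic, tau=0.1)
# target_critic["w"] moves 10% of the way toward the critic's value (0.1).
```

In the full algorithm, the same update is applied to both the actor and the critic after each gradient step on a minibatch sampled from the replay buffer.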

Soumith Chintala, Researcher, Facebook AI Research

Soumith Chintala is a Researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games, and large-scale high-performance deep learning. Prior to joining Facebook in August 2014, he worked at MuseAmi, where he built deep learning models for music and vision targeted at mobile devices. He holds a Master’s in CS from NYU and spent time in Yann LeCun’s NYU lab building deep learning models for pedestrian detection, natural-image OCR, and depth images, among others.

Abstract Summary:

Increasing the Impact of AI Through Better Software:

The state of artificial intelligence continues to advance rapidly, with breakthroughs in areas from reinforcement learning to generative adversarial networks holding the potential to transform how we go about our day-to-day lives. Learn how modern software frameworks and tooling, paired with cutting-edge hardware, are enabling researchers to take state-of-the-art research and deploy it at scale in areas from autonomous vehicles to medical imaging. We will dive deep into the latest updates to the PyTorch deep learning framework, including distributed training, the C++ frontend, quantization, and new libraries to support development.

Rishabh Mehrotra, Research Scientist, Spotify Research

Rishabh Mehrotra is a Research Scientist at Spotify Research in London. He obtained his PhD in machine learning and information retrieval from University College London, where he was partially supported by a Google Research Award. His PhD research focused on the inference of search tasks from query logs and their applications. His current research focuses on bandit-based recommendations, counterfactual analysis, and experimentation. Some of his recent work has been published at top conferences including WWW, SIGIR, NAACL, CIKM, RecSys, and WSDM. He has co-taught a number of tutorials at leading conferences (WWW and CIKM) and was recently invited to teach a course on “Learning from User Interactions” at the Russian Summer School on Information Retrieval and the ACM SIGKDD Africa Summer School on Machine Learning for Search.

Abstract Summary:

Recommendations in a Marketplace: Personalizing Explainable Recommendations with Multi-objective Contextual Bandits:

In recent years, two-sided marketplaces have emerged as viable business models in many real-world applications (e.g. Amazon, Airbnb, Spotify, YouTube), wherein the platforms have customers not only on the demand side (e.g. users) but also on the supply side (e.g. retailers, artists). Such multi-sided marketplaces involve interactions between multiple stakeholders, each with assorted needs. While traditional recommender systems focus specifically on increasing consumer satisfaction by providing relevant content to consumers, two-sided marketplaces face the interesting problem of also optimizing their models for supplier preferences and visibility.

In this talk, we begin by describing a contextual bandit model developed for serving explainable music recommendations to users and showcase the need for explicitly considering supplier-centric objectives during optimization. To jointly optimize the objectives of the different marketplace constituents, we present a multi-objective contextual bandit model aimed at maximizing long-term vectorial rewards across different competing objectives. Finally, we discuss theoretical performance guarantees as well as experimental results with historical log data and tests with live production traffic in a large-scale music recommendation service.
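To make the joint-optimization idea concrete, here is a heavily simplified sketch: a non-contextual bandit with a fixed weighted-sum scalarization of the vectorial reward. The model in the talk is a contextual bandit with long-term vectorial rewards and theoretical guarantees, so everything below (arm payoffs, weights, the epsilon-greedy rule) is an illustrative assumption, not Spotify's system.

```python
import random

random.seed(1)

def scalarize(rewards, weights):
    # Collapse competing objectives (e.g., user satisfaction, supplier
    # exposure) into one scalar via a weighted sum.
    return sum(w * r for w, r in zip(weights, rewards))

class EpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Arm 0 pleases users but starves suppliers; arm 1 balances both objectives.
arm_rewards = [(0.9, 0.1), (0.5, 0.6)]
weights = (0.5, 0.5)
bandit = EpsilonGreedy(n_arms=2)
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, scalarize(arm_rewards[arm], weights))
# With equal weights, arm 1's scalarized reward (0.55) beats arm 0's (0.5),
# so the bandit ends up pulling arm 1 most of the time.
```

The choice of weights is itself a marketplace policy decision: shifting weight toward the supplier objective trades some consumer satisfaction for supplier visibility, which is exactly the tension the multi-objective formulation makes explicit.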

Madalina Fiterau, Assistant Professor, University of Massachusetts Amherst

Madalina Fiterau is an Assistant Professor in the College of Information and Computer Sciences at UMass Amherst. She completed a PhD in Machine Learning at Carnegie Mellon University (Fall 2015) and a postdoc at Stanford University (Fall 2018). Madalina is currently expanding her research on interpretable models, in part by applying deep learning to obtain salient representations from unstructured biomedical data, including time series, text, and images. She is the recipient of the Marr Prize for Best Paper at ICCV 2015 and of the Star Research Award at the Annual Congress of the Society of Critical Care Medicine 2016. Madalina has co-organized NIPS workshops on the topic of Machine Learning in Healthcare in 2013, 2014, 2016, 2017, and 2018.

Abstract Summary:

Hybrid Machine Learning Methods for the Interpretation and Integration of Heterogeneous Multimodal Data:

The prevalence of smartphones and wearable devices and the widespread use of electronic health records have led to a surge in multimodal health data that is noisy, non-uniform, and collected at an unprecedented scale. My research focuses on machine learning techniques that learn expressive representations of multimodal, heterogeneous data for biomedical predictive models designed to interact with domain experts. In the first part of the talk, the focus is on techniques for partitioning data and leveraging low-dimensional structure to enable visualization and annotation by humans. The latter part addresses the construction of hybrid models that combine deep learning with random forests, and the fusing of structured information into temporal representation learning. This array of methods obviates the need for feature engineering while improving on the state of the art for diverse biomedical applications. Use cases include the classification of alerts in a vital sign monitoring system, the prediction of surgical outcomes in children with cerebral palsy, and forecasting the progression of osteoarthritis from subjects’ physical activity. Finally, I will present the use of weak supervision for the classification of rare aortic valve malformations from unlabeled cardiac MRI sequences.

Roy Lowrance, Chief Scientist and Co-Founder at 7 Chord Inc.

Roy is a visionary technologist and AI researcher, as well as the brain behind BondDroid™. Former CTO of Capital One and Reuters, he designed and led the development of the Center for Data Science at NYU. He wrote his PhD in Machine Learning on the topic of Predicting the Market Value of Single-Family Residential Real Estate under the supervision of Yann LeCun and Dennis Shasha (also an advisor to 7 Chord). Roy’s multi-year research on applications of AI to financial instrument pricing gave birth to BondDroid™, 7 Chord’s proprietary price prediction and trading signal engine for bonds, now deployed in live trading environments at several financial institutions. Roy also has an MBA from Harvard and a BA in Math from Vanderbilt University.

Abstract Summary:

Predicting Bond Prices: Regime Changes:

Making predictions in financial markets is challenging because the price formation mechanism can change from time to time, a process academics call “concept drift”. We have implemented a unique machine learning approach to making short-term price predictions for corporate bonds, specifically designed to maintain high prediction accuracy across market regimes, issuer credit quality, and instrument life cycles. Our commercially available engine, BondDroid™, is currently used by several large buy-side and sell-side institutions and predicts bid and ask prices for thousands of corporate bonds in real time. The problem of identifying and automatically reacting to a regime change is not unique to financial markets, so our conclusions are useful and applicable beyond finance.
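The regime-change detection problem can be sketched with a simple monitor (an illustrative baseline, not BondDroid's proprietary mechanism): flag possible drift when the prediction error over a recent window climbs well above the long-run average error.

```python
from collections import deque

class DriftMonitor:
    # Flag a possible regime change when the mean error over a recent
    # window substantially exceeds the running long-run baseline error.
    def __init__(self, window=50, ratio=2.0):
        self.recent = deque(maxlen=window)  # sliding window of recent errors
        self.total_error = 0.0
        self.n = 0
        self.ratio = ratio

    def observe(self, error):
        self.recent.append(error)
        self.total_error += error
        self.n += 1
        baseline = self.total_error / self.n
        recent_mean = sum(self.recent) / len(self.recent)
        # Only flag once the window is full, to avoid noisy early triggers.
        return (len(self.recent) == self.recent.maxlen
                and recent_mean > self.ratio * baseline)

# Stable regime (small errors), then a regime change (large errors).
errors = [0.1] * 200 + [1.0] * 100
monitor = DriftMonitor()
flags = [monitor.observe(e) for e in errors]
# No flag during the stable regime; the monitor fires shortly after the change.
```

A production system would couple such a trigger to a response, such as reweighting recent data, switching models, or widening predicted bid-ask uncertainty, rather than merely raising a flag.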

Amir Sadoughi, Senior Software Development Engineer, Amazon Web Services

Amir Sadoughi is a Senior Software Development Engineer in the Amazon AI Labs group, working on Amazon SageMaker algorithms. He is passionate about technologies at the intersection of distributed systems and machine learning.

Abstract Summary:

Developing large-scale machine learning algorithms on Amazon SageMaker:

At AWS, we continue to strive to enable builders to build cutting-edge technologies faster in a secure, reliable, and scalable fashion. Machine learning is one such transformational technology that is top of mind not only for business decision makers but also developers and data scientists.

Amazon SageMaker enables machine learning developers and data scientists to build, train, and deploy machine learning models quickly and easily, at any scale. In this talk, we will dive deep into the software development behind the Amazon SageMaker built-in algorithms, including implementation, integration, testing, and benchmarking. We will present our learnings from tackling the challenges of implementing scalable machine learning algorithms that work across various training and inference requirements. We will also discuss how users can build and vend their own machine learning algorithms and models as part of the AWS Marketplace for Machine Learning and use them with Amazon SageMaker.

Kerry Weinberg, Senior Data Scientist at Amgen

Kerry Weinberg leads Data Science for Digital Health & Innovation at Amgen. Her team focuses on applying emerging data-analytic techniques like machine learning and artificial intelligence to improve Amgen’s agility, better reach patients, and improve digital health by better identifying patients, treating them, and enabling their adherence. Prior to this role, Kerry led AI strategy for Process Development at Amgen. Before joining Amgen, Kerry received her MBA and an M.S. in Biological Engineering from MIT as part of the Leaders for Global Operations Program. She previously led systems engineering efforts for high-speed cell sorters at Beckman Coulter. Kerry holds a B.S. in Biological Engineering, also from MIT.