Speakers:

Anima Anandkumar, Principal Scientist, Amazon Web Services, and Bren Professor, Caltech

Anima Anandkumar is a principal scientist at Amazon Web Services and a Bren Professor in the Caltech CMS department. Her research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards, including the Alfred P. Sloan Fellowship, the Microsoft Faculty Fellowship, a Google Research Award, ARO and AFOSR Young Investigator Awards, an NSF CAREER Award, the Early Career Excellence in Research Award at UCI, the Best Thesis Award from the ACM SIGMETRICS society, an IBM Fran Allen PhD Fellowship, and several best paper awards. She has been featured in a number of forums, such as YourStory, a Quora ML session, O’Reilly Media, and others. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, an assistant professor at UC Irvine between 2010 and 2016, and a visiting researcher at Microsoft Research New England in 2012 and 2014.

Abstract Summary:

Large-scale Machine Learning: Deep, Distributed and Multi-Dimensional:
Modern machine learning involves deep neural network architectures, which yield state-of-the-art performance in multiple domains such as computer vision, natural language processing, and speech recognition. As data and models scale, it becomes necessary to use multiple processing units for both training and inference. Apache MXNet is an open-source framework developed for distributed deep learning. I will describe its underlying lightweight hierarchical parameter server architecture, which results in high efficiency in distributed settings.
Pushing the current boundaries of deep learning requires using multiple dimensions and modalities. These can be encoded into tensors, which are natural extensions of matrices. We present new deep learning architectures that preserve the multi-dimensional information in data end-to-end. We show that tensor contraction and regression layers are an effective replacement for fully connected layers in deep learning architectures. They result in significant space savings with negligible performance degradation. These functionalities are available in the TensorLy package with an MXNet backend interface for large-scale efficient learning.
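As a rough sketch of the idea behind tensor contraction layers (not TensorLy's actual implementation), the snippet below contracts each mode of an activation tensor with a small factor matrix instead of flattening it for a fully connected layer; all shapes and names here are illustrative assumptions:

```python
import numpy as np

def tensor_contraction_layer(x, w_h, w_w, w_c):
    # x: (batch, H, W, C) activations; contract each spatial/channel mode
    # with a small factor matrix, preserving multi-dimensional structure
    return np.einsum('bhwc,hi,wj,ck->bijk', x, w_h, w_w, w_c)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16, 32))
out = tensor_contraction_layer(x,
                               rng.standard_normal((16, 4)),   # H -> 4
                               rng.standard_normal((16, 4)),   # W -> 4
                               rng.standard_normal((32, 8)))   # C -> 8
print(out.shape)  # (8, 4, 4, 8)
```

The three factor matrices above hold 16·4 + 16·4 + 32·8 = 384 weights, whereas flattening and mapping to a comparable 128-unit dense layer would need 16·16·32·128 ≈ 1M, which is the kind of space saving the abstract describes.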

View the slides for this presentation »

Watch this presentation on YouTube »

Jonas Schneider, Head of Engineering for Robotics, OpenAI

Jonas leads technology development for OpenAI’s robotics group, developing methods to apply machine learning and AI to robots. He also helped build the infrastructure to scale OpenAI’s distributed ML systems to thousands of machines.

Abstract Summary:

Machine Learning Systems at Scale:
OpenAI is a non-profit research company, discovering and enacting the path to safe artificial general intelligence. As part of our work, we regularly push the limits of scalability in cutting-edge ML algorithms. We’ve found that in many cases, designing the systems we build around the core algorithms is as important as designing the algorithms themselves. This means that many systems engineering areas, such as distributed computing, networking, and orchestration, are crucial for machine learning to succeed on large problems requiring thousands of computers. As a result, at OpenAI engineers and researchers work closely together to build these large systems as opposed to a strict researcher/engineer split. In this talk, we will go over some of the lessons we’ve learned, and how they come together in the design and internals of our system for learning-based robotics research.

View the slides for this presentation »

Watch this presentation on YouTube »

Doug Eck, Research Scientist, Google

Doug leads Magenta, a Google Brain project working to generate music, video, image and text using deep learning and reinforcement learning. A main goal of Magenta is to better understand how AI can enable artists and musicians to express themselves in innovative new ways. Before Magenta, Doug led the Google Play Music search and recommendation team. From 2003 to 2010, Doug was a faculty member at the University of Montreal’s MILA machine learning lab, where he worked on expressive music performance and automatic tagging of music audio.

Abstract Summary:

The Role of AI and Machine Learning in Creativity:
I’ll discuss Magenta, a Google Brain project investigating music and art generation using deep learning and reinforcement learning. I’ll describe the goals of Magenta and how it fits into the general trend of AI moving into our daily lives. One crucial question is: where do AI and machine learning fit in the creative process? I’ll argue that it’s about augmenting and extending the artist rather than just creating artifacts (songs, paintings, etc.) with machines. I’ll talk about two recent projects. In the first, we explore the use of recurrent neural networks to extend musical phrases in different ways. In the second, we look at teaching a neural network to draw with strokes. This will be a high-level overview talk requiring no knowledge of AI or machine learning.

View the slides for this presentation »

Watch this presentation on YouTube »

Tamara G. Kolda, Distinguished Member of Technical Staff, Sandia National Laboratories

Tamara G. Kolda is a member of the Data Science and Cyber Analytics Department at Sandia National Laboratories in Livermore, CA. Her research is generally in the area of computational science and data analysis, with specialties in multilinear algebra and tensor decompositions, graph models and algorithms, data mining, optimization, nonlinear solvers, parallel computing, and the design of scientific software. She has received a Presidential Early Career Award for Scientists and Engineers (PECASE) and been named a Distinguished Scientist of the Association for Computing Machinery (ACM) and a Fellow of the Society for Industrial and Applied Mathematics (SIAM). She has won an R&D 100 Award and three best paper prizes at international conferences. She is currently a member of the SIAM Board of Trustees and serves as associate editor for both the SIAM Journal on Scientific Computing and the SIAM Journal on Matrix Analysis and Applications.

Abstract Summary:

Tensor Decomposition: A Mathematical Tool for Data Analysis:
Tensors are multiway arrays, and tensor decompositions are powerful tools for data analysis. In this talk, we demonstrate the wide-ranging utility of the canonical polyadic (CP) tensor decomposition with examples in neuroscience and chemical detection. The CP model is extremely useful for interpretation, as we show with an example in neuroscience. However, it can be difficult to fit to real data for a variety of reasons. We present a novel randomized method for fitting the CP decomposition to dense data that is more scalable and robust than the standard techniques. We further consider the modeling assumptions for fitting tensor decompositions to data and explain alternative strategies for different statistical scenarios, resulting in a generalized CP tensor decomposition.
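For readers new to the CP model, a minimal alternating least squares (ALS) fit for a 3-way tensor can be sketched in NumPy as follows; this is the standard textbook algorithm, not the randomized or generalized variants presented in the talk:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: rows index the chosen mode (C order)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # column-wise Kronecker product of two factor matrices
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    # alternating least squares for the CP decomposition of a 3-way tensor
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            A, B = [factors[m] for m in range(3) if m != mode]
            gram = (A.T @ A) * (B.T @ B)   # Hadamard product of Grams
            factors[mode] = unfold(T, mode) @ khatri_rao(A, B) @ np.linalg.pinv(gram)
    return factors

# recover an exact rank-2 tensor
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
factors = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', *factors)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(rel_err)
```

Each ALS sweep solves a linear least squares problem for one factor matrix while holding the others fixed, which is why the method is simple yet, as the abstract notes, can struggle on difficult real data.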

View the slides for this presentation »

Xavier Amatriain, Cofounder & CTO, Curai

Xavier Amatriain is currently co-founder and CTO of Curai, a stealth startup using AI to radically improve healthcare for patients. Prior to this, he was VP of Engineering at Quora and Research/Engineering Director at Netflix, where he led the team building the famous Netflix recommendation algorithms. Before moving into industry leadership positions, Xavier was a research scientist at Telefonica Research and a research director at UCSB. With over 50 publications (and 3,000+ citations) across different fields, Xavier is best known for his work on machine learning in general and recommender systems in particular. He has lectured at universities in both the US and Spain and is a frequent invited speaker at conferences and companies.

Abstract Summary:

ML to Cure the World:
The practice of medicine involves the diagnosis, treatment, and prevention of disease. Recent technological breakthroughs have made little dent in the centuries-old system of practicing medicine: complex diagnostic decisions still depend mostly on “educated” work-ups by doctors and rely on somewhat outdated tools and incomplete data. All of this often leads to imperfect, biased, and, at times, incorrect diagnoses and treatments.

With a growing research community as well as tech companies working on AI advances to medicine, the hope for healthcare renaissance is definitely not lost. The emphasis of this talk will be on ML-driven medicine. We will discuss recent AI advancements for aiding medical decision including language understanding, medical knowledge base construction and diagnosis systems. We will discuss the importance of personalized medicine that takes into account not only the user, but also the context, and other metadata. We will also highlight challenges in designing ML-based medical systems that are accurate, but at the same time engaging and trustworthy for the user.

View the slides for this presentation »

Watch this presentation on YouTube »

Dr. Steve Liu, Chief Scientist, Tinder

Dr. Steve Liu is chief scientist at Tinder. In his role, he leads research innovation and applies novel technologies to new product developments.

He is currently a professor and William Dawson Scholar at the McGill University School of Computer Science. He has also served as a visiting research scientist at HP Labs. Dr. Liu has published more than 280 research papers in peer-reviewed international journals and conference proceedings, and has authored and co-authored several books. Over the course of his career, his research has focused on big data, machine learning/AI, computing systems and networking, the Internet of Things, and more. His research has been referenced in articles published by The New York Times, IDG/Computerworld, The Register, Business Insider, Huffington Post, CBC, New Scientist, MIT Technology Review, the McGill Daily, and others. He received the Outstanding Young Canadian Computer Science Researcher Prize from the Canadian Association of Computer Science and the Tomlinson Scientist Award from McGill University.

He is serving or has served on the editorial boards of ACM Transactions on Cyber-Physical Systems (TCPS), IEEE/ACM Transactions on Networking (ToN), IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Vehicular Technology (TVT), and IEEE Communications Surveys and Tutorials (COMST). He has also served on the organizing committees of more than 38 major international conferences and workshops.

Dr. Liu received his Ph.D. in Computer Science with multiple honors from the University of Illinois at Urbana-Champaign. He received his Master’s degree in Automation and BSc degree in Mathematics from Tsinghua University.

Abstract Summary:

Personalized User Recommendations at Tinder: The TinVec Approach:
With 26 million matches per day and more than 20 billion matches made to date, Tinder is the world’s most popular app for meeting new people. Our users swipe for a variety of purposes, like dating to find love, expanding social networks and meeting locals when traveling.
Recommendation is an important behind-the-scenes service at Tinder, and a good recommendation system needs to be personalized to an individual user’s preferences. In this talk, we will discuss a new personalized recommendation approach being developed at Tinder, called TinVec. TinVec embeds users’ preferences into vectors, leveraging the large number of swipes by Tinder users. We will discuss the design, implementation, and evaluation of TinVec as well as its application to personalized recommendations.
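Tinder has not published TinVec's internals, but as a purely hypothetical illustration of how swipe co-occurrence can produce embeddings, one could factorize an item-item co-swipe matrix so that profiles liked by the same users end up close together in the vector space:

```python
import numpy as np

def embed_items(swipes, dim=2):
    # swipes[u, i] = 1 if user u right-swiped item i (hypothetical data);
    # items co-swiped by the same users get similar embedding vectors
    co = swipes.T @ swipes                       # item-item co-swipe counts
    U, s, _ = np.linalg.svd(co)
    return U[:, :dim] * np.sqrt(s[:dim])         # scaled spectral embedding

# toy data: items 0 and 1 share an audience, item 2 does not
swipes = np.array([[1, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1],
                   [0, 0, 1]], dtype=float)
vecs = embed_items(swipes)
```

In practice a system at Tinder's scale would use sparse matrices and streaming updates rather than a dense SVD; this sketch only shows why co-swipes carry preference signal.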

View the slides for this presentation »

Watch this presentation on YouTube »

Ted Willke, Sr. Principal Engineer, Intel

Ted Willke leads a team that researches large-scale machine learning and data mining techniques in Intel Labs. His research interests include parallel and distributed systems, image processing, machine learning, graph analytics, and cognitive neuroscience. Ted is also a co-principal investigator in a multi-year grand challenge project on real-time brain decoding with the Princeton Neuroscience Institute. Previously, he founded an Intel venture focused on graph analytics for data science that is now an Intel-supported open source project. In 2014, he won Intel’s highest award for this effort. In 2015, he was appointed to the Science & Technology Advisory Committee of the US Department of Homeland Security. Ted holds a doctorate in electrical engineering from Columbia University, a master’s from the University of Wisconsin-Madison, and a bachelor’s from the University of Illinois.

Abstract Summary:

Can Machine Learning Save the Whales?:
In the 1960s, whales faced mass extinction from whaling. Careful scientific study and regulatory action saved them. Now many whale species are facing another wave of extinction due to poorly understood forces. Is it depletion of their food stock? Collisions with ships? High-energy sonar? The problem is not so obvious this time. I’ll argue that we need breakthroughs in machine learning to figure it out. I’ll discuss two new projects and the impact they had on our expeditions to Alaska in 2017. In the first, we explore the use of new features to identify individual whales. In the second, we use deep learning-based morphometry to study their energy stores, which are vital to their survival and reproduction. Both problems suffer from data starvation and challenges unique to the marine environment. I’ll discuss our race to clear these hurdles and save the whales.

Watch a previous presentation by Ted Willke here »

Watch this presentation on YouTube »

Matineh Shaker, Artificial Intelligence Scientist, Bonsai

Matineh Shaker is an Artificial Intelligence Scientist at Bonsai in Berkeley, CA, where she builds machine learning, reinforcement learning, and deep learning tools and algorithms for general purpose intelligent systems. She was previously a Machine Learning Researcher at Geometric Intelligence, a Data Science Fellow at Insight Data Science, and a Predoctoral Fellow at Harvard Medical School. She received her PhD from Northeastern University with a dissertation on geometry-inspired manifold learning.

Abstract Summary:

Deep Reinforcement Learning with Shallow Trees:
In this talk, I present Concept Network Reinforcement Learning (CNRL), developed at Bonsai. It is an industrially applicable approach to solving complex tasks using reinforcement learning, which facilitates problem decomposition, allows component reuse, and simplifies reward functions. Inspired by Sutton’s options framework, we introduce the notion of “concept networks”: tree-like structures in which the leaves are “sub-concepts” (sub-tasks) representing policies on a subset of the state space. The parent (non-leaf) nodes are “selectors”, containing policies for choosing among the child sub-concepts at each point in an episode. The talk will begin with a high-level overview of reinforcement learning fundamentals.

View the slides for this presentation »

Watch this presentation on YouTube »

Ashfaq Munshi, ML7 Fellow, Pepperdata

Before joining Pepperdata, Ash was executive chairman for Marianas Labs, a deep learning startup sold in December 2015. Prior to that he was CEO for Graphite Systems, a big data storage startup that was sold to EMC DSSD in August 2015. Munshi also served as CTO of Yahoo, as a CEO of both public and private companies, and is on the board of several technology startups.

Abstract Summary:

Classifying Multi-Variate Time Series at Scale:
Characterizing and understanding the runtime behavior of large-scale Big Data production systems is extremely important. Typical systems consist of hundreds to thousands of machines in a cluster, with hundreds of terabytes of storage, costing millions of dollars and solving business-critical problems. By instrumenting each running process and measuring its resource utilization, including CPU, memory, I/O, and network, as time series, it is possible to understand and characterize the workload on these massive clusters. Each time series consists of tens to tens of thousands of data points that must be ingested and then classified. At Pepperdata, our cluster instrumentation collects over three hundred metrics from each task every five seconds, resulting in millions of data points per hour. At this scale the data are comparable to the biggest IoT data sets in the world. Our objective is to classify the collection of time series into a set of classes representing different workload types; phrased differently, our problem is essentially that of classifying multivariate time series.

In this talk, we propose a unique, off-the-shelf approach to classifying time series that achieves near best-in-class accuracy on univariate series and generalizes to multivariate time series. Our technique maps each time series to a Gramian Angular Difference Field (GADF), interprets that as an image, uses Google’s pre-trained Inception v3 CNN to map the GADF images into a 2048-dimensional vector space, and then uses a small MLP with two hidden layers of fifty nodes each and a softmax output to achieve the final classification. Our work is not domain-specific, as demonstrated by our achieving accuracies competitive with published results on the univariate UCR data set as well as the multivariate UCI data set.
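The GADF step itself is easy to state: rescale the series to [-1, 1], map each value to an angle via arccos, and take pairwise angular differences. A minimal NumPy sketch of just that transform (the Inception v3 and MLP stages are omitted):

```python
import numpy as np

def gadf(series):
    # rescale to [-1, 1] so arccos is defined
    x = 2 * (series - series.min()) / (series.max() - series.min()) - 1
    x = np.clip(x, -1.0, 1.0)              # guard against rounding error
    phi = np.arccos(x)                     # polar encoding of each value
    # Gramian Angular Difference Field: G[i, j] = sin(phi_i - phi_j)
    return np.sin(phi[:, None] - phi[None, :])

G = gadf(np.sin(np.linspace(0, 4 * np.pi, 50)))
print(G.shape)  # (50, 50)
```

The resulting n-by-n matrix is antisymmetric with a zero diagonal, and because it is image-shaped it can be fed directly to a pre-trained image CNN, which is the trick the abstract exploits.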

View the slides for this presentation »

Watch this presentation on YouTube »

Josh Wills, Head of Data Engineering, Slack

Josh Wills is the head of data engineering at Slack. Prior to Slack, he built and led data science teams at Cloudera and Google. He is the founder of the Apache Crunch project, co-authored an O’Reilly book on advanced analytics with Apache Spark, and wrote a popular tweet about data scientists.

Abstract Summary:

I Build The Black Box: Grappling with Product and Policy: 

The rate of improvement in techniques for building machine learning models over the past two years has been astounding; generalized embedding models like StarSpace and scalable, portable classifiers like XGBoost mean that we can now compress months of work into days or even hours. Unfortunately, we have not seen similar improvements in our ability to solve the product and policy problems that so often go hand-in-hand with building and deploying models; if anything, our reliance on self-optimizing black-box techniques means that these problems are only getting harder, and as we bring machine learning to bear on more diverse domains, the stakes are only getting higher.

View the slides for this presentation »

Watch this presentation on YouTube »

Rushin Shah, Engineering Leader, Facebook

Rushin Shah is an engineering leader at Facebook, currently working on natural language understanding and dialog. Previously, he spent 5 years on Siri at Apple, where he built and headed the natural language understanding group, which included teams dedicated to modeling, engineering, and data science. He also worked in the query understanding group at Yahoo. He has worked on a broad range of problems in NLP, including parsing, information extraction, dialog, and question answering. He holds degrees in language technologies and computer science from Carnegie Mellon and IIT Kharagpur.

Abstract Summary:

Natural Language Understanding @ Facebook Scale:
At Facebook, text understanding is key to surfacing content that’s relevant and personalized, and to enabling new experiences like social recommendations and Marketplace suggestions. In this talk, I will introduce you to DeepText, Facebook’s platform for text understanding, and discuss the various models it supports.

View the slides for this presentation »

Watch this presentation on YouTube »

Franziska Bell, Data Science Manager on the Platform Team, Uber

Franziska Bell is a Data Science Manager on the Platform Team at Uber and leads Applied Machine Learning, Forecasting Platform, Anomaly Detection, Customer Support Data Science, and Communications Platform Data Science.

Before Uber, Franziska was a postdoc at Caltech, where she developed a novel, highly accurate approximate quantum molecular dynamics theory to calculate chemical reactions for large, complex systems such as enzymes. Franziska earned her Ph.D. in theoretical chemistry from UC Berkeley, focusing on developing highly accurate yet computationally efficient approaches, which helped unravel the mechanism of non-silicon-based solar cells and the properties of organic conductors.

Abstract Summary:

Uncertainty Estimation in Neural Networks for Time Series Prediction at Uber:
Reliable uncertainty estimations for forecasts are critical in many fields, including finance, manufacturing, and meteorology.

At Uber, probabilistic time series forecasting is essential for accurate hardware capacity predictions, marketing spend allocations, and real-time system outage detection across millions of metrics. Classical time series models are often used in conjunction with a probabilistic formulation for uncertainty estimation. However, such models can be hard to tune, scale, and add exogenous variables to. Motivated by the recent resurgence of Long Short Term Memory networks, we propose a novel end-to-end Bayesian deep model that provides time series prediction along with uncertainty estimation at scale.
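As a much-simplified illustration of one common ingredient of such models, Monte Carlo dropout keeps dropout active at prediction time and reads uncertainty off the spread of repeated stochastic forward passes. The toy one-hidden-layer network below is made up for the example and is not the encoder-decoder model from the talk:

```python
import numpy as np

def mc_dropout_forecast(x, W1, b1, W2, b2, n_samples=200, p_drop=0.1, seed=0):
    # run the network many times with dropout left ON; the sample mean is
    # the forecast and the sample std is an uncertainty estimate
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop      # random dropout mask
        h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
        preds.append(h @ W2 + b2)
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))                  # 5 inputs, 8 features
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)
mean, std = mc_dropout_forecast(x, W1, b1, W2, b2)
print(mean.shape, std.shape)  # (5, 1) (5, 1)
```

Inputs whose prediction varies a lot across dropout samples get a wide interval, which is what makes this style of estimate useful for flagging unreliable forecasts such as outage false alarms.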

Dr. June Andrews, Principal Data Scientist, Wise.io, from GE Digital

June Andrews is a Principal Data Scientist at Wise.io, from GE Digital, working on a machine learning and data science platform for the Industrial Internet of Things, which includes aviation, trains, and power plants. Previously, she worked at Pinterest, spearheading the Data Trustworthiness and Signals Program to create a healthy data ecosystem for machine learning. She has also led efforts at LinkedIn on growth, engagement, and social network analysis to increase economic opportunity for professionals. June holds degrees in applied mathematics, computer science, and electrical engineering from UC Berkeley and Cornell.

Abstract Summary:

Counter Intuitive Machine Learning for the Industrial Internet of Things:
The Industrial Internet of Things (IIoT) is the infrastructure and data flow built around the world’s most valuable things, like airplane engines, medical scanners, nuclear power plants, and oil pipelines. These machines and systems require far greater uptime, security, governance, and regulation than the IoT landscape based around consumer activity. In the IIoT, the cost of being wrong can be catastrophic loss of life on a massive scale. Nevertheless, given the growing scale brought by the digitalization of industrial assets, there is clearly a growing role for machine learning to help augment and automate human decision making. It is against this backdrop that traditional machine learning techniques must be adapted and need-based innovations created. We see industrial machine learning as distinct from consumer machine learning, and in this talk we will cover the counterintuitive changes in featurization, metrics for model performance, and human-in-the-loop design required to use machine learning in an industrial environment.

View the slides for this presentation »

Watch this presentation on YouTube »

Daniel Shank, Data Scientist, Talla

Daniel Shank is a Senior Data Scientist at Talla, a company developing a platform for intelligent information discovery and delivery. His focus is on developing machine learning techniques to handle various business automation tasks, such as scheduling, polls, and expert identification, as well as work on NLP. Before joining Talla as the company’s first employee in 2015, Daniel worked with Techstars Boston and did consulting work for ThriveHive, a Boston marketing company focused on small businesses. He studied economics at the University of Chicago.

Abstract Summary:

Getting Value Out of Chat Data:
Chat-based interfaces are increasingly common, whether for customers interacting with companies or for employees communicating with each other within an organization. Given the large number of chat logs being captured, along with recent advances in natural language processing, there is a desire to leverage this data for both insight generation and machine learning applications. Unfortunately, chat data is user-generated data, meaning it is often noisy and difficult to normalize. It also consists mostly of short, heavily context-dependent texts, which makes methods such as topic modeling and information extraction difficult to apply.

Despite these challenges, it is still possible to extract useful information from these data sources. In this talk, I will be providing an overview of techniques and practices for working with chat-based user interaction data with a focus on machine-augmented data annotation and unsupervised learning methods.

View the slides for this presentation »

Watch this presentation on YouTube »

Michael Alcorn, Sr. Software Engineer, Red Hat Inc.

Michael first developed his data crunching chops as an undergraduate at Auburn University (War Eagle!) where he used a number of different statistical techniques to investigate various aspects of salamander biology (work that led to several publications). He then went on to earn a M.S. in evolutionary biology from The University of Chicago (where he wrote a thesis on frog ecomorphology) before changing directions and earning a second M.S. in computer science (with a focus on intelligent systems) from The University of Texas at Dallas. As a Machine Learning Engineer – Information Retrieval at Red Hat, Michael is constantly looking for ways to use the latest and greatest machine learning technology to improve search.

Abstract Summary:

Representation Learning @ Red Hat:
For many companies, the vast majority of their data is unstructured and unlabeled; however, the data often contains information that could be useful in a variety of scenarios. Representation learning is the process of extracting meaningful features from unlabeled data so that it can be used in other tasks. In this talk, you’ll hear about how Red Hat is using deep learning to discover meaningful entity representations in a number of different settings, including: (1) identifying duplicate documents on the Customer Portal, (2) finding contextually similar URLs with word2vec, and (3) clustering behaviorally similar customers with doc2vec. To close, we will walk through an example demonstrating how representation learning can be applied to Major League Baseball players.

View the slides for this presentation »

Watch this presentation on YouTube »

LN Renganarayana, Architect, ML Platform and Services, Workday San Francisco

LN leads the architecture and design of Workday’s ML Platform and Services. He is all about building large-scale distributed systems and data platforms. Currently his days (and some nights) are spent solving the challenges of building ML products for enterprise SaaS. LN’s career spans HP, IBM Research, Symantec, and now Workday. At Symantec, he was the architect and lead of a streaming platform that ingested and processed 2+ billion events per day. As a Research Staff Member at IBM T.J. Watson Research Center, LN built optimizations for automatic parallelization, techniques for approximate computing, deployment automation for OpenStack, and analytics for large-scale cloud services.

LN holds a Ph.D. in Computer Science from Colorado State University and has more than 40 technical publications and patents. His work has received awards from ACM, IBM, and HP.

Abstract Summary:

Lessons Learnt from building ML Products for enterprise SaaS:
Having spent the last 4+ years productizing ML-powered enterprise products, we have learnt a lot! Join us to hear the stories of our stumbles (ahem, learnings) in applying machine learning to solve business problems for Fortune 500 companies. Our hands-on experience has shaped our product strategy, ML platform design, and organization’s operational principles. The investments we made based on those learnings have helped us drastically improve our time to market for ML products. Come by to hear the technical and organizational challenges (and some solutions) in building ML products for enterprise SaaS. Hopefully our learnings will be useful in your journey.

View the slides for this presentation »

Watch this presentation on YouTube »

Madhura Dudhgaonkar, Head of Engineering, Search, Data Science and Machine Learning, Workday San Francisco

Madhura Dudhgaonkar is responsible for leading Workday’s search, data science, and machine learning teams based in San Francisco. Her teams have spent ~4 years building machine learning products used by Fortune 500 companies. Her experience ranges from being a hands-on engineer to leading large engineering organizations. Madhura’s career spans Sun Microsystems, Adobe, and now Workday. During her career, she has been involved in building a variety of products, from developing the Java language to building a version 1.0 consumer product to building enterprise SaaS products.

She holds a bachelor’s degree in electronics and telecommunications and a master’s degree in math and computer science.
Madhura is originally from a small town in India and came to the United States to pursue her passion in technology. She currently calls San Francisco home, and despite nine years here, can’t get enough of its hilly charm, the diversity of people, culture, and experiences.

Abstract Summary:

Lessons Learnt from building ML Products for enterprise SaaS:
Having spent the last 4+ years productizing ML-powered enterprise products, we have learnt a lot! Join us to hear the stories of our stumbles (ahem, learnings) in applying machine learning to solve business problems for Fortune 500 companies. Our hands-on experience has shaped our product strategy, ML platform design, and organization’s operational principles. The investments we made based on those learnings have helped us drastically improve our time to market for ML products. Come by to hear the technical and organizational challenges (and some solutions) in building ML products for enterprise SaaS. Hopefully our learnings will be useful in your journey.

View the slides for this presentation »

Watch this presentation on YouTube »

Shiva Amiri, Director of Data Science, Zymergen Inc.

Shiva Amiri is the Director of Data Science at Zymergen Inc., a technology company in the San Francisco Bay Area focused on biology, automation, and data science. She was previously the CEO of BioSymetrics Inc., a machine learning company specializing in complex biomedical data. Prior to BioSymetrics, she was Chief Product Officer at Real Time Data Solutions Inc., led the Informatics and Analytics team at the Ontario Brain Institute, and headed the British High Commission’s Science and Innovation team in Canada. Shiva completed her Ph.D. in Computational Biochemistry at the University of Oxford and her undergraduate degree in Computer Science and Human Biology at the University of Toronto.
Event Emcee
