Our past Technical Chair interviewed Numenta’s Austin Marshall about HTM and Numenta’s view of Neural Networks/AI. [Read more…] about Interview with Austin Marshall, Numenta
Chatty Thoughts on Chatty Bots
Since this spring, we’ve seen an influx of bot releases across industries, from lean startups to some of the largest companies in existence. According to VentureBeat, just this week, Microsoft announced that it has a relationship assistant in the works. According to the article and announcement, Microsoft will incorporate a mobile-friendly “relationship assistant” into its Dynamics CRM customer relationship management software. This assistant is designed to recommend actions for salespeople to take. “This can make customer service more efficient and improve customer satisfaction,” Nadella said. From the tone of the announcement, the tool is meant to help humans with efficiency, not take their jobs.
Microsoft isn’t the only tech giant to have recently made such an announcement. Early last week, Salesforce announced the launch of an AI platform, Einstein, that will work within Sales Cloud and Marketing Cloud. Salesforce also announced a new research division focusing on deep learning, led by Richard Socher, formerly co-founder and chief executive of the A.I. startup MetaMind, which Salesforce acquired earlier this year. The tone of this announcement is similar to Microsoft’s: these AI enhancements are aimed at helping sales and marketing teams be more efficient, not at replacing them. More details can be found in this VentureBeat article.
The tech behemoths aren’t the only ones releasing business-minded bots: some smaller startups are making waves with their work on bots in the recruiting industry. On September 3rd, our co-founder, Shon Burton, posted his guest blog post “Why I hate Chatbots.” In the post, Burton described a new recruiting AI tool, “RAI,” and explained how he believes sourcers and recruiters will benefit from using it. Burton states: “It won’t fool anyone into thinking it’s human, but for practical recruiting tasks it’s already quite useful. By combining the practical conversational interface of tools like Siri and Alexa with a Wolfram Alpha-inspired ‘knowledge engine,’ we’ve developed a recruiting assistant with which a non-expert user can conduct a talent search conversationally.” This week, we saw a press release: https://hiringsolved.com/hello/rai
Also this week, Burton was featured in an interview piece in Fast Company about RAI, specifically focusing on RAI’s Diversity Boost feature, which changes the relevance algorithm within a search rather than applying a simple filter, helping recruiters find a diverse pool of candidates to interview for open positions.
Other new projects worth mentioning include Mya, a job-seeker-facing chatbot that came out of private beta this summer to assist with the job application process, as well as MLconf sponsor Talla. When asked to participate in this blog, Talla provided a few words:
“At Talla we launched a few bots and realized that the bot-as-an-application model is limited. Therefore, we’ve taken our product up to the platform level. Now, our bot is just the UI for an intelligent layer that lives across your HR applications, and can automate many different workflows.
A very simple interface can be used to do many things, like create custom employee onboarding flows that are delivered conversationally, answer basic benefits questions, and send simple polls throughout the organization within a company’s existing chat platform. All of these workflows are based on common core technologies, like automating or scheduling messages, tapping into common data sources, like HR systems, and performing NLP on structured or unstructured text.”
Though these various bots aim to solve different problems in relationship management, whether in sales, marketing, recruiting, or onboarding, they all seem to aim to assist, enhance, and raise efficacy, not to replace. We’ll keep our eyes out for developments within these projects. Stay tuned!
*[Photo: Flickr user interestedbystandr]
MLconf Atlanta Recommended Academic Papers
Hussein Mehana, Director of Engineering, Facebook:
- https://arxiv.org/abs/1502.01710
Patrick Koch, Principal Data Scientist, and Funda Gunes, Sr. Research Statistician Developer, SAS Institute Inc:
1. Bottou, L., Curtis, F. E., and Nocedal, J., Optimization Methods for Large-Scale Machine Learning, arXiv:1606.04838 [stat.ML], 2016.
2. Sutskever, I., Martens, J., Dahl, G., and Hinton, G. E., On the Importance of Initialization and Momentum in Deep Learning, in Proceedings of the 30th International Conference on Machine Learning (ICML-13), Atlanta, GA, pp. 1139–1147, June 2013.
3. Bergstra, J. and Bengio, Y., Random Search for Hyper-Parameter Optimization, J. Machine Learning Research, 13: 281–305, 2012.
4. Sparks, E. R. , Talwalkar, A., Haas, D. , Franklin, M. J., Jordan, M. I., and Kraska, T., Automating Model Search for Large Scale Machine Learning, Proceedings of the Sixth ACM Symposium on Cloud Computing, August 27-29, 2015, Kohala Coast, Hawaii.
5. Local Search Optimization, SAS/OR®
6. SAS® Viya™ Distributed Analytics Platform
Dr. Le Song, Assistant Professor, College of Computing, Georgia Institute of Technology:
- H. Dai, Y. Wang, R. Trivedi, and L. Song. Recurrent Coevolutionary Feature Embedding Processes for Recommendation, RecSys Workshop on Deep Learning for Recommendation Systems, 2016. (BEST PAPER) PDF (http://arxiv.org/pdf/1609.03675.pdf)
- H. Dai, B. Dai, and L. Song. Discriminative Embeddings of Latent Variable Models for Structured Data, International Conference on Machine Learning (ICML), 2016. PDF (https://arxiv.org/pdf/1603.05629.pdf)
- Dai, B., Xie, B., He, N., Liang, Y., Raj, A., Balcan, M., and Song, L. Scalable Kernel Methods via Doubly Stochastic Gradients, Neural Information Processing Systems (NIPS 2014). PDF (https://arxiv.org/pdf/1407.5599.pdf)
Great Machine Learning and Data Science Books to be on Display at MLconf Atlanta
We’re so grateful for the participating publishers that are sending books to be displayed and given away at MLconf Atlanta on Friday! We’re also displaying and giving out a collection of relevant machine learning books! Check them out!
To Win Books: participate in our event-day twitter contest. The most interesting and unique tweets will be awarded with free ML books! Make sure to mention @mlconf and #MLATL to win!
CRC Press:
*For more details, go to their virtual booth: https://www.crcpress.com/go/MLconf2016
- A First Course in Machine Learning, Second Edition
- Text Mining and Visualization: Case Studies Using Open-Source Tools
- Handbook of Big Data
- Accelerating Discovery: Mining Unstructured Information for Hypothesis Generation
- Statistical Learning with Sparsity: The Lasso and Generalizations
- Statistical Reinforcement Learning: Modern Machine Learning Approaches
- High Performance Parallel I/O
- Sparse Modeling: Theory, Algorithms, and Applications
- Computational Trust Models and Machine Learning
- Regularization, Optimization, Kernels, and Support Vector Machines
- Big Data and Social Science: A Practical Guide to Methods and Tools
Cambridge University Press:
- Agarwal/Chen, Statistical Methods for Recommender Systems
- Braun/Murdoch, A First Course in Statistical Programming with R
- Efron/Hastie, Computer Age Statistical Inference
- Flach, Machine Learning
- Fouss, Algorithms and Models for Network Data and Link Analysis
- Leskovec et al, Mining of Massive Data Sets
Springer Publishing:
Additional Machine Learning Books on Display:
- The Seven Pillars of Statistical Wisdom, Stigler, Stephen M.
- Algorithms to Live By: The Computer Science of Human Decisions, Christian, Brian
- Overcomplicated: Technology at the Limits of Comprehension, Arbesman, Samuel
- Naked Statistics: Stripping the Dread from the Data, Wheelan, Charles
- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Domingos, Pedro
- Data Science from Scratch: First Principles with Python, Grus, Joel
- The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy, McGrayne, Sharon Bertsch
- Think Bayes, Allen B. Downey
- How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil, Ray
- Superforecasting: The Art and Science of Prediction, Tetlock, Philip E.
- The End of Average: How We Succeed in a World That Values Sameness, Rose, Todd
Interview with Sergey Razin, Ph.D., Chief Technology Officer, SIOS Technology
What is topological behavior analysis?
Topological Behavior Analysis (TBA) is the real-time algorithmic analysis of computer data originating from complex virtualization and cloud environments. It derives from Topological Data Analysis and leverages K-means as its foundation.
Computer environments have many different layers that generate a large volume of statistical data – from the user experience layer (i.e., the press of a button) to the data on the storage system, with many layers in between (cell phone towers, providers, networks, servers, etc.). All that data needs to be ingested, modeled (trained), and used to provide answers, in an automated fashion, to a variety of questions that IT/DevOps may have, such as:
- Is there a problem?
- What is the root cause?
- What should I do about it?
K-means provides the ability to abstract and define the behavior of workloads and their impact on the infrastructure in the form of clusters (versus individual time series, which would not scale), as well as to capture seasonal behaviors. Capturing seasonality is extremely necessary for understanding behaviors that can be very specific to the industry in which the computer environment is used (e.g., sales fluctuations in retail).
Combining K-means with Topological Data Analysis provides the ability to detect anomalies based on multi-dimensional models that learn the interplay between the features of the statistical data that represent the behavior.
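To make the idea concrete, here is a minimal sketch of clustering workload behavior with K-means and scoring new observations by distance to the nearest learned cluster. Everything here is hypothetical: the metric names, the two-mode synthetic data, and the plain Lloyd’s-algorithm implementation are illustrations, not SIOS’s actual multi-dimensional TDA models.

```python
import math

def dist(a, b):
    """Euclidean distance between two metric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means with a deterministic farthest-point init."""
    centroids = [points[0]]
    while len(centroids) < k:
        # seed each new centroid with the point farthest from existing ones
        centroids.append(max(points, key=lambda p: min(dist(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # recompute each centroid as the mean of its assigned points
        centroids = [
            tuple(sum(dim) / len(members) for dim in zip(*members)) if members else c
            for members, c in zip(clusters, centroids)
        ]
    return centroids

def anomaly_score(point, centroids):
    """Distance to the nearest learned behavior cluster: large = anomalous."""
    return min(dist(point, c) for c in centroids)

# Hypothetical workload samples in (CPU %, IOPS/100) space:
# a low-load mode around (14, 23) and a high-load mode around (84, 93).
normal = [(12 + (i % 5), 22 + (i % 3)) for i in range(50)] + \
         [(82 + (i % 5), 92 + (i % 3)) for i in range(50)]
centroids = kmeans(normal, k=2)

near = anomaly_score((13, 23), centroids)  # close to a known mode: small
far = anomaly_score((50, 55), centroids)   # between modes: large, flagged
```

The design point is the one made above: scoring against a handful of behavior clusters scales, whereas tracking every individual time series would not.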
How do you combine K-means with Monte Carlo simulation for TBA?
While developing a product feature that predicts performance issues within a computer environment (virtualization, cloud, etc.), we have developed an algorithm that applies Monte Carlo Simulation on top of K-means based models.
Once again, this approach leverages K-means as the foundation, providing the ability to model the behaviors of a workload and its impact on the computer environment. From the learned behavior encapsulated in clusters, which also represent the seasonal behaviors of the data, we are able to derive a prediction of the behavior by:
- Deriving the predicted expected behavior of the workload and its impact on the components of the infrastructure (such as compute, network, storage) by applying Monte Carlo Simulation.
- Once the prediction for the expected workload behavior is derived for the individual workload, performing the “stacking” function that stacks the predicted expected behavior to determine whether it will reach the capacity of the infrastructure (whether at the compute, network, or storage layer).
By leveraging K-means and Monte Carlo Simulation, we can accurately predict performance issues within the compute environment.
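A rough sketch of the two steps described above: Monte Carlo sampling of each workload’s future demand from its learned cluster behavior, then “stacking” the per-workload predictions against a capacity limit. This is a toy model under invented assumptions (Gaussian demand per cluster, a 95th-percentile summary, made-up CPU numbers), not the product’s actual algorithm.

```python
import random

def simulate_workload(mean, std, horizon, trials, rnd):
    """Monte Carlo step: sample future demand paths for one workload from its
    learned cluster behavior; keep the 95th-percentile demand per time step."""
    paths = [[max(0.0, rnd.gauss(mean, std)) for _ in range(horizon)]
             for _ in range(trials)]
    p95 = []
    for t in range(horizon):
        step = sorted(path[t] for path in paths)
        p95.append(step[int(0.95 * trials)])
    return p95

def first_breach(workloads, capacity, horizon=24, trials=200, seed=7):
    """Stacking step: sum the predicted demand of all workloads and return the
    first time step at which the total exceeds capacity, or None."""
    rnd = random.Random(seed)
    preds = [simulate_workload(m, s, horizon, trials, rnd) for m, s in workloads]
    for t in range(horizon):
        if sum(p[t] for p in preds) > capacity:
            return t
    return None

# Three workloads whose learned clusters give hypothetical (mean CPU%, std):
workloads = [(30, 5), (25, 4), (20, 6)]
ok = first_breach(workloads, capacity=150)       # ample headroom
breach = first_breach(workloads, capacity=80)    # stacked demand exceeds this
```

Summing a high quantile per workload is deliberately conservative; a fuller treatment would stack the sampled paths themselves before taking quantiles.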
What are the challenges in predicting workloads in servers?
I have mentioned a couple of issues in my prior responses, but let me summarize:
- (a) Amount of data (Big Data),
- (b) Interplay, as well as statistical dependency, between the features,
- (c) Dimensionality of the features,
- (d) Real-time nature of the problem, with real-time decisions pending to avoid failure of critical applications,
- (e) Seasonality of the behavior,
- (f) Dynamic nature of the environment, which moves workloads within the environment and across geographies, as well as the dynamic nature of the workloads themselves, which depend on user interaction and application changes.
While (a)–(e) can be addressed through the algorithms mentioned earlier, (f) requires almost weather-forecast-like analysis.
First, there is a prediction of the future based on learned behaviors. This is analogous to a 7-day weather forecast. However, like a weather forecast, severe storms (or issues in a computer environment) can start and move rapidly, affecting both the forecast and the recommendations that may be made as a result.
That is why in addition to forecasting the future, it is important to identify issues and provide recommendations (automated) in real time on how to address such issues without affecting parts of the system that were not affected by the storm and therefore should continue with the previously forecasted recommendations.
That’s where forecasting based on Monte Carlo Simulations needs to work in unison with Topological Behavior Analysis, as well as causality algorithms (mentioned later), in real time to track all the dynamic changes in the environment.
Why can’t you use time series modeling?
Unfortunately, time series modeling is the state of the art for most tools in the IT space. This is the case because most IT tools were built with a Computer Science approach rather than a Data Science approach. Before virtualization and cloud computing became popular, understanding and optimizing computing environments was seen as an infrastructure problem instead of a data problem. Expertise in data and statistical modeling was not a requirement, or even considered. As a result, most IT tools were built with a solid knowledge of Computer Science and the IT space (i.e., architecture, design patterns, etc.). Time series analysis became the apex of Machine Learning implemented in IT tools today simply because it is easy to implement and understand. However, time series analysis cannot address challenges (a)–(f) in my response to the previous question.
- The amount of data that radiates from all the layers of the IT operations environment makes it simply impossible to deal with individual data points; a higher level of abstraction capable of representing the behavior (such as clusters) is required, which relates directly to challenges (a) and (c) mentioned earlier.
- Time series modeling cannot capture the multi-dimensionality, interplay, and uncertainty within the features of the data (especially at scale) that are required to accurately identify meaningful anomalies within the IT operations environment.
- Finally, some important data is not time series data but may include other features (such as data related to changes in the infrastructure, configuration, and code).
As a result, I identified this gap as an opportunity to develop a new solution that addresses all of the challenges mentioned earlier and will ultimately deliver my vision of a self-driving datacenter based on data and data science, eliminating the human guesswork used today.
Why isn’t deep learning an option?
Deep learning is an option.
Today we are just scratching the surface of applying statistical modeling to IT operations data (which is not limited to metrics, but can also include code changes in the application, etc.). Our causality algorithm is already a network (a Bayesian-like network) driven by posterior and conditional probabilities (still a pretty “shallow” model). However, we are in the process of experimenting with TensorFlow to introduce “deep”-er networks into our analysis, which will enable us to address larger-scale and more complex use cases (especially relevant to change management, networking, and security, where there are a lot of features to be explored).
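The flavor of “shallow” conditional-probability reasoning described here can be sketched with simple counting over an incident history. The symptoms, causes, and counts below are entirely invented for illustration; this is not SIOS’s causality algorithm, just the kind of posterior estimate such a network is built from.

```python
from collections import Counter

# Hypothetical incident history: (observed symptom, confirmed root cause).
incidents = [
    ("high_latency", "bad_code_change"),
    ("high_latency", "bad_code_change"),
    ("high_latency", "network_congestion"),
    ("high_latency", "storage_contention"),
    ("packet_loss", "network_congestion"),
    ("packet_loss", "network_congestion"),
]

def posterior(symptom, incidents):
    """Estimate P(cause | symptom) by conditional counting -- the kind of
    shallow probabilistic reasoning a Bayesian-like network is built from."""
    causes = [c for s, c in incidents if s == symptom]
    counts = Counter(causes)
    return {cause: n / len(causes) for cause, n in counts.items()}

ranked = posterior("high_latency", incidents)
best = max(ranked, key=ranked.get)  # most probable root cause for the symptom
```

A deeper model replaces these raw conditional counts with learned representations over many more features, which is the direction the TensorFlow experiments point toward.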
In addition, our current platform operates on-premises, and our goal is to push it into the “cloud,” which would allow exposure to more compute capacity (for compute-intensive operations, including GPU) and more data – both essential for “deeper” models.
For example, one of the complex use cases (applied in performance and security analysis) is identifying bad code changes that cause a problem and predicting whether the bad code can cause security, reliability, and performance problems. As use cases grow in scale and complexity, deep learning models will allow us to determine the right features and to more dynamically and accurately discover issues that arise and their root cause(s).
As CTO, Sergey is responsible for driving product strategy and innovation at SIOS Technology Corp. A noted authority in advanced analytics and machine learning, Sergey pioneered the application of these technologies in the areas of IT security, media, and speech recognition. He is currently leading the development of innovative solutions based on these technologies that enable simple, intelligent management of applications in complex virtual and cloud environments.
Prior to joining SIOS, Sergey was an architect for EMC storage products and EMC CTO office where he drove initiatives in areas of network protocols, cloud and storage management, metrics, and analytics. Sergey has also served as Principal Investigator (PI), leader in research, development and architecture in areas of big data analytics, speech recognition, telephony, and networking.
Sergey holds a PhD in computer science from the Moscow State Scientific Center of Informatics. He also holds a BS in computer science from the University of South Carolina.
Interview with John Melas-Kyriazi, Senior Associate at Spark Capital
Our past Technical Chair interviewed John Melas-Kyriazi, Senior Associate at Spark Capital, regarding his thoughts on the intersection of Machine Learning and Venture Capital.
Previously, you have stated that big companies already own the data and are not willing to share it. There is a big move toward open data from universities, governments, hospitals, etc. Do you see an opportunity for startups to mine this data and come up with cool products?
JM-K) Yes, I do think there’s an interesting opportunity here.
Startups typically don’t bring proprietary data to the table — they’re startups, after all — so they have a few different strategies for building their own datasets. Many startups generate data through the use of their product (think user-generated content on Waze, or genetic data from 23andMe) that becomes a core competitive advantage over time. Another strategy, which is relevant to this question, is to aggregate third-party data that’s traditionally been locked in silos. Just imagine what interesting machine learning applications you could build on top of research data from universities, or across patient data from many different medical providers, to take two examples. However, this is difficult to pull off. The key challenge for a startup is getting permission to use that data, which can often be sensitive, from the relevant data owners.
Now, fully open data access sounds great on paper, but it would be a blessing and a curse for startups. It would become easier for startups to access that data; however, if one startup can, others can too, and any interesting new dataset would attract a flock of entrepreneurs and engineers competing to build the best applications. Low barriers to entry would make it difficult (although of course not impossible) for any one startup to create a truly outsized impact.
Data is hard to collect; algorithms are free; but putting them together to make an application that solves a specific enterprise problem is still not easy. Do you believe we are going to see a shift toward application-oriented startups? Are we going to see the same explosion of app companies that we saw in the ’80s and ’90s, when databases became a standard in the enterprise world?
JM-K) It’s hard to compare one period of innovation to another, but I agree that we will continue to see a tremendous amount of activity from application-layer startups that leverage data and machine learning. As the tools for building these types of companies become cheaper and easier to use, and as relevant training data becomes easier to access, the benefits of machine learning technology will continue to become democratized and more widely used by smart software engineers.
Further, I think that machine learning technology will ultimately get woven into the fabric of many/most existing applications. While ML-native startups are roaring onto the scene, existing software companies will take a number of different strategies to get up to speed: 1) acquire startups with substantial machine learning IP and talent; 2) aggressively recruit machine learning engineers and data scientists; 3) build internal competency and leverage the growing portfolio of open source machine learning tools.
What is your opinion about data trading? We trade all sorts of commodities at high volumes. Are we going to see the data-markets grow?
JM-K) As we move from deterministic (rule-based) software to increasingly probabilistic methods in programming, data will continue to increase in value to a wider audience of developers and companies. I have no doubt that markets for data will continue to grow in importance, and we will start to see more businesses focused on brokering data sales, building online data marketplaces and collaborative data-oriented communities.
Established tech companies like Apple, Google, and Salesforce have acquired a substantial number of machine learning startups over the past five years. Will this trend continue?
JM-K) Consolidation in the machine learning space is natural given the massive talent gap that currently exists in the market. A few years ago, established tech companies were acqui-hiring teams of mobile engineers by the handful. Now, data science and machine learning are hot, and the easiest way to add machine learning talent to your company is to acquire a startup with a highly-functioning ML team.
Additionally, I do believe that many machine learning startups will face serious long-term defensibility challenges if they do not have best-in-class data. For some, joining forces with a tech company that brings superior data to the table is a laudable and logical outcome.
John Melas-Kyriazi is a senior associate at Spark Capital. John is interested in the AI and machine learning space and as a firm, Spark Capital has invested in a number of companies focused on AI/ML, including Cruise Automation and Sift Science. Before joining Spark, John left a Ph.D. program at Stanford to help run StartX, a startup accelerator program affiliated with Stanford University. John received a B.S. in Engineering Physics and an M.S. in Materials Science & Engineering from Stanford.