The Startup Showcase featured new ML-based applications and products. Each startup gave a 5-minute presentation of its product, followed by 3-5 minutes of Q&A from our distinguished guest, Mohan Reddy, Associate Director of Technology at the Stanford Human Perception Lab and CTO of The Hive.
- John Whaley, UnifyID: Math, Motion, and Machine Learning: Passive Authentication in the Real World
- How do you uniquely identify people? What is it that makes you, you? Certain aspects of human behavior can be as unique and as hard to spoof as a fingerprint. The way you walk, the way you move, the places you go, and your little idiosyncrasies have the promise of being more convenient and more secure than other forms of authentication, such as passwords or biometrics. But there are big challenges in building a system that can authenticate a person to >99% accuracy with just a few seconds of passive sensor readings, while still maintaining user privacy. This requires lots of math, signal processing, ML, tricky engineering, and re-thinking existing security paradigms. Come hear about UnifyID’s experience in building such a platform and get a glimpse into the future of authentication.
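The core idea of behavioral authentication from passive sensor readings can be sketched in a few lines. This is a deliberately simplified illustration, not UnifyID's actual method: it extracts three crude features from a window of accelerometer magnitudes and accepts the user when those features sit close to an enrolled template. A real system would use far richer signal processing and learned models.

```python
import math

def gait_features(samples):
    """Crude feature vector from a window of accelerometer magnitudes:
    mean, standard deviation, and zero-crossing rate around the mean."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a - mean) * (b - mean) < 0
    )
    return [mean, std, crossings / (n - 1)]

def matches_enrolled(template, window, threshold=0.5):
    """Accept if the Euclidean distance between the enrolled feature
    template and the live window's features is below a threshold."""
    live = gait_features(window)
    dist = math.sqrt(sum((t - l) ** 2 for t, l in zip(template, live)))
    return dist < threshold

# Enrollment: features from a known-good walking sample (synthetic here).
enrolled_walk = [1.0 + 0.3 * math.sin(0.8 * i) for i in range(100)]
template = gait_features(enrolled_walk)

# A similar gait should match; a very different signal should not.
similar = [1.0 + 0.3 * math.sin(0.8 * i + 0.1) for i in range(100)]
different = [2.5 + 1.5 * math.sin(3.0 * i) for i in range(100)]
print(matches_enrolled(template, similar))    # True
print(matches_enrolled(template, different))  # False
```

The hard parts the talk alludes to, such as spoof resistance, privacy-preserving processing, and reaching >99% accuracy across varied devices and contexts, are exactly what this toy threshold check glosses over.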
- Chris Kachris, InAccel: How to get 10x speedup on your ML applications using the power of accelerators
- In this talk we will present the easiest way to harness the power of hardware accelerators to speed up your ML models. We will show how you can run your ML applications up to 10x faster with zero code changes from frameworks like Jupyter, Keras, and scikit-learn. Using InAccel studio, you can accelerate your applications for classification, clustering, and regression from your browser, without any prior knowledge of FPGAs. InAccel provides a unique solution that utilizes the power of FPGA-based accelerators and offers a rich library of ML cores.
- Anmol Suag, Blueshift: Learning to Rank for MarTech
- Given a user, a product catalogue, and a history of interactions between users and products, we can create various types of recommendations for marketing. Some of these recommendations could be based on the user’s category affinity, brand affinity, collaborative filtering, content-based filtering, next best product by textual similarity, product popularity, etc. The list of products (recommendation candidates) coming out of each of these recommendation algorithms could be totally different and would vary from user to user based on their activity, location, interests, etc. This candidate set is much smaller than the full product catalogue. Once we are down to a small candidate set, how do you rank these candidates for a user so that the most relevant products are on top? At Blueshift, we use a pairwise Learning-to-Rank (LTR) model, trained on historical marketing campaigns, to re-rank the candidate set. The features used in the model can be broadly put into these buckets: Recommendation Relevance, Product Quality, Product Fatigue, Category Affinity, and User Activity. The LTR model is compared with other ranking techniques using MAP@K and delivers the best performance.
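MAP@K, the metric the talk uses to compare rankers, is straightforward to compute: average the precision at each rank where a relevant item appears, truncate at K, then take the mean across users. A minimal sketch (the data here is invented for illustration):

```python
def average_precision_at_k(ranked, relevant, k):
    """AP@K: average precision@i over ranks i <= k where a relevant
    item appears, normalized by min(|relevant|, k)."""
    hits = 0
    score = 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    denom = min(len(relevant), k)
    return score / denom if denom else 0.0

def map_at_k(rankings, relevances, k):
    """MAP@K: mean of AP@K across users."""
    aps = [average_precision_at_k(r, rel, k)
           for r, rel in zip(rankings, relevances)]
    return sum(aps) / len(aps)

# Two users: ranked candidate lists vs. products they actually engaged with.
rankings = [["p1", "p2", "p3", "p4"], ["p9", "p7", "p8"]]
relevances = [{"p1", "p3"}, {"p7"}]
print(round(map_at_k(rankings, relevances, 3), 4))  # → 0.6667
```

Because AP rewards relevant items at the top of the list more than the same items further down, MAP@K is a natural fit for re-ranking a small candidate set where only the first few slots of a marketing message matter.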
- Martin Isaksson, PerceptiLabs: A New Visual Way to Build Machine Learning Models
- PerceptiLabs is a dataflow-driven, visual API for TensorFlow. It is a free Python package (hosted on PyPI) for everyone to use. PerceptiLabs wraps low-level TensorFlow code to create visual components, which allows users to visualize the model architecture as the model is being built. The settings/hyperparameters of each component (layer) can be set and tuned with the visual interface. Since these high-level settings generate low-level code, we can provide lots of support to the user, which is the key behind the benefits PerceptiLabs provides. This allows us to auto-generate granular visualizations for each component and to suggest settings (auto-configs) for the user in each component. PerceptiLabs gives the user warnings, errors, and tips during the modeling process to guide them towards building better models. When training starts, PerceptiLabs auto-generates visualizations for every underlying variable in the model, which can then be seen and analyzed in a statistics view. When training is done, the user can automatically test and validate the model before exporting it (e.g., pushing it to production or sharing it on GitHub).
Since each component acts as a template for generating low-level TensorFlow code based on its settings, the user can work in a code view as well. We have made this easy by allowing the user to toggle between the visual interface and the code. By keeping this level of transparency and flexibility, we hope to bridge the gap between beginners and researchers, and we have designed the UX in a similar way to Keras. But instead of a high-level code API like Keras, PerceptiLabs is a visual API where each component can be configured at both a high and a low level right away.
PerceptiLabs is designed so that the user drags and drops components onto a workspace for each layer they want to include in their model. To complete and run the model, a Training component is connected at the end of the model’s graph. This mirrors Keras, where the user writes one-liners of code for each layer and then wraps up and trains the model by invoking a .fit() method. However, the Training component in PerceptiLabs makes it easier to build complex models and to use different machine learning techniques. Due to the Training component’s unique design, PerceptiLabs supports novel model types and techniques: to use Reinforcement Learning (RL), the user connects a Training-RL component at the end of the model; to use Object Detection (OD), a Training-OD component, and so on.
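The component-as-template idea described above can be sketched in plain Python. The class and method names here are hypothetical, not PerceptiLabs' actual internals: each visual component holds high-level settings and renders them into a low-level code line, which is what makes toggling between a visual view and a code view of the same model possible.

```python
# Hypothetical sketch: a visual component stores high-level settings
# and renders them as generated low-level code.
class Component:
    def __init__(self, kind, **settings):
        self.kind = kind          # e.g. "Dense"
        self.settings = settings  # the values the visual interface edits

    def to_code(self):
        """Render this component's settings as one line of code."""
        args = ", ".join(f"{k}={v!r}" for k, v in self.settings.items())
        return f"layers.{self.kind}({args})"

class ModelGraph:
    def __init__(self):
        self.components = []

    def add(self, component):
        self.components.append(component)
        return self

    def code_view(self):
        # The same settings the visual view edits, rendered as code.
        return "\n".join(c.to_code() for c in self.components)

graph = (ModelGraph()
         .add(Component("Dense", units=128, activation="relu"))
         .add(Component("Dense", units=10, activation="softmax")))
print(graph.code_view())
# layers.Dense(units=128, activation='relu')
# layers.Dense(units=10, activation='softmax')
```

Because the code view is generated from the same settings the visual view edits, edits in either view stay consistent, which is the transparency the talk emphasizes.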