This MLconf has been our largest event yet! In an effort to keep the intimate feel of MLconf, we’ve opted to limit attendance to 500 or fewer for each event. So far, this has preserved the personality and experience we’ve been hoping for. Overall, I have to say that this MLconf was our most successful one so far. In terms of organization, things flowed smoothly. Audio/visual worked surprisingly well, including the streaming of the event. I remember last year, five minutes before the morning keynote, all the electronics were down! We were grateful not to have that stress this year. We did experience one unexpected glitch during lunch: the venue did not have enough waiters to serve the crowd, and we, the organizers, had to rush into the kitchen and start serving food! I can’t complain; it was a fun experience. All hands on deck!
The morning track had a theme of deep learning and TensorFlow. Alex Smola, CEO at Marianas Labs, gave an impressive keynote with tricks, algorithms, and facts about mainstream algorithms, and of course about deep learning. The second talk, by Braxton McKee, CEO at Ufora, gave a different angle on parallelization and scaling. Taking a programming language/compiler approach, he emphasized an automated system in which details such as cache efficiency are handled under the hood by machine learning algorithms that optimize data distribution. Ufora chose MLconf to announce the open sourcing of their platform. Taking a small break from platforms, Isabelle Guyon, President at ChaLearn, highlighted the problem of inferring causal factors in everyday data science. The competition she ran revealed an interesting approach discovered by someone who happened to be in the audience! The first session ended with the room packed; it was impossible to find a spot even to stand.
Right after the break we announced the winners of the MLconf Industry Impact Student Research Award, sponsored by Google. The winners were Furong Huang, a student at UC Irvine, and Virginia Smith, a student at UC Berkeley. Find out more about their work in our blog post. Following their brief summaries of their work, we introduced Quoc Le, Software Engineer at Google. Quoc presented on recent advances in deep learning at Google, which raised philosophical questions about the meaning of life!
Irina Rish, Research Staff Member at IBM Watson, presented on the brain, schizophrenia, fMRI, and EKG data while demonstrating a live and stylish EKG sensor. It was the best demonstration of sparse learning in action. Following Irina, Alison Gilmore, Data Scientist at Ayasdi, presented on the fascinating world of topological learning and showed how it can be applied in data analysis. As she put it, it’s all about finding the shape of the data. Just before lunch, Subutai Ahmad, VP of Research at Numenta, announced the open sourcing of their anomaly detection technology for streaming data.
During the break, a loop of videos by Welch Labs presented the fundamentals of neural networks. Welch Labs offers a very nice set of machine learning videos presented in a user-friendly way. It is not just education; it is edutainment!
Following lunch, MLconf veteran speaker Xavier Amatriain, VP of Engineering at Quora, gave us another 10 lessons he has learned from machine learning. Last year, he gave us his first 10 lessons. Both of his talks turned out to be total crowd favorites! Another presentation of lessons learned followed Xavier’s, from Ben Hamner, CTO at Kaggle. Apparently this was the lessons session: it also included Justin Basilico, Research/Engineering Manager at Netflix, who spoke about the lessons he has learned from recommender systems. The session ended with a different type of recommendation: Brad Klingenberg, Director of Styling Algorithms at Stitch Fix, reminded us that the human element is very important in fashion recommendations and is still more reliable than the machine.
Anima Anandkumar, from UC Irvine, presented on the application of tensor methods to a practical model. It is amazing what this simple model can do, and guaranteed global optimality is a big plus. Following Anima, Alessandro Magnani, Data Scientist at Walmart Labs, presented a problem that is becoming hot these days: recommending items with short life cycles! The last two talks, by Narayanan Sundaram, Research Scientist at Intel, and Melanie Warrick, Deep Learning Engineer at Skymind, kept attendance at very high levels even though it was already late. Narayanan’s talk was the only one this time about graphs; the platform they are developing at Intel seems promising and very fast. Melanie closed the conference with a favorite subject, attention models. It was a nice presentation on a topic that Quoc had mentioned briefly earlier in the morning.
We want to thank the speakers for devoting considerable time to preparing their material and presenting it. If you want to get a glimpse of MLconf and test how much of the content you understood, feel free to take our quiz. Video footage and slides from all the talks can be found on the event page.