This is a seven-hour training course (four hours on site and three online) in Google's TensorFlow. Participants will set up a Google or Amazon cloud image, install TensorFlow (TF), and code machine learning algorithms. Before coding, they will be introduced to the architecture of TF as described in the original paper.

The structure of the course is as follows:

  1. Two-hour introduction to the basics of TensorFlow
  2. The class is then divided into two tracks
    1. Advanced Power Users
      1. In the next two hours we will:
        1. Dive deep into the TF programming model. We will discuss how basic linear algebra concepts map to TF operators and optimal strategies for combining them. You will learn how to code shallow and deep models.
        2. Work on performance details
        3. Code machine learning algorithms in TF
    2. Data Science Users
      1. In the next two hours we will:
        1. Get an overview of the tf.learn package
        2. Train linear models
        3. Train deep learning models
        4. Train wide and deep models
  3. Two online sessions of 1.5 hours each will follow. These sessions will focus on further examples and exercises on the topics covered during the onsite sessions.

Who Should Attend?

Participants must be proficient in Python and should be able to install packages on their own machine or on a Google Cloud instance. An understanding of basic machine learning concepts, such as training, linear regression, loss functions, and clustering, is also required. If you can follow MLconf talks at a conceptual level, you should have adequate knowledge for a successful training experience.

Course Philosophy

In this course, we will teach you the fundamental abstractions behind TF and how to use its different packages. Emphasis will be placed on debugging models and analyzing performance and results.

Why did you choose to split the training into onsite and online sessions?

After giving several training sessions and teaching university courses, our team has noticed that attention span and the ability to absorb knowledge drop dramatically after four hours. We have also noticed that a break to study the material, followed by a follow-up session, improves the learning experience significantly. We encourage you to take your time after the onsite session; in the two online sessions that follow, you will have the opportunity to ask questions, repeat the coding yourself, and properly absorb the material. We also believe that after spending four hours together, we will be better acquainted and able to communicate more effectively during the online sessions.

Why did you decide to split the class into two tracks?

The audience is usually inhomogeneous, and it is very hard to self-evaluate your own level. We will give you the opportunity to attend the first hour and, with the help of a questionnaire, decide which track suits you better. Keep in mind that you will have access to all the training material, and you will also be able to switch tracks during the online sessions. For example, someone who chose track 1 and later decides to get exposed to track 2 will be able to do so.

Meet Your Trainers

Petros Mol

Software Engineer, Google

Petros Mol is a Software Engineer at Google, where he works on machine learning models using TensorFlow. Prior to that, he spent two years in Google Maps, working on local search and, in particular, on improving data quality for local businesses. Petros holds a PhD from the Department of Computer Science & Engineering at the University of California, San Diego. His research focused on cryptography, and more specifically on improving cryptographic constructions using coding theory.

Nikolaos Vasiloglou

Technical Chair, MLconf

Nikolaos Vasiloglou holds a PhD from the Department of Electrical and Computer Engineering at the Georgia Institute of Technology. His thesis focused on scalable machine learning over massive datasets. After graduating from Georgia Tech, he founded Analytics1305 LLC and Ismion Inc. He architected and developed the PaperBoat machine learning library, which has been successfully integrated into and used in the LogicBlox and HPCCSystems platforms. He has also served as a machine learning consultant for Predictix, Revolution Analytics, Damballa, Tapad, and LexisNexis. Vasiloglou has recently focused his studies on Google's TensorFlow and has been active in developing the syllabus for a series of TensorFlow training events. His work has resulted in patents and production systems.


Training Schedule


Intro to TensorFlow

  • MLTrain Introduction
  • Tensors Basics
  • Computational Graph Model
  • Graph Inspection & Visualization with TensorBoard
  • Imperative vs Declarative
  • The TensorFlow Session
  • Basic Ops
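To give a flavor of the computational graph model covered here, the following plain-Python toy sketches the "build a graph first, run it later" idea. This is an illustration of deferred execution only, not the TensorFlow API:

```python
# Toy computational graph: building the graph only records operations;
# nothing is computed until run() is called, mirroring TensorFlow's
# declarative graph + Session.run model.

class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(node):
    # Recursively evaluate the graph, like fetching a tensor in a session.
    args = [run(n) for n in node.inputs]
    return node.op(*args)

a = constant(3)
b = constant(4)
c = add(mul(a, a), b)   # graph for a*a + b; nothing computed yet
print(run(c))           # evaluates the graph -> 13
```

The key point the course develops is exactly this separation: graph construction is cheap bookkeeping, and evaluation is a separate, explicit step.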

TensorFlow Cool Features

  • Types, Constants and Placeholders
  • Variables (creation, initialization, mutation, saving/restoring)
  • Advanced Operations
  • Automatic Differentiation
  • Device/Hardware Management
  • Kinds of Parallelism and Distributed Computing (Synchronous vs Asynchronous)
  • Debugging
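Automatic differentiation, listed above, can be sketched in plain Python with forward-mode dual numbers. This is a toy illustration of the idea; TensorFlow itself uses reverse-mode differentiation on the graph:

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries (value, derivative) and the arithmetic rules propagate exact
# derivatives, with no symbolic algebra and no finite differences.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):
    return x * x + x * 3   # f(x) = x^2 + 3x, so f'(x) = 2x + 3

x = Dual(2.0, 1.0)         # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)        # 10.0 7.0
```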

Coffee Break + Split into tracks

Attendees will split into two groups based on their experience and exposure level.


Live Afternoon Tracks Begin

Track 1 (Power Users): Building Algorithms

11:15am – Session 1
Linear Algebra with TF

  • Sparse/Dense Matrix/Vectors Operations
  • Kronecker Products in TF
  • From Matrices to Tensors
  • Tensor Tiling: The Map Operator of TF
  • Reductions on Tensors
  • Limitations of TF
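Reductions on tensors, one of the items above, follow axis semantics that can be mimicked on nested lists. A plain-Python sketch of what a reduce-sum over an axis means (not the TF API itself):

```python
# Reducing a 2-D "tensor" along an axis: axis 0 collapses rows
# (column sums), axis 1 collapses columns (row sums), and no axis
# collapses everything to a scalar.

def reduce_sum(matrix, axis=None):
    if axis is None:                       # full reduction to a scalar
        return sum(sum(row) for row in matrix)
    if axis == 0:                          # column sums
        return [sum(col) for col in zip(*matrix)]
    if axis == 1:                          # row sums
        return [sum(row) for row in matrix]
    raise ValueError("axis out of range for a 2-D tensor")

t = [[1, 2, 3],
     [4, 5, 6]]
print(reduce_sum(t))           # 21
print(reduce_sum(t, axis=0))   # [5, 7, 9]
print(reduce_sum(t, axis=1))   # [6, 15]
```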

12:15pm – Coffee Break

12:30pm – Session 2
Numerical Optimization with TF

  • Creating a Symbolic Objective Function
  • Computing the Gradients
  • Build Your Own Simple Gradient Descent Optimizer
  • The Mini-Batch
  • Inside the Optimizer TF Class
  • Tweaking Predefined Optimizers Via Their Gradients
  • Presentation and Parameter Tuning of Famous Optimizers (AdamOptimizer, RMSPropOptimizer, AdagradOptimizer)
  • Build Your First Linear Regression In 3 Lines!
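The session above builds a gradient descent optimizer by hand; the core loop can be sketched in plain Python for mini-batch linear regression (an illustration of the algorithm, not TF code):

```python
import random

# Hand-rolled mini-batch gradient descent for 1-D linear regression
# y = w*x + b, minimizing mean squared error: the algorithm that a
# GradientDescentOptimizer applies to a symbolic objective.

random.seed(0)
data = [(i / 10, 2.0 * (i / 10) + 1.0) for i in range(20)]  # true w=2, b=1

w, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    batch = random.sample(data, 4)                # mini-batch of 4
    # analytic gradients of the batch MSE with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
    gb = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))   # close to 2.0 and 1.0
```

Swapping the two update lines for a momentum or adaptive rule is exactly what the predefined optimizers (Adam, RMSProp, Adagrad) do internally.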

Track 2 (Data Scientists): ML with TensorFlow

11:15am – Session 1
Optimization In TensorFlow

  • Objective Function
  • Gradients Computation
  • The tf.Optimizer Class
  • Predefined Optimizers (FtrlOptimizer, GradientDescentOptimizer, AdagradOptimizer, SDCAOptimizer)
  • TF Linear Regression Model In 3 Lines
  • Predefined Losses
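The "linear regression in 3 lines" item above boils down to fitting w and b under a mean-squared-error loss. For 1-D data that fit even has a closed form, sketched here in plain Python (TF's predefined optimizers reach the same minimum iteratively):

```python
# Closed-form least squares for y = w*x + b: minimizing the same MSE
# objective that the predefined optimizers minimize iteratively.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]           # exactly y = 2x + 1

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x
print(w, b)   # 2.0 1.0
```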

12:15pm – Coffee Break

12:30pm – Session 2
The tf.contrib.learn Package

  • Overview of the tf.contrib.learn Package
  • The Estimator Class
  • Feature Columns & Feature Engineering
  • Input Processing
  • Linear Estimators
  • Logging and Debugging
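The Estimator class covered above exposes a uniform fit/predict/evaluate interface, so training and evaluation code stay independent of the model internals. A toy plain-Python sketch of that pattern (not the tf.contrib.learn API itself):

```python
# Toy sketch of the Estimator pattern: every model exposes the same
# fit / predict / evaluate interface regardless of its internals.

class MeanEstimator:
    """Trivial model that predicts the mean of the training targets."""

    def fit(self, xs, ys):
        self.mean_ = sum(ys) / len(ys)
        return self                      # allow chaining: fit(...).predict(...)

    def predict(self, xs):
        return [self.mean_ for _ in xs]

    def evaluate(self, xs, ys):
        preds = self.predict(xs)
        # mean squared error as the evaluation metric
        return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

est = MeanEstimator().fit([1, 2, 3], [2.0, 4.0, 6.0])
print(est.predict([10]))                          # [4.0]
print(est.evaluate([1, 2, 3], [2.0, 4.0, 6.0]))   # 8/3
```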

Online Tracks Begin

Track 1 (Power Users): Building Algorithms

Session 3
Basic Machine Learning Concepts with TF

  • Continue Building Linear Models (Logistic)
  • Using different objectives: L1, L2, Cross Entropy, Hinge Loss, and Maximum Likelihood
  • Adding L1/L2 Regularization
  • Code Your First Multilayer Perceptron (MLP)
  • Regularizing with Dropout
  • Debugging Your Model with TensorBoard
  • The Estimator Class
  • Using Components of tf.contrib.learn to Code New ML Algorithms

Session 4
Deep Learning Architectures and Debugging

  • Working with Relational Data
  • Experimenting with Different Architectures
  • Debugging Your Deep Learning Model
  • Achieving Better Task Parallelism
  • Distributed TensorFlow

Track 2 (Data Scientists): ML with TensorFlow

Session 3
Deep Models in tf.contrib.learn Package

  • Training Deep Models in tf.contrib.learn
  • Wide & Deep Models
  • Creating Custom Estimators
  • Monitoring and Debugging with TensorBoard
  • Reading Data in TF
  • Canonical Data Format (Example: proto)
  • Input Readers

Session 4
Distributed TensorFlow

  • Batching
  • Preprocessing
  • Improving I/O
  • Optimization tips
  • Pitfalls
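Batching, the first item above, can be pictured as slicing an input stream into fixed-size chunks; a plain-Python generator sketch of what an input pipeline does (TF's actual readers handle this with queues and threads):

```python
# Batching: yield fixed-size slices of the input, the building block
# of any input pipeline. The final, smaller batch is kept rather
# than dropped.

def batches(items, batch_size):
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

data = list(range(7))
print(list(batches(data, 3)))   # [[0, 1, 2], [3, 4, 5], [6]]
```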

Preview of MLTrain 2

  • Generative Models
  • Modern Embedding Methods
  • Recurrent Architectures