Advanced Machine Learning Training

Due to the COVID-19 outbreak, our training courses will be taught via an online classroom.

Receive in-depth knowledge from industry professionals, test your skills with hands-on assignments & demos, and get access to valuable resources and tools.

The Advanced Machine Learning training provides a deep dive into several Machine Learning topics and methods. After this training you will be able to use ML methods and models, assess in which situations they can be applied, and understand the power and limitations of each of them.

These classes are perfect for companies of all sizes that want to close the data gap and train their employees. You can follow the schedule below in our offices or contact us for a tailor-made program that meets your needs.

Are you interested? Contact us and we will get in touch with you.

Get in touch for more information

Fill in the form and we will contact you about the Advanced Machine Learning training:

What you will learn

The Advanced Machine Learning training is a great fit for people with a good foundation and experience with data science and/or machine learning who want to dive deeper into advanced ML technologies. 

During this training you will learn:

  • how to put machine learning methods and models into practice;
  • how to choose the right method or model for a given situation;
  • the strengths, weaknesses, and limitations of ML technologies.

After the training you will receive a Certificate of Completion. 

Training Dates

This Advanced Machine Learning training consists of 7 classes spread over a couple of months to maximize learning and retention. The content of the classes builds on what came before, so in general we advise attending all of them. If you would like to attend one or more individual classes, contact us so we can give you the right advice about a tailored training course.

Join the Advanced Machine Learning Training on these dates in our office or contact us for a tailored training:

Class                            Available Dates
Decision & Regression Trees      March 13, 2020  |  July 3, 2020
Neural Networks                  March 20, 2020  |  July 10, 2020
Bayesian Learning 1              March 27, 2020  |  July 17, 2020
Local Models                     May 8, 2020  |  August 21, 2020
Ensemble Models                  May 15, 2020  |  August 28, 2020
Problem Solving & Optimization   May 29, 2020  |  September 4, 2020
Deep Learning                    August 28, 2020  |  December 4, 2020

Detailed description of the Classes

Below you will find a detailed description of each class:

Decision & Regression Trees

In this training you will be introduced to Decision Trees, how they are constructed, and which algorithms are used to build them. Furthermore, you will learn how to optimize them and why this is necessary. Finally, we will explain how these concepts extend to their regression equivalents, regression trees. At the end, we will implement a decision tree algorithm from scratch and then apply a scikit-learn decision tree to the Shuttle dataset provided by NASA.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Understanding high dimensional data
  • Decision Tree basics
  • Decision nodes, leaf nodes
  • Entropy, Information Gain and Gini impurity
  • ID3, CART algorithms
  • Optimizing decision trees
  • Regression trees
  • Advantages and disadvantages
  • Lab sessions to get hands-on experience applying this knowledge
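
As a taste of the lab, here is a minimal sketch of the scikit-learn part of the exercise. The OpenML dataset name "shuttle" and the chosen hyperparameters are illustrative assumptions, not the exact course materials:

    # A minimal sketch, assuming the NASA Statlog (Shuttle) dataset is
    # published on OpenML under the name "shuttle"; any labelled tabular
    # dataset would work the same way.
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Fetch features X and class labels y (dataset name is an assumption).
    X, y = fetch_openml("shuttle", version=1, return_X_y=True, as_frame=False)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)

    # Capping max_depth is one simple way to regularize ("optimize") the tree;
    # Gini impurity is the default split criterion used by CART.
    clf = DecisionTreeClassifier(criterion="gini", max_depth=8, random_state=42)
    clf.fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
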
Neural Networks

This training will provide a thorough foundation in the topic of Neural Networks. We start off with an intuitive explanation of neural networks by drawing parallels to neurons and the brain, before moving to a theoretical explanation of the structure of Neural Networks. Starting from their simplest building blocks, perceptrons, and working up to multi-layer neural networks while gradually introducing additional components, we build a good conceptual understanding of how a signal travels forward through the network. Then we dive deeper into how the network is trained to learn an objective, using techniques such as gradient descent and backpropagation. Finally, practical considerations are discussed, such as network design choices, tackling common issues, and ways to optimize the network.

This theory will be interspersed with three hands-on lab sessions, in which we first implement a perceptron building block from scratch, then train a neural network to recognize handwritten digits using the Keras library, and finally learn how to implement backpropagation in NumPy.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Perceptrons, the building blocks of neural networks
  • Neural Networks as Multi-Layer Perceptrons
  • Activation functions
  • Feed-forward mechanism
  • Training: minimizing the loss function using gradient descent and backpropagation
  • Neural network design: architectures, loss functions, learning rate
  • Common issues such as over/underfitting and ways to counteract them
  • Optimizers such as Stochastic Gradient Descent and Adam
  • Potential applications
  • Lab sessions to get hands-on experience on perceptrons, Keras models and backpropagation
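
To give a flavour of the second lab session, here is a minimal Keras sketch that trains a small multi-layer perceptron to recognize handwritten digits; the architecture and the number of epochs are illustrative assumptions:

    # A minimal sketch of the Keras lab: a small multi-layer perceptron
    # trained on the MNIST handwritten digits.
    from tensorflow import keras

    # Load the MNIST digits and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Flatten the 28x28 image, one hidden ReLU layer, softmax over 10 digits.
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])

    # Adam minimizes the cross-entropy loss via backpropagation.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, validation_split=0.1)
    print(model.evaluate(x_test, y_test))
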
Bayesian Learning

Bayesian Learning takes a statistical approach towards modelling, putting emphasis on probability distributions and uncertainty of outcomes. This training is more statistics focused and forms a good basis for understanding Generative Algorithms such as Naïve Bayes, in which prior beliefs are updated based on observed data (in contrast to Discriminative Algorithms).

We start with an introduction to the frequentist versus the Bayesian approach to determining probabilities, and motivate the use of prior beliefs in the Bayesian way of thinking. Revisiting Bayes' theorem and probability distributions, we explain Bayesian parameter estimation and finally arrive at the Naïve Bayes algorithm, a classifier based on Bayesian Learning.

Having learned the theoretical foundation, we combine these concepts in the hands-on lab session by implementing a Naïve Bayes classifier in Python and applying it to the Pima Indians diabetes dataset to predict the onset of diabetes.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Frequentist vs Bayesian thinking
  • Bayes' theorem
  • Prior and posterior probability distributions
  • Parameter estimation
  • Naïve Bayes algorithm
  • Lab sessions to get hands-on experience on Bayesian Estimation
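
As an illustration of where the lab ends up, here is a minimal sketch using scikit-learn's GaussianNB instead of the from-scratch implementation the lab builds; the OpenML dataset name "diabetes" for the Pima data is an assumption:

    # A minimal sketch of a Gaussian Naive Bayes classifier on the Pima
    # diabetes data (OpenML name "diabetes" is an assumption).
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # GaussianNB estimates a per-class mean and variance for every feature
    # (the parameter-estimation step) and applies Bayes' theorem to classify.
    model = GaussianNB()
    model.fit(X_train, y_train)
    print("Test accuracy:", model.score(X_test, y_test))
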
Local Models

This training focuses on distance- and density-based machine learning models, which primarily take into account the local character of data points. Encompassing Density Estimation, Nearest Neighbors, and Clustering, these algorithms are essential tools in the Machine Learning toolkit.

We start by introducing the topic of Local Models and their applications in supervised as well as in unsupervised machine learning. We then discuss the relevance of different distance metrics and normalization methods, before delving into Density Estimation methods such as Kernel Density Estimation as well as parametric alternatives. Continuing to nearest neighbors algorithms like k-Nearest Neighbors and Approximate Nearest Neighbors, we finally arrive at unsupervised methods for Clustering, such as k-Means, Expectation Maximization and DBSCAN. This theoretical knowledge is applied in practice during a two-part lab session. 

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Distance Metrics
  • Normalization
  • Density Estimation
  • k-Nearest Neighbors
  • Approximate Nearest Neighbors
  • Clustering
  • k-Means
  • Expectation-Maximization
  • Hierarchical Clustering
  • DBSCAN
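
To illustrate two of the topics above in combination, here is a minimal sketch that normalizes synthetic data and then clusters it with k-Means; the data and the choice of k = 3 are illustrative assumptions:

    # A minimal sketch combining normalization and k-Means clustering.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Three well-separated synthetic clusters.
    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

    # Distance-based methods are sensitive to feature scale, so normalize first.
    X_scaled = StandardScaler().fit_transform(X)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X_scaled)
    print("Cluster sizes:", np.bincount(labels))
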
Ensemble Models

Machine Learning encompasses a large set of different algorithms, and many of them (if not all) suffer from high bias or high variance. Ensemble Learning aims to reduce bias and/or variance using methods such as bagging, boosting, and stacking, thereby combining weak learners into stronger ones.

We first revisit the Bias-Variance Tradeoff and motivate how Ensemble Learning tries to address it. We then discuss Bootstrap Aggregating (bagging), its role in reducing variance, and how it is implemented in Random Forests. Continuing to Boosting, we explain how it aims to tackle issues of high bias and discuss implementations like AdaBoost and Gradient Boosting. We address how model performance can be improved using Stacking, and when this generally works best. We conclude with an overview of the techniques and their advantages and disadvantages.

Having learned the theory, we apply these methods in practice during a lab exercise, deepening our understanding of all three methods: bagging, boosting, and stacking.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Combining algorithms
  • Bias-Variance Trade-off
  • Bagging (bootstrap aggregating)
  • Majority Voting
  • Random Forests
  • Boosting
  • AdaBoost & Gradient Boosting
  • Stacking
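
For a flavour of the lab, here is a minimal sketch comparing the three ensemble flavours with scikit-learn on one synthetic dataset; the base learners and hyperparameters are illustrative assumptions:

    # A minimal sketch of bagging, boosting, and stacking side by side.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                                  StackingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        # Bagging: many deep trees on bootstrap samples, reducing variance.
        "random forest (bagging)": RandomForestClassifier(random_state=0),
        # Boosting: weak learners fitted sequentially, reducing bias.
        "AdaBoost (boosting)": AdaBoostClassifier(random_state=0),
        # Stacking: a meta-learner combines heterogeneous base models.
        "stacking": StackingClassifier(
            estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=LogisticRegression(max_iter=1000)),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, model.score(X_test, y_test))
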
Problem Solving & Optimization

Problem Solving & Optimization encompasses a broad range of techniques and subfields that can be applied to many real-world problems. This training aims to provide a thorough overview of these techniques and valuable knowledge about how they can be applied, in machine learning as well as beyond.

We will provide an overview of different groups of optimization methods, from ‘exact’ methods such as Mathematical Programming and Gradient-Based Optimization to heuristic methods like Simulated Annealing and Evolutionary Computation, spending more time on the most important ones. Finally, we discuss the most common challenges in optimization, e.g. local vs global optima, the exploration vs exploitation tradeoff, and tuning, before concluding with some practical notes on specialized solvers and good use cases.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Problem solving contexts
  • Solution representations, constraints and objective functions
  • Mathematical programming
  • Gradient-based optimization
  • Black box optimization
  • (Meta)heuristics
  • Simulated Annealing
  • Tabu search
  • Evolutionary Computation
  • Local vs global optima
  • Exploration vs Exploitation tradeoff
  • Tuning
  • Specialized solvers
  • No free lunch theorem
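
To make the acceptance rule of Simulated Annealing concrete, here is a minimal from-scratch sketch that minimizes a simple multimodal function; the objective, step size, and cooling schedule are illustrative assumptions:

    # A minimal from-scratch sketch of Simulated Annealing.
    import math
    import random

    def objective(x):
        # A multimodal function with many local optima.
        return x**2 + 10 * math.sin(3 * x)

    x = random.uniform(-10, 10)   # random initial solution
    best, temp = x, 10.0

    for _ in range(5000):
        candidate = x + random.gauss(0, 0.5)       # local move
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools (exploration vs exploitation).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if objective(x) < objective(best):
                best = x
        temp *= 0.999                               # geometric cooling

    print(f"Best x = {best:.3f}, objective = {objective(best):.3f}")
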
Deep Learning

As part of a module on machine learning algorithms, and as a follow-up to the Neural Networks training, this training will provide a thorough theoretical overview of the topic of Deep Neural Networks.

Starting with an extensive introduction to the history and background of Deep Learning, we get an understanding of the obstacles and subsequent breakthroughs in (deep) neural networks, and a broad overview of where the field currently stands. Moving into the theory, we first give a short recap of basic neural network concepts, before examining common issues that particularly occur in deep neural networks and discussing ways to overcome them. Additionally, we describe different kinds of deep neural networks, such as Restricted Boltzmann Machines and Deep Belief Nets, before arriving at Convolutional Neural Networks. Finally, we summarize the advantages and disadvantages of applying Deep Learning and mention commonly used deep learning frameworks, such as Keras and TensorFlow.

The training includes theory, demos, and hands-on exercises. After this training you will have gained knowledge about:

  • Deep Learning history and background
  • Applications
  • Neural networks recap: activation functions, loss functions, architectures, backpropagation, optimizers
  • Common issues, such as vanishing gradients, overfitting and tuning the learning rate
  • Methods to prevent overfitting, such as regularization, early stopping, dropout, sparse connectivity and data augmentation
  • Different kinds of deep neural networks
  • Restricted Boltzmann Machines
  • Deep Belief Nets
  • Stacked Denoising Autoencoder
  • Convolutional Neural Networks
  • Transfer learning
  • Software stack: Keras, TensorFlow/Theano, etc.
  • Advantages/disadvantages
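
To give a flavour of the frameworks mentioned above, here is a minimal Keras sketch of a small Convolutional Neural Network, with dropout as one overfitting countermeasure; the architecture and the number of epochs are illustrative assumptions:

    # A minimal Keras sketch of a small CNN for handwritten digits.
    from tensorflow import keras

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0   # add channel axis, scale to [0, 1]
    x_test = x_test[..., None] / 255.0

    model = keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dropout(0.5),   # regularization against overfitting
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_split=0.1)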

For more information or to book your training

Are you interested in the Advanced Machine Learning training or do you have questions? Fill out the form and we will contact you personally.