Course Outline

The course is divided into three distinct days; the third day is optional.

Day 1 - Machine Learning & Deep Learning: theoretical concepts

1. Introduction to AI, Machine Learning & Deep Learning

- History, fundamental concepts, and common applications of artificial intelligence, far removed from the hype surrounding the field

- Collective intelligence: aggregating knowledge shared by many virtual agents

- Genetic algorithms: evolving a population of virtual agents through selection

- Classical Machine Learning: definition

- Types of learning: supervised learning, unsupervised learning, reinforcement learning

- Types of tasks: classification, regression, clustering, density estimation, dimensionality reduction

- Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree

- Machine Learning vs. Deep Learning: problems on which classical Machine Learning remains the state of the art today (Random Forests & XGBoost)
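
As a taste of the first algorithm in this list, linear regression can be fit in closed form with NumPy's least-squares solver. This is a minimal sketch on synthetic data invented for illustration (the true relation y = 2x + 1 is an assumption of the example, not part of the course material):

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.01, size=100)

# Ordinary least squares: append an intercept column, then solve.
A = np.hstack([X, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coef  # recovered parameters, close to (2, 1)
```

The same fit could equally be done with scikit-learn's LinearRegression; the closed-form version is shown because it makes the underlying computation explicit.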

2. Fundamental concepts of a neural network (Application: multi-layer perceptron)

- Review of the mathematical foundations

- Definition of a neural network: classic architecture, activation functions, weighting of previous activations, depth of a network

- Training a neural network: cost functions, backpropagation, stochastic gradient descent, maximum likelihood

- Modeling a neural network: representing input and output data according to the type of problem (regression, classification...). Curse of dimensionality. Distinction between multi-feature data and signals. Choosing a cost function suited to the data.

- Approximating a function with a neural network: presentation and examples

- Approximating a distribution by a neural network: presentation and examples

- Data Augmentation: how to balance a dataset

- Generalization of the results of a neural network.

- Initialization and regularization of a neural network: L1/L2 regularization, Batch Normalization...

- Optimization and convergence algorithms
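
The training concepts above (cost function, backpropagation, gradient descent) can be sketched in plain NumPy on the XOR toy problem. The architecture and hyperparameters below are illustrative choices for the example, not prescriptions from the course:

```python
import numpy as np

# A 2-8-1 multi-layer perceptron trained by backpropagation and
# full-batch gradient descent on XOR, with a mean squared error cost.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    # Backward pass: chain rule through the MSE cost and both sigmoids.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

After training, the loss drops to near zero and the rounded outputs reproduce the XOR truth table; stochastic gradient descent would differ only in sampling mini-batches instead of using the full dataset each step.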

3. Common ML/DL tools

Each tool is briefly presented with its advantages, disadvantages, position in the ecosystem, and typical uses.

- Data management tools: Apache Spark, Apache Hadoop

- Common Machine Learning tools: NumPy, SciPy, scikit-learn

- High-level DL frameworks: PyTorch, Keras, Lasagne

- Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow

 

Day 2 - Convolutional and recurrent networks

4. Convolutional Neural Networks (CNN).

- Presentation of CNNs: fundamental principles and applications

- Fundamental operation of a CNN: convolutional layers, kernels, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions.

- Presentation of the CNN architectures that have defined the state of the art in image classification: LeNet, VGG Networks, Network in Network, Inception, ResNet. Innovations introduced by each architecture and their broader applications (1x1 convolutions, residual connections)

- Use of an attention model.

- Application to a usual classification scenario (text or image)

- CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upsampling feature maps when generating an image.
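
The convolution, padding, stride and pooling operations listed above can be sketched naively in NumPy. This is a didactic single-channel implementation (real frameworks use heavily optimized kernels); the Sobel filter and step image are illustrative:

```python
import numpy as np

def conv2d(img, kernel, stride=1, pad=0):
    """Cross-correlate a 2D image with a kernel after zero padding."""
    img = np.pad(img, pad)
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.zeros((6, 6)); img[:, 3:] = 1.0       # vertical step edge
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
fmap = conv2d(img, sobel_x, stride=1, pad=1)   # 6x6 feature map
pooled = max_pool(fmap, size=2)                # 3x3 after 2x2 pooling
```

The feature map responds strongly along the edge columns and is near zero elsewhere, which is exactly the locality that makes convolutional layers effective on images.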

5. Recurrent Neural Networks (RNN).

- Presentation of RNNs: fundamental principles and applications.

- Fundamental operation of an RNN: hidden activation, backpropagation through time, unfolded version

- Evolution towards GRUs (Gated Recurrent Units) and LSTMs (Long Short-Term Memory). Presentation of the different states and the changes introduced by these architectures

- Convergence problems and the vanishing gradient

- Classic architecture types: time series prediction, classification...

- Encoder-decoder RNN architecture. Use of an attention model.

- NLP applications: word/character encoding, translation

- Video applications: predicting the next frame of a video sequence
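
The hidden-activation recurrence and its unfolded version can be sketched as a plain forward pass in NumPy. Dimensions, weight scales and the random input sequence below are arbitrary illustrations, assuming the standard vanilla-RNN update h_t = tanh(W_xh x_t + W_hh h_(t-1) + b):

```python
import numpy as np

# Forward pass of a vanilla RNN, unfolded over T time steps.
rng = np.random.default_rng(0)
input_dim, hidden_dim, T = 3, 5, 4
W_xh = rng.normal(0, 0.1, (input_dim, hidden_dim))   # input-to-hidden
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden-to-hidden
b = np.zeros(hidden_dim)

xs = rng.normal(0, 1, (T, input_dim))  # an input sequence of length T
h = np.zeros(hidden_dim)               # initial hidden state
states = []
for x_t in xs:  # the same weights are reused at every time step
    h = np.tanh(x_t @ W_xh + h @ W_hh + b)
    states.append(h)
states = np.stack(states)  # shape (T, hidden_dim)
```

Backpropagation through time applies the chain rule backwards through this very loop, which is where the repeated multiplication by W_hh causes the vanishing-gradient problem that GRUs and LSTMs address.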

 

Day 3 - Generative Models and Reinforcement Learning

6. Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN)

- Presentation of generative models; link with the CNNs covered on Day 2

- Autoencoder: dimensionality reduction and limited generation

- Variational Autoencoder: a generative model that approximates a data distribution. Definition and use of the latent space. Reparameterization trick. Applications and observed limitations.

- Generative Adversarial Networks: fundamental principles. Two-network architecture (generator and discriminator) with alternating training; available cost functions.

- Convergence of a GAN and difficulties encountered.

- Improving convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

- Applications: image and photograph generation, text generation, super-resolution
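
The reparameterization trick mentioned above can be sketched in NumPy: instead of sampling z directly from N(mu, sigma^2), the noise is drawn externally and transformed deterministically, so gradients with respect to mu and sigma can flow through the sampling step. The values of mu and log_var below are arbitrary placeholders for an encoder's output:

```python
import numpy as np

# Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])       # stand-in for the encoder's mean output
log_var = np.array([0.0, 0.0])   # stand-in for the encoder's log-variance

eps = rng.standard_normal((10000, 2))   # external noise, no gradient needed
z = mu + np.exp(0.5 * log_var) * eps    # differentiable in mu and log_var
```

Empirically the samples have mean mu and unit standard deviation here, confirming that the transformed noise follows the intended Gaussian; in a real VAE this z feeds the decoder during training.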

7. Deep Reinforcement Learning.

- Presentation of reinforcement learning: controlling an agent in an environment defined by states and possible actions

- Using a neural network to approximate the state-value function

- Deep Q-Learning: experience replay, and application to controlling a video game

- Optimizing the learning policy. On-policy vs. off-policy. Actor-critic architecture. A3C.

- Applications: controlling a simple video game or a digital system
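
Deep Q-Learning replaces the table below with a neural network, but the underlying update rule can be sketched in tabular form. The environment is a made-up 5-state chain (move left or right, reward 1 on reaching the last state), and the hyperparameters are illustrative:

```python
import numpy as np

# Tabular Q-learning on a 5-state chain with actions {0: left, 1: right}.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != 4:  # state 4 is terminal
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: bootstrap off the greedy next-state value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

After training, moving right dominates moving left in every non-terminal state, i.e. the greedy policy is optimal; Deep Q-Learning performs the same update on the parameters of a network Q(s, a; theta), with experience replay decorrelating the transitions.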

Requirements

Engineer level

21 Hours
