tpuprogramming 
TPU Programming: Building Neural Network Applications on Tensor Processing Units 
7 hours 
The Tensor Processing Unit (TPU) is the architecture that Google has used internally for several years, and it is just now becoming available to the general public. It includes several optimizations specifically for neural networks, including streamlined matrix multiplication and 8-bit integer arithmetic in place of higher-precision formats, which still delivers appropriate levels of accuracy for neural network workloads.
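As an illustrative sketch only (a generic symmetric linear quantization scheme, not Google's actual TPU implementation), the idea of trading floating-point precision for 8-bit integers looks like this:

```python
def quantize_int8(weights):
    """Symmetrically quantize a list of floats to int8 with a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Map the int8 values back to approximate floats."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.98]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
# Round-trip error stays below half a scale step, which is often
# acceptable for neural network inference.
```

The integer values are far cheaper to multiply in hardware than floats, which is the core of the throughput gain.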
In this instructor-led, live training, participants will learn how to take advantage of the innovations in TPU processors to maximize the performance of their own AI applications.
By the end of the training, participants will be able to:
Train various types of neural networks on large amounts of data
Use TPUs to speed up the inference process by up to two orders of magnitude
Utilize TPUs for processing-intensive applications such as image search, Cloud Vision and Photos
Audience
Developers
Researchers
Engineers
Data scientists
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us. 
undnn 
Understanding Deep Neural Networks 
35 hours 
This course begins by giving you conceptual knowledge of neural networks and, more generally, of machine learning and deep learning (algorithms and applications).
Part 1 (40%) of this training focuses on fundamentals and will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.
Part 2 (20%) of this training introduces Theano, a Python library that makes writing deep learning models easy.
Part 3 (40%) of the training is based extensively on TensorFlow, the second-generation API of Google's open source software library for deep learning. The examples and hands-on exercises are all done in TensorFlow.
Audience
This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects
After completing this course, delegates will:
have a good understanding of deep neural networks (DNNs), CNNs and RNNs
understand TensorFlow's structure and deployment mechanisms
be able to carry out installation, production environment, architecture and configuration tasks
be able to assess code quality, and perform debugging and monitoring
be able to implement advanced production techniques such as training models, building graphs and logging
Due to the vastness of the subject, not all topics can be covered in a public classroom within the 35-hour duration; the complete course would run around 70 hours rather than 35.
Part 1 – Deep Learning and DNN Concepts
Introduction AI, Machine Learning & Deep Learning
History, basic concepts and usual applications of artificial intelligence, far from the fantasies surrounding this domain
Collective Intelligence: aggregating knowledge shared by many virtual agents
Genetic algorithms: to evolve a population of virtual agents by selection
Machine Learning: definition.
Types of tasks: supervised learning, unsupervised learning, reinforcement learning
Types of actions: classification, regression, clustering, density estimation, reduction of dimensionality
Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree
Machine Learning vs Deep Learning: problems on which Machine Learning remains the state of the art today (Random Forests & XGBoost)
Basic Concepts of a Neural Network (Application: multilayer perceptron)
Review of mathematical foundations.
Definition of a neural network: classical architecture, activation functions and weighting of previous activations, depth of a network
Definition of neural network training: cost functions, backpropagation, stochastic gradient descent, maximum likelihood.
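The cost function / backpropagation / stochastic gradient descent loop can be sketched for a single sigmoid neuron. This toy example (hypothetical data and learning rate) performs one SGD update per sample on the cross-entropy cost:

```python
import math
import random

def sgd_train(data, lr=0.5, epochs=200, seed=0):
    """Fit w, b of one sigmoid neuron by stochastic gradient descent."""
    random.seed(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)                           # stochastic: random order
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # forward pass
            grad = p - y                               # dCost/d(wx + b) for cross-entropy
            w -= lr * grad * x                         # backpropagate to the weight
            b -= lr * grad                             # backpropagate to the bias
    return w, b

# Learn a simple threshold: y = 1 when x > 0
w, b = sgd_train([(-2, 0), (-1, 0), (1, 1), (2, 1)])
```

The same three steps (forward pass, gradient of the cost, parameter update) generalize layer by layer to deep networks.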
Modeling a neural network: modeling input and output data according to the type of problem (regression, classification, ...). Curse of dimensionality. Distinction between multi-feature data and signal. Choice of a cost function according to the data.
Approximation of a function by a neural network: presentation and examples
Approximation of a distribution by a neural network: presentation and examples
Data augmentation: how to balance a dataset
Generalization of the results of a neural network.
Initialization and regularization of a neural network: L1/L2 regularization, batch normalization, ...
Optimization and convergence algorithms
Standard ML / DL Tools
For each tool, a brief presentation of its advantages, disadvantages, position in the ecosystem and uses is planned.
Data management tools: Apache Spark, Apache Hadoop
Machine Learning tools: NumPy, SciPy, scikit-learn
DL high level frameworks: PyTorch, Keras, Lasagne
Low level DL frameworks: Theano, Torch, Caffe, TensorFlow
Convolutional Neural Networks (CNN).
Presentation of the CNNs: fundamental principles and applications
Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions.
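The kernel, padding and stride mechanics reduce to the classic output-size formula, out = (n + 2p - k) / s + 1. A minimal 1-D convolution sketch (illustrative only):

```python
def conv1d(signal, kernel, stride=1, padding=0):
    """Cross-correlate a zero-padded 1-D signal with a kernel; returns the feature map."""
    x = [0] * padding + list(signal) + [0] * padding
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1   # the output-size formula
    return [sum(x[i * stride + j] * kernel[j] for j in range(k))
            for i in range(out_len)]

# A [1, -1] kernel responds only where the signal changes (an edge detector)
feature_map = conv1d([0, 0, 1, 1, 1], [1, -1])   # -> [0, -1, 0, 0]
```

The 2-D and 3-D cases slide the same kernel over additional axes; pooling layers then downsample the resulting feature map.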
Presentation of the different CNN architectures that brought the state of the art in image classification: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their broader applications (1x1 convolution, residual connections)
Use of an attention model.
Application to a common classification case (text or image)
CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upscaling feature maps in image generation.
Recurrent Neural Networks (RNN).
Presentation of RNNs: fundamental principles and applications.
Basic operation of an RNN: hidden activation, backpropagation through time, unfolded version.
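The hidden-activation recurrence and its unfolded version can be sketched with scalar weights (an illustration only, not a trainable network):

```python
import math

def rnn_unrolled(xs, wx, wh, h0=0.0):
    """Run a scalar RNN h_t = tanh(wx*x_t + wh*h_{t-1}) over a sequence."""
    hs, h = [], h0
    for x in xs:
        h = math.tanh(wx * x + wh * h)   # the same weights are reused at every step
        hs.append(h)
    return hs

states = rnn_unrolled([1.0, 0.0, 0.0, 0.0], wx=1.0, wh=0.5)
# With |wh| < 1 the influence of the first input decays at every step --
# the vanishing-gradient behaviour that GRUs and LSTMs were designed to address.
```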
Evolutions towards the Gated Recurrent Units (GRUs) and LSTM (Long Short Term Memory).
Presentation of the different states and the evolutions brought by these architectures
Convergence and vanishing gradient problems
Classical architectures: Prediction of a temporal series, classification ...
RNN Encoder Decoder type architecture. Use of an attention model.
NLP applications: word / character encoding, translation.
Video Applications: prediction of the next generated image of a video sequence.
Generative models: Variational Autoencoder (VAE) and Generative Adversarial Networks (GAN).
Presentation of generative models, link with CNNs
Autoencoder: dimensionality reduction and limited generation
Variational autoencoder: generative model and approximation of the distribution of a dataset. Definition and use of latent space. Reparameterization trick. Applications and observed limits
Generative Adversarial Networks: fundamentals. Dual-network architecture (generator and discriminator) with alternating training, available cost functions.
Convergence of a GAN and difficulties encountered.
Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.
Applications for the generation of images or photographs, text generation, super-resolution.
Deep Reinforcement Learning.
Presentation of reinforcement learning: control of an agent in an environment defined by a state and possible actions
Use of a neural network to approximate the state-value function
Deep Q Learning: experience replay, and application to the control of a video game.
Optimization of the learning policy. On-policy vs. off-policy. Actor-critic architecture. A3C.
Applications: control of a single video game or a digital system.
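A minimal tabular sketch of Q-learning with experience replay on a hypothetical five-state corridor; Deep Q-Learning replaces the table with a neural network, but the replay mechanism is the same:

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Corridor MDP: start at state 0, actions 0/1 move left/right,
    reward 1 on reaching the last (terminal) state."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    replay = []                                        # stored (s, a, r, s') transitions
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)                # explore (or break ties)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1      # exploit
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states - 1 else 0.0
            replay.append((s, a, r, s2))
            # Learn from a random mini-batch of past experience, not just the last step
            for bs, ba, br, bs2 in random.sample(replay, min(8, len(replay))):
                done = bs2 == n_states - 1
                target = br + (0.0 if done else gamma * max(Q[bs2]))
                Q[bs][ba] += alpha * (target - Q[bs][ba])
            s = s2
    return Q

Q = q_learning()
greedy = [q.index(max(q)) for q in Q[:-1]]   # should converge to "always move right"
```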
Part 2 – Theano for Deep Learning
Theano Basics
Introduction
Installation and Configuration
Theano Functions
inputs, outputs, updates, givens
Training and Optimization of a neural network using Theano
Neural Network Modeling
Logistic Regression
Hidden Layers
Training a network
Computing and Classification
Optimization
Log Loss
Testing the model
Part 3 – DNNs using TensorFlow
TensorFlow Basics
Creation, Initializing, Saving, and Restoring TensorFlow variables
Feeding, Reading and Preloading TensorFlow Data
How to use TensorFlow infrastructure to train models at scale
Visualizing and Evaluating models with TensorBoard
TensorFlow Mechanics
Prepare the Data
Download
Inputs and Placeholders
Build the Graph
Inference
Loss
Training
Train the Model
The Graph
The Session
Train Loop
Evaluate the Model
Build the Eval Graph
Eval Output
The Perceptron
Activation functions
The perceptron learning algorithm
Binary classification with the perceptron
Document classification with the perceptron
Limitations of the perceptron
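The perceptron learning algorithm applied to a linearly separable binary classification problem (AND) can be sketched as follows; XOR, by contrast, is not linearly separable and the update rule never converges on it, which is the classic limitation of a single perceptron:

```python
def perceptron_train(data, lr=0.1, epochs=20):
    """Learn weights and bias with the classic perceptron update rule."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # 0 when correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(AND)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```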
From the Perceptron to Support Vector Machines
Kernels and the kernel trick
Maximum margin classification and support vectors
Artificial Neural Networks
Nonlinear decision boundaries
Feedforward and feedback artificial neural networks
Multilayer perceptrons
Minimizing the cost function
Forward propagation
Back propagation
Improving the way neural networks learn
Convolutional Neural Networks
Goals
Model Architecture
Principles
Code Organization
Launching and Training the Model
Evaluating a Model
Brief introductions to the modules below will be provided, based on time availability:
Tensorflow  Advanced Usage
Threading and Queues
Distributed TensorFlow
Writing Documentation and Sharing your Model
Customizing Data Readers
Manipulating TensorFlow Model Files
TensorFlow Serving
Introduction
Basic Serving Tutorial
Advanced Serving Tutorial
Serving Inception Model Tutorial

cntk 
Using Computational Network Toolkit (CNTK)
28 hours 
Computational Network Toolkit (CNTK) is Microsoft's open-source, multi-machine, multi-GPU, highly efficient machine learning framework for training RNNs on speech, text, and image data.
Audience
This course is directed at engineers and architects aiming to utilize CNTK in their projects.
Getting started
Setup CNTK on your machine
Enabling 1-bit SGD
Developing and Testing
CNTK Production Test Configurations
How to contribute to CNTK
Tutorial
Tutorial II
CNTK usage overview
Examples
Presentations
Multiple GPUs¹ and machines
Configuring CNTK
Config file overview
Simple Network Builder
BrainScript Network Builder
SGD block
Reader block
Train, Test, Eval
Top-level configurations
Describing Networks
Basic concepts
Expressions
Defining functions
Full Function Reference
Data readers
Text Format Reader
CNTK Text Format Reader
UCI Fast Reader (deprecated)
HTKMLF Reader
LM sequence reader
LU sequence reader
Image reader
Evaluating CNTK Models
Overview
C++ Evaluation Interface
C# Evaluation Interface
Evaluating Hidden Layers
C# Image Transforms for Evaluation
Advanced topics
Command line parsing rules
Top-level commands
Plot command
ConvertDBN command
¹ The topic related to the use of CNTK with a GPU is not available as part of a remote course. This module can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware.
aiintrozero 
From Zero to AI 
35 hours 
This course is created for people who have no previous experience in probability and statistics.
Probability (3.5h)
Definition of probability
Binomial distribution
Everyday usage exercises
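A worked binomial-distribution example of the everyday-usage kind, computing P(X = k) for n fair coin tosses:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 5 heads in 10 fair coin tosses
print(binomial_pmf(5, 10, 0.5))  # 0.24609375
```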
Statistics (10.5h)
Descriptive Statistics
Inferential Statistics
Regression
Logistic Regression
Exercises
Intro to programming (3.5h)
Procedural Programming
Functional Programming
OOP Programming
Exercises (writing logic for a game of choice, e.g. noughts and crosses)
Machine Learning (10.5h)
Classification
Clustering
Neural Networks
Exercises (write AI for a computer game of choice)
Rules Engines and Expert Systems (7 hours)
Intro to Rule Engines
Write AI for the same game and combine solutions into hybrid approach

aiint 
Artificial Intelligence Overview 
7 hours 
This course has been created for managers, solutions architects, innovation officers, CTOs, software architects and everyone who is interested in an overview of applied artificial intelligence and the near-term forecast for its development.
Artificial Intelligence History
Intelligent Agents
Problem Solving
Solving Problems by Searching
Beyond Classical Search
Adversarial Search
Constraint Satisfaction Problems
Knowledge and Reasoning
Logical Agents
FirstOrder Logic
Inference in FirstOrder Logic
Classical Planning
Planning and Acting in the Real World
Knowledge Representation
Uncertain Knowledge and Reasoning
Quantifying Uncertainty
Probabilistic Reasoning
Probabilistic Reasoning over Time
Making Simple Decisions
Making Complex Decisions
Learning
Learning from Examples
Knowledge in Learning
Learning Probabilistic Models
Reinforcement Learning
Communicating, Perceiving, and Acting
Natural Language Processing
Natural Language for Communication
Perception
Robotics
Conclusions
Philosophical Foundations
AI: The Present and Future

MicrosoftCognitiveToolkit 
Microsoft Cognitive Toolkit 2.x 
21 hours 
Microsoft Cognitive Toolkit 2.x (previously CNTK) is an open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. According to Microsoft, CNTK can be 5-10x faster than TensorFlow on recurrent networks, and 2 to 3 times faster than TensorFlow for image-related tasks.
In this instructor-led, live training, participants will learn how to use Microsoft Cognitive Toolkit to create, train and evaluate deep learning algorithms for use in commercial-grade AI applications involving multiple types of data such as speech, text, and images.
By the end of this training, participants will be able to:
Access CNTK as a library from within a Python, C#, or C++ program
Use CNTK as a standalone machine learning tool through its own model description language (BrainScript)
Use the CNTK model evaluation functionality from a Java program
Combine feedforward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs)
Scale computation capacity on CPUs, GPUs and multiple machines
Access massive datasets using existing programming languages and algorithms
Audience
Developers
Data scientists
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Note
If you wish to customize any part of this training, including the programming language of choice, please contact us to arrange.
To request a customized course outline for this training, please contact us. 
neuralnet 
Introduction to the use of neural networks 
7 hours 
The training is aimed at people who want to learn the basics of neural networks and their applications.
The Basics
Can computers think?
Imperative and declarative approaches to solving problems
The purpose of research on artificial intelligence
The definition of artificial intelligence. The Turing test. Other determinants
The development of the concept of intelligent systems
Most important achievements and directions of development
Neural Networks
The Basics
Concept of neurons and neural networks
A simplified model of the brain
Capabilities of a neuron
The XOR problem and the nature of the distribution of values
The polymorphic nature of the sigmoid function
Other activation functions
Construction of neural networks
Connecting neurons
Neural networks as nodes
Building a network
Neurons
Layers
Weights
Input and output data
Range 0 to 1
Normalization
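The 0-to-1 normalization step can be sketched as min-max scaling (this sketch assumes the input values are not all equal):

```python
def minmax_normalize(values):
    """Rescale values linearly into the [0, 1] range expected as network input."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(minmax_normalize([50, 75, 100]))  # [0.0, 0.5, 1.0]
```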
Learning Neural Networks
Backward Propagation
Propagation steps
Network training algorithms
Range of application
Estimation
Limits of approximation capability
Examples
XOR problem
Lotto?
Equities
OCR and image pattern recognition
Other applications
Implementing a neural network model for predicting the stock prices of listed companies
Problems for today
Combinatorial explosion and gaming issues
Turing test again
Overconfidence in the capabilities of computers

snorkel 
Snorkel: Rapidly process training data 
7 hours 
Snorkel is a system for rapidly creating, modeling, and managing training data. It focuses on accelerating the development of structured or "dark" data extraction applications for domains in which large labeled training sets are not available or easy to obtain.
In this instructor-led, live training, participants will learn techniques for extracting value from unstructured data such as text, tables, figures, and images through modeling of training data with Snorkel.
By the end of this training, participants will be able to:
Programmatically create and label massive training sets
Train highquality end models by first modeling noisy training sets
Use Snorkel to implement weak supervision techniques and apply data programming to weaklysupervised machine learning systems
Audience
Developers
Data scientists
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.

rneuralnet 
Neural Network in R 
14 hours 
This course is an introduction to applying neural networks to real-world problems using R.
Introduction to Neural Networks
What are Neural Networks
What is current status in applying neural networks
Neural Networks vs regression models
Supervised and Unsupervised learning
Overview of packages available
nnet, neuralnet and others
differences between packages and their limitations
Visualizing neural networks
Applying Neural Networks
Concept of neurons and neural networks
A simplified model of the brain
Capabilities of a neuron
The XOR problem and the nature of the distribution of values
The polymorphic nature of the sigmoid function
Other activation functions
Construction of neural networks
Connecting neurons
Neural networks as nodes
Building a network
Neurons
Layers
Weights
Input and output data
Range 0 to 1
Normalization
Learning Neural Networks
Backward Propagation
Propagation steps
Network training algorithms
Range of application
Estimation
Limits of approximation capability
Examples
OCR and image pattern recognition
Other applications
Implementing a neural network model for predicting the stock prices of listed companies

aiauto 
Artificial Intelligence in Automotive 
14 hours 
This course covers AI (with emphasis on Machine Learning and Deep Learning) in the automotive industry. It helps determine which technologies can (potentially) be used in various situations in a car: from simple automation and image recognition to autonomous decision making.
Current state of the technology
What is used
What may be potentially used
Rules based AI
Simplifying decision
Machine Learning
Classification
Clustering
Neural Networks
Types of Neural Networks
Presentation of working examples and discussion
Deep Learning
Basic vocabulary
When to use Deep Learning, when not to
Estimating computational resources and cost
Very short theoretical background to Deep Neural Networks
Deep Learning in practice (mainly using TensorFlow)
Preparing Data
Choosing loss function
Choosing the appropriate type of neural network
Accuracy vs speed and resources
Training neural network
Measuring efficiency and error
Sample usage
Anomaly detection
Image recognition
ADAS

d2dbdpa 
From Data to Decision with Big Data and Predictive Analytics 
21 hours 
Audience
If you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the net (like Twitter, LinkedIn, etc.), this course is for you.
It is mostly aimed at decision makers and people who need to choose which data is worth collecting and which is worth analyzing.
It is not aimed at people configuring the solution; those people will nonetheless benefit from the big picture.
Delivery Mode
During the course delegates will be presented with working examples of mostly open source technologies.
Short lectures will be followed by presentations and simple exercises for the participants
Content and Software used
All software used is updated each time the course is run, so we use the newest versions possible.
The course covers the process from obtaining, formatting, processing and analysing data through to automating the decision-making process with machine learning.
Quick Overview
Data Sources
Mining Data
Recommender systems
Target Marketing
Datatypes
Structured vs unstructured
Static vs streamed
Attitudinal, behavioural and demographic data
Datadriven vs userdriven analytics
data validity
Volume, velocity and variety of data
Models
Building models
Statistical Models
Machine learning
Data Classification
Clustering
k-groups, k-means, nearest neighbours
Ant colonies, birds flocking
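As one concrete sketch from the clustering list, k-means (Lloyd's algorithm) on 1-D data with k = 2, using hypothetical points and starting centres:

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm: assign each point to its nearest centre, then re-centre."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # New centre = mean of its cluster; keep the old centre if the cluster is empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centers=[0.0, 5.0])
# The centres converge to approximately [1.0, 9.0], one per natural group.
```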
Predictive Models
Decision trees
Support vector machine
Naive Bayes classification
Neural networks
Markov Model
Regression
Ensemble methods
ROI
Benefit/Cost ratio
Cost of software
Cost of development
Potential benefits
Building Models
Data Preparation (MapReduce)
Data cleansing
Choosing methods
Developing model
Testing Model
Model evaluation
Model deployment and integration
Overview of Open Source and commercial software
Selection of R-project packages
Python libraries
Hadoop and Mahout
Selected Apache projects related to Big Data and Analytics
Selected commercial solutions
Integration with existing software and data sources

encogintro 
Encog: Introduction to Machine Learning 
14 hours 
Encog is an open-source machine learning framework for Java and .Net.
In this instructor-led, live training, participants will learn how to create various neural network components using Encog. Real-world case studies will be discussed, and machine learning-based solutions to these problems will be explored.
By the end of this training, participants will be able to:
Prepare data for neural networks using the normalization process
Implement feedforward networks and propagation training methodologies
Implement classification and regression tasks
Model and train neural networks using Encog's GUI-based workbench
Integrate neural network support into real-world applications
Audience
Developers
Analysts
Data scientists
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us. 
mlintro 
Introduction to Machine Learning 
7 hours 
This training course is for people who would like to apply basic Machine Learning techniques in practical applications.
Audience
Data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work.
Sector specific examples are used to make the training relevant to the audience.
Naive Bayes
Multinomial models
Bayesian categorical data analysis
Discriminant analysis
Linear regression
Logistic regression
GLM
EM Algorithm
Mixed Models
Additive Models
Classification
KNN
Ridge regression
Clustering
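As one worked sketch from the topics above, ridge regression in one dimension (no intercept) has the closed form w = Σxy / (Σx² + λ), shrinking the least-squares slope toward zero as the penalty λ grows:

```python
def ridge_1d(xs, ys, lam):
    """Closed-form ridge estimate for y ~ w*x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # the exact slope is 2
print(ridge_1d(xs, ys, lam=0.0))   # 2.0  (ordinary least squares)
print(ridge_1d(xs, ys, lam=14.0))  # 1.0  (heavy shrinkage)
```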

encogadv 
Encog: Advanced Machine Learning 
14 hours 
Encog is an open-source machine learning framework for Java and .Net.
In this instructor-led, live training, participants will learn advanced machine learning techniques for building accurate neural network predictive models.
By the end of this training, participants will be able to:
Implement various neural network optimization techniques to resolve underfitting and overfitting
Understand and choose from a number of neural network architectures
Implement supervised feedforward and feedback networks
Audience
Developers
Analysts
Data scientists
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us. 
appliedml 
Applied Machine Learning 
14 hours 
This training course is for people who would like to apply Machine Learning in practical applications.
Audience
This course is for data scientists and statisticians who have some familiarity with statistics and know how to program in R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization.
The purpose is to give practical applications of Machine Learning to participants interested in applying the methods at work.
Sector specific examples are used to make the training relevant to the audience.
Naive Bayes
Multinomial models
Bayesian categorical data analysis
Discriminant analysis
Linear regression
Logistic regression
GLM
EM Algorithm
Mixed Models
Additive Models
Classification
KNN
Bayesian Graphical Models
Factor Analysis (FA)
Principal Component Analysis (PCA)
Independent Component Analysis (ICA)
Support Vector Machines (SVM) for regression and classification
Boosting
Ensemble models
Neural networks
Hidden Markov Models (HMM)
State-Space Models
Clustering

Neuralnettf 
Neural Networks Fundamentals using TensorFlow as Example 
28 hours 
This course will give you knowledge of neural networks and, more generally, of machine learning and deep learning (algorithms and applications).
This training focuses more on fundamentals, but will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are done in TensorFlow.
TensorFlow Basics
Creation, Initializing, Saving, and Restoring TensorFlow variables
Feeding, Reading and Preloading TensorFlow Data
How to use TensorFlow infrastructure to train models at scale
Visualizing and Evaluating models with TensorBoard
TensorFlow Mechanics
Inputs and Placeholders
Build the Graph
Inference
Loss
Training
Train the Model
The Graph
The Session
Train Loop
Evaluate the Model
Build the Eval Graph
Eval Output
The Perceptron
Activation functions
The perceptron learning algorithm
Binary classification with the perceptron
Document classification with the perceptron
Limitations of the perceptron
From the Perceptron to Support Vector Machines
Kernels and the kernel trick
Maximum margin classification and support vectors
Artificial Neural Networks
Nonlinear decision boundaries
Feedforward and feedback artificial neural networks
Multilayer perceptrons
Minimizing the cost function
Forward propagation
Back propagation
Improving the way neural networks learn
Convolutional Neural Networks
Goals
Model Architecture
Principles
Code Organization
Launching and Training the Model
Evaluating a Model

MLFWR1 
Machine Learning Fundamentals with R 
14 hours 
The aim of this course is to provide basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, and how to interpret the outputs of the algorithms and validate the results.
Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and to avoid the common pitfalls of Data Science applications.
Introduction to Applied Machine Learning
Statistical learning vs. Machine learning
Iteration and evaluation
Bias-variance tradeoff
Regression
Linear regression
Generalizations and Nonlinearity
Exercises
Classification
Bayesian refresher
Naive Bayes
Logistic regression
K-nearest neighbors
Exercises
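A plain-Python sketch of K-nearest-neighbors classification (the R packages used in the course wrap the same idea); the 1-D training points and labels here are hypothetical:

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    neighbors = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [(1.0, "a"), (1.1, "a"), (0.9, "a"), (5.0, "b"), (5.2, "b"), (4.8, "b")]
knn_predict(train, 1.3)   # -> "a"
knn_predict(train, 4.5)   # -> "b"
```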
Cross-validation and Resampling
Cross-validation approaches
Bootstrap
Exercises
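Cross-validation rests on one mechanical step, partitioning the sample indices into folds; a sketch (shown in Python for illustration, though the course uses R):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size,
    returning a (train, test) index pair per fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start, folds = 0, []
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i not in set(test)]
        folds.append((train, test))
        start += size
    return folds

for train, test in kfold_indices(10, 3):
    pass  # fit on `train`, evaluate on `test`, then average the k scores
```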
Unsupervised Learning
K-means clustering
Examples
Challenges of unsupervised learning and beyond K-means

datamodeling 
Pattern Recognition 
35 hours 
This course provides an introduction into the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics.
The course is interactive and includes plenty of handson exercises, instructor feedback, and testing of knowledge and skills acquired.
Audience
Data analysts
PhD students, researchers and practitioners
Introduction
Probability theory, model selection, decision and information theory
Probability distributions
Linear models for regression and classification
Neural networks
Kernel methods
Sparse kernel machines
Graphical models
Mixture models and EM
Approximate inference
Sampling methods
Continuous latent variables
Sequential data
Combining models

annmldt 
Artificial Neural Networks, Machine Learning, Deep Thinking 
21 hours 
DAY 1  ARTIFICIAL NEURAL NETWORKS
Introduction and ANN Structure.
Biological neurons and artificial neurons.
Model of an ANN.
Activation functions used in ANNs.
Typical classes of network architectures.
Mathematical Foundations and Learning mechanisms.
Revisiting vector and matrix algebra.
Statespace concepts.
Concepts of optimization.
Errorcorrection learning.
Memorybased learning.
Hebbian learning.
Competitive learning.
Single layer perceptrons.
Structure and learning of perceptrons.
Pattern classifier: introduction and Bayes' classifiers.
Perceptron as a pattern classifier.
Perceptron convergence.
Limitations of perceptrons.
Feedforward ANN.
Structures of Multilayer feedforward networks.
Back propagation algorithm.
Back propagation  training and convergence.
Functional approximation with back propagation.
Practical and design issues of back propagation learning.
Radial Basis Function Networks.
Pattern separability and interpolation.
Regularization Theory.
Regularization and RBF networks.
RBF network design and training.
Approximation properties of RBF.
Competitive Learning and Self organizing ANN.
General clustering procedures.
Learning Vector Quantization (LVQ).
Competitive learning algorithms and architectures.
Self organizing feature maps.
Properties of feature maps.
Fuzzy Neural Networks.
Neurofuzzy systems.
Background of fuzzy sets and logic.
Design of fuzzy systems.
Design of fuzzy ANNs.
Applications
A few examples of Neural Network applications, their advantages and problems will be discussed.
DAY 2 MACHINE LEARNING
The PAC Learning Framework
Guarantees for finite hypothesis set – consistent case
Guarantees for finite hypothesis set – inconsistent case
Generalities
Deterministic vs. stochastic scenarios
Bayes error and noise
Estimation and approximation errors
Model selection
Rademacher Complexity and VC Dimension
Bias-variance tradeoff
Regularisation
Overfitting
Validation
Support Vector Machines
Kriging (Gaussian Process regression)
PCA and Kernel PCA
Self-Organising Maps (SOM)
Kernel induced vector space
Mercer Kernels and kernel-induced similarity metrics
Reinforcement Learning
DAY 3  DEEP LEARNING
This will be taught in relation to the topics covered on Day 1 and Day 2
Logistic and Softmax Regression
Sparse Autoencoders
Vectorization, PCA and Whitening
SelfTaught Learning
Deep Networks
Linear Decoders
Convolution and Pooling
Sparse Coding
Independent Component Analysis
Canonical Correlation Analysis
Demos and Applications

Torch 
Torch: Getting started with Machine and Deep Learning 
21 hours 
Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others.
In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises throughout, demonstrating and practicing the concepts learned.
By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects.
Audience
Software developers and programmers wishing to enable Machine and Deep Learning within their applications
Format of the course
Overview of Machine and Deep Learning
In-class coding and integration exercises
Test questions sprinkled along the way to check understanding
Introduction to Torch
Like NumPy but with CPU and GPU implementation
Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking
Installing Torch
Linux, Windows, Mac
Bitnami and Docker
Installing Torch packages
Using the LuaRocks package manager
Choosing an IDE for Torch
ZeroBrane Studio
Eclipse plugin for Lua
Working with the Lua scripting language and LuaJIT
Lua's integration with C/C++
Lua syntax: data types, loops and conditionals, functions, tables, and file I/O.
Object orientation and serialization in Torch
Coding exercise
Loading a dataset in Torch
MNIST
CIFAR-10, CIFAR-100
ImageNet
Machine Learning in Torch
Deep Learning
Manual feature extraction vs convolutional networks
Supervised and Unsupervised Learning
Building a neural network with Torch
N-dimensional arrays
Image analysis with Torch
Image package
The Tensor library
Working with the REPL interpreter
Working with databases
Networking and Torch
GPU support in Torch
Integrating Torch
C, Python, and others
Embedding Torch
iOS and Android
Other frameworks and libraries
Facebook's optimized deep-learning modules and containers
Creating your own package
Testing and debugging
Releasing your application
The future of AI and Torch 
deeplearning1 
Introduction to Deep Learning 
21 hours 
This course is a general overview of Deep Learning without going too deeply into any specific method. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions.
Backprop, modular models
Log-sum module
RBF Net
MAP/MLE loss
Parameter Space Transforms
Convolutional Module
Gradient-Based Learning
Energy for inference
Objective for learning
PCA; NLL
Latent Variable Models
Probabilistic LVM
Loss Function
Handwriting recognition
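The MAP/MLE loss and NLL topics above can be illustrated with a short NumPy sketch of the negative log-likelihood loss; the probabilities below are invented for illustration:

```python
import numpy as np

def nll_loss(probs, targets):
    """Mean negative log-likelihood of the target-class probabilities."""
    return -np.mean(np.log(probs[np.arange(len(targets)), targets]))

# Predicted class probabilities for 3 examples over 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])
targets = np.array([0, 1, 2])
loss = nll_loss(probs, targets)
print(round(loss, 4))  # 0.3635
```

Minimizing this quantity over model parameters is exactly maximum-likelihood estimation; adding a log-prior penalty turns it into a MAP loss.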

OpenNN 
OpenNN: Implementing neural networks 
14 hours 
OpenNN is an open-source class library written in C++ that implements neural networks for use in machine learning.
In this course we go over the principles of neural networks and use OpenNN to implement a sample application.
Audience
Software developers and programmers wishing to create Deep Learning applications.
Format of the course
Lecture and discussion coupled with hands-on exercises.
Introduction to OpenNN, Machine Learning and Deep Learning
Downloading OpenNN
Working with Neural Designer
Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics
OpenNN architecture
CPU parallelization
OpenNN classes
Data set, neural network, loss index, training strategy, model selection, testing analysis
Vector and matrix templates
Building a neural network application
Choosing a suitable neural network
Formulating the variational problem (loss index)
Solving the reduced function optimization problem (training strategy)
Working with datasets
The data matrix (columns as variables and rows as instances)
Learning tasks
Function regression
Pattern recognition
Compiling with QT Creator
Integrating, testing and debugging your application
The future of neural networks and OpenNN 
Fairseq 
Fairseq: Setting up a CNN-based machine translation system 
7 hours 
Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).
In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Source and target language content samples can be prepared according to the audience's requirements.
Audience
Localization specialists with a technical background
Global content managers
Localization engineers
Software developers in charge of implementing global content solutions
Format of the course
Part lecture, part discussion, heavy hands-on practice
Introduction
Why Neural Machine Translation?
Overview of the Torch project
Overview of a Convolutional Neural Machine Translation model
Convolutional Sequence to Sequence Learning
Convolutional Encoder Model for Neural Machine Translation
Standard LSTM-based model
Overview of training approaches
About GPUs and CPUs
Fast beam search generation
Installation and setup
Evaluating pretrained models
Preprocessing your data
Training the model
Translating
Converting a trained model to use CPU-only operations
Joining the community
Closing remarks 
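The outline above mentions fast beam search generation; the sketch below is a generic, illustrative beam search over a toy per-step distribution, not Fairseq's actual implementation. All names are hypothetical, and a real model would condition each step on the decoded prefix:

```python
import math

def beam_search(step_probs, beam_size=2):
    """Keep the `beam_size` highest-scoring partial sequences at each step.

    `step_probs` is a list of dicts mapping token -> probability; scores
    are accumulated as log-probabilities to avoid underflow.
    """
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for probs in step_probs:
        candidates = [
            (seq + [tok], score + math.log(p))
            for seq, score in beams
            for tok, p in probs.items()
        ]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

steps = [{"a": 0.6, "b": 0.4}, {"c": 0.7, "d": 0.3}]
best_seq, best_score = beam_search(steps)[0]
print(best_seq)  # ['a', 'c']
```

With beam_size=1 this degenerates to greedy decoding; wider beams trade speed for a better chance of finding the highest-probability sequence.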
matlabdl 
Matlab for Deep Learning 
14 hours 
In this instructor-led, live training, participants will learn how to use Matlab to design, build, and visualize a convolutional neural network for image recognition.
By the end of this training, participants will be able to:
Build a deep learning model
Automate data labeling
Work with models from Caffe and TensorFlow-Keras
Train data using multiple GPUs, the cloud, or clusters
Audience
Developers
Engineers
Domain experts
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us. 
mlbankingr 
Machine Learning for Banking (with R) 
28 hours 
In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language.
Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete live team projects.
Introduction
Difference between statistical learning (statistical analysis) and machine learning
Adoption of machine learning technology by finance and banking companies
Different Types of Machine Learning
Supervised learning vs unsupervised learning
Iteration and evaluation
Bias-variance tradeoff
Combining supervised and unsupervised learning (semi-supervised learning)
Machine Learning Languages and Toolsets
Open source vs proprietary systems and software
R vs Python vs Matlab
Libraries and frameworks
Machine Learning Case Studies
Consumer data and big data
Assessing risk in consumer and business lending
Improving customer service through sentiment analysis
Detecting identity fraud, billing fraud and money laundering
Introduction to R
Installing the RStudio IDE
Loading R packages
Data structures
Vectors
Factors
Lists
Data Frames
Matrices and Arrays
How to Load Machine Learning Data
Databases, data warehouses and streaming data
Distributed storage and processing with Hadoop and Spark
Importing data from a database
Importing data from Excel and CSV
Modeling Business Decisions with Supervised Learning
Classifying your data (classification)
Using regression analysis to predict outcomes
Choosing from available machine learning algorithms
Understanding decision tree algorithms
Understanding random forest algorithms
Model evaluation
Exercise
Regression Analysis
Linear regression
Generalizations and Nonlinearity
Exercise
Classification
Bayesian refresher
Naive Bayes
Logistic regression
K-Nearest Neighbors
Exercise
Hands-on: Building an Estimation Model
Assessing lending risk based on customer type and history
Evaluating the performance of Machine Learning Algorithms
Cross-validation and resampling
Bootstrap aggregation (bagging)
Exercise
Modeling Business Decisions with Unsupervised Learning
K-means clustering
Challenges of unsupervised learning
Beyond K-means
Exercise
Hands-on: Building a Recommendation System
Analyzing past customer behavior to improve new service offerings
Extending your company's capabilities
Developing models in the cloud
Accelerating machine learning with additional GPUs
Beyond machine learning: Artificial Intelligence (AI)
Applying Deep Learning neural networks for computer vision, voice recognition and text analysis
Closing Remarks 
opennmt 
OpenNMT: Setting up a Neural Machine Translation system 
7 hours 
OpenNMT is a full-featured, open-source (MIT) neural machine translation system that utilizes the Torch mathematical toolkit.
In this training participants will learn how to set up and use OpenNMT to carry out translation of various sample data sets. The course starts with an overview of neural networks as they apply to machine translation. Participants will carry out live exercises throughout the course to demonstrate their understanding of the concepts learned and get feedback from the instructor. By the end of this training, participants will have the knowledge and practice needed to implement a live OpenNMT solution.
Source and target language samples will be prearranged per the audience's requirements.
Audience
Localization specialists with a technical background
Global content managers
Localization engineers
Software developers in charge of implementing global content solutions
Format of the course
Part lecture, part discussion, heavy hands-on practice
Introduction
Why Neural Machine Translation?
Overview of the Torch project
Installation and setup
Preprocessing your data
Training the model
Translating
Using pretrained models
Working with Lua scripts
Using extensions
Troubleshooting
Joining the community
Closing remarks 
mlbankingpython_ 
Machine Learning for Banking (with Python) 
21 hours 
In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Python will be used as the programming language.
Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete live team projects.
Introduction
Difference between statistical learning (statistical analysis) and machine learning
Adoption of machine learning technology and talent by finance and banking companies
Different Types of Machine Learning
Supervised learning vs unsupervised learning
Iteration and evaluation
Bias-variance tradeoff
Combining supervised and unsupervised learning (semi-supervised learning)
Machine Learning Languages and Toolsets
Open source vs proprietary systems and software
Python vs R vs Matlab
Libraries and frameworks
Machine Learning Case Studies
Consumer data and big data
Assessing risk in consumer and business lending
Improving customer service through sentiment analysis
Detecting identity fraud, billing fraud and money laundering
Hands-on: Python for Machine Learning
Preparing the Development Environment
Obtaining Python machine learning libraries and packages
Working with scikit-learn and PyBrain
How to Load Machine Learning Data
Databases, data warehouses and streaming data
Distributed storage and processing with Hadoop and Spark
Exported data and Excel
Modeling Business Decisions with Supervised Learning
Classifying your data (classification)
Using regression analysis to predict outcomes
Choosing from available machine learning algorithms
Understanding decision tree algorithms
Understanding random forest algorithms
Model evaluation
Exercise
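The decision tree and random forest topics above can be previewed with scikit-learn on a synthetic stand-in for lending data; the dataset and parameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for customer features vs. loan default
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Held-out accuracy: the forest typically outperforms a single tree
print("tree  :", tree.score(X_te, y_te))
print("forest:", forest.score(X_te, y_te))
```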
Regression Analysis
Linear regression
Generalizations and Nonlinearity
Exercise
Classification
Bayesian refresher
Naive Bayes
Logistic regression
K-Nearest Neighbors
Exercise
Hands-on: Building an Estimation Model
Assessing lending risk based on customer type and history
Evaluating the performance of Machine Learning Algorithms
Cross-validation and resampling
Bootstrap aggregation (bagging)
Exercise
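The cross-validation and bagging topics above can be sketched as follows; the dataset is synthetic and the parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# 5-fold cross-validation of a single tree vs. a bagged ensemble of trees
tree_scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
bag = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                        n_estimators=50, random_state=0)
bag_scores = cross_val_score(bag, X, y, cv=5)

print("tree  :", tree_scores.mean())
print("bagged:", bag_scores.mean())
```

Averaging many trees trained on bootstrap resamples reduces the variance of a single deep tree, which is why the bagged score is usually the higher of the two.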
Modeling Business Decisions with Unsupervised Learning
K-means clustering
Challenges of unsupervised learning
Beyond K-means
Exercise
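The K-means topic above, as a minimal scikit-learn sketch on synthetic customer-behavior clusters; the centers and counts are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated synthetic clusters of 50 points each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_).tolist())  # [50, 50, 50]
```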
Hands-on: Building a Recommendation System
Analyzing past customer behavior to improve new service offerings
Extending your company's capabilities
Developing models in the cloud
Accelerating machine learning with GPUs
Beyond machine learning: Artificial Intelligence (AI)
Applying Deep Learning neural networks for computer vision, voice recognition and text analysis
Closing Remarks 
facebooknmt 
Facebook NMT: Setting up a neural machine translation system 
7 hours 
Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).
In this training, participants will learn how to use Fairseq to carry out translation of sample content.
By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution.
Audience
Localization specialists with a technical background
Global content managers
Localization engineers
Software developers in charge of implementing global content solutions
Format of the course
Part lecture, part discussion, heavy hands-on practice
Note
If you wish to use specific source and target language content, please contact us to arrange.
Introduction
Why Neural Machine Translation?
Borrowing from image recognition techniques
Overview of the Torch and Caffe2 projects
Overview of a Convolutional Neural Machine Translation model
Convolutional Sequence to Sequence Learning
Convolutional Encoder Model for Neural Machine Translation
Standard LSTM-based model
Overview of training approaches
About GPUs and CPUs
Fast beam search generation
Installation and setup
Evaluating pretrained models
Preprocessing your data
Training the model
Translating
Converting a trained model to use CPU-only operations
Joining the community
Closing remarks 
mlbankingpython 
Machine Learning for Banking (with Python)  Bespoke 
28 hours 
In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Deep learning techniques are covered in the latter part of the course. Python will be used as the programming language.
Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete live team projects.
Introduction
Difference between statistical learning (statistical analysis) and machine learning
Adoption of machine learning technology and talent by finance and banking companies
Different Types of Machine Learning
Supervised learning vs unsupervised learning
Iteration and evaluation
Bias-variance tradeoff
Combining supervised and unsupervised learning (semi-supervised learning)
Machine Learning Languages and Toolsets
Open source vs proprietary systems and software
Python vs R vs Matlab
Libraries and frameworks
Machine Learning Case Studies
Consumer data and big data
Assessing risk in consumer and business lending
Improving customer service through sentiment analysis
Detecting identity fraud, billing fraud and money laundering
Hands-on: Python for Machine Learning
Preparing the Development Environment
Obtaining Python machine learning libraries and packages
Working with scikit-learn and PyBrain
How to Load Machine Learning Data
Databases, data warehouses and streaming data
Distributed storage and processing with Hadoop and Spark
Exported data and Excel
Modeling Business Decisions with Supervised Learning
Classifying your data (classification)
Using regression analysis to predict outcomes
Choosing from available machine learning algorithms
Understanding decision tree algorithms
Understanding random forest algorithms
Model evaluation
Exercise
Regression Analysis
Linear regression
Generalizations and Nonlinearity
Exercise
Classification
Bayesian refresher
Naive Bayes
Logistic regression
K-Nearest Neighbors
Exercise
Hands-on: Building an Estimation Model
Assessing lending risk based on customer type and history
Evaluating the performance of Machine Learning Algorithms
Cross-validation and resampling
Bootstrap aggregation (bagging)
Exercise
Modeling Business Decisions with Unsupervised Learning
K-means clustering
Challenges of unsupervised learning
Beyond K-means
Exercise
Hands-on: Building a Recommendation System
Analyzing past customer behavior to improve new service offerings
Introduction to Neural Networks and Deep Learning
Layers and nodes
Convolutional neural networks
Recurrent neural networks
Multilayer perceptrons
Frameworks: Theano, TensorFlow, Keras
Exercise
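The course lists Theano, TensorFlow, and Keras as frameworks; as a framework-neutral preview of a multilayer perceptron, here is a scikit-learn sketch (a stand-in assumption for illustration only, since the course itself uses the frameworks above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two hidden layers of 32 nodes each; max_iter raised so training converges
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print(mlp.score(X_te, y_te))
```

The same layers-and-nodes structure carries over to Keras or TensorFlow, where each hidden layer becomes an explicit dense layer in the model definition.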
Hands-on: Building an AI system
Monitoring big data to detect money laundering and billing fraud
Extending your company's capabilities
Developing models in the cloud
Accelerating machine learning with GPUs
Beyond machine learning: Artificial Intelligence (AI)
Applying neural networks for computer vision, voice recognition and text analysis
Closing Remarks 