Big Data Training Courses

Local, instructor-led live Big Data training courses start with an introduction to the fundamental concepts of Big Data, then progress into the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing, and scalability are discussed, compared, and implemented in demo practice sessions. Big Data training is available as "onsite live training" or "remote live training". UAE onsite live Big Data trainings can be carried out locally on customer premises or in NobleProg corporate training centers. Remote live training is carried out by way of an interactive remote desktop.

NobleProg -- Your Local Training Provider

Big Data Course Outlines

Code | Name | Duration | Overview
smtwebint | Semantic Web Overview | 7 hours
The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
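
The course outline names no specific tooling; as a minimal sketch of the shared-data idea, the snippet below uses Python's rdflib (an illustrative choice, not the course material) to build and query a small RDF graph. The example.org namespace and the names are made up.

```python
# Minimal sketch of the Semantic Web data model with rdflib (illustrative tool).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

g = Graph()
ex = Namespace("http://example.org/")  # hypothetical namespace for the demo

# Triples (subject, predicate, object) are the common, reusable format
g.add((ex.alice, FOAF.name, Literal("Alice")))
g.add((ex.alice, FOAF.knows, ex.bob))

# SPARQL query over the graph
for row in g.query(
    "SELECT ?name WHERE { ?person <http://xmlns.com/foaf/0.1/name> ?name }"
):
    print(row.name)
```
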
ApacheIgnite | Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing | 14 hours
Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

- Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as a storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
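
For a feel of what "using RAM as storage" looks like in practice, here is a minimal sketch using the pyignite thin client. It assumes an Ignite node is already running and listening on the default thin-client port 10800; the cache name is hypothetical, and the course itself may work with Ignite's Java API instead.

```python
# Key-value access against an Apache Ignite node via the pyignite thin client.
# Assumes a node on localhost:10800 (the default thin-client port).
from pyignite import Client

client = Client()
client.connect('127.0.0.1', 10800)

# Data lives in the cluster's RAM; reads avoid a round trip to a relational DB
cache = client.get_or_create_cache('demo_cache')  # hypothetical cache name
cache.put('user:1', 'Alice')
print(cache.get('user:1'))  # -> 'Alice'

client.close()
```
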
graphcomputing | Introduction to Graph Computing | 28 hours
Many real-world problems can be described in terms of graphs. For example, the Web graph, the social network graph, the train network graph, and the language graph. These graphs tend to be extremely large; processing them requires a specialized set of tools and processes -- these tools and processes can be referred to as Graph Computing (also known as Graph Analytics).

In this instructor-led, live training, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a graph computing approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

- Understand how graph data is persisted and traversed
- Select the best framework for a given task (from graph databases to batch processing frameworks)
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel
- View real-world big data problems in terms of graphs, processes and traversals

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
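
The course targets distributed frameworks (Hadoop, Spark, GraphX, Pregel), but the core idea of modeling objects and relationships as a graph and traversing it can be sketched at small scale in Python with networkx; the city names below are invented.

```python
# Small-scale illustration of graph modeling and traversal with networkx.
# Frameworks like GraphX/Pregel apply the same ideas across many machines.
import networkx as nx

g = nx.Graph()
# Model real-world objects and relationships as nodes and edges
g.add_edge("London", "Paris")
g.add_edge("Paris", "Berlin")
g.add_edge("Berlin", "Warsaw")

# A traversal: shortest path through the "train network graph"
print(nx.shortest_path(g, "London", "Warsaw"))
# -> ['London', 'Paris', 'Berlin', 'Warsaw']
```
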
matlabpredanalytics | Matlab for Predictive Analytics | 21 hours
Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.

By the end of this training, participants will be able to:

- Create predictive models to analyze patterns in historical and transactional data
- Use predictive modeling to identify risks and opportunities
- Build mathematical models that capture important trends
- Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

- Developers
- Engineers
- Domain experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
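
The course itself uses Matlab; purely to illustrate the fit-then-predict workflow it teaches, here is the same shape of task in Python with scikit-learn (a plainly swapped-in tool, not the course material). The maintenance-cost numbers are invented.

```python
# The fit/predict workflow of predictive modeling, sketched with scikit-learn
# in place of Matlab (a deliberate substitution for illustration only).
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented historical data: hours of machine use vs. maintenance cost
hours = np.array([[100], [200], [300], [400]])
cost = np.array([12.0, 21.5, 33.0, 41.0])

model = LinearRegression().fit(hours, cost)   # capture the trend
print(model.predict(np.array([[500]])))       # forecast a future event
```
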
nifidev | Apache NiFi for Developers | 7 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

- Understand NiFi's architecture and dataflow concepts
- Develop extensions using NiFi and third-party APIs
- Custom-develop their own Apache NiFi processors
- Ingest and process real-time data from disparate and uncommon file formats and data sources

Audience

- Developers
- Data engineers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
nifi | Apache NiFi for Administrators | 21 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

- Install and configure Apache NiFi
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes
- Automate dataflows
- Enable streaming analytics
- Apply various approaches for data ingestion
- Transform Big Data into business insights

Audience

- System administrators
- Data engineers
- Developers
- DevOps

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
solrcloud | SolrCloud | 14 hours
Apache SolrCloud is a distributed data processing engine that facilitates the searching and indexing of files on a distributed network.

In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS.

By the end of this training, participants will be able to:

- Understand SolrCloud's features and how they compare to those of conventional master-slave clusters
- Configure a SolrCloud centralized cluster
- Automate processes such as communicating with shards, adding documents to the shards, etc.
- Use Zookeeper in conjunction with SolrCloud to further automate processes
- Use the interface to manage error reporting
- Load balance a SolrCloud installation
- Configure SolrCloud for continuous processing and fail-over

Audience

- Solr Developers
- Project Managers
- System Administrators
- Search Analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
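
As a minimal sketch of the indexing and searching the course covers, the snippet below uses the pysolr client against a single node's HTTP endpoint; in a real SolrCloud deployment, requests would be routed via a ZooKeeper ensemble (pysolr also offers a SolrCloud class for that). The collection name 'demo' and the documents are hypothetical.

```python
# Index and query documents with pysolr against one Solr node
# (collection 'demo' is hypothetical; SolrCloud would sit behind ZooKeeper).
import pysolr

solr = pysolr.Solr('http://localhost:8983/solr/demo', timeout=10)

solr.add([{'id': '1', 'title': 'Big Data basics'},
          {'id': '2', 'title': 'Distributed search'}])
solr.commit()  # make the new documents visible to searches

for doc in solr.search('title:search'):
    print(doc['title'])
```
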
datavault | Data Vault: Building a Scalable Data Warehouse | 28 hours
Data vault modeling is a database modeling technique that provides long-term historical storage of data that originates from multiple sources. A data vault stores a single version of the facts, or "all the data, all the time". Its flexible, scalable, consistent and adaptable design encompasses the best aspects of 3rd normal form (3NF) and star schema.

In this instructor-led, live training, participants will learn how to build a Data Vault.

By the end of this training, participants will be able to:

- Understand the architecture and design concepts behind Data Vault 2.0, and its interaction with Big Data, NoSQL and AI.
- Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse
- Develop a consistent and repeatable ETL (Extract, Transform, Load) process
- Build and deploy highly scalable and repeatable warehouses

Audience

- Data modelers
- Data warehousing specialists
- Business Intelligence specialists
- Data engineers
- Database administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
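
To make the Data Vault vocabulary concrete, here is a tiny sketch of a customer hub and its satellite, created through SQLite from Python purely for illustration; real vaults live in a data warehouse, and all column names here are hypothetical.

```python
# Sketch of Data Vault structures: a hub holds the business key; a satellite
# holds descriptive attributes, with load dates preserving full history.
# SQLite stands in for a real warehouse; the schema is illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,   -- hash key
    customer_bk   TEXT NOT NULL,      -- business key from the source system
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    load_date     TEXT NOT NULL,      -- each change is a new row: full history
    name          TEXT,
    city          TEXT,
    PRIMARY KEY (customer_hk, load_date)
);
""")
```
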
datameer | Datameer for Data Analysts | 14 hours
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user-interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results

Audience

- Data analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
tigon | Tigon: Real-time Streaming for the Real World | 14 hours
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and Webserver Logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
vespa | Vespa: Serving Large-Scale Data in Real-Time | 14 hours
Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real-time.

This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time.

By the end of this training, participants will be able to:

- Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
- Implement Vespa into existing applications involving feature search, recommendations, and personalization
- Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
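
Serving in Vespa happens over HTTP: a deployed application answers queries at serving time through its search endpoint. A minimal sketch of querying one from Python follows; the localhost endpoint and the "phone" query are assumptions for illustration.

```python
# Query a running Vespa application over its HTTP search API.
# Assumes a Vespa container on localhost:8080; query contents are invented.
import requests

resp = requests.get(
    "http://localhost:8080/search/",
    params={"yql": 'select * from sources * where default contains "phone"'},
)
# Hits come back ranked, computed at serving time while the user waits
for hit in resp.json().get("root", {}).get("children", []):
    print(hit["fields"])
```
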
bigdatabicriminal | Big Data Business Intelligence for Criminal Intelligence Analysis | 35 hours
Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

- Law Enforcement specialists with a technical background

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
apex | Apache Apex: Processing Big Data-in-Motion | 21 hours
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and re-use existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications

Audience

- Developers
- Enterprise architects

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
alluxio | Alluxio: Unifying Disparate Storage Systems | 7 hours
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered

Audience

- Data scientists
- Developers
- System administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
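
The "one namespace" idea means applications address data through Alluxio paths regardless of which store holds them; with Spark, for example, that is just a URI scheme. In the sketch below, localhost:19998 is Alluxio's default master port, the path is invented, and the Alluxio client jar is assumed to be on Spark's classpath.

```python
# Reading through Alluxio's unified namespace from Spark: the application
# addresses an alluxio:// URI and Alluxio decides which backing store serves it.
# Assumes the Alluxio client jar is on Spark's classpath; the path is made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("alluxio-demo").getOrCreate()
df = spark.read.text("alluxio://localhost:19998/data/events.log")
print(df.count())
```
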
flink | Flink for Scalable Stream and Batch Data Processing | 28 hours
Apache Flink is an open-source framework for scalable stream and batch data processing.

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application.

By the end of this training, participants will be able to:

- Set up an environment for developing data analysis applications
- Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications
- Manage diverse workloads
- Perform advanced analytics using Flink ML
- Set up a multi-node Flink cluster
- Measure and optimize performance
- Integrate Flink with different Big Data systems
- Compare Flink capabilities with those of other big data processing frameworks

Audience

- Developers
- Architects
- Data engineers
- Analytics professionals
- Technical managers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
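
A minimal PyFlink sketch of the source-transform-sink model follows; note the course may well use Flink's Java/Scala APIs, and PyFlink's availability and exact signatures depend on the Flink version, so treat this as an assumption-laden outline.

```python
# Minimal PyFlink DataStream job: source -> transformations -> sink.
# A bounded collection stands in for a real stream source; depending on the
# Flink version, map/filter may require explicit output type information.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

env.from_collection(["error page1", "info page2", "error page3"]) \
   .filter(lambda line: line.startswith("error")) \
   .map(lambda line: line.upper()) \
   .print()

env.execute("filter-errors")
```
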
samza | Samza for Stream Processing | 14 hours
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
zeppelin | Zeppelin for Interactive Data Analytics | 14 hours
Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop and Spark based data.

This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment.

By the end of this training, participants will be able to:

- Install and configure Zeppelin
- Develop, organize, execute and share data in a browser-based interface
- Visualize results without referring to the command line or cluster details
- Execute and collaborate on long workflows
- Work with any of a number of plug-in language/data-processing-backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell.
- Integrate Zeppelin with Spark, Flink and Map Reduce
- Secure multi-user instances of Zeppelin with Apache Shiro

Audience

- Data engineers
- Data analysts
- Data scientists
- Software developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
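
Zeppelin notes are made of paragraphs, each opening with an interpreter directive. The sketch below shows the Python you might type into a %pyspark paragraph; the input path is hypothetical, and `spark` and `z` are provided by Zeppelin's Spark interpreter.

```python
# Contents of a Zeppelin paragraph. The first line, %pyspark, selects the
# Spark/Python interpreter; 'spark' is the pre-created session and 'z' is
# Zeppelin's built-in display helper. The input path is invented.
# %pyspark
df = spark.read.json("/data/events.json")
summary = df.groupBy("status").count()
z.show(summary)   # renders an interactive table/chart, no command line needed
```
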
magellan | Magellan: Geospatial Analytics on Spark | 14 hours
Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

By the end of this training, participants will be able to:

- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables

Audience

- Application developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
hdp | Hortonworks Data Platform (HDP) for Administrators | 21 hours
Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.

This instructor-led live training introduces Hortonworks and walks participants through the deployment of a Spark + Hadoop solution.

By the end of this training, participants will be able to:

- Use Hortonworks to reliably run Hadoop at a large scale
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project
- Process different types of data, including structured, unstructured, in-motion, and at-rest.

Audience

- Hadoop administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
PentahoDI | Pentaho Data Integration Fundamentals | 21 hours
Pentaho Data Integration is an open-source data integration tool for defining jobs and data transformations.

In this instructor-led, live training, participants will learn how to use Pentaho Data Integration's powerful ETL capabilities and rich GUI to manage an entire big data lifecycle, maximizing the value of data to the organization.

By the end of this training, participants will be able to:

- Create, preview, and run basic data transformations containing steps and hops
- Configure and secure the Pentaho Enterprise Repository
- Harness disparate sources of data and generate a single, unified version of the truth in an analytics-ready format.
- Provide results to third-party applications for further processing

Audience

- Data analysts
- ETL developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
BDATR | Big Data Analytics for Telecom Regulators | 16 hours
To meet regulators' compliance requirements, CSPs (Communication Service Providers) can tap into Big Data analytics, which not only helps them meet compliance but, within the scope of the same project, also lets them increase customer satisfaction and thus reduce churn. Since compliance is related to the quality of service tied to a contract, any initiative toward meeting compliance improves the CSPs' competitive edge. It is therefore important that regulators be able to advise on and guide a set of Big Data analytic practices for CSPs that will be of mutual benefit to regulators and CSPs.

Duration: 2 days (8 modules, 2 hours each = 16 hours)
sparkpython | Python and Spark for Big Data (PySpark) | 21 hours
Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used in querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.

In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.

By the end of this training, participants will be able to:

- Learn how to use Spark with Python to analyze Big Data
- Work on exercises that mimic real world circumstances
- Use different tools and techniques for big data analysis using PySpark

Audience

- Developers
- IT Professionals
- Data Scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
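
A first taste of the Python-Spark interface the course builds on: the classic word-count, expressed with PySpark's DataFrame API (the input path is hypothetical).

```python
# A classic first PySpark program: load text, count word frequencies.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, split

spark = SparkSession.builder.appName("wordcount").getOrCreate()

lines = spark.read.text("/data/sample.txt")          # hypothetical input file
words = lines.select(explode(split(col("value"), r"\s+")).alias("word"))
words.groupBy("word").count().orderBy(col("count").desc()).show(10)
```
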
kylin | Apache Kylin: From Classic OLAP to Real-Time Data Warehouse | 14 hours
Apache Kylin is an extreme, distributed analytics engine for big data.

In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse.

By the end of this training, participants will be able to:

- Consume real-time streaming data using Kylin
- Utilize Apache Kylin's powerful features, including snowflake schema support, a rich SQL interface, Spark cubing, and subsecond query latency

Note

- We use the latest version of Kylin (as of this writing, Apache Kylin v2.0)

Audience

- Big data engineers
- Big Data analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
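
Kylin's rich SQL interface is exposed over JDBC and REST; a minimal sketch of the REST query endpoint follows. Port 7070 and the stock ADMIN/KYLIN credentials are the defaults of a fresh install only, and the project name is hypothetical (kylin_sales is Kylin's bundled sample table).

```python
# Query Apache Kylin's REST API. Defaults shown (port 7070, ADMIN/KYLIN)
# apply to a fresh install only; 'sales_project' is a hypothetical project.
import requests

resp = requests.post(
    "http://localhost:7070/kylin/api/query",
    auth=("ADMIN", "KYLIN"),
    json={"sql": "SELECT part_dt, SUM(price) FROM kylin_sales GROUP BY part_dt",
          "project": "sales_project"},
)
print(resp.json()["results"][:5])  # subsecond answers served from the cube
```
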
foundr | Foundation R | 7 hours
The objective of the course is to enable participants to gain a mastery of the fundamentals of R and how to work with data.
sparkcloud | Apache Spark in the Cloud | 21 hours
Apache Spark's learning curve rises slowly at the beginning; it takes a lot of effort to get the first return. This course aims to jump through that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, learn the Python and Scala APIs, understand executors and tasks, and more. Following best practices, this course also focuses strongly on cloud deployment, Databricks, and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS.

Audience

- Data engineers
- DevOps
- Data scientists
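
Since differentiating RDDs from DataFrames is a stated goal, here is the same aggregation expressed both ways, as a minimal PySpark sketch with invented data.

```python
# The same aggregation via the low-level RDD API and the DataFrame API.
# DataFrames let Spark's optimizer plan the work; RDDs run your functions as-is.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-vs-df").getOrCreate()
data = [("a", 1), ("b", 2), ("a", 3)]

# RDD: explicit functions over raw tuples
rdd_result = spark.sparkContext.parallelize(data) \
    .reduceByKey(lambda x, y: x + y).collect()

# DataFrame: declarative, schema-aware, optimizer-friendly
df_result = spark.createDataFrame(data, ["key", "value"]) \
    .groupBy("key").sum("value").collect()

print(rdd_result, df_result)
```
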
bigdataanahealth | Big Data Analytics in Health | 21 hours
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.

The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.

In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.

By the end of this training, participants will be able to:

- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications

Audience

- Developers
- Data Scientists

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice.

Note

- To request a customized training for this course, please contact us to arrange.
sqoop | Moving Data from MySQL to Hadoop with Sqoop | 14 hours
Sqoop is an open source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or a mainframe, into the Hadoop Distributed File System (HDFS). Thereafter, the data can be transformed in Hadoop MapReduce, and then re-exported back into an RDBMS.

In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database to Hadoop storage such as HDFS or Hive, and vice versa.

By the end of this training, participants will be able to:

- Install and configure Sqoop
- Import data from MySQL to HDFS and Hive
- Import data from HDFS and Hive to MySQL

Audience

- System administrators
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
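
A typical import is a single sqoop command; it is shown here wrapped in Python's subprocess only for consistency with the other sketches (in practice it is run from a shell). All connection values are placeholders.

```python
# Launch a Sqoop import of one MySQL table into HDFS. Connection details,
# table, and paths are placeholders; normally this runs directly in a shell.
import subprocess

subprocess.run([
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost/shop",    # hypothetical database
    "--username", "etl_user",
    "--password-file", "/user/etl/.password",   # avoids passwords on the CLI
    "--table", "orders",
    "--target-dir", "/warehouse/orders",
    "--num-mappers", "4",                        # parallel import tasks
], check=True)
```
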
beam | Unified Batch and Stream Processing with Apache Beam | 14 hours
Apache Beam is an open source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution being carried out by one of Beam's supported distributed processing back-ends: Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam is useful for ETL (Extract, Transform, and Load) tasks such as moving data between different storage media and data sources, transforming data into a more desirable format, and loading data onto a new system.

In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.

By the end of this training, participants will be able to:

- Install and configure Apache Beam.
- Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
- Execute pipelines across multiple environments.

Audience

- Developers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- This course will be available in Scala in the future. Please contact us to arrange.
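
The unified model in miniature, using the Beam Python SDK with the local DirectRunner; the in-memory input stands in for a real source, and the step labels are arbitrary.

```python
# A small Apache Beam pipeline (Python SDK). The same code can run on Spark,
# Flink, or Dataflow by choosing a different runner; here, the DirectRunner.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.Create(["alpha beta", "beta gamma"])   # stand-in source
     | "Split" >> beam.FlatMap(lambda line: line.split())
     | "Pair" >> beam.Map(lambda word: (word, 1))
     | "Count" >> beam.CombinePerKey(sum)
     | "Print" >> beam.Map(print))
```
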
pentaho | Pentaho Open Source BI Suite Community Edition (CE) | 28 hours
Pentaho Open Source BI Suite Community Edition (CE) is a business intelligence package that provides data integration, reporting, dashboards, and load capabilities.

In this instructor-led, live training, participants will learn how to maximize the features of Pentaho Open Source BI Suite Community Edition (CE).

By the end of this training, participants will be able to:

- Install and configure Pentaho Open Source BI Suite Community Edition (CE)
- Understand the fundamentals of Pentaho CE tools and their features
- Build reports using Pentaho CE
- Integrate third party data into Pentaho CE
- Work with big data and analytics in Pentaho CE

Audience

- Programmers
- BI Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
amazonredshift | Amazon Redshift | 21 hours
Amazon Redshift is a petabyte-scale, cloud-based data warehouse service in AWS.

In this instructor-led, live training, participants will learn the fundamentals of Amazon Redshift.

By the end of this training, participants will be able to:

- Install and configure Amazon Redshift
- Load, configure, deploy, query, and visualize data with Amazon Redshift

Audience

- Developers
- IT Professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
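
Redshift speaks the PostgreSQL wire protocol, so querying it from Python is a standard psycopg2 connection; the cluster endpoint, database, credentials, and table below are all placeholders.

```python
# Connect to Amazon Redshift with psycopg2 (Redshift is PostgreSQL-compatible,
# listening on port 5439 by default). All connection values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM sales;")  # hypothetical table
    print(cur.fetchone())
```
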

Upcoming Big Data Courses

Course | Course Date | Course Price [Remote / Classroom]
A Practical Introduction to Data Analysis and Big Data - Dubai | Sun, 2019-01-27 09:30 | 36750AED / 48200AED
Apache Hadoop: Manipulation and Transformation of Data Performance - Dubai | Tue, 2019-02-19 09:30 | 17550AED / 25300AED
A Practical Introduction to Data Analysis and Big Data - Dubai | Sun, 2019-03-24 09:30 | 36750AED / 48200AED
Apache Hadoop: Manipulation and Transformation of Data Performance - Dubai | Sun, 2019-04-14 09:30 | 17550AED / 25300AED
Apache Hadoop: Manipulation and Transformation of Data Performance - Dubai | Tue, 2019-06-04 09:30 | 17550AED / 25300AED

Course Discounts

Course | Venue | Course Date | Course Price [Remote / Classroom]
Transact SQL Advanced | Dubai | Wed, 2018-12-19 09:30 | 5792AED / 9842AED
Agile Software Testing | BCB, Dubai | Wed, 2018-12-19 09:30 | 11583AED / 17483AED
Introduction to Data Visualization with Tidyverse and R | Dubai | Sun, 2019-02-03 09:30 | 6615AED / 10665AED
Marketing Analytics using R | Dubai | Mon, 2019-03-04 09:30 | 19845AED / 27595AED
Deep Learning for Finance (with Python) | Dubai | Sun, 2019-06-16 09:30 | 29400AED / 39000AED

Course Discounts Newsletter

We respect the privacy of your email address. We will not pass on or sell your address to others.
You can always change your preferences or unsubscribe completely.

NobleProg is growing fast!

We are looking to expand our presence in the UAE!

As a Business Development Manager you will:

  • expand business in the UAE
  • recruit local talent (sales, agents, trainers, consultants)

We offer:

  • Artificial Intelligence and Big Data systems to support your local operation
  • high-tech automation
  • continuously upgraded course catalogue and content
  • good fun in an international team

Are you interested in running a high-tech, high-quality training and consulting business?

Apply now!