Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a structured and practical introduction to constructing real-time data streaming systems. It explores essential concepts, architectural patterns, and industry-standard tools for processing continuous data at scale. Participants will acquire the skills to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum advances from foundational principles to practical applications, empowering learners to confidently develop production-ready real-time solutions.
Format of Training
• Instructor-led sessions with guided explanations
• Concept walkthroughs featuring real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A
Course Objectives
• Understand real-time data streaming concepts and system architecture
• Differentiate between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Work with distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions for business use cases
This course is available as onsite live training in the United Arab Emirates or as online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Batch vs real-time processing fundamentals
• Event-driven architecture basics
• Common use cases in industry
• Overview of streaming ecosystem
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow
• Data ingestion strategies
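The Day 2 topics above (producers, consumers, topics, and partitions) rest on one key mechanic: keyed records are hashed to a partition so that all events for a given key arrive in order on the same partition. The sketch below is illustrative only, not a real messaging client; the partition count and key names are hypothetical.

```python
# Illustrative sketch (not a real Kafka client): how keyed records are
# assigned to partitions, so all events for one key land on the same
# partition and per-key ordering is preserved.
import hashlib

NUM_PARTITIONS = 4  # hypothetical topic configuration

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a record key to a partition index."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition.
assert partition_for("user-42") == partition_for("user-42")
```

Because the mapping is deterministic, a consumer reading one partition sees every event for its keys in the order they were produced, which is what makes per-key stateful processing possible downstream.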
Day 3
• Stream processing concepts and frameworks
• Event time vs processing time
• Windowing techniques and use cases
• Stateful stream processing
• Fault tolerance and checkpointing basics
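The event-time and windowing topics listed for Day 3 can be sketched in a few lines: events carry their own timestamps, and a tumbling window groups them by those timestamps rather than by arrival order. This is a minimal illustration in plain Python, not any particular framework's API; the window length is an assumed example value.

```python
# Minimal sketch of event-time tumbling windows: events are bucketed by
# the timestamp they carry (event time), regardless of the order in
# which they arrive (processing time).
from collections import defaultdict

WINDOW_SIZE = 60  # seconds; hypothetical window length

def window_start(event_time: int, size: int = WINDOW_SIZE) -> int:
    """Align an event timestamp to the start of its tumbling window."""
    return event_time - (event_time % size)

def count_per_window(events):
    """events: iterable of (event_time, payload) pairs.
    Returns event counts keyed by window start time."""
    counts = defaultdict(int)
    for ts, _payload in events:
        counts[window_start(ts)] += 1
    return dict(counts)

# Out-of-order arrival still places each event in its correct window.
events = [(130, "a"), (10, "b"), (65, "c"), (59, "d")]
# -> window 0 holds two events, windows 60 and 120 hold one each
```

Real frameworks add watermarks and checkpointing on top of this idea, so that late events and failures can be handled without recomputing everything.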
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment
• Introduction to cloud-based streaming services
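Stream enrichment, listed above for Day 4, typically means joining each incoming event against a small reference table before emitting it downstream. The sketch below shows the shape of that join in plain Python; the table contents and field names are hypothetical.

```python
# Sketch of stream enrichment: each event is joined against a reference
# table (e.g. a customer lookup) and the merged record is emitted.
reference = {
    "c1": {"country": "AE", "segment": "retail"},
    "c2": {"country": "US", "segment": "wholesale"},
}

def enrich(event: dict, table: dict) -> dict:
    """Attach reference attributes to an event; unknown keys get defaults."""
    extra = table.get(event["customer_id"],
                      {"country": "unknown", "segment": "unknown"})
    return {**event, **extra}

enriched = enrich({"customer_id": "c1", "amount": 25.0}, reference)
# enriched now carries country and segment alongside the original fields
```

In production pipelines the reference data usually lives in a changelog-backed state store or cache rather than an in-memory dict, so it can evolve while the stream keeps running.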
Day 5
• Monitoring and observability in streaming systems
• Security and access control basics
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world use cases such as fraud detection and IoT processing
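The fraud-detection use case mentioned above combines the earlier ideas of keyed state and windows: keep a per-key count inside each window and flag keys that exceed a threshold. This is a toy rule for illustration only; the window length, threshold, and field names are assumptions, not a production detection scheme.

```python
# Illustrative stateful check in the style of a fraud-detection rule:
# flag any card that produces more than THRESHOLD events inside a
# single tumbling window.
from collections import defaultdict

WINDOW = 60     # seconds; hypothetical window length
THRESHOLD = 3   # hypothetical max events per card per window

def flag_bursts(events, window=WINDOW, threshold=THRESHOLD):
    """events: iterable of (timestamp, card_id) pairs.
    Returns the set of card ids that exceeded the threshold."""
    counts = defaultdict(int)
    flagged = set()
    for ts, card in events:
        bucket = ts - (ts % window)          # tumbling-window key
        counts[(card, bucket)] += 1          # per-key, per-window state
        if counts[(card, bucket)] > threshold:
            flagged.add(card)
    return flagged

events = [(1, "A"), (2, "A"), (3, "A"), (4, "A"), (70, "B")]
# card "A" fires four times inside window [0, 60) and is flagged
```

The same pattern, with the state held in a fault-tolerant store, underlies IoT anomaly detection and rate limiting as well.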
Testimonials (1)
Hands-on exercises. The class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from already working with NiFi
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Target Audience:
This course is designed for IT professionals seeking solutions to store and process large-scale datasets within a distributed system environment.
Learning Objectives:
To acquire in-depth knowledge regarding Hadoop cluster administration.
Big Data Analytics in Health
21 Hours
Big data analytics entails the examination of extensive and diverse datasets to uncover correlations, hidden patterns, and actionable insights.
The healthcare sector generates vast volumes of complex, heterogeneous medical and clinical data. Leveraging big data analytics within this domain offers significant potential for deriving insights that enhance healthcare delivery. However, the sheer scale of these datasets presents considerable challenges for analysis and practical implementation in clinical settings.
In this instructor-led, live remote training, participants will learn how to execute big data analytics in healthcare by engaging in a series of hands-on laboratory exercises.
Upon completion of this training, participants will be able to:
- Install and configure big data analytics tools, including Hadoop MapReduce and Spark
- Comprehend the unique characteristics of medical data
- Apply big data techniques to manage and analyze medical data
- Explore big data systems and algorithms within the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- A combination of lectures, discussions, exercises, and intensive hands-on practice.
Note
- To request customized training for this course, please contact us to arrange it.
Hadoop For Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This three-day (or four-day optional) course equips participants with a comprehensive understanding of Hadoop's business advantages and practical use cases. Attendees will learn how to plan for cluster deployment and scalability, and master the installation, maintenance, monitoring, troubleshooting, and optimization of Hadoop systems. The curriculum includes hands-on practice with bulk data loading, exploration of various Hadoop distributions, and the installation and management of ecosystem tools. The course concludes with an in-depth discussion on securing clusters using Kerberos.
“...The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, in an approximate balance of 60% lectures and 40% labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This course provides developers with an introduction to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands out as one of the leading frameworks for processing Big Data across server clusters. This course explores data management in HDFS, alongside advanced techniques in Pig, Hive, and HBase. These sophisticated programming skills are designed to benefit experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience:
This course is designed to demystify big data and Hadoop technologies, demonstrating that they are accessible and straightforward to master.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in the UAE (online or onsite) is designed for system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who plan to build applications using HBase, as well as administrators responsible for managing HBase clusters.
Participants will explore HBase architecture, data modeling, and application development techniques. The curriculum also covers integrating MapReduce with HBase and addresses key administration topics focused on performance optimization. This course is highly practical, featuring numerous lab exercises.
Duration : 3 days
Audience : Developers & Administrators
Informatica with Big Data (BDM)
7 Hours
Informatica with Big Data (BDM) is a specialized program aimed at empowering data professionals to develop, manage, and analyze extensive datasets by leveraging cutting-edge technologies and architectures in the Big Data landscape. The training emphasizes the complete data lifecycle, encompassing ingestion, integration, cleansing, curation, analytics, and the delivery and consumption of big data services.
Participants will explore solutions for processing large datasets using prominent Big Data technologies such as Apache Hive, Apache Hadoop, and Apache Spark. The course also offers hands-on experience with Informatica tools like Bloombox, Big Data Management, and iData Fabric to deepen understanding of underlying big data concepts like MapReduce and Hadoop. Upon completion, learners will be capable of building comprehensive, end-to-end data solutions using Informatica and its associated Big Data offerings.
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform designed for flow-based data integration and event processing. It facilitates automated, real-time data routing, transformation, and system mediation between disparate systems, featuring a web-based UI and fine-grained control.
This instructor-led live training (available onsite or remotely) targets intermediate-level administrators and engineers who aim to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completion of this training, participants will be capable of:
- Installing, configuring, and maintaining Apache NiFi clusters.
- Designing and managing dataflows from diverse sources and destinations.
- Implementing flow automation, routing, and transformation logic.
- Optimizing performance, monitoring operations, and troubleshooting issues.
Course Format
- Interactive lectures with real-world architecture discussions.
- Hands-on labs focused on building, deploying, and managing flows.
- Scenario-based exercises conducted in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in the UAE, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to constructing scalable data processing and Machine Learning workflows using PySpark. Participants will gain insights into how Apache Spark functions within contemporary Big Data ecosystems and learn to process large datasets efficiently by leveraging distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in the UAE, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems like those used by Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio serves as a comprehensive, data-centric platform that unifies big data, artificial intelligence, and governance into a single solution. Its Rocket and Intelligence modules facilitate rapid data exploration, transformation, and advanced analytics within enterprise settings.
This instructor-led live training (available online or on-site) targets intermediate-level data professionals aiming to leverage the Rocket and Intelligence modules in Stratio effectively with PySpark. The curriculum focuses on looping structures, user-defined functions, and advanced data logic.
Upon completion of this training, participants will be capable of:
- Navigating and operating within the Stratio platform using the Rocket and Intelligence modules.
- Applying PySpark for data ingestion, transformation, and analysis.
- Utilizing loops and conditional logic to manage data workflows and feature engineering tasks.
- Creating and managing user-defined functions (UDFs) for reusable data operations in PySpark.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical sessions.
- Hands-on implementation in a live lab environment.
Customization Options
- To request customized training for this course, please contact us to make arrangements.