Course Outline
Introduction
Overview of Big Data Tools and Technologies
Installing and Configuring Apache Ignite
Overview of Ignite Architecture
Querying Data in Ignite
Spreading Large Data Sets across a Cluster
Understanding the In-Memory Data Grid
Writing a Service in Ignite
Running Distributed Computing with Ignite
Integrating Ignite with RDBMS, NoSQL, Hadoop and Machine Learning Processors
Testing and Troubleshooting
Summary and Conclusion
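The data-grid topics in the outline above rest on one core idea: spreading a large data set across a cluster by mapping every key to a partition, and every partition to a node. The sketch below illustrates that idea in plain Java. It is not the Ignite API; the partition count (Ignite's default happens to be 1024) and the round-robin partition-to-node assignment are simplifying assumptions for illustration only.

```java
import java.util.*;

// Illustrative sketch (not the Ignite API): how an in-memory data grid
// can spread a large data set across a cluster by mapping each key to a
// partition, and each partition to a node.
public class PartitionDemo {
    static final int PARTITIONS = 1024; // assumed partition count

    // Map a key to one of a fixed number of partitions.
    static int partitionFor(Object key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    // Map a partition to a node: simple round-robin over node ids
    // (a real grid uses a smarter affinity function).
    static String nodeFor(int partition, List<String> nodes) {
        return nodes.get(partition % nodes.size());
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node-A", "node-B", "node-C");
        Map<String, Integer> keysPerNode = new TreeMap<>();
        for (int key = 0; key < 30_000; key++) {
            String node = nodeFor(partitionFor(key), nodes);
            keysPerNode.merge(node, 1, Integer::sum);
        }
        // 30,000 keys end up spread roughly evenly across the three nodes.
        System.out.println(keysPerNode);
    }
}
```

Because every node applies the same deterministic mapping, any node can compute where a key lives without a central lookup, which is what makes queries and distributed computations over the grid scale.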
Requirements
- An understanding of databases.
- An understanding of Java.
Audience
- Developers
Testimonials
I enjoyed the good balance between theory and hands-on labs.
- N. V. Nederlandse Spoorwegen
I generally benefited from gaining a better understanding of Ignite.
- N. V. Nederlandse Spoorwegen
What I liked most were the good lectures.
- N. V. Nederlandse Spoorwegen
Exercises.
David Lehotak - NVision Czech Republic ICT a.s.
Related Courses
Apache Ignite for Developers
14 hours
Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale, and availability. In this instructor-led, live training, participants will learn the principles behind persistent and pure
Apache Apex: Processing Big Data-in-Motion
21 hours
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable. This instructor-led, live
Unified Batch and Stream Processing with Apache Beam
14 hours
Apache Beam is an open source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution being carried out by one of
Building Kafka Solutions with Confluent
14 hours
This instructor-led, live training (online or onsite) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications. By the end of this training,
A Practical Introduction to Stream Processing
21 hours
Stream Processing refers to the real-time processing of "data in motion", that is, performing computations on data as it is being received. Such data is read as continuous streams from data sources such as sensor events, website user
Apache Kafka for Python Programmers
7 hours
Apache Kafka is an open-source stream-processing platform that provides a fast, reliable, and low-latency platform for handling real-time data analytics. Apache Kafka can be integrated with available programming languages such as Python. This
Stream Processing with Kafka Streams
7 hours
Kafka Streams is a client-side library for building applications and microservices whose data is passed to and from a Kafka messaging system. Traditionally, Apache Kafka has relied on Apache Spark or Apache Storm to process data between message
Real-Time Stream Processing with MapR
7 hours
In this instructor-led, live training, participants will learn the core concepts behind MapR Stream Architecture as they develop a real-time streaming application. By the end of this training, participants will be able to build producer and
Samza for Stream Processing
14 hours
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource
Tigon: Real-time Streaming for the Real World
14 hours
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics,
Apache Flink Fundamentals
28 hours
Apache Flink is an open-source framework for scalable stream and batch data processing. This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants
Confluent KSQL
7 hours
Confluent KSQL is a stream processing framework built on top of Apache Kafka. It enables real-time data processing using SQL operations. This instructor-led, live training (online or onsite) is aimed at developers who wish to implement Apache
Apache NiFi for Administrators
21 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a
Apache NiFi for Developers
7 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a
Apache Storm
28 hours
Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing). "Storm is for