Course Outline

 

Introduction:

  • Apache Spark in the Hadoop Ecosystem
  • Short introduction to Python and Scala

Basics (theory):

  • Architecture
  • RDD
  • Transformations and Actions (illustrated in the sketch after this list)
  • Stages, Tasks, and Dependencies
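
To make the transformation/action distinction concrete, here is a minimal PySpark sketch (assuming a local Spark installation; the numbers are illustrative). The two transformations only build an execution plan; the final action triggers a job, which Spark then breaks into stages and tasks.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("basics").getOrCreate()
    sc = spark.sparkContext

    numbers = sc.parallelize(range(1, 101))       # create an RDD
    evens = numbers.filter(lambda n: n % 2 == 0)  # transformation: lazy, nothing runs yet
    squares = evens.map(lambda n: n * n)          # another lazy transformation

    print(squares.take(5))  # action: triggers the job -> [4, 16, 36, 64, 100]

    spark.stop()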

Understanding the basics using the Databricks environment (hands-on workshop):

  • Exercises using the RDD API
  • Basic action and transformation functions
  • PairRDD
  • Join
  • Caching strategies
  • Exercises using the DataFrame API (see the sketch after this list)
  • SparkSQL
  • DataFrame: select, filter, group, sort
  • UDF (User Defined Function)
  • Looking into the Dataset API
  • Streaming
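
The DataFrame exercises follow the pattern below. This is a minimal sketch assuming a Databricks notebook or any PySpark session; the column names and sample rows are made up for illustration.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("df-basics").getOrCreate()

    df = spark.createDataFrame(
        [("alice", "dev", 3000), ("bob", "dev", 3500), ("carol", "ops", 2800)],
        ["name", "team", "salary"],
    )

    # select / filter / group / sort
    (df.select("team", "salary")
       .filter(F.col("salary") > 2900)
       .groupBy("team")
       .agg(F.avg("salary").alias("avg_salary"))
       .orderBy("avg_salary", ascending=False)
       .show())

    # SparkSQL over the same data
    df.createOrReplaceTempView("employees")
    spark.sql("SELECT team, COUNT(*) AS n FROM employees GROUP BY team").show()

    # a simple UDF (user defined function)
    upper_udf = F.udf(lambda s: s.upper(), StringType())
    df.withColumn("name_upper", upper_udf("name")).show()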

Understanding deployment using the AWS environment (hands-on workshop):

  • Basics of AWS Glue (a minimal job skeleton follows this list)
  • Understand the differences between AWS EMR and AWS Glue
  • Example jobs in both environments
  • Understand the pros and cons of each
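
As a reference point for the Glue exercises, here is a hedged sketch of a minimal AWS Glue (PySpark) job script; the catalog database/table names and the S3 path are placeholders, not real resources.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog, then write the result to S3 as Parquet.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="example_table"  # placeholder names
    )
    glue_context.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/output/"},  # placeholder
        format="parquet",
    )
    job.commit()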

Extra:

  • Introduction to Apache Airflow orchestration (see the sketch below)
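
A minimal Airflow DAG that could submit a Spark job, assuming Airflow 2.4+; the dag_id, schedule, and spark-submit command are illustrative placeholders.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="spark_daily_job",        # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        submit = BashOperator(
            task_id="spark_submit",
            bash_command="spark-submit --master yarn my_job.py",  # placeholder
        )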

Requirements

Programming skills (preferably Python or Scala)

SQL basics

Duration: 21 hours
