Programming with Big Data in R Training Course
Big Data refers to solutions for storing and analyzing datasets too large for conventional tools. First developed at Google, these solutions have matured considerably and have inspired many comparable projects, a number of which are now available as open source. R is a programming language widely used in the financial sector, among others.
This course is available as onsite live training in the United Arab Emirates or as online live training.
Course Outline
Introduction to Programming Big Data with R (pbdR)
- Setting up your environment to use pbdR (a minimal startup sketch follows this list)
- Scope and tools available in pbdR
- Packages commonly used with Big Data alongside pbdR
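To give a flavor of the setup step, here is a minimal sketch of an SPMD "hello world" using the pbdMPI package. The file name and process count are illustrative, and it assumes R, pbdMPI, and a working MPI installation (e.g. Open MPI) are already in place:

```r
# hello_pbdr.R -- run with: mpiexec -np 4 Rscript hello_pbdr.R
library(pbdMPI)

init()  # initialize the MPI communicator

# Every rank runs this same script (SPMD style); each one reports its identity.
comm.cat("Hello from rank", comm.rank(), "of", comm.size(), "\n", all.rank = TRUE)

finalize()  # shut MPI down cleanly
```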
Message Passing Interface (MPI)
- Using pbdMPI
- Parallel processing
- Point-to-point communication (illustrated, together with a collective call, in the sketch after this list)
  - Sending matrices
  - Summing matrices
- Collective communication
  - Summing matrices with reduce
  - Scatter / gather
- Other MPI communications
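A rough sketch of both communication styles named above, again with pbdMPI. The matrix contents and rank numbers are illustrative, and argument names may differ slightly between pbdMPI versions:

```r
# mpi_matrices.R -- run with: mpiexec -np 2 Rscript mpi_matrices.R
library(pbdMPI)

init()

# Each rank builds its own 2 x 2 matrix (rank 0 holds 1s, rank 1 holds 2s).
m <- matrix(comm.rank() + 1, nrow = 2, ncol = 2)

# Point-to-point: rank 0 sends its matrix to rank 1, which receives and prints it.
if (comm.rank() == 0) {
  send(m, rank.dest = 1)
} else if (comm.rank() == 1) {
  m0 <- recv(rank.source = 0)
  comm.print(m0, rank.print = 1)
}

# Collective: element-wise sum of every rank's matrix values, delivered to rank 0.
total <- reduce(m, op = "sum")
comm.print(total)

finalize()
```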
Distributed Matrices
- Creating a distributed diagonal matrix
- SVD of a distributed matrix
- Building a distributed matrix in parallel (both steps are sketched below)
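As a sketch of these topics, the pbdDMAT package (part of the pbdR ecosystem) can build a random distributed matrix in parallel and take its SVD. The dimensions are illustrative, and the snippet assumes pbdDMAT and its ScaLAPACK backend are installed:

```r
# ddmatrix_svd.R -- run with: mpiexec -np 4 Rscript ddmatrix_svd.R
library(pbdDMAT)

init.grid()  # set up the 2-d process grid used by ScaLAPACK computations

# A 500 x 50 distributed matrix of N(0, 1) draws, generated in parallel.
dx <- ddmatrix("rnorm", nrow = 500, ncol = 50)

# Distributed SVD; the singular values come back as an ordinary R vector.
s <- La.svd(dx)
comm.print(s$d[1:5])  # leading singular values, printed on rank 0

finalize()
```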
Statistics Applications
- Monte Carlo integration (see the sketch after this list)
- Reading datasets
  - Reading on all processes
  - Broadcasting from one process
  - Reading partitioned data
- Distributed regression
- Distributed bootstrap
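For the Monte Carlo item, a minimal sketch of an embarrassingly parallel estimate of pi with pbdMPI; the seed and sample size are illustrative:

```r
# mc_pi.R -- run with: mpiexec -np 4 Rscript mc_pi.R
library(pbdMPI)

init()
comm.set.seed(1234, diff = TRUE)  # independent RNG streams on each rank

n <- 100000                       # samples drawn on each process
x <- runif(n)
y <- runif(n)
hits <- sum(x^2 + y^2 <= 1)       # points inside the unit quarter circle

# Sum the per-rank hit counts; allreduce makes the total available everywhere.
total <- allreduce(hits, op = "sum")
comm.print(4 * total / (n * comm.size()))

finalize()
```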
Testimonials (2)
The subject matter and the pace were perfect.
Tim - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course - Programming with Big Data in R
Michael the trainer is very knowledgeable and skillful about the subject of Big Data and R. He is very flexible and quickly customizes the training to meet clients' needs. He is also very capable of solving technical and subject matter problems on the go. Fantastic and professional training!
Xiaoyuan Geng - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course - Programming with Big Data in R
Related Courses
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at intermediate-level data scientists and engineers who wish to use Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark.
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
The analysis of big data entails examining extensive and diverse datasets to discover correlations, concealed patterns, and other valuable insights.
The healthcare sector possesses vast quantities of intricate and varied medical and clinical information. Leveraging big data analytics on health-related data offers significant potential for enhancing the delivery of healthcare through derived insights. However, the sheer volume of these datasets presents substantial challenges in both analysis and practical implementation within a clinical setting.
In this instructor-led live training (conducted remotely), participants will learn how to conduct big data analytics in healthcare by working through a series of hands-on lab exercises.
Upon completion of this training, participants will be able to:
- Set up and configure big data analytics tools such as Hadoop MapReduce and Spark
- Comprehend the attributes of medical data
- Utilize big data methodologies for handling medical information
- Examine big data systems and algorithms within healthcare applications
Audience
- Software Developers
- Data Scientists
Course Format
- The course includes lectures, discussions, exercises, and extensive hands-on practice.
Note
- To arrange customized training for this course, please contact us.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premises Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3, as well as NoSQL database systems such as Redis, Elasticsearch, Couchbase, and Aerospike.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
A Practical Introduction to Stream Processing
21 Hours
In this instructor-led, live training in the UAE (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
SMACK Stack for Data Science
14 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at data scientists who wish to use the SMACK stack to build data processing platforms for big data solutions.
By the end of this training, participants will be able to:
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
Apache Spark Fundamentals
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at engineers who wish to set up and deploy an Apache Spark system for processing very large amounts of data.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Quickly process and analyze very large data sets.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Integrate Apache Spark with other machine learning tools.
Administration of Apache Spark
35 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at beginner-level to intermediate-level system administrators who wish to deploy, maintain, and optimize Spark clusters.
By the end of this training, participants will be able to:
- Install and configure Apache Spark in various environments.
- Manage cluster resources and monitor Spark applications.
- Optimize the performance of Spark clusters.
- Implement security measures and ensure high availability.
- Debug and troubleshoot common Spark issues.
Apache Spark in the Cloud
21 Hours
The initial learning curve for Apache Spark can be steep, and significant effort is required to achieve the first results. This course is designed to help participants navigate that challenging phase. By the end of the course, attendees will have a solid grasp of the fundamentals of Apache Spark, including the distinction between RDDs and DataFrames, proficiency with both the Python and Scala APIs, and an understanding of executors and tasks. In line with best practices, the course also places strong emphasis on cloud deployment strategies, focusing on Databricks and AWS. Students will learn to distinguish AWS EMR from AWS Glue, one of the newer Spark services offered by AWS.
AUDIENCE:
Data Engineers, DevOps Specialists, Data Scientists
Spark for Developers
21 Hours
OBJECTIVE:
This course will provide an introduction to Apache Spark. Students will learn how Spark integrates into the Big Data ecosystem and how to utilize Spark for data analysis. The curriculum includes using the Spark shell for interactive data analysis, understanding Spark's internal workings, working with the Spark APIs, leveraging Spark SQL, implementing Spark streaming, and applying machine learning and GraphX.
AUDIENCE:
Developers / Data Analysts
Scaling Data Pipelines with Spark NLP
14 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at data scientists and developers who wish to use Spark NLP, built on top of Apache Spark, to develop, implement, and scale natural language text processing models and pipelines.
By the end of this training, participants will be able to:
- Set up the necessary development environment to start building NLP pipelines with Spark NLP.
- Understand the features, architecture, and benefits of using Spark NLP.
- Use the pre-trained models available in Spark NLP to implement text processing.
- Learn how to build, train, and scale Spark NLP models for production-grade projects.
- Apply classification, inference, and sentiment analysis on real-world use cases (clinical data, customer behavior insights, etc.).
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in the UAE, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to those used by Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Apache Spark SQL
7 Hours
Spark SQL is the module within Apache Spark for working with structured and semi-structured data. It gives Spark insight into the structure of the data and the computations being executed, information that can be leveraged for optimization. Two primary applications of Spark SQL are:
- Executing SQL queries (a short sketch follows this list).
- Accessing data from an existing Hive setup.
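Keeping with this page's R focus, here is a minimal sketch of the first application, executing a SQL query against Spark from R through the sparklyr and DBI packages. The table name and query are illustrative, and the course itself may use a different front end:

```r
# spark_sql_sketch.R -- assumes a local Spark installation visible to sparklyr
library(sparklyr)
library(DBI)

sc <- spark_connect(master = "local")  # start a local Spark session

cars <- copy_to(sc, mtcars, "cars")    # register an in-memory table with Spark

# Execute a SQL query against the registered table and collect the result.
res <- dbGetQuery(sc, "SELECT cyl, AVG(mpg) AS mean_mpg FROM cars GROUP BY cyl")
print(res)

spark_disconnect(sc)
```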
In this instructor-led training session (conducted either on-site or remotely), participants will gain skills in analyzing diverse datasets using Spark SQL.
By the conclusion of this course, attendees will be equipped to:
- Set up and configure Spark SQL.
- Conduct data analysis with Spark SQL.
- Query datasets available in various formats.
- Visualize data and the outcomes of queries.
Course Format
- An interactive lecture combined with discussion sessions.
- A multitude of exercises and practical applications.
- Hands-on implementation in a live-lab setting.
Customization Options for the Course
- To tailor this course to specific needs, please reach out to us to discuss your requirements.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a data-centric platform that integrates big data, AI, and governance into a single solution. Its Rocket and Intelligence modules enable rapid data exploration, transformation, and advanced analytics in enterprise environments.
This instructor-led, live training (online or onsite) is aimed at intermediate-level data professionals who wish to use the Rocket and Intelligence modules in Stratio effectively with PySpark, focusing on looping structures, user-defined functions, and advanced data logic.
By the end of this training, participants will be able to:
- Navigate and work within the Stratio platform using Rocket and Intelligence modules.
- Apply PySpark in the context of data ingestion, transformation, and analysis.
- Use loops and conditional logic to control data workflows and feature engineering tasks.
- Create and manage user-defined functions (UDFs) for reusable data operations in PySpark.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange one.
Introduction to Data Visualization with Tidyverse and R
7 Hours
The Tidyverse is a suite of flexible R packages designed for data cleaning, processing, modeling, and visualization. Key components include ggplot2, dplyr, tidyr, readr, purrr, and tibble.
During this instructor-led live training session, participants will learn how to manipulate and visualize data using the tools provided by the Tidyverse.
By the end of the course, participants will be able to:
- Analyze data and generate compelling visualizations
- Derive meaningful insights from various sample datasets
- Filter, sort, and summarize data to address exploratory questions
- Create informative line plots, bar charts, and histograms from processed data (see the sketch after this list)
- Import and filter data from a variety of sources such as Excel, CSV, and SPSS files
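A minimal sketch of the filter, summarize, and plot workflow described above, using dplyr and ggplot2 on R's built-in mtcars dataset; the dataset and chart choices are illustrative:

```r
# tidyverse_sketch.R
library(dplyr)
library(ggplot2)

mtcars %>%
  filter(cyl %in% c(4, 6, 8)) %>%      # keep the cylinder counts of interest
  group_by(cyl) %>%
  summarize(mean_mpg = mean(mpg)) %>%  # one summary row per group
  ggplot(aes(x = factor(cyl), y = mean_mpg)) +
  geom_col() +
  labs(x = "Cylinders", y = "Mean MPG",
       title = "Fuel efficiency by cylinder count")
```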
Audience
- Beginners to the R programming language
- Newcomers to data analysis and visualization
Course Format
- The course includes lectures, discussions, exercises, and extensive hands-on practice.