Autonomous Navigation & SLAM with ROS 2 Training Course
ROS 2 (Robot Operating System 2) is an open-source framework for building complex, scalable robotic applications.
This instructor-led live training, available either online or onsite, is designed for intermediate-level robotics engineers and developers aiming to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
Upon completion of this training, participants will be capable of:
- Setting up and configuring ROS 2 for autonomous navigation applications.
- Implementing SLAM algorithms to facilitate mapping and localization.
- Integrating sensors, including LiDAR and cameras, with ROS 2.
- Simulating and testing autonomous navigation within the Gazebo environment.
- Deploying navigation stacks onto physical robots.
Format of the Course
- Interactive lectures and discussions.
- Hands-on practice utilizing ROS 2 tools and simulation environments.
- Live-lab implementation and testing on either virtual or physical robots.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Course Outline
Introduction to ROS 2 and Autonomous Navigation
- Overview of ROS 2 architecture and capabilities
- Understanding navigation systems in robotics
- Setting up the ROS 2 environment
Working with Sensors and Data Acquisition
- Integrating LiDAR and camera sensors
- Collecting and processing sensor data
- Visualizing sensor outputs using RViz
Mapping and Localization Fundamentals
- Principles of SLAM
- Implementing 2D and 3D mapping
- Localization using AMCL and other techniques
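To illustrate the core idea behind the AMCL topic above, the measurement-update step of a particle filter can be sketched in plain Python. This is a simplified 1D teaching sketch, not the Nav2 implementation; the particle positions, landmark location, and noise value are illustrative assumptions:

```python
import math
import random

def gaussian(mu, sigma, x):
    """Probability density of x under a normal distribution N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def update_weights(particles, measured_range, landmark, noise=0.5):
    """Weight each particle pose by how well it explains the range measurement."""
    weights = [gaussian(abs(landmark - p), noise, measured_range) for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# Robot is actually at x = 2.0; landmark at x = 5.0, so the true range is 3.0
particles = [0.0, 1.0, 2.0, 3.0, 4.0]
weights = update_weights(particles, measured_range=3.0, landmark=5.0)
best = particles[weights.index(max(weights))]  # particle most consistent with the sensor
```

Each particle is weighted by how well it explains the range measurement; resampling then concentrates the particle set around likely robot poses.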
Path Planning and Obstacle Avoidance
- Exploring path planning algorithms
- Dynamic obstacle detection and avoidance
- Testing navigation in simulated environments
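As a taste of the path-planning topics above, here is a minimal A* planner on an occupancy grid in plain Python. It is a teaching sketch, not the Nav2 planner; the hand-made grid, unit step costs, and Manhattan heuristic are illustrative assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],  # wall forces a detour around the right side
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The same open-list/heuristic structure underlies the grid planners used in ROS 2 navigation stacks, with costmaps replacing the binary grid.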
Using Gazebo for Simulation
- Setting up Gazebo simulations with ROS 2
- Testing robot models and navigation stacks
- Analyzing performance in virtual environments
Deploying SLAM and Navigation on Real Robots
- Connecting ROS 2 to physical hardware
- Calibrating sensors and actuators
- Running real-time navigation experiments
Troubleshooting and Performance Optimization
- Debugging navigation issues in ROS 2
- Optimizing SLAM algorithms for efficiency
- Fine-tuning navigation parameters
Summary and Next Steps
Requirements
- A solid understanding of robotics principles
- Hands-on experience with Linux-based systems
- Basic proficiency in programming with Python or C++
Audience
- Robotics engineers
- Automation developers
- Research and development professionals specializing in autonomous systems
Testimonials (1)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core. Why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
The intersection of Artificial Intelligence (AI) and robotics combines machine learning, control systems, and sensor fusion to build intelligent machines capable of autonomous perception, reasoning, and action. With frameworks such as ROS 2, TensorFlow, and OpenCV, engineers can design robotic solutions that navigate, plan, and interact with the physical environment.
This instructor-led live training, available either online or onsite, targets intermediate-level engineers seeking to develop, train, and deploy AI-driven robotic systems using state-of-the-art open-source technologies.
Upon completion of this training, participants will be equipped to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviors.
- Deploy Kalman and Particle Filters for precise localization and tracking.
- Apply OpenCV-based computer vision techniques for perception and object detection.
- Employ TensorFlow for motion prediction and learning-based control mechanisms.
- Integrate SLAM (Simultaneous Localization and Mapping) to enable autonomous navigation.
- Create reinforcement learning models to enhance robotic decision-making capabilities.
Course Format
- Interactive lectures paired with group discussions.
- Practical implementation exercises using ROS 2 and Python.
- Hands-on practice within both simulated and real-world robotic environments.
Course Customization Options
For inquiries regarding customized training sessions for this course, please contact our team to make arrangements.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training in the UAE (online or onsite), participants will learn the different technologies, frameworks, and techniques for programming different types of robots used in nuclear technology and environmental systems.
The six-week course is held five days a week. Each day is four hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
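The PID control outcome above can be previewed with a minimal discrete PID loop in plain Python. The gains, time step, and the toy integrator "plant" are illustrative assumptions, not values from the course labs:

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulated error (I term)
        derivative = (error - self.prev_error) / self.dt  # error rate of change (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy integrator plant (velocity += control * dt) toward 1.0 m/s
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.1)
velocity = 0.0
for _ in range(200):
    velocity += pid.step(1.0, velocity) * 0.1
final = velocity  # should settle close to the 1.0 m/s setpoint
```

The same three-term structure, with gains tuned to the actual robot dynamics, regulates wheel velocities and heading in the course's lab exercises.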
AI and Robotics for Nuclear
80 Hours
In this instructor-led live training in the UAE (available online or onsite), participants will learn the diverse technologies, frameworks, and techniques for programming robots used in nuclear technology and environmental systems.
The four-week course takes place five days a week. Each session lasts four hours and includes lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete practical projects applicable to their work to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service integrates the capabilities of the Microsoft Bot Framework and Azure Functions, offering a robust platform for the rapid development of intelligent bots.
Through this instructor-led live training, participants will explore efficient methods for creating intelligent bots using Microsoft Azure.
Upon completion of the training, participants will be able to:
- Grasp the fundamental concepts underlying intelligent bots.
- Construct intelligent bots using cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in real-world scenarios.
- Create and deploy their first intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals interested in bot development.
Course Format
The training blends lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV is an open-source computer vision library that enables real-time image processing, while deep learning frameworks such as TensorFlow provide the tools for intelligent perception and decision-making in robotic systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who wish to apply computer vision and deep learning techniques for robotic perception and autonomy.
By the end of this training, participants will be able to:
- Implement computer vision pipelines using OpenCV.
- Integrate deep learning models for object detection and recognition.
- Use vision-based data for robotic control and navigation.
- Combine classical vision algorithms with deep neural networks.
- Deploy computer vision systems on embedded and robotic platforms.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using OpenCV and TensorFlow.
- Live-lab implementation on simulated or physical robotic systems.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Developing a Bot
14 Hours
A bot, or chatbot, functions as a digital assistant designed to automate user interactions across various messaging platforms. This enables tasks to be completed more efficiently without requiring direct communication with a human operator.
Through this instructor-led live training, participants will learn how to begin developing bots by creating sample chatbots using specific development tools and frameworks.
Upon completing this training, participants will be able to:
- Comprehend the diverse uses and applications of bots
- Grasp the end-to-end process of bot development
- Explore the various tools and platforms utilized in bot construction
- Construct a sample chatbot for Facebook Messenger
- Develop a sample chatbot using the Microsoft Bot Framework
Audience
- Developers interested in creating their own bots
Course Format
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to execute directly on embedded or resource-limited devices, thereby lowering latency and power usage while boosting autonomy and privacy within robotic systems.
This instructor-led live training (available online or onsite) targets intermediate-level embedded developers and robotics engineers seeking to implement machine learning inference and optimization techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completing this training, participants will be able to:
- Grasp the core principles of TinyML and edge AI for robotics.
- Convert and deploy AI models for on-device inference.
- Optimize models to enhance speed, reduce size, and improve energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Evaluate performance and accuracy in real-world scenarios.
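As a preview of the model-optimization outcome above, here is a plain-Python sketch of symmetric int8 quantization, a staple of TinyML deployment. The weight values are illustrative; real toolchains such as TensorFlow Lite add calibration and operator fusion on top of this basic idea:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.
    Returns (quantized values, scale) such that w is approximately q * scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to floats for comparison against the originals."""
    return [v * scale for v in q]

# Illustrative weight values, not from a real model
weights = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now fits in one byte instead of four, at the cost of at most half a quantization step of error, which is the basic storage/accuracy trade-off the course explores with real models.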
Format of the Course
- Interactive lectures and discussions.
- Practical exercises utilizing TinyML and edge AI toolchains.
- Hands-on applications on embedded and robotic hardware platforms.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led live training in the UAE (online or onsite) is tailored for intermediate-level participants interested in examining the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
Upon completing this training, participants will be equipped to:
- Grasp the core principles of Human-Centric Physical AI and its practical applications.
- Investigate how collaborative robots contribute to improved workplace productivity.
- Recognize and resolve challenges associated with human-machine interactions.
- Develop workflows that maximize synergy between humans and AI-driven systems.
- Foster a culture of innovation and adaptability within AI-integrated work environments.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course focused on the design and implementation of intuitive interfaces for effective human–robot communication. This training integrates theoretical knowledge, design principles, and programming practice to create natural and responsive interaction systems leveraging speech, gestures, and shared control techniques. Participants will acquire the skills to integrate perception modules, develop multimodal input systems, and design robots that safely collaborate with humans.
Delivered by an instructor either online or onsite, this training targets participants at a beginner to intermediate level who aim to design and implement human–robot interaction systems that improve usability, safety, and overall user experience.
Upon completion of this training, participants will be capable of:
- Grasping the foundational concepts and design principles of human–robot interaction.
- Developing voice-based control and response mechanisms for robots.
- Implementing gesture recognition utilizing computer vision techniques.
- Designing collaborative control systems that ensure safe and shared autonomy.
- Evaluating HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments conducted in simulation or real robotic environments.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical, hands-on course designed to bridge the gap between industrial automation and modern robotic frameworks. Participants will acquire the skills to integrate ROS-based robotic systems with PLCs for synchronized operations, while exploring digital twin environments to simulate, monitor, and optimize production processes. The curriculum emphasizes interoperability, real-time control, and predictive analysis using digital replicas of physical systems.
This instructor-led training, available both online and onsite, targets intermediate-level professionals seeking to build practical expertise in connecting ROS-controlled robots with PLC environments and implementing digital twins for automation and manufacturing optimization.
Upon completion of this training, participants will be able to:
- Comprehend the communication protocols used between ROS and PLC systems.
- Implement real-time data exchange mechanisms between robots and industrial controllers.
- Develop digital twins for monitoring, testing, and process simulation.
- Integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Format of the Course
- Interactive lectures and architectural walkthroughs.
- Hands-on exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training in the UAE (online or onsite) is tailored for engineers interested in learning how artificial intelligence applies to mechatronic systems.
By the end of this training, participants will be able to:
- Gain an overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the concepts of neural networks and different learning methods.
- Choose artificial intelligence approaches effectively for real-life problems.
- Implement AI applications in mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training course that delves into the design, coordination, and control of robotic teams inspired by biological swarm behaviors. Participants will learn how to model interactions, implement distributed decision-making, and optimize collaboration across multiple agents. The course blends theoretical foundations with practical simulation to prepare learners for applications in logistics, defense, search and rescue, and autonomous exploration.
This instructor-led, live training (available online or onsite) targets advanced-level professionals aiming to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
By the end of this training, participants will be able to:
- Understand the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviors such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization problems.
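The distributed decision-making and consensus outcome above can be previewed with a minimal synchronous averaging protocol in plain Python. The ring topology, gain value, and rendezvous-coordinate interpretation are illustrative assumptions; real swarm stacks run such updates over ROS 2 topics with asynchronous, lossy communication:

```python
def consensus_step(states, neighbors, alpha=0.3):
    """One synchronous consensus update: each robot moves a fraction alpha
    toward the average of its neighbors' states."""
    new = []
    for i, x in enumerate(states):
        nbrs = neighbors[i]
        avg = sum(states[j] for j in nbrs) / len(nbrs)
        new.append(x + alpha * (avg - x))
    return new

# Four robots on a ring agreeing on a rendezvous coordinate
states = [0.0, 2.0, 4.0, 6.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(200):
    states = consensus_step(states, neighbors)
# With symmetric neighbor weights, all states converge to the initial mean (3.0)
```

Because each robot only talks to its immediate neighbors, the same update scales to large teams without any central coordinator, which is the key property the course builds on.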
Format of the Course
- Advanced lectures with algorithmic deep dives.
- Hands-on coding and simulation in ROS 2 and Gazebo.
- Collaborative project applying swarm intelligence principles.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at advanced-level robotics engineers and AI researchers who wish to utilize Multimodal AI for integrating various sensory data to create more autonomous and efficient robots that can see, hear, and touch.
By the end of this training, participants will be able to:
- Implement multimodal sensing in robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Create robots that can perform complex tasks in dynamic environments.
- Address challenges in real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot represents an Artificial Intelligence (AI) system capable of learning from its environment and past experiences, thereby enhancing its capabilities through accumulated knowledge. These robots are designed to collaborate with humans, working alongside them and adapting to their behaviors. Beyond performing manual labor, Smart Robots are equipped to handle complex cognitive tasks. Furthermore, they are not limited to physical form; Smart Robots can also exist as pure software applications within a computer, operating without mechanical parts or direct physical interaction with the world.
In this instructor-led live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical Smart Robots. Attendees will apply this knowledge to design and complete their own Smart Robot projects.
The curriculum is structured into four sections, each covering three days of lectures, discussions, and hands-on robot development within a live lab environment. Each section concludes with a practical, hands-on project, allowing participants to practice and demonstrate the skills they have acquired.
The target hardware for this course will be simulated in 3D using specialized simulation software. Programming for these robots will utilize the open-source Robot Operating System (ROS) framework, along with C++ and Python.
By the end of this training, participants will be able to:
- Grasp the fundamental concepts underlying robotic technologies
- Understand and manage the interaction between software and hardware in a robotic system
- Comprehend and implement the software components that form the foundation of Smart Robots
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice
- Enhance a Smart Robot's capacity to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Note
- To customize any aspect of this course (such as programming language or robot model), please contact us to make arrangements.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves integrating artificial intelligence into robotic systems to enhance perception, decision-making capabilities, and autonomous control.
This instructor-led training, available online or onsite, is designed for advanced robotics engineers, systems integrators, and automation leads who want to implement AI-driven perception, planning, and control within smart manufacturing environments.
Upon completion of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for both collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision-making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lecture and discussion.
- Numerous exercises and practice opportunities.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange a session.