Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control Training Course
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course crafted to equip participants with the skills needed to design and implement intuitive interfaces for human–robot communication. Blending theoretical insights, core design principles, and hands-on programming, the training empowers learners to build natural and responsive interaction systems leveraging speech, gestures, and shared control methodologies. Attendees will gain the expertise to integrate perception modules, engineer multimodal input systems, and develop robots capable of safe, effective collaboration with humans.
Delivered as an instructor-led, live session (available online or on-site), this programme is tailored for beginner to intermediate-level professionals seeking to design and deploy human–robot interaction systems that significantly enhance usability, safety, and the overall user experience.
Upon completion of this training, participants will be able to:
- Grasp the foundational concepts and design principles underpinning human–robot interaction.
- Engineer voice-based control and response mechanisms for robotic systems.
- Deploy gesture recognition capabilities using advanced computer vision techniques.
- Architect collaborative control systems that ensure safe and shared autonomy.
- Assess HRI systems through the lenses of usability, safety, and human factors.
Format of the Course
- Interactive lectures complemented by live demonstrations.
- Practical coding and design exercises.
- Real-world experiments conducted within simulation or actual robotic environments.
Course Customization Options
- To arrange a bespoke training session for this course, please contact us to discuss your specific requirements.
Course Outline
Introduction to Human-Robot Interaction
- Overview of HRI and its multidisciplinary nature
- Applications in industry, healthcare, and service robotics
- Human-centered design principles for interactive systems
Voice Interaction and Speech-Based Control
- Basics of speech recognition and natural language understanding
- Developing voice commands and responses using Python
- Integrating speech interfaces with ROS-based robots
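The command-dispatch step in this module can be sketched in plain Python. This is a minimal illustration, not the course codebase: it assumes speech-to-text has already produced a transcript (in practice that string would come from a speech recognition library or a ROS topic), and the action vocabulary is invented for the example.

```python
# Minimal sketch: dispatching transcribed voice commands to robot actions.
# Assumes upstream speech recognition has already produced a text transcript.

def parse_command(transcript: str) -> dict:
    """Map a transcribed utterance to a structured robot command."""
    words = transcript.lower().split()
    # Illustrative vocabulary; a real system would use NLU, not keyword spotting.
    actions = {"stop": "halt", "forward": "move", "back": "move",
               "left": "turn", "right": "turn"}
    for word in words:
        if word in actions:
            return {"action": actions[word], "argument": word}
    return {"action": "unknown", "argument": None}

print(parse_command("please move forward"))  # {'action': 'move', 'argument': 'forward'}
print(parse_command("turn left now"))        # {'action': 'turn', 'argument': 'left'}
```

In the ROS-integrated version covered in class, the returned dictionary would typically be published as a message for a motion controller to consume.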
Gesture Recognition and Nonverbal Communication
- Role of gestures and body language in human–robot communication
- Using computer vision for gesture detection and classification
- Implementing real-time gesture recognition with OpenCV and AI models
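The detection step behind many gesture recognisers can be sketched with frame differencing. The course works with OpenCV (`cv2.absdiff`, `cv2.threshold`); NumPy stands in here so the sketch runs without a camera or OpenCV installed, and the frame size, threshold, and motion fraction are illustrative.

```python
import numpy as np

# Sketch: flag a candidate "gesture" when enough pixels change between
# two consecutive grayscale frames (the core of frame-differencing detectors).

def motion_mask(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > thresh).astype(np.uint8)

def gesture_detected(prev, curr, min_fraction=0.05) -> bool:
    # Trigger when at least min_fraction of pixels changed.
    return bool(motion_mask(prev, curr).mean() >= min_fraction)

frame_a = np.zeros((120, 160), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[40:80, 60:100] = 200               # a moving hand-sized bright region
print(gesture_detected(frame_a, frame_b))  # True
```

Classification (which gesture it was) would then run on the masked region, for example with the AI models introduced later in this module.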
Collaborative and Shared Control
- Principles of human–robot collaboration and shared autonomy
- Safety frameworks for physical and cognitive interaction
- Integrating sensor feedback and adaptive control for cooperative tasks
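Shared autonomy is often introduced as a weighted blend of human and robot commands. A minimal sketch follows, with `alpha` as an assumed autonomy parameter; real systems adapt it online from confidence, context, or safety constraints.

```python
# Sketch of shared control: blend a human command with an autonomous one.
# alpha = 0 is fully manual, alpha = 1 is fully autonomous (illustrative).

def blend_command(human_cmd: float, robot_cmd: float, alpha: float) -> float:
    assert 0.0 <= alpha <= 1.0
    return (1 - alpha) * human_cmd + alpha * robot_cmd

# Human steers at 0.8 rad/s, the planner suggests 0.2 rad/s; 50/50 blend:
print(round(blend_command(0.8, 0.2, alpha=0.5), 2))  # 0.5
```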
Designing Multimodal Interaction Systems
- Combining voice, gesture, and visual feedback
- Managing context and user intent in multimodal systems
- Implementing a simple multimodal HRI prototype in simulation
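One common design for managing user intent across modalities is late fusion: each modality proposes intents with confidences, and per-intent scores are combined before choosing a winner. The intent names and weights below are invented for illustration.

```python
# Sketch of late fusion for multimodal intent resolution: voice and gesture
# each vote for intents with confidences; weighted scores pick the winner.

def fuse_intents(voice: dict, gesture: dict, w_voice=0.6, w_gesture=0.4) -> str:
    scores = {}
    for intent, conf in voice.items():
        scores[intent] = scores.get(intent, 0.0) + w_voice * conf
    for intent, conf in gesture.items():
        scores[intent] = scores.get(intent, 0.0) + w_gesture * conf
    return max(scores, key=scores.get)

voice_hyp = {"pick_up": 0.7, "stop": 0.2}
gesture_hyp = {"pick_up": 0.4, "wave": 0.9}
print(fuse_intents(voice_hyp, gesture_hyp))  # pick_up
```

Here the weaker gesture evidence for "pick_up" reinforces the voice hypothesis, so fusion outperforms either modality alone.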
Human Factors, Ethics, and Safety in HRI
- Human perception, trust, and acceptance in robotic systems
- Ethical considerations in collaborative robotics
- Evaluating usability and safety of interaction interfaces
Hands-on Project: Building a Voice and Gesture-Controlled Collaborative Robot
- Designing system architecture and defining interaction modes
- Implementing speech and gesture modules
- Integrating and testing the complete HRI prototype
Summary and Next Steps
Requirements
- Basic understanding of robotics concepts and Python programming
- Familiarity with human–machine interface or control systems
- Interest in interaction design, perception, or applied AI
Audience
- HRI researchers studying human–robot collaboration
- Product designers developing interactive or assistive robots
- Engineers exploring multimodal interaction and control systems
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Robotics is a field closely tied to artificial intelligence (AI) that focuses on the development and programming of intelligent and efficient machines.
This instructor-led live training session (either online or in-person) is designed for engineers who want to learn how to program and build robots using fundamental AI techniques.
By the end of this course, participants will be able to:
- Apply filters such as Kalman and particle filters to help a robot identify moving objects within its surroundings.
- Implement search algorithms and motion planning strategies.
- Use PID controls to manage a robot's movement in an environment.
- Utilize SLAM algorithms to allow a robot to map out unfamiliar environments.
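The PID item above can be illustrated with a minimal discrete controller driving a toy one-dimensional plant. The gains and plant model are illustrative, not tuned for any real robot.

```python
# Minimal discrete PID controller sketch driving a toy 1-D plant toward a
# setpoint. Gains and the plant dynamics are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.1, kd=0.05)
position = 0.0
for _ in range(50):
    control = pid.update(1.0 - position, dt=0.1)
    position += control * 0.1   # toy plant: velocity proportional to control
print(round(position, 2))       # close to the setpoint of 1.0
```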
Course Format
- Interactive lectures and discussions.
- A variety of exercises and hands-on practice sessions.
- Hands-on implementation in a live-lab setting.
Customization Options for the Course
- To request a tailored training session, please contact us to make arrangements.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training in the UAE (online or onsite), participants will learn the different technologies, frameworks and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 6-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in the UAE (online or onsite), participants will learn the different technologies, frameworks and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 4-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for applications in autonomous navigation.
- Implement SLAM algorithms to create maps and localize robots within them.
- Integrate sensors such as LiDAR and cameras with ROS 2 for enhanced perception.
- Simulate and test autonomous navigation scenarios using Gazebo.
- Deploy navigation stacks on physical robotic systems.
Format of the Course
- Interactive lectures and discussions to enhance understanding.
- Hands-on practice with ROS 2 tools and simulation environments to apply concepts practically.
- Live-lab implementation and testing on virtual or physical robots to ensure real-world applicability.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Developing Intelligent Bots with Azure
14 Hours
The Azure Bot Service leverages the capabilities of the Microsoft Bot Framework and Azure functions to facilitate the quick development of intelligent bots.
In this instructor-led live training session, participants will learn how to effortlessly create an intelligent bot using Microsoft Azure.
By the end of this training, participants will be able to:
- Master the basics of intelligent bots
- Create intelligent bots through cloud applications
- Utilize the Microsoft Bot Framework, Bot Builder SDK, and Azure Bot Service effectively
- Design bots using established bot patterns
- Develop their first intelligent bot with Microsoft Azure
Audience
- Software Developers
- Hobbyists
- Engineers
- IT Professionals
Course Format
- The course includes lectures, discussions, exercises, and extensive hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV serves as an open-source computer vision library that facilitates real-time image processing, while deep learning frameworks such as TensorFlow equip robotic systems with the necessary tools for intelligent perception and decision-making.
This instructor-led, live training, available either online or on-site, is designed for intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who aspire to leverage computer vision and deep learning techniques to enhance robotic perception and autonomy.
Upon completion of this training, participants will be equipped to:
- Develop computer vision pipelines using OpenCV.
- Integrate deep learning models for effective object detection and recognition.
- Utilise vision-based data to drive robotic control and navigation.
- Blend classical vision algorithms with deep neural networks.
- Deploy computer vision systems across embedded and robotic platforms.
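The classical half of such a pipeline can be illustrated with a Sobel gradient filter, the kind of operation `cv2.Sobel` performs. NumPy stands in here so the sketch runs without OpenCV, and the tiny test image is invented for the example.

```python
import numpy as np

# Sketch of a classical vision building block: horizontal-gradient (Sobel)
# filtering via a naive 2-D convolution, responding strongly at vertical edges.

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = np.abs(convolve2d(img, SOBEL_X))
print(edges.max())   # strongest response sits on the edge columns
```

In the blended pipelines this course covers, such gradient maps often feed classical detectors while a deep network handles semantic recognition.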
Course Format
- Interactive lectures followed by group discussions.
- Practical, hands-on sessions using OpenCV and TensorFlow.
- Live laboratory implementation on either simulated or physical robotic systems.
Course Customisation Options
- To arrange a customised training session for this course, please contact us to discuss your requirements.
Developing a Bot
14 Hours
A chatbot or bot is essentially a digital assistant designed to automate user interactions across various messaging platforms, enabling quicker task completion without human intervention.
This instructor-led live training will guide participants through the process of developing bots by creating sample chatbots using different development tools and frameworks.
By the end of this course, participants will be able to:
- Grasp the diverse uses and applications of bots
- Comprehend the entire bot development process
- Explore various tools and platforms utilized in building bots
- Create a sample chatbot for Facebook Messenger
- Develop a sample chatbot using Microsoft Bot Framework
Audience
- Developers keen on developing their own bot
Course Format
- A blend of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI empowers artificial intelligence models to operate directly on embedded or resource-constrained devices, thereby minimising latency and power consumption while enhancing autonomy and data privacy within robotic systems.
This instructor-led, live training, available either online or on-site, is designed for intermediate-level embedded developers and robotics engineers seeking to implement machine learning inference and optimisation techniques directly on robotic hardware using TinyML and edge AI frameworks.
Upon completing this training, participants will be able to:
- Grasp the core fundamentals of TinyML and edge AI as applied to robotics.
- Convert and deploy AI models for efficient on-device inference.
- Optimise models to achieve superior speed, reduced size, and improved energy efficiency.
- Integrate edge AI systems seamlessly into robotic control architectures.
- Evaluate system performance and accuracy within real-world operational scenarios.
Course Format
- Interactive lectures accompanied by in-depth discussions.
- Hands-on practice leveraging TinyML and edge AI toolchains.
- Practical exercises executed on embedded and robotic hardware platforms.
Course Customisation Options
- To arrange a customised version of this training tailored to your specific needs, please contact us for further arrangements.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at intermediate-level participants who wish to explore the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
By the end of this training, participants will be able to:
- Understand the principles of Human-Centric Physical AI and its applications.
- Explore the role of collaborative robots in enhancing workplace productivity.
- Identify and address challenges in human-machine interactions.
- Design workflows that optimize collaboration between humans and AI-driven systems.
- Promote a culture of innovation and adaptability in AI-integrated workplaces.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical course designed to bridge the gap between traditional industrial automation and modern robotics frameworks. Participants will gain the expertise to integrate ROS-based robotic systems with PLCs for seamless, synchronized operations, while exploring digital twin environments to simulate, monitor, and optimise production processes. The curriculum places a strong emphasis on interoperability, real-time control, and predictive analysis through the use of digital replicas of physical assets.
Delivered as an instructor-led, live training session (available online or onsite), this programme is tailored for intermediate-level professionals seeking to build hands-on skills in connecting ROS-controlled robots with PLC environments and deploying digital twins to drive automation and manufacturing efficiency.
Upon completion of this training, participants will be equipped to:
- Master the communication protocols that link ROS and PLC systems.
- Facilitate real-time data exchange between robotic units and industrial controllers.
- Develop digital twins for the purposes of monitoring, testing, and process simulation.
- Seamlessly integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Course Format
- Engaging lectures accompanied by architecture walkthroughs.
- Practical exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customisation Options
- To request a customised training session for this course, please contact us to make arrangements.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at engineers who wish to learn about the applicability of artificial intelligence to mechatronic systems.
By the end of this training, participants will be able to:
- Gain an overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the concepts of neural networks and different learning methods.
- Choose artificial intelligence approaches effectively for real-life problems.
- Implement AI applications in mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
The Multi-Robot Systems and Swarm Intelligence programme is an advanced training course dedicated to the design, coordination, and control of robotic teams, drawing inspiration from biological swarm behaviours. Participants will explore how to model interactions, execute distributed decision-making, and optimise collaboration across multiple agents. Blending theoretical foundations with practical simulation, the course prepares learners to deploy these capabilities in sectors such as logistics, defence, search and rescue, and autonomous exploration.
Delivered as an instructor-led, live session (either online or on-site), this training is tailored for senior-level professionals seeking to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
Upon completion of this training, participants will be equipped to:
- Grasp the core principles and dynamics underpinning swarm intelligence and cooperative robotics.
- Develop effective communication and coordination strategies for complex multi-robot systems.
- Deploy distributed decision-making frameworks and consensus algorithms.
- Simulate collective behaviours, including formation control, flocking, and area coverage.
- Leverage swarm-based techniques to address real-world scenarios and complex optimisation challenges.
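The consensus algorithms listed above can be sketched with the classic averaging update: each agent nudges its value toward its neighbours', and a connected team converges to a common value with no central controller. The ring topology, gain, and step count below are illustrative assumptions.

```python
# Sketch of distributed consensus: each agent averages toward its neighbours.
# With a connected graph and a small enough gain, all agents converge to the
# mean of the initial values (6.0 here) without any central coordinator.

def consensus_step(values, neighbours, epsilon=0.3):
    new = []
    for i, v in enumerate(values):
        new.append(v + epsilon * sum(values[j] - v for j in neighbours[i]))
    return new

values = [0.0, 4.0, 8.0, 12.0]                       # initial agent estimates
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # each agent talks to two
for _ in range(40):
    values = consensus_step(values, ring)
print([round(v, 2) for v in values])  # all agents near the average, 6.0
```

The same update pattern underlies formation control and flocking exercises: replace the scalar values with positions or headings and add goal and separation terms.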
Course Format
- In-depth lectures focusing on advanced algorithmic concepts.
- Practical coding and simulation exercises using ROS 2 and Gazebo.
- A collaborative project that applies swarm intelligence principles to a tangible use case.
Customisation Options
- For a customised training session tailored to your specific requirements, please contact us to arrange the details.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in the UAE (online or onsite) is aimed at advanced-level robotics engineers and AI researchers who wish to utilize Multimodal AI for integrating various sensory data to create more autonomous and efficient robots that can see, hear, and touch.
By the end of this training, participants will be able to:
- Implement multimodal sensing in robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Create robots that can perform complex tasks in dynamic environments.
- Address challenges in real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot is an Artificial Intelligence (AI) system capable of learning from its surroundings and experiences, thereby enhancing its capabilities. These robots can work alongside humans, learning from their behavior while performing both manual labor and cognitive tasks. In addition to physical robots, Smart Robots can also exist as software applications without any moving parts or direct interaction with the physical world.
This instructor-led live training will cover various technologies, frameworks, and techniques for programming different types of mechanical Smart Robots. Participants will apply this knowledge to complete their own Smart Robot projects.
The course is structured into four sections, each comprising three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical project to reinforce the acquired skills.
For this course, the target hardware will be simulated using 3D simulation software. Participants will use the ROS (Robot Operating System) open-source framework along with C++ and Python for programming the robots.
By the end of this training, participants will:
- Grasp the fundamental concepts in robotic technologies
- Manage the interaction between software and hardware within a robotic system
- Implement the software components that support Smart Robots
- Create and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans through voice commands
- Enhance a Smart Robot's ability to perform complex tasks using Deep Learning techniques
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- The course includes lectures, discussions, exercises, and extensive hands-on practice.
Note
- To tailor any aspect of this course (programming language, robot model, etc.), please contact us to make arrangements.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics refers to the integration of artificial intelligence into robotic systems to enhance perception, decision-making capabilities, and autonomous control.
This instructor-led, live training session (available online or on-site) is designed for advanced-level robotics engineers, systems integrators, and automation leads who aim to implement AI-driven perception, planning, and control within smart manufacturing environments across the UAE.
Upon completion of this training, participants will be able to:
- Comprehend and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms tailored for both collaborative and industrial robots.
- Deploy learning-based control strategies to facilitate real-time decision-making.
- Seamlessly integrate intelligent robotic systems into smart factory workflows.
Course Format
- Engaging lectures combined with interactive discussions.
- Extensive exercises and practical application.
- Hands-on implementation within a live-lab environment.
Course Customization Options
- To request a customized version of this training, please contact us to make the necessary arrangements.