Physical AI for Robotics and Automation Training Course
Physical AI merges artificial intelligence with robotics to build machines that can make autonomous decisions and interact with the real world.
This instructor-led, live training (available online or onsite) targets intermediate-level professionals looking to sharpen their skills in designing, programming, and deploying intelligent robotic systems for automation and other applications.
Upon completing this training, participants will be able to:
- Grasp the core principles of Physical AI and its uses in robotics and automation.
- Design and code intelligent robotic systems for dynamic settings.
- Apply AI models to enable autonomous decision-making in robots.
- Utilise simulation tools to test and optimise robotic performance.
- Tackle challenges like sensor fusion, real-time processing, and energy efficiency.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practice sessions.
- Hands-on implementation in a live laboratory environment.
Customisation Options
- For tailored training on this course, please contact us to discuss your requirements.
Course Outline
Introduction to Physical AI and Robotics
- Overview of Physical AI and its evolution
- Applications in industrial automation and beyond
- Key components of intelligent robotic systems
Robotics System Design
- Mechanical design principles for robots
- Integration of sensors and actuators
- Power systems and energy efficiency
AI Models for Robotics
- Using machine learning for perception and decision-making
- Reinforcement learning in robotics
- Building AI pipelines for robotic systems
Real-Time Sensor Integration
- Sensor fusion techniques
- Processing data from LiDAR, cameras, and other sensors
- Real-time navigation and obstacle avoidance
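As a small illustration of the sensor fusion topic above (not part of the course materials), the simplest fusion rule combines two noisy measurements of the same quantity by inverse-variance weighting; a minimal Python sketch, with the example readings invented for illustration:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same quantity by
    inverse-variance weighting; the fused estimate has lower
    variance than either input."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example: a precise LiDAR range and a noisier camera depth estimate.
# The result lies between the two, closer to the more precise sensor.
fused, var = fuse(2.02, 0.01, 2.30, 0.09)
```

The same weighting idea is the core of the Kalman filter update step used widely in robot state estimation.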
Simulation and Testing
- Using simulation tools such as Gazebo and MATLAB's Robotics System Toolbox
- Modeling dynamic environments
- Performance evaluation and optimization
Automation and Deployment
- Programming robots for industrial automation
- Developing workflows for repetitive tasks
- Ensuring safety and reliability in deployments
Advanced Topics and Future Trends
- Collaborative robots (cobots) and human-robot interaction
- Ethical and regulatory considerations in robotics
- The future of Physical AI in automation
Summary and Next Steps
Requirements
- Fundamental knowledge of robotics and automation systems.
- Proficiency in programming, preferably in Python.
- Familiarity with the basics of AI.
Target Audience
- Robotics engineers.
- Automation specialists.
- AI developers.
Open Training Courses require 5+ participants.
Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Artificial Intelligence (AI) for Robotics merges machine learning, control systems, and sensor fusion to engineer intelligent machines capable of autonomous perception, reasoning, and action. Leveraging contemporary tools such as ROS 2, TensorFlow, and OpenCV, engineers can now develop robots that navigate, plan, and interact with real-world environments in an intelligent manner.
This instructor-led live training (available online or onsite) is designed for intermediate-level engineers seeking to develop, train, and deploy AI-driven robotic systems using the latest open-source technologies and frameworks.
Upon completion of this training, participants will be able to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviors.
- Implement Kalman and Particle Filters for localization and tracking purposes.
- Apply computer vision techniques via OpenCV for perception and object detection.
- Employ TensorFlow for motion prediction and learning-based control mechanisms.
- Integrate SLAM (Simultaneous Localization and Mapping) to enable autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making capabilities.
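As a flavour of the localization objective above, a one-dimensional Kalman filter can be sketched in a few lines of plain Python (illustrative only; the course itself works with ROS 2 tooling, and the noise parameters here are arbitrary):

```python
def kalman_1d(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.
    x: state estimate, p: estimate variance,
    z: new measurement, q: process noise, r: measurement noise.
    q and r are illustrative values, not tuned for any real sensor."""
    # Predict: the state carries over, uncertainty grows
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Track a stationary target from noisy range readings
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98]:
    x, p = kalman_1d(x, p, z)
# The estimate settles near the true range of 1.0
```

Each measurement shrinks the variance `p`, so later readings are trusted less than the accumulated estimate.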
Course Format
- Interactive lectures and discussions.
- Hands-on implementation using ROS 2 and Python.
- Practical exercises involving simulated and real robotic environments.
Customization Options
To arrange a customized training session for this course, please contact us.
AI and Robotics for Nuclear - Extended
120 Hours
Through this instructor-led live training in Malaysia (online or onsite), attendees will explore the various technologies, frameworks, and techniques required to program diverse robotic systems for applications in nuclear technology and environmental management.
This intensive six-week programme runs five days a week. Each session spans four hours and includes lectures, group discussions, and practical robot development in a live laboratory setting. Participants will engage in real-world projects relevant to their professional roles to reinforce their newly acquired skills.
The hardware targeted in this course is simulated in 3D using specialized simulation software. Programming the robots will be conducted using the open-source ROS (Robot Operating System) framework, along with C++ and Python.
Upon completion of this training, participants will be equipped to:
- Grasp the fundamental concepts underpinning robotic technologies.
- Manage and comprehend the interface between software and hardware in robotic systems.
- Understand and deploy the software infrastructure that supports robotics.
- Construct and operate a simulated mechanical robot capable of visual detection, sensing, data processing, navigation, and voice-based human interaction.
- Comprehend the essential Artificial Intelligence components (such as machine learning and deep learning) required to build intelligent robots.
- Deploy Kalman and Particle filters to allow the robot to identify moving objects within its surroundings.
- Implement search algorithms and motion planning strategies.
- Utilize PID controls to regulate a robot's movement effectively within an environment.
- Apply SLAM algorithms to enable robots to map unknown environments.
- Enhance a robot's capacity to execute complex tasks via Deep Learning.
- Test and resolve issues with robots in realistic operational scenarios.
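To illustrate the PID control objective above, here is a minimal textbook PID loop in Python. This is a sketch, not course material: the gains, the time step, and the toy first-order plant are all invented for the example.

```python
class PID:
    """Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy plant toward a target speed of 1.0 m/s
pid = PID(kp=2.0, ki=0.5, kd=0.1)
speed = 0.0
for _ in range(200):
    u = pid.step(1.0, speed, dt=0.05)
    speed += u * 0.05  # simplistic plant: velocity follows control directly
```

The proportional term reacts to the current error, the integral removes steady-state offset, and the derivative damps overshoot.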
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in Malaysia (online or onsite), participants will learn the different technologies, frameworks, and techniques for programming different types of robots to be used in the field of nuclear technology and environmental systems.
The 4-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The code will then be loaded onto physical hardware (Arduino or other) for final deployment testing. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
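As a taste of grid-based navigation (a deliberately simplified stand-in for the ROS 2 navigation stack, with a made-up occupancy grid), breadth-first search finds a shortest path around obstacles:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parents back to the start, then reverse
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))  # routes around the blocked middle row
```

Production planners use costmaps and algorithms like A*, but the occupancy-grid idea is the same.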
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service integrates the Microsoft Bot Framework with Azure Functions, offering a robust platform for rapidly creating intelligent bots.
Through this instructor-led live training, participants will learn how to efficiently develop smart bots using Microsoft Azure.
Upon completion of the training, participants will be able to:
- Grasp the fundamental concepts underlying intelligent bots.
- Develop intelligent bots using cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in practical scenarios.
- Create and deploy their first intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals with an interest in bot development.
Course Format
The training blends lectures and discussions with exercises, placing a strong emphasis on hands-on practice.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV, an open-source library for computer vision, facilitates real-time image processing, while deep learning frameworks like TensorFlow equip robotic systems with the capabilities for intelligent perception and decision-making.
This instructor-led training, available both online and onsite, is designed for robotics engineers, computer vision specialists, and machine learning engineers at an intermediate level who aim to leverage computer vision and deep learning techniques to enhance robotic perception and autonomy.
Upon completing this training, participants will be capable of:
- Building computer vision pipelines using OpenCV.
- Incorporating deep learning models for object detection and recognition.
- Utilizing vision-based data for robotic control and navigation.
- Merging classical vision algorithms with deep neural networks.
- Deploying computer vision solutions on robotic and embedded platforms.
Course Format
- Interactive lectures and discussions.
- Practical exercises using OpenCV and TensorFlow.
- Live-lab implementation on physical or simulated robotic systems.
Course Customization Options
- To arrange customized training for this course, please contact us directly.
Developing a Bot
14 Hours
A bot, or chatbot, acts as a digital assistant designed to automate user interactions across various messaging platforms, enabling tasks to be completed more swiftly without requiring human intervention.
In this instructor-led live training, participants will gain hands-on experience in bot development by building sample chatbots using established development tools and frameworks.
By the conclusion of this training, participants will be able to:
- Comprehend the diverse uses and applications of bots
- Grasp the end-to-end process of bot development
- Explore the array of tools and platforms available for constructing bots
- Develop a sample chatbot for Facebook Messenger
- Construct a sample chatbot leveraging the Microsoft Bot Framework
Audience
- Developers keen on creating their own bot solutions
Format of the course
- A blend of lectures, discussions, exercises, and intensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI allows artificial intelligence models to operate directly on embedded or resource-constrained devices, thereby minimising latency and power usage while enhancing autonomy and privacy within robotic systems.
This instructor-led live training, available both online and onsite, targets intermediate-level embedded developers and robotics engineers seeking to apply machine learning inference and optimisation techniques directly onto robotic hardware via TinyML and edge AI frameworks.
Upon completing this training, participants will be equipped to:
- Grasp the core principles of TinyML and edge AI for robotics.
- Convert and deploy AI models for on-device inference.
- Optimise models for improved speed, compactness, and energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Assess performance and accuracy in real-world scenarios.
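As an illustration of the model-optimisation objective above, the core of post-training int8 quantization, sketched here in plain Python rather than an actual TinyML toolchain, maps floating-point weights to an 8-bit range via a scale and zero point:

```python
def quantize_int8(weights):
    """Affine (asymmetric) int8 quantization of a list of floats:
    q = round(w / scale) + zero_point. This is the core idea behind
    most post-training quantization schemes, reduced to its essentials."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant weights
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)  # each value within one scale step of the original
```

Quantization cuts model size by roughly 4x versus float32 and enables integer-only inference on microcontrollers, at the cost of the small rounding error seen above.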
Format of the Course
- Interactive lectures and discussions.
- Practical exercises utilising TinyML and edge AI toolchains.
- Hands-on application on embedded and robotic hardware platforms.
Course Customization Options
- For those seeking a tailored version of this course, please reach out to us to arrange it.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led live training in Malaysia (online or onsite) is designed for intermediate-level participants who wish to explore the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
By the end of this training, participants will be able to:
- Understand the principles of Human-Centric Physical AI and its applications.
- Explore the role of collaborative robots in enhancing workplace productivity.
- Identify and address challenges in human-machine interactions.
- Design workflows that optimize collaboration between humans and AI-driven systems.
- Promote a culture of innovation and adaptability in AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a practical course aimed at introducing participants to the design and implementation of intuitive interfaces for human–robot communication. The training blends theoretical concepts, design principles, and programming practice to construct natural and responsive interaction systems utilising speech, gesture, and shared control techniques. Participants will gain insights into integrating perception modules, developing multimodal input systems, and designing robots that collaborate safely with humans.
This instructor-led, live training (available online or onsite) is tailored for beginner to intermediate-level participants keen on designing and implementing human–robot interaction systems that enhance usability, safety, and user experience.
Upon completing this training, participants will be able to:
- Grasp the fundamentals and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: Integrating ROS with PLCs and Digital Twins is a practical course designed to bridge the gap between industrial automation and modern robotics frameworks. Participants will acquire the skills to integrate ROS-based robotic systems with Programmable Logic Controllers (PLCs) for synchronized operations, while also exploring digital twin environments to simulate, monitor, and optimize production processes. The curriculum places a strong emphasis on interoperability, real-time control, and predictive analysis by utilizing digital replicas of physical systems.
This instructor-led, live training, available online or on-site, is tailored for intermediate-level professionals seeking to develop practical expertise in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completion of this training, participants will be able to:
- Understand the communication protocols governing interaction between ROS and PLC systems.
- Implement real-time data exchange mechanisms between robots and industrial controllers.
- Develop digital twins for monitoring, testing, and simulating processes.
- Integrate sensors, actuators, and robotic manipulators into industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Format of the Course
- Interactive lectures and architecture walkthroughs.
- Hands-on exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request customized training for this course, please contact us to make arrangements.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training in Malaysia (available online or onsite) is aimed at engineers who wish to understand the applicability of artificial intelligence to mechatronic systems.
By the end of this training, participants will be able to:
- Gain an overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the concepts of neural networks and different learning methods.
- Choose artificial intelligence approaches effectively for real-life problems.
- Implement AI applications in mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training programme that delves into the design, coordination, and control of robotic teams, drawing inspiration from biological swarm behaviours. Participants will gain insights into modelling interactions, implementing distributed decision-making processes, and optimising collaboration across multiple agents. The course integrates theoretical concepts with practical simulation exercises to equip learners with the skills needed for applications in logistics, defence, search and rescue operations, and autonomous exploration.
This instructor-led, live training session (available online or onsite) is tailored for advanced-level professionals who aim to design, simulate, and deploy multi-robot and swarm-based systems using open-source frameworks and algorithms.
Upon completion of this training, participants will be able to:
- Grasp the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviours such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization challenges.
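To give a flavour of the distributed decision-making topic above, here is a minimal average-consensus sketch in plain Python. It assumes synchronous updates and a fixed ring topology chosen for the example; real deployments would exchange values over ROS 2 messaging.

```python
def consensus_step(values, neighbors, eps=0.3):
    """One round of a distributed average-consensus update:
    each agent nudges its value toward those of its neighbours.
    eps must be small enough for the network (here, below 0.5)."""
    new = list(values)
    for i, ns in neighbors.items():
        new[i] = values[i] + eps * sum(values[j] - values[i] for j in ns)
    return new

# Four agents on a ring, starting with different heading estimates
values = [0.0, 2.0, 4.0, 6.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    values = consensus_step(values, neighbors)
# All agents converge to the initial average (3.0)
```

No agent ever sees the whole swarm; agreement emerges purely from local exchanges, which is the essence of swarm coordination.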
Course Format
- Advanced lectures featuring in-depth algorithmic analysis.
- Practical coding and simulation work in ROS 2 and Gazebo.
- Collaborative project applying swarm intelligence principles.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Smart Robots for Developers
84 Hours
A Smart Robot is an Artificial Intelligence (AI) system capable of learning from its surroundings and experiences, thereby enhancing its capabilities based on acquired knowledge. These robots can collaborate with humans, working alongside them and learning from their behavior. Beyond physical tasks, they are equipped to handle cognitive duties. Smart Robots can also exist purely as software applications on a computer, with no moving parts and no physical interaction with the world.
Through this instructor-led live training, participants will explore the various technologies, frameworks, and techniques used to program different types of mechanical Smart Robots, applying this knowledge to complete their own Smart Robot projects.
The course is structured into 4 sections, each comprising three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical hands-on project, allowing participants to practice and demonstrate their acquired knowledge.
The target hardware for this course will be simulated in 3D using simulation software. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be utilized for programming the robots.
Upon completing this training, participants will be able to:
- Grasp the key concepts underpinning robotic technologies
- Understand and manage the interaction between software and hardware in a robotic system
- Understand and implement the software components that form the foundation of Smart Robots
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice
- Enhance a Smart Robot's ability to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- A mix of lectures, discussions, exercises, and extensive hands-on practice
Note
- To customize any part of this course (such as programming language, robot model, etc.), please contact us to make arrangements.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves the integration of artificial intelligence into robotic systems to enhance perception, decision-making capabilities, and autonomous control.
This instructor-led live training, available either online or onsite, is designed for advanced-level robotics engineers, systems integrators, and automation leads who aim to implement AI-driven perception, planning, and control within smart manufacturing environments.
Upon completion of this training, participants will be equipped to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms suitable for both collaborative and industrial robots.
- Deploy learning-based control strategies to facilitate real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Format of the Course
- Interactive lectures and discussions.
- Numerous exercises and practice sessions.
- Hands-on implementation within a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange the details.