A hands-on, project-based introduction to the fundamentals of artificial intelligence
Applied AI and Robotics with NVIDIA Jetson Nano provides a hands-on, project-based introduction to the fundamentals of artificial intelligence, machine learning, perception, and autonomy within real-world contexts.
This course uses a scaffolded series of interactive labs on the NVIDIA Jetson Nano-based JetBot platform to cover core AI and ML techniques – Classification, Regression, Transfer Learning, and Reinforcement Learning – applied to robotic tasks such as Collision Avoidance, Path Following, and Autonomous Racing. Learners gain a generalizable understanding of how AI enables robots to perceive and autonomously interact with their environment.
Autonomous systems are composed of hardware and software that enable machines to operate independently. In this unit, participants will configure their JetBot, including software setup, network requirements, assembly, and initial operation.
Autonomous systems like the JetBot can be configured to navigate using pre-programmed routines, operator teleoperation, or a blend of both. This unit guides participants through motion control, precise navigation techniques, and teleoperation.
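As a point of reference for the motion-control lessons, here is a minimal sketch of a pre-programmed driving routine. It assumes the jetbot Python package that ships on the JetBot software image, whose Robot class (with forward, left, and stop helpers) is used throughout the standard JetBot notebooks; the speeds and timings are illustrative and depend on the specific build and driving surface.

```python
import time

from jetbot import Robot  # provided by the JetBot software image

robot = Robot()

def drive_square(speed=0.3, edge_s=1.0, turn_s=0.5):
    """Drive an approximate square with timed, open-loop commands."""
    for _ in range(4):
        robot.forward(speed)   # both motors forward
        time.sleep(edge_s)     # length of one edge, in seconds
        robot.left(speed)      # spin in place
        time.sleep(turn_s)     # tune until the turn is roughly 90 degrees
    robot.stop()               # always release the motors when done

try:
    drive_square()
finally:
    robot.stop()
```

Because the commands are timed and open loop, the square drifts with battery level and wheel slip, which is the limitation that the precise-navigation and teleoperation lessons address.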
To prepare to navigate unknown environments, autonomous systems are often trained on data collected from known environments. This unit emphasizes the importance of data collection and labeling for applications like Collision Avoidance and Path Following. Participants will apply supervised learning techniques, using Classification to detect obstacles and Regression to predict the path.
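As an illustration of the classification half of that workflow, the sketch below fine-tunes a pretrained torchvision backbone into a two-class blocked/free detector, the usual transfer-learning setup for collision avoidance. The dataset layout (an ImageFolder with blocked/ and free/ subdirectories) and the hyperparameters are illustrative assumptions, not the course's exact recipe.

```python
import torch
import torchvision
from torchvision import datasets, transforms

# Transfer learning: reuse a pretrained backbone, replace only the classifier head.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")   # older torchvision: pretrained=True
model.fc = torch.nn.Linear(model.fc.in_features, 2)            # two classes: blocked, free

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: dataset/blocked/*.jpg and dataset/free/*.jpg
train_set = datasets.ImageFolder("dataset", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "collision_avoidance.pth")
```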
Introduction: Cargo Unmanned Ground Vehicle
Robot and Environment Configuration
Introduction to Collision Avoidance
Collision Avoidance: Data Collection
Collision Avoidance: Model Training
Collision Avoidance: Model Optimization
Mini-Challenge: Figurine Safety
Introduction to Path Following
Robot and Environment Configuration
Path Following: Data Collection
Path Following: Model Training
Path Following: Model Optimization
Unit Challenge: UGV Challenge
Unit Quiz: Collision Avoidance and Path Following
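The path-following lessons above frame steering as a regression problem rather than a classification one: instead of a discrete blocked/free label, the network predicts a continuous target point in the image for the robot to steer toward. Below is a minimal sketch of that change, assuming the same kind of pretrained backbone; the proportional steering rule and its gain are illustrative.

```python
import torch
import torchvision

# The architecture can be identical to the classifier; what changes is the meaning of
# the two outputs (normalized x, y of the steering target), the labels, and the loss.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)

def train_step(model, optimizer, images, targets):
    """One regression step: targets are (x, y) coordinates in [-1, 1]."""
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(images), targets)  # MSE instead of cross-entropy
    loss.backward()
    optimizer.step()
    return loss.item()

def steering_from_prediction(xy, gain=0.6):
    """Illustrative proportional steering: positive x means the path lies to the right."""
    x, _y = xy
    return gain * float(x)   # turn command for a differential-drive robot
```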
Reinforcement Learning is a type of machine learning in which an agent learns to make decisions through trial and error. In Autonomous Racing, the robot improves its driving in exactly this way as it laps the track. In this unit, participants collect data from the track, train and visualize the base model, and then provide feedback to the robot as it learns to race.
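As a generic illustration of that trial-and-error loop (not the course's specific training pipeline), the sketch below shows a tabular Q-learning update for a coarsely discretized steering decision; the states, actions, and reward are illustrative assumptions.

```python
import numpy as np

# Illustrative discretization: where the track appears in the image (left, center, right)
# and three steering actions (steer left, go straight, steer right).
N_STATES, N_ACTIONS = 3, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

q_table = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best-known action, occasionally explore."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    """Standard Q-learning update from one step of experience on the track."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (reward + GAMMA * best_next - q_table[state, action])

# One step of experience: the track was on the left (state 0), the robot steered left
# (action 0), stayed on the track (reward +1), and ended up centered (state 1).
action = choose_action(0)
update(0, action, reward=1.0, next_state=1)
```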
Introduction: Boss Autonomous Vehicle
Autonomous Racing and Reinforcement Learning
Robot and Environment Configuration
Autonomous Racing: Data Collection
Autonomous Racing: Model Training
Autonomous Racing: Real-Time Visualization
Autonomous Racing: Reinforcement Learning
Unit Challenge: JetBot Race!
Unit Quiz: Autonomous Racing
AprilTags are fiducial markers that allow a robot to determine its precise position (localization) and orientation (pose estimation) for accurate navigation. In this unit, participants calibrate the camera to improve AprilTag detection accuracy and use ROS (Robot Operating System) to perform waypoint navigation with the AprilTag markers.
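Here is a minimal sketch of the two pieces this unit combines, camera calibration and AprilTag pose estimation: the chessboard calibration uses the standard OpenCV calls, while the tag detection assumes the pupil-apriltags package and an illustrative tag size. A complete solution would publish the resulting poses into ROS for waypoint navigation rather than printing them.

```python
import cv2
import numpy as np
from pupil_apriltags import Detector   # assumed detector package; ROS wrappers also exist

# --- 1. Camera calibration from chessboard images (standard OpenCV workflow) ---
pattern = (9, 6)                                     # inner corners of the printed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in ["calib_0.jpg", "calib_1.jpg"]:          # illustrative file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

_, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# --- 2. AprilTag detection and pose estimation using the calibrated intrinsics ---
fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]

detector = Detector(families="tag36h11")
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
for tag in detector.detect(frame, estimate_tag_pose=True,
                           camera_params=(fx, fy, cx, cy), tag_size=0.06):
    # pose_t is the tag's position relative to the camera, in meters
    print(tag.tag_id, tag.pose_t.ravel())
```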