From JetPack setup to real-time neural-network inference — run YOLO, segmentation models, and ROS 2 nodes on the NVIDIA Jetson Nano.

Coming Soon — This track is under development and requires Jetson Nano hardware (acquisition planned). Unit pages will be added once the hardware is available.

Prerequisites

What You Will Build

By the end of this track, students will have:

  • A working JetPack development environment with a verified CUDA/GPU setup
  • A real-time object detection pipeline (YOLOv8 via TensorRT) running at ≥ 15 FPS
  • A ROS 2 perception node publishing detected object poses to a robot platform
  • A benchmark comparison: ESP32 TFLite micro vs. Jetson Nano TensorRT for the same task
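The benchmark comparison above reduces to timing the same inference task on each backend. A minimal sketch of such a timing harness is shown below; `run_inference` is a hypothetical stand-in for a real model call (a TFLite Micro invocation on the ESP32, or a TensorRT execution on the Jetson), not part of this course's materials.

```python
import time

def benchmark(run_inference, frames=100, warmup=10):
    """Return (mean latency in ms, throughput in FPS) for a callable."""
    for _ in range(warmup):
        run_inference()            # warm-up iterations, excluded from timing
    start = time.perf_counter()
    for _ in range(frames):
        run_inference()
    elapsed = time.perf_counter() - start
    mean_ms = elapsed / frames * 1e3
    return mean_ms, frames / elapsed

def dummy_model():
    """Placeholder for a real inference call (e.g., a TensorRT execute)."""
    sum(i * i for i in range(10_000))  # simulated compute load

if __name__ == "__main__":
    ms, fps = benchmark(dummy_model)
    print(f"latency: {ms:.2f} ms   throughput: {fps:.1f} FPS")
```

The warm-up loop matters on the Jetson in particular, since the first few GPU inferences pay one-time initialization costs that would otherwise skew the mean.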

Planned Unit Outline

| Unit | Title | Focus |
|------|-------|-------|
| 01 | JetPack Setup & GPIO | OS install, CUDA verify, LED/sensor via Jetson.GPIO |
| 02 | Python Computer Vision | OpenCV, camera capture, frame processing |
| 03 | TFLite on Jetson CPU | Run a MobileNet model, measure latency |
| 04 | TensorRT Optimization | Convert TFLite → TensorRT, GPU acceleration |
| 05 | YOLOv8 Real-Time Detection | Object detection pipeline at camera frame rate |
| 06 | ROS 2 on Jetson Nano | Install ROS 2 Humble, publish/subscribe basics |
| 07 | Perception Node | Publish detection results as ROS 2 topics |
| 08 | Capstone Project | Mobile robot with vision-guided navigation |
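For the perception-node unit, a design decision worth settling before touching ROS 2 is the shape of the detection payload the node will publish. The sketch below is illustrative only (it is not the course's actual message definition): it models a `vision_msgs/Detection2D`-style record in plain Python, which while prototyping can be serialized to JSON and published on a `std_msgs/String` topic before switching to typed messages.

```python
# Hypothetical detection record for a ROS 2 perception node prototype.
# Field names mirror the information a vision_msgs/Detection2D carries;
# they are assumptions for illustration, not a published API.
import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    label: str    # class name from the YOLO model
    score: float  # confidence in [0, 1]
    cx: float     # bounding-box center x (pixels)
    cy: float     # bounding-box center y (pixels)
    w: float      # bounding-box width (pixels)
    h: float      # bounding-box height (pixels)

def to_message(detections):
    """Serialize detections to JSON for a std_msgs/String topic."""
    return json.dumps([asdict(d) for d in detections])

msg = to_message([Detection("person", 0.91, 320.0, 240.0, 80.0, 160.0)])
print(msg)
```

Moving from this JSON shortcut to real `vision_msgs` types later is mostly mechanical, since the fields map one-to-one.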