Thursday, May 2, 2024

NVIDIA Jetson with LIDAR

Hardware

  • NVIDIA Jetson (we use the NVIDIA Jetson Nano 4GB Developer Kit)
  • RPLIDAR A1M8R6 (360 Degree Laser Scanner Development Kit)


What is RPlidar?

RPlidar is a brand of inexpensive and lightweight laser range finders that use a rotating laser scanner to measure distances and create a 2D point cloud of the environment. It is commonly used in robotics, drones, and other autonomous systems for obstacle detection, mapping, and navigation. RPlidar provides accurate and reliable distance measurements with a wide field of view and a long range, making it a popular choice for many robotics enthusiasts and professionals.

The RPLIDAR A1M8R6 - 360 Degree Laser Scanner Development Kit is a low-cost 2D LIDAR solution developed by the RoboPeak team. It can scan a 360° environment within a 12-meter radius. The output of RPLIDAR is well suited to building maps, doing SLAM, or building 3D models. RPLIDAR A1's scanning frequency reaches 5.5 Hz when sampling 360 points per round, and it can be configured up to a 10 Hz maximum. RPLIDAR A1 is essentially a laser triangulation measurement system. It works well in all kinds of indoor environments, and in outdoor environments without direct sunlight.
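Each RPLIDAR sample is an (angle, distance) pair, so turning a scan into a 2D point cloud is just a polar-to-Cartesian conversion. A minimal sketch in plain Python (the function name is ours for illustration, not part of the SDK):

```python
import math

def polar_to_cartesian(angle_deg, distance_mm):
    """Convert one RPLIDAR sample (angle in degrees, distance in mm)
    to an (x, y) point in millimeters."""
    theta = math.radians(angle_deg)
    return (distance_mm * math.cos(theta), distance_mm * math.sin(theta))

# A full 360-degree scan becomes a list of points:
scan = [(0.0, 1000), (90.0, 2000), (180.0, 1500)]  # (deg, mm) samples
cloud = [polar_to_cartesian(a, d) for a, d in scan]
```

Accumulating these points over many rotations is the raw material for the mapping and SLAM uses mentioned above.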

System Diagram


Connect a Micro USB cable from the NVIDIA Jetson to the RPLIDAR.
The RPLIDAR will begin spinning and transmitting data.


Install RPLidar SDK

The RPLIDAR works with all NVIDIA Jetson models.
A Linux kernel driver called CP210x must be installed on the Jetson.
The CP210x driver handles serial communication with the RPLIDAR over USB.
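Over that serial link the host speaks a simple binary request protocol: each request begins with the 0xA5 start flag followed by a command byte (per Slamtec's public protocol documentation). The helper and command table below are our illustration, not part of the SDK:

```python
# Command bytes from Slamtec's RPLIDAR protocol documentation.
START_FLAG = 0xA5
CMD_STOP       = 0x25  # stop scanning
CMD_RESET      = 0x40  # reboot the device
CMD_SCAN       = 0x20  # start a standard scan
CMD_GET_INFO   = 0x50  # request device info
CMD_GET_HEALTH = 0x52  # request health status

def build_request(cmd):
    """Build a payload-less RPLIDAR request packet: start flag + command byte."""
    return bytes([START_FLAG, cmd])

# e.g. the two bytes written to /dev/ttyUSB0 to start a scan:
scan_request = build_request(CMD_SCAN)
```

In practice the SDK (installed below) builds and parses these packets for you; the sketch only shows what travels over the CP210x serial bridge.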

Check Serial to USB Port


$ lsusb             

$ usb-devices  

Product=CP2102 USB to UART Bridge Controller
If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=cp210x

RPLidar SDK

$ git clone https://github.com/Slamtec/rplidar_sdk
$ cd rplidar_sdk
$ make


This downloads the SDK and builds the libraries and examples. On the Jetson,
the output will be in rplidar_sdk/output/Linux/Release.

Check Baud Rate Communication

$ cd output/Linux/Release    
$ ./custom_baudrate /dev/ttyUSB0 115200  



To run the ultra_simple demo.

$ ./ultra_simple  --channel --serial /dev/ttyUSB0 115200  


To run the simple grabber demo.

$ ./simple_grabber  --channel --serial /dev/ttyUSB0 115200  



Install rplidar_ros for ROS


sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt update

Install ros-melodic

sudo apt install ros-melodic-desktop
echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Check the version

rosversion -d

Create the catkin root and source folders

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src

Clone the GitHub repository of the RPLIDAR ROS package

git clone https://github.com/robopeak/rplidar_ros.git
cd ..

Run catkin_make to compile your catkin workspace

catkin_make

Source the environment in your current terminal.

source devel/setup.bash

Launch the RPlidar Node

roslaunch rplidar_ros view_rplidar.launch
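Once the node is running, other ROS nodes can subscribe to the /scan topic (sensor_msgs/LaserScan) and work with its ranges array. A minimal sketch of the kind of processing a subscriber callback might do, written as a plain Python function so it runs standalone (the function name is ours; in a real node it would be called from a rospy LaserScan callback):

```python
import math

def nearest_obstacle(ranges, angle_min, angle_increment):
    """Return (distance, angle_rad) of the closest finite reading in a
    LaserScan-style ranges array, or None if every beam is invalid."""
    best = None
    for i, r in enumerate(ranges):
        if math.isfinite(r) and r > 0 and (best is None or r < best[0]):
            best = (r, angle_min + i * angle_increment)
    return best

# e.g. a toy 4-beam scan covering 0 .. 3*pi/2 radians:
print(nearest_obstacle([2.0, 0.5, float('inf'), 1.2], 0.0, math.pi / 2))
```

The angle_min and angle_increment values come straight from the LaserScan message, so the same logic drops into a callback unchanged.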


Reference

Getting Started with the Low-Cost RPLIDAR Using NVIDIA Jetson Nano

https://collabnix.com/getting-started-with-the-low-cost-rplidar-using-jetson/






Adun Nantakaew อดุลย์ นันทะแก้ว 081-6452400
LINE : adunnan





Tuesday, January 9, 2024

NVIDIA Jetson YOLOv8 Object Tracking


YOLOv8 from Ultralytics

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.


Regions Counting Using YOLOv8 (Inference on Video)

  • Region counting is a method employed to tally the objects within a specified area, allowing for more sophisticated analyses when multiple regions are considered. These regions can be adjusted interactively using a Left Mouse Click, and the counting process occurs in real time.
  • Regions can be adjusted to suit the user's preferences and requirements.
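Region counting comes down to testing whether each detection's center point falls inside a user-drawn polygon. Below is a sketch of the standard ray-casting point-in-polygon test in plain Python; the function names are ours for illustration, not the Ultralytics implementation:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def count_in_region(centers, polygon):
    """Count detection centers (e.g. YOLO box centers) inside the region."""
    return sum(point_in_polygon(x, y, polygon) for x, y in centers)

region = [(0, 0), (100, 0), (100, 100), (0, 100)]  # a square region
print(count_in_region([(50, 50), (150, 50), (10, 90)], region))  # two inside
```

Dragging a region with the mouse just replaces the polygon's vertices; the per-frame count is this test applied to every tracked box center.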


Source Code


Install Ultralytics on NVIDIA Jetson


Run Python Demo Code







Thursday, September 14, 2023

NVIDIA Jetson YOLOv8 Object Detection








YOLOv8 from Ultralytics

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.

YOLO: A Brief History

YOLO (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.

  • YOLOv2, released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters.
  • YOLOv3, launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors and spatial pyramid pooling.
  • YOLOv4 was released in 2020, introducing innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function.
  • YOLOv5 further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats.
  • YOLOv6 was open-sourced by Meituan in 2022 and is in use in many of the company's autonomous delivery robots.
  • YOLOv7 added additional tasks such as pose estimation on the COCO keypoints dataset.
  • YOLOv8 is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking, and classification. This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.

Models

YOLOv8 Detect, Segment, and Pose models pretrained on the COCO dataset are available here, as well as YOLOv8 Classify models pretrained on the ImageNet dataset. Track mode is available for all Detect, Segment, and Pose models.

All models download automatically from the latest Ultralytics release on first use.














YOLOv8 on NVIDIA Jetson Nano



Test on NVIDIA Jetson Nano

Detect Image with PyTorch Model = 108.3 ms
Detect Image with TensorRT Model = 75.8 ms








Install Torch and TorchVision

Uninstall the old versions first (if you have them).

jetson@nano:~$ sudo pip3 uninstall torch torchvision scipy pandas urllib3 -y


jetson@nano:~$ sudo pip3 install scipy pandas urllib3



For Torch

Wheel Method

# download the wheel

$ gdown https://drive.google.com/uc?id=1TqC6_2cwqiYacjoLhLgrZoap6-sVL2sd


# install PyTorch 1.10.0

$ sudo -H pip3 install torch-1.10.0a0+git36449ea-cp36-cp36m-linux_aarch64.whl


From Source

jetson@nano:~$ sudo apt-get install -y libopenblas-base libopenmpi-dev


jetson@nano:~$ wget https://nvidia.box.com/shared/static/fjtbno0vpo676a25cgvuqc1wty0fkkg6.whl -O torch-1.10.0-cp36-cp36m-linux_aarch64.whl


jetson@nano:~$ sudo -H pip3 install torch-1.10.0-cp36-cp36m-linux_aarch64.whl




For TorchVision

# download TorchVision 0.11.0

$ gdown https://drive.google.com/uc?id=1C7y6VSIBkmL2RQnVy8xF9cAnrrpJiJ-K

# install TorchVision 0.11.0

$ sudo -H pip3 install torchvision-0.11.0a0+fa347eb-cp36-cp36m-linux_aarch64.whl



jetson@nano:~$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev


jetson@nano:~$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev


jetson@nano:~$ gdown https://drive.google.com/uc?id=1C7y6VSIBkmL2RQnVy8xF9cAnrrpJiJ-K


jetson@nano:~$ sudo -H pip3 install torchvision-0.11.0a0+fa347eb-cp36-cp36m-linux_aarch64.whl


*** If these wheels do not work for you, try installing from source.

From Scratch

sudo apt install -y libjpeg-dev zlib1g-dev

git clone --branch v0.11.1 https://github.com/pytorch/vision torchvision

cd torchvision

sudo python3 setup.py install



Test your Torch Library

jetson@nano:~$ python3

Python 3.6.9 (default, Mar 15 2022, 13:55:28) 

[GCC 8.4.0] on linux

Type "help", "copyright", "credits" or "license" for more information.

>>> import torch

>>> torch.cuda.is_available()

True

>>> 










Install ultralytics from Source

jetson@nano:~$ git clone https://github.com/amphancm/ultralytics.git


jetson@nano:~$ cd ultralytics/


jetson@nano:~/ultralytics$ sudo python3 setup.py install



Using /usr/local/lib/python3.6/dist-packages

Finished processing dependencies for ultralytics==8.0.51


We use ultralytics version 8.0.51.


Run Object Detection with the PyTorch Model (model.pt)



jetson@nano:~$ mkdir YOLOv8

jetson@nano:~$ cd YOLOv8


jetson@nano:~/YOLOv8$ yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg' show=True


Downloading https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt to yolov8n.pt...

100%|███████████████████████████████████████| 6.23M/6.23M [00:00<00:00, 10.1MB/s]

Unable to init server: Could not connect: Connection refused

WARNING ⚠️ Environment does not support cv2.imshow() or PIL Image.show()

OpenCV(4.6.0) /home/jetson/opencv/modules/highgui/src/window_gtk.cpp:635: error: (-2:Unspecified error) Can't initialize GTK backend in function 'cvInitSystem'

Ultralytics YOLOv8.0.51 🚀 Python-3.6.9 torch-1.10.0 CUDA:0 (NVIDIA Tegra X1, 3963MiB)

YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs

Downloading https://ultralytics.com/images/bus.jpg to bus.jpg...

100%|█████████████████████████████████████████| 476k/476k [00:00<00:00, 16.1MB/s]

image 1/1 /home/jetson/YOLOv8/bus.jpg: 640x480 4 persons, 1 bus, 1 stop sign, 89.7ms

Speed: 25.0ms preprocess, 89.7ms inference, 273.7ms postprocess per image at shape (1, 3, 640, 640)

jetson@nano:~/YOLOv8



Convert the Model to TensorRT (engine file) for better performance





Convert PT (PyTorch) to engine (TensorRT)


jetson@nano:~$ yolo export model=yolov8n.pt format=engine half=True device=0



TensorRT: export success ✅ 350.0s, saved as yolov8n.engine (12.8 MB)

Export complete (374.0s)

Results saved to /home/jetson/YOLOv8

Predict:         yolo predict task=detect model=yolov8n.engine imgsz=640 

Validate:        yolo val task=detect model=yolov8n.engine imgsz=640 data=coco.yaml



Speed: 25.0ms preprocess, 89.7ms inference, 273.7ms postprocess per image at shape (1, 3, 640, 640)





Now Run with the Engine Model File




jetson@nano:~$ yolo detect predict model=yolov8n.engine source='bus.jpg' show=True




Compare PT and Engine Models

On NVIDIA Jetson Nano
Detect Image with PyTorch Model = 108.3 ms
Detect Image with TensorRT Model = 75.8 ms















On NVIDIA Jetson Xavier NX

Detect Image with PyTorch Model = 204.6 ms
Detect Image with TensorRT Model = 49.9 ms
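The timings above translate directly into speedup factors. A quick check in plain Python, using the numbers measured in this post:

```python
def speedup(pytorch_ms, tensorrt_ms):
    """Ratio of PyTorch to TensorRT inference time (higher = faster engine)."""
    return pytorch_ms / tensorrt_ms

# Jetson Nano: 108.3 ms -> 75.8 ms, roughly 1.4x faster
print(round(speedup(108.3, 75.8), 2))
# Jetson Xavier NX: 204.6 ms -> 49.9 ms, roughly 4.1x faster
print(round(speedup(204.6, 49.9), 2))
```

The gap is much larger on the Xavier NX, which is consistent with TensorRT making better use of the bigger GPU and FP16 (half=True) support.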











