These templates are intended to be used with VSCode.
To get started with our project templates, install the [copier](https://copier.readthedocs.io/en/stable/) project templating utility.
* [template-service-ros1-catkin](https://git.spirirobotics.com/Spiri/template-service-ros1-catkin)

  This template uses the last stable release of ROS1 (ROS Noetic) and supports the Python and C++ programming languages.
  ROS1 is considered end of life; it is recommended to use a ROS2 template instead.

* ROS2 template

  We're working on it...
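As a sketch of the intended workflow (the destination directory name is just an example), copier can scaffold a new project from one of the templates above:

```bash
# Install the copier templating utility, then generate a project from the
# ROS1 template into ./my-new-service (example destination).
pip install copier
copier copy https://git.spirirobotics.com/Spiri/template-service-ros1-catkin my-new-service
```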
## NVIDIA Container Toolkit
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
### Installing with Apt
Configure the production repository:
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Update the packages list from the repository:
```bash
sudo apt-get update
```
Install the NVIDIA Container Toolkit packages:
```bash
sudo apt-get install -y nvidia-container-toolkit
```
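To confirm the toolkit is installed, the CLI can report its version (this assumes `nvidia-ctk` is on your `PATH`; the version should match the one listed under Technologies below):

```bash
# Print the NVIDIA Container Toolkit CLI version.
nvidia-ctk --version
```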
#### Configuration
##### Prerequisites
- You installed a supported container engine (Docker, Containerd, CRI-O, Podman).
- You installed the NVIDIA Container Toolkit.
1. Configure the container runtime by using the `nvidia-ctk` command:
   ```bash
   sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
   ```
2. Restart the Docker daemon:
   ```bash
   sudo systemctl restart docker
   ```
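To confirm the configuration was applied, you can inspect the daemon configuration file that `nvidia-ctk` updates (the path assumes a default system-wide Docker install):

```bash
# Print the Docker daemon configuration; it should now contain an "nvidia"
# entry under "runtimes".
cat /etc/docker/daemon.json
```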
##### Rootless Mode
To configure the container runtime for Docker running in [Rootless mode](https://docs.docker.com/engine/security/rootless/), follow these steps:
1. Configure the container runtime by using the `nvidia-ctk` command:
   ```bash
   nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
   ```
2. Restart the Rootless Docker daemon:
   ```bash
   systemctl --user restart docker
   ```
3. Configure `/etc/nvidia-container-runtime/config.toml` by using the `sudo nvidia-ctk` command:
   ```bash
   sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
   ```
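As a quick sanity check (using the same Ubuntu image as the sample workload below), the `nvidia-smi` container should now run without `sudo`:

```bash
# Rootless Docker runs the daemon as your user, so sudo is not required.
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```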
### Sample Workload
After you install and configure the toolkit and install an NVIDIA GPU Driver, you can verify your installation by running a sample workload.
```bash
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
Expected output:
```console
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
## Support for Container Device Interface (CDI)
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html)
### Prerequisites
- You installed either the NVIDIA Container Toolkit or the `nvidia-container-toolkit-base` package.
  The base package includes the container runtime and the `nvidia-ctk` command-line interface, but avoids installing the container runtime hook and transitive dependencies.
  The hook and dependencies are not needed on machines that use CDI exclusively.
- You installed an NVIDIA GPU Driver.
Two common locations for CDI specifications are `/etc/cdi/` and `/var/run/cdi/`. The contents of the `/var/run/cdi/` directory are cleared on boot.
1. Generate the CDI specification file:
   ```bash
   sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
   ```
   Example output
   ```console
   INFO[0000] Auto-detected mode as "nvml"
   INFO[0000] Selecting /dev/nvidia0 as /dev/nvidia0
   INFO[0000] Selecting /dev/dri/card1 as /dev/dri/card1
   INFO[0000] Selecting /dev/dri/renderD128 as /dev/dri/renderD128
   INFO[0000] Using driver version xxx.xxx.xx
   ...
   ```
2. (Optional) Check the names of the generated devices:
   ```bash
   nvidia-ctk cdi list
   ```
   Output
   ```console
   INFO[0000] Found 9 CDI devices
   nvidia.com/gpu=all
   nvidia.com/gpu=0
   ```
### Sample Workload
```bash
docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
    ubuntu nvidia-smi -L
```
Output
```console
GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU (UUID: GPU-17c2b9a6-6be2-3857-f8e0-88143e2e621b)
```
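Recent Docker Engine releases (25.0 and later) can also consume CDI specifications directly. Assuming CDI support is enabled in the daemon (see the `--cdi.enabled` configuration step above), the same request can be made with `--device` instead of the NVIDIA runtime; a sketch:

```bash
# Request the GPU by its CDI device name rather than through --gpus.
docker run --rm -ti --device=nvidia.com/gpu=all ubuntu nvidia-smi -L
```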
## Technologies
* Ubuntu 22.04.1 amd64
* Docker Compose version v2.29.7
* Python 3.10.12
* pip 22.0.2
* NVIDIA Container Toolkit CLI version 1.16.2
* NVIDIA Driver 550.120
* CUDA Version 12.4
## How to Run
Ensure variables in the `.env` file are correct.
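As a reference only, a minimal `.env` could look like the sketch below; the variable names and defaults are taken from the Simulation Environment Variables table later in this README:

```bash
# Hypothetical minimal .env — values shown are the defaults from the
# Simulation Environment Variables table; adjust for your setup.
SIM_DRONE_COUNT=1
WORLD_FILE_NAME=citadel_hill_world.sdf
WORLD_NAME=citadel_hill
DRONE_MODEL=spiri_mu
GCS_PORT=14550
```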
Install the required Python libraries:
```bash
pip install -r requirements.txt
```
### First Terminal
1. Start the user interface with the following command:
   ```bash
   docker compose --profile ui up
   ```
2. Click `Launch Gazebo` on the menu.

The simulated world and the drone model should be up and running in a Gazebo instance.
### Second Terminal
This will launch the ardupilot, mavproxy and mavros services, and will scale with the `SIM_DRONE_COUNT` environment variable.
1. Start the Docker services with the following Python script:
   ```bash
   python3 sim_drone.py
   ```
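Once the script is running, a standard Docker command can confirm that the services scaled with `SIM_DRONE_COUNT`:

```bash
# List running containers; expect one set of ardupilot/mavproxy/mavros
# containers per simulated vehicle.
docker ps
```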
### QGroundControl
The simulated vehicle(s) should be connected to QGroundControl. The `SIM_DRONE_COUNT` value should match the vehicle count detected on the GCS.
## Simulation Environment Variables
The `DRONE_SYS_ID` and `INSTANCE` environment variables are incremented by 1 for each additional simulated vehicle. `SERIAL0_PORT`, `SITL_PORT`, `MAVROS2_PORT`, `MAVROS1_PORT`, `FDM_PORT_IN` and `GSTREAMER_UDP_PORT` are incremented by 10. Without in-depth knowledge, changing the default values for these ports is not recommended.

| Variable | Type | Default | Description |
| :----------------: | :----: | :--------------------------------------------------: | :------------------------------------------------------------------------------------------- |
| DRONE_SYS_ID | int | 1 | System ID for the simulated drone. |
| INSTANCE | int | 0 | Instance of the simulator. |
| SERIAL0_PORT | int | 5760 | MAVProxy master port the simulation communicates on. |
| SITL_PORT | int | 5501 | MAVProxy Software In The Loop (SITL) port used to send simulated RC input to the simulator. |
| MAVROS2_PORT | int | 14560 | MAVROS ROS 2 UDP port. |
| MAVROS1_PORT | int | 14561 | MAVROS ROS 1 UDP port. |
| FDM_PORT_IN | int | 9002 | Gazebo Flight Dynamics Model (FDM) UDP port. |
| GSTREAMER_UDP_PORT | int | 5600 | UDP video streaming port. |
| ROS_MASTER_URI | string | "http://0.0.0.0:11311" | Tells ROS 1 nodes where they can locate the master. |
| ARDUPILOT_VEHICLE | string | "-v copter -f gazebo-mu --model=JSON -L CitadelHill" | "-v" is the vehicle type, "-L" is the start location, "-f" is the vehicle frame type, "--model" overrides the simulation model to use. |
| WORLD_FILE_NAME | string | "citadel_hill_world.sdf" | Name of the file that exists in the worlds folder. |
| WORLD_NAME | string | "citadel_hill" | Name of the world defined in the world file. |
| DRONE_MODEL | string | "spiri_mu" | Drone model that exists in the models folder. |
| SIM_DRONE_COUNT | int | 1 | Number of drones to be simulated. |
| GCS_PORT | int | 14550 | Ground Control Station (GCS) UDP connection port. |
### Understanding Multi-Vehicle Simulation Parameters
For instance, if `SIM_DRONE_COUNT` is 2, each additional vehicle's ports are incremented by 10.
The first simulated vehicle would have the following values:

| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 1 |
| INSTANCE | 0 |
| SERIAL0_PORT | 5760 |
| SITL_PORT | 5501 |
| MAVROS2_PORT | 14560 |
| MAVROS1_PORT | 14561 |
| FDM_PORT_IN | 9002 |
| GSTREAMER_UDP_PORT | 5600 |
The second vehicle would have the following values:

| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 2 |
| INSTANCE | 1 |
| SERIAL0_PORT | 5770 |
| SITL_PORT | 5511 |
| MAVROS2_PORT | 14570 |
| MAVROS1_PORT | 14571 |
| FDM_PORT_IN | 9012 |
| GSTREAMER_UDP_PORT | 5610 |
The video stream for the second vehicle would be available on UDP port 5610 after streaming is enabled.
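Equivalently, each per-vehicle value is the default from the table above plus an offset of 10 per instance (1 per instance for `DRONE_SYS_ID`). A small sketch of the arithmetic:

```bash
# Derive the ports for a given simulator instance (INSTANCE starts at 0).
INSTANCE=1                                           # second vehicle
echo "DRONE_SYS_ID=$((1 + INSTANCE))"                # 2
echo "SERIAL0_PORT=$((5760 + 10 * INSTANCE))"        # 5770
echo "SITL_PORT=$((5501 + 10 * INSTANCE))"           # 5511
echo "MAVROS2_PORT=$((14560 + 10 * INSTANCE))"       # 14570
echo "MAVROS1_PORT=$((14561 + 10 * INSTANCE))"       # 14571
echo "FDM_PORT_IN=$((9002 + 10 * INSTANCE))"         # 9012
echo "GSTREAMER_UDP_PORT=$((5600 + 10 * INSTANCE))"  # 5610
```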