# Spiri SDK - Simulated robot
The Spiri SDK consists of a number of components. What you're looking at right now
is the drone simulation component, which is the core of the SDK.

Spiri robots run a number of Docker containers to achieve their core functionality;
we try to keep these essential containers in one Docker Compose file. The
Compose file you'll find in this repository starts an ArduPilot-based UAV simulation
as well as a ROS master and MAVProxy to tie it all together.
To get started, you can simply clone this repository and run `docker compose --profile uav-sim up`.
Once the simulated UAV is running, you can connect to it with QGroundControl or other
MAVLink-compatible software. We expose the UAV's MAVLink connection on TCP port 5760.
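For example, you could attach a MAVProxy ground station from the host (a minimal sketch, assuming MAVProxy is installed and the simulation is running on localhost):

```bash
# Connect MAVProxy to the simulated UAV's MAVLink stream
mavproxy.py --master=tcp:127.0.0.1:5760
```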
There is experimental GUI support, which you can enable by running `docker compose --profile uav-sim --profile ui up`.
## Creating a new project
We provide project templates you can use for development that integrate seamlessly into
our simulated robots.
These templates are intended to be used with VSCode.
To get started with our project templates, install the [copier](https://copier.readthedocs.io/en/stable/) project
templating utility.
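For example (assuming a Python environment with pip available; copier can also be installed via pipx):

```bash
# Install the copier templating utility
pip install copier
```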
- [template-service-ros1-catkin](https://git.spirirobotics.com/Spiri/template-service-ros1-catkin)
  This template uses the last stable release of ROS1 (ROS Noetic) and supports the Python and C++ programming
  languages.

  ROS1 is considered end-of-life; it's recommended to use a ROS2 template instead.
- ROS2 template
  We're working on it...
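Once copier is installed, a new project can be generated from a template, e.g. the ROS1 template above (a sketch; the destination directory `my-robot-project` is illustrative):

```bash
# Generate a project from the ROS1 template into ./my-robot-project
copier copy https://git.spirirobotics.com/Spiri/template-service-ros1-catkin my-robot-project
```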
## NVIDIA Container Toolkit
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
### Installing with Apt
```bash
# Add NVIDIA's GPG key and the apt repository for the NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
```bash
# Update the apt package index
sudo apt-get update
```
```bash
# Install the NVIDIA Container Toolkit packages
sudo apt-get install -y nvidia-container-toolkit
```
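To confirm the toolkit is installed, you can optionally check the CLI version (output will vary with the version you installed):

```bash
# Print the installed NVIDIA Container Toolkit CLI version
nvidia-ctk --version
```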
#### Configuration
##### Prerequisites
- You installed a supported container engine (Docker, Containerd, CRI-O, Podman).
- You installed the NVIDIA Container Toolkit.
```bash
# Configure the Docker daemon to use the NVIDIA runtime, with CDI support enabled
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
```
```bash
# Restart Docker so the new runtime configuration takes effect
sudo systemctl restart docker
```
### Sample Workload
After you install and configure the toolkit and install an NVIDIA GPU Driver, you can verify your installation by running a sample workload.
```bash
# Run a container with GPU access and query the driver
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
Expected output:
```console
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
## Support for Container Device Interface (CDI)
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html)
### Prerequisites
- You installed either the NVIDIA Container Toolkit or the `nvidia-container-toolkit-base` package.
  The base package includes the container runtime and the `nvidia-ctk` command-line interface, but avoids installing the container runtime hook and transitive dependencies.
  The hook and dependencies are not needed on machines that use CDI exclusively.
- You installed an NVIDIA GPU Driver.

Two common locations for CDI specifications are `/etc/cdi/` and `/var/run/cdi/`. The contents of the `/var/run/cdi/` directory are cleared on boot.
1. Generate the CDI specification file:
```bash
# Generate a CDI specification describing all detected NVIDIA devices
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```
Example output:
```console
INFO[0000] Auto-detected mode as "nvml"
INFO[0000] Selecting /dev/nvidia0 as /dev/nvidia0
INFO[0000] Selecting /dev/dri/card1 as /dev/dri/card1
INFO[0000] Selecting /dev/dri/renderD128 as /dev/dri/renderD128
INFO[0000] Using driver version xxx.xxx.xx
...
```
2. (Optional) Check the names of the generated devices:
```bash
# List the device names in the generated CDI specification
nvidia-ctk cdi list
```
Example output:
```console
INFO[0000] Found 9 CDI devices
nvidia.com/gpu=all
nvidia.com/gpu=0
```
### Sample Workload
```bash
# Request all GPUs by their CDI device name and list them
docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
    ubuntu nvidia-smi -L
```
Example output:
```console
GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU (UUID: GPU-17c2b9a6-6be2-3857-f8e0-88143e2e621b)
```
## Technologies
* Ubuntu 22.04.1 amd64
* Docker Compose version v2.29.7
* Python 3.10.12
* pip 22.0.2
* NVIDIA Container Toolkit CLI version 1.16.2
* NVIDIA Driver 550.120
* CUDA Version 12.4
## How to Run
Ensure variables in the `.env` file are correct.
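For reference, a minimal `.env` might look like the following (a sketch built from the defaults in the variable table below; the repository's actual file may define additional variables):

```bash
# Example .env using the documented defaults
SIM_DRONE_COUNT=1
DRONE_SYS_ID=1
INSTANCE=0
SERIAL0_PORT=5760
GCS_PORT=14550
WORLD_FILE_NAME=citadel_hill_world.sdf
WORLD_NAME=citadel_hill
DRONE_MODEL=spiri_mu
```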
Install the Python libraries required by the script:
```bash
pip install -r requirements.txt
```
### First Terminal
1. Start the user interface with the following command.
```bash
docker compose --profile ui up
```
2. Click `Launch Gazebo` in the menu.
The simulated world and the drone model should now be up and running in a Gazebo instance.
### Second Terminal
This launches the ArduPilot, MAVProxy, and MAVROS services, scaled up according to the `SIM_DRONE_COUNT` environment variable.
1. Start the Docker services with the following Python script.
```bash
python3 sim_drone.py
```
### QGroundControl
Simulated vehicle(s) should be connected to QGroundControl. The `SIM_DRONE_COUNT` value should match the vehicle count detected by the GCS.
## Simulation Environment Variables
The `DRONE_SYS_ID` and `INSTANCE` environment variables are incremented by 1 for each additional simulated vehicle. `SERIAL0_PORT`, `SITL_PORT`, `MAVROS2_PORT`, `MAVROS1_PORT`, `FDM_PORT_IN`, and `GSTREAMER_UDP_PORT` are incremented by 10. Without in-depth knowledge, changing the default values of these ports is not recommended.
| Variable | Type | Default | Description |
| :----------------: | :----: | :--------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------- |
| DRONE_SYS_ID | int | 1 | System-ID for the simulated drone. |
| INSTANCE | int | 0 | Instance of simulator. |
| SERIAL0_PORT | int | 5760 | MAVProxy master port the simulation communicates on. |
| SITL_PORT | int | 5501 | MAVProxy Software In The Loop (SITL) port used to send simulated RC input to the simulator. |
| MAVROS2_PORT | int | 14560 | MAVROS ROS 2 UDP port. |
| MAVROS1_PORT | int | 14561 | MAVROS ROS 1 UDP port. |
| FDM_PORT_IN | int | 9002 | Gazebo Flight Dynamics Model (FDM) UDP port. |
| GSTREAMER_UDP_PORT | int | 5600 | UDP video streaming port. |
| ROS_MASTER_URI | string | "http://0.0.0.0:11311" | Tells ROS 1 nodes where they can locate the master. |
| ARDUPILOT_VEHICLE | string | "-v copter -f gazebo-mu --model=JSON -L CitadelHill" | "-v" is the vehicle type, "-f" the vehicle frame type, "-L" the start location; "--model" overrides the simulation model to use. |
| WORLD_FILE_NAME | string | "citadel_hill_world.sdf" | Name of the file that exists in the worlds folder. |
| WORLD_NAME | string | "citadel_hill" | Name of the world defined in the world file. |
| DRONE_MODEL | string | "spiri_mu" | Drone model that exists in the models folder. |
| SIM_DRONE_COUNT | int | 1 | Number of drones to be simulated. |
| GCS_PORT | int | 14550 | Ground Control Station (GCS) UDP connection port. |
### Understanding Multi-Vehicle Simulation Parameters
For instance, if `SIM_DRONE_COUNT` is 2, each additional vehicle's ports are incremented by 10.
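The offsets follow a simple pattern, base port + 10 × instance, sketched here in shell arithmetic:

```bash
# Each additional instance shifts its ports by 10 from the defaults
INSTANCE=1                       # second vehicle
echo $((5760 + 10 * INSTANCE))   # SERIAL0_PORT -> 5770
echo $((5600 + 10 * INSTANCE))   # GSTREAMER_UDP_PORT -> 5610
```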
The first simulated vehicle would have the following values:
| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 1 |
| INSTANCE | 0 |
| SERIAL0_PORT | 5760 |
| SITL_PORT | 5501 |
| MAVROS2_PORT | 14560 |
| MAVROS1_PORT | 14561 |
| FDM_PORT_IN | 9002 |
| GSTREAMER_UDP_PORT | 5600 |
The second vehicle:
| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 2 |
| INSTANCE | 1 |
| SERIAL0_PORT | 5770 |
| SITL_PORT | 5511 |
| MAVROS2_PORT | 14570 |
| MAVROS1_PORT | 14571 |
| FDM_PORT_IN | 9012 |
| GSTREAMER_UDP_PORT | 5610 |
After streaming is enabled, the second vehicle's video stream would be available on UDP port 5610.
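For example, the stream could be viewed with a GStreamer pipeline like this (a sketch assuming RTP-wrapped H.264 video, which is typical for Gazebo/ArduPilot streaming setups; adjust the caps and decoder if your encoder differs):

```bash
# Receive and display the second vehicle's video stream on UDP port 5610
gst-launch-1.0 udpsrc port=5610 caps="application/x-rtp" \
  ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink
```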