
# Spiri SDK - Simulated robot

The Spiri SDK consists of a number of components. What you're looking at right now is the drone simulation component, which is the core of the SDK.

Spiri robots run a number of Docker containers to achieve their core functionality, and we keep these essential containers in a single Docker Compose file. The compose file in this repository starts an ArduPilot-based UAV simulation as well as a ROS master and MAVProxy to tie them together.

To get started, clone this repository and run `docker compose --profile uav-sim up`.

Once the simulated UAV is running you can connect to it with QGroundControl or other MAVLink-compatible software. We expose the UAV's MAVLink connection on TCP port 5760.
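For example, you can attach a ground-side MAVProxy instance to the exposed endpoint (a sketch; assumes MAVProxy is installed on the host, e.g. via `pip install MAVProxy`):

```sh
# Connect a local MAVProxy to the simulated UAV's MAVLink endpoint
mavproxy.py --master=tcp:127.0.0.1:5760
```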

There is experimental GUI support, which you can enable by running `docker compose --profile uav-sim --profile ui up`.

## Creating a new project

We provide project templates you can use for development that integrate seamlessly into our simulated robots.

These templates are intended to be used with VSCode.

To get started with our project templates, install the `copier` project templating utility.

This template uses the last stable release of ROS 1 (ROS Noetic) and supports the Python and C++ programming languages.
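As a sketch, installing copier and generating a project could look like the following; the template URL is a placeholder, substitute the actual Spiri template repository:

```sh
# Install copier in an isolated environment
pipx install copier

# Generate a new project from a template (placeholder URL)
copier copy https://example.com/spiri-ros1-template.git my-project
```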

ROS 1 is considered end-of-life. It's recommended to use a ROS 2 template instead:

- ROS2 template

We're working on it...

## Special Docker options

We support the following special Docker Compose options to make metadata available inside the SDK:

- `x-spiri-sdk-doc: ""`

  This will appear as a comment when creating a new robot.

- `x-spiri-sdk-default-enabled: true`

  Set this to `false` and the corresponding compose service will be commented out by default when creating a new robot.

- `x-spiri-sdk-default-args: ""`

  Extra arguments to pass to `docker compose`; all `docker compose` arguments should be accepted. A common option is `--build`. You might also appreciate `--pull` (`always`|`missing`|`never`).
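As an illustrative sketch, these keys would sit alongside a service definition in the compose file (the service name, image, and values below are placeholders, and it is assumed the options are set per service):

```yaml
services:
  my-service:                          # placeholder service name
    image: example/image:latest        # placeholder image
    x-spiri-sdk-doc: "Describe what this service does"
    x-spiri-sdk-default-enabled: true  # false => commented out by default
    x-spiri-sdk-default-args: "--build --pull missing"
```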

## NVIDIA Container Toolkit

Source

### Installing with Apt

```sh
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

### Configuration

Prerequisites:

- You installed a supported container engine (Docker, containerd, CRI-O, Podman).
- You installed the NVIDIA Container Toolkit.

Configure Docker to use the NVIDIA runtime (here with CDI support enabled) and restart the daemon:

```sh
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
sudo systemctl restart docker
```
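One way to confirm that the `nvidia` runtime was registered with Docker:

```sh
# Should list an entry named "nvidia" among the configured runtimes
docker info --format '{{json .Runtimes}}'
```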

### Sample Workload

After you install and configure the toolkit and install an NVIDIA GPU Driver, you can verify your installation by running a sample workload.

```sh
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

Expected output:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10    Driver Version: 535.86.10    CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

## Support for Container Device Interface (CDI)

Source

Prerequisites:

- You installed either the NVIDIA Container Toolkit, or you installed the `nvidia-container-toolkit-base` package. The base package includes the container runtime and the `nvidia-ctk` command-line interface but avoids installing the container runtime hook and transitive dependencies. The hook and dependencies are not needed on machines that use CDI exclusively.
- You installed an NVIDIA GPU driver.

Two common locations for CDI specifications are `/etc/cdi/` and `/var/run/cdi/`. The contents of the `/var/run/cdi/` directory are cleared on boot.

1. Generate the CDI specification file:

   ```sh
   sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
   ```

   Example output:

   ```
   INFO[0000] Auto-detected mode as "nvml"
   INFO[0000] Selecting /dev/nvidia0 as /dev/nvidia0
   INFO[0000] Selecting /dev/dri/card1 as /dev/dri/card1
   INFO[0000] Selecting /dev/dri/renderD128 as /dev/dri/renderD128
   INFO[0000] Using driver version xxx.xxx.xx
   ...
   ```

2. (Optional) Check the names of the generated devices:

   ```sh
   nvidia-ctk cdi list
   ```

   Output:

   ```
   INFO[0000] Found 9 CDI devices
   nvidia.com/gpu=all
   nvidia.com/gpu=0
   ```

### Sample Workload

```sh
docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
    ubuntu nvidia-smi -L
```

Output:

```
GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU (UUID: GPU-17c2b9a6-6be2-3857-f8e0-88143e2e621b)
```
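With CDI enabled in Docker (the `--cdi.enabled` flag used during configuration above; requires a recent Docker release with CDI support), devices can also be requested by their CDI names directly:

```sh
# Request GPUs by CDI device name instead of NVIDIA_VISIBLE_DEVICES
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L
```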

## Technologies

- Ubuntu 22.04.1 amd64
- Docker Compose v2.29.7
- Python 3.10.12
- pip 22.0.2
- NVIDIA Container Toolkit CLI v1.16.2
- NVIDIA Driver 550.120
- CUDA 12.4

## How to Run

Ensure the variables in the `.env` file are correct.

Install the libraries required by the Python script:

```sh
pip install -r requirements.txt
```

### First Terminal

1. Start the user interface with the following command:

   ```sh
   docker compose --profile ui up
   ```

2. Click **Launch Gazebo** on the menu.

The simulated world and the drone model should now be up and running in a Gazebo instance.

### Second Terminal

This will launch the ArduPilot, MAVProxy, and MAVROS services, scaled up according to the `SIM_DRONE_COUNT` environment variable.

1. Start the Docker services with the following Python script:

   ```sh
   python3 sim_drone.py
   ```
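For example, to simulate two vehicles (a sketch; this assumes the script also picks up `SIM_DRONE_COUNT` from the shell environment, otherwise set it in `.env`):

```sh
# Scale the simulation to two drones
SIM_DRONE_COUNT=2 python3 sim_drone.py
```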

### QGroundControl

The simulated vehicle(s) should connect to QGroundControl. The `SIM_DRONE_COUNT` value should match the vehicle count detected on the GCS.

## Simulation Environment Variables

The `DRONE_SYS_ID` and `INSTANCE` environment variables are incremented by 1 for each additional simulated vehicle. `SERIAL0_PORT`, `SITL_PORT`, `MAVROS2_PORT`, `MAVROS1_PORT`, `FDM_PORT_IN`, and `GSTREAMER_UDP_PORT` are incremented by 10. Changing the default values of these ports is not recommended without in-depth knowledge.

| Variable | Type | Default | Description |
|---|---|---|---|
| `DRONE_SYS_ID` | int | 1 | System ID of the simulated drone. |
| `INSTANCE` | int | 0 | Instance of the simulator. |
| `SERIAL0_PORT` | int | 5760 | MAVProxy master port the simulation communicates on. |
| `SITL_PORT` | int | 5501 | MAVProxy Software in the Loop (SITL) port for sending simulated RC input to the simulator. |
| `MAVROS2_PORT` | int | 14560 | MAVROS ROS 2 UDP port. |
| `MAVROS1_PORT` | int | 14561 | MAVROS ROS 1 UDP port. |
| `FDM_PORT_IN` | int | 9002 | Gazebo Flight Dynamics Model (FDM) UDP port. |
| `GSTREAMER_UDP_PORT` | int | 5600 | UDP video streaming port. |
| `ROS_MASTER_URI` | string | `http://0.0.0.0:11311` | Tells ROS 1 nodes where they can locate the master. |
| `ARDUPILOT_VEHICLE` | string | `-v copter -f gazebo-mu --model=JSON -L CitadelHill` | `-v` is the vehicle type, `-L` the start location, `-f` the vehicle frame type; `--model` overrides the simulation model to use. |
| `WORLD_FILE_NAME` | string | `citadel_hill_world.sdf` | Name of a file that exists in the worlds folder. |
| `WORLD_NAME` | string | `citadel_hill` | Name of the world defined in the world file. |
| `DRONE_MODEL` | string | `spiri_mu` | Drone model that exists in the models folder. |
| `SIM_DRONE_COUNT` | int | 1 | Number of drones to be simulated. |
| `GCS_PORT` | int | 14550 | Ground Control Station (GCS) UDP connection port. |

## Understanding Multi-Vehicle Simulation Parameters

For instance, if `SIM_DRONE_COUNT` is 2, each additional vehicle's ports are incremented by 10.

The first simulated vehicle would have the following values:

| Variable | Value |
|---|---|
| `DRONE_SYS_ID` | 1 |
| `INSTANCE` | 0 |
| `SERIAL0_PORT` | 5760 |
| `SITL_PORT` | 5501 |
| `MAVROS2_PORT` | 14560 |
| `MAVROS1_PORT` | 14561 |
| `FDM_PORT_IN` | 9002 |
| `GSTREAMER_UDP_PORT` | 5600 |

The second vehicle:

| Variable | Value |
|---|---|
| `DRONE_SYS_ID` | 2 |
| `INSTANCE` | 1 |
| `SERIAL0_PORT` | 5770 |
| `SITL_PORT` | 5511 |
| `MAVROS2_PORT` | 14570 |
| `MAVROS1_PORT` | 14571 |
| `FDM_PORT_IN` | 9012 |
| `GSTREAMER_UDP_PORT` | 5610 |

After streaming is enabled, the second vehicle's video stream would be available on UDP port 5610.
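As a sketch, assuming the stream is H.264 over RTP (a common setup for Gazebo/ArduPilot video), it could be viewed with GStreamer:

```sh
# Receive and display the second vehicle's video stream from UDP port 5610
gst-launch-1.0 udpsrc port=5610 \
    caps='application/x-rtp, media=video, clock-rate=90000, encoding-name=H264' \
    ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
```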