Merge branch 'ros2'

This commit is contained in:
Alex Davies 2024-11-05 10:27:49 -04:00
commit a8810dea8a
8 changed files with 588 additions and 148 deletions

19 .env Normal file

@@ -0,0 +1,19 @@
DRONE_SYS_ID=1
INSTANCE=0
SERIAL0_PORT=5760
SITL_PORT=5501
MAVROS2_PORT=14560
MAVROS1_PORT=14561
FDM_PORT_IN=9002
GSTREAMER_UDP_PORT=5600
ROS_MASTER_URI="http://0.0.0.0:11311"
ARDUPILOT_VEHICLE="-v copter -f gazebo-mu --model=JSON -L CitadelHill"
WORLD_FILE_NAME="citadel_hill_world.sdf"
WORLD_NAME="citadel_hill"
DRONE_MODEL="spiri_mu"
SIM_DRONE_COUNT=1
GCS_PORT=14550

359 README.md

@@ -1,108 +1,12 @@
# Spiri SDK
# Spiri SDK - Simulated robot
## Overview
The Spiri SDK consists of a number of components. What you're looking at right now
is the drone simulation component, which is the core of the SDK.
Spiri Robots run a number of docker containers to achieve their core functionality,
we try to keep these essential docker containers in one docker compose file. The
docker compose file you'll find in this repository starts an ArduPilot-based UAV simulation
as well as a ROS master and mavproxy to tie it together, mirroring the core deployment of
a Spiri robot.
## Prerequisites
This SDK was tested using Ubuntu 22.04 and an Nvidia GPU.
UI features like 3D worlds (gazebo simulation) were tested with Nvidia GPUs using CDI passthrough.
Machine-learning features like image recognition are expected to only work with NVIDIA GPUs.
We use VSCode as the default IDE, and we use Copier to manage project templates.
### Ensure NVIDIA drivers are working
Installing NVIDIA drivers is outside the scope of
this document, but you can confirm they are working using the `nvidia-smi` command.
If the command is present and shows output relevant to your GPU, the drivers are installed.
You can find the official Ubuntu documentation for installing NVIDIA drivers [here](https://ubuntu.com/server/docs/nvidia-drivers-installation).
You can use the following command to let Ubuntu try to install the appropriate drivers automatically:
```bash
sudo ubuntu-drivers install --gpgpu
```
### Installing Docker
As per the [official Docker documentation](https://docs.docker.com/engine/install/).
```bash
#Uninstall any older docker packages
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
#Install latest docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
#Allow current user to use docker without sudo
sudo groupadd docker
sudo usermod -aG docker $USER
#Reload the group
newgrp docker
```
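You can sanity-check the installation with Docker's standard smoke test (a generic check, not specific to this SDK):
```bash
# Should run without sudo once the group change above has taken effect
docker run --rm hello-world
# Confirm the compose plugin used throughout this guide is available
docker compose version
```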
### Installing Copier
As per the [official Copier documentation](https://copier.readthedocs.io/en/stable/#installation)
```bash
python3 -m pip install --user pipx
python3 -m pipx ensurepath
pipx install copier
```
### Installing VSCode
As per the [official VSCode documentation](https://code.visualstudio.com/docs/setup/linux)
```bash
sudo apt-get install wget gpg
wget "https://code.visualstudio.com/sha/download?build=stable&os=linux-deb-x64" -O /tmp/vscode.deb
sudo dpkg -i /tmp/vscode.deb
```
### Installing nvidia-container-toolkit
As per the [official nvidia-container-toolkit guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list
```
## Quickstart
To get started you can simply clone this repository and run `docker compose --profile uav-sim up`.
@@ -111,7 +15,7 @@ MAVLink compatible software. We expose the UAV's MAVLink connection on tcp port 5
There is experimental GUI support you can enable by running `docker compose --profile uav-sim --profile ui up`.
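As a minimal sketch of attaching other MAVLink tooling from the host (this assumes MAVProxy is installed locally, e.g. via `pip install MAVProxy`, and that the TCP port is exposed as described above):
```bash
# Attach a ground-station MAVProxy instance to the simulated UAV's MAVLink endpoint
mavproxy.py --master=tcp:127.0.0.1:5760
```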
### Creating a new project
## Creating a new project
We provide project templates you can use for development that integrate seamlessly into
our simulated robots.
@@ -121,16 +25,261 @@ These templates are intended to be used with VSCode.
To get started with our project templates, install the [copier](https://copier.readthedocs.io/en/stable/) project
templating utility, then copy a template as shown in the sketch after the list below.
* [template-service-ros1-catkin](https://git.spirirobotics.com/Spiri/template-service-ros1-catkin)
- [template-service-ros1-catkin](https://git.spirirobotics.com/Spiri/template-service-ros1-catkin)
This template uses the last stable release of ROS 1 (ROS Noetic) and supports the Python and C++ programming
languages.
ROS 1 is considered end of life. It's recommended to use a ROS 2 template instead.
* ROS2 template
- ROS2 template
We're working on it...
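As a sketch of the template workflow (the destination directory name here is only an example):
```bash
# Stamp out a new project from the ROS 1 catkin template
copier copy https://git.spirirobotics.com/Spiri/template-service-ros1-catkin my-ros1-service
```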
## NVIDIA Container Toolkit
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
### Installing with Apt
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
```bash
sudo apt-get update
```
```bash
sudo apt-get install -y nvidia-container-toolkit
```
#### Configuration
##### Prerequisites
- You installed a supported container engine (Docker, Containerd, CRI-O, Podman).
- You installed the NVIDIA Container Toolkit.
```bash
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
```
```bash
sudo systemctl restart docker
```
##### Rootless Mode
To configure the container runtime for Docker running in [Rootless mode](https://docs.docker.com/engine/security/rootless/), follow these steps:
1. Configure the container runtime by using the nvidia-ctk command:
```bash
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
```
2. Restart the Rootless Docker daemon
```bash
systemctl --user restart docker
```
3. Configure /etc/nvidia-container-runtime/config.toml by using the sudo nvidia-ctk command:
```bash
sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
### Sample Workload
After you install and configure the toolkit and install an NVIDIA GPU Driver, you can verify your installation by running a sample workload.
```bash
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
Expected output:
```console
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 34C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
## Support for Container Device Interface (CDI)
[Source](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html)
### Prerequisites
- You installed either the NVIDIA Container Toolkit or you installed the `nvidia-container-toolkit-base` package.
The base package includes the container runtime and the `nvidia-ctk` command-line interface, but avoids installing the container runtime hook and transitive dependencies.
The hook and dependencies are not needed on machines that use CDI exclusively.
- You installed an NVIDIA GPU Driver.
Two common locations for CDI specifications are `/etc/cdi/` and `/var/run/cdi/`. The contents of the `/var/run/cdi/` directory are cleared on boot.
1. Generate the CDI specification file:
```bash
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```
Example output
```console
INFO[0000] Auto-detected mode as "nvml"
INFO[0000] Selecting /dev/nvidia0 as /dev/nvidia0
INFO[0000] Selecting /dev/dri/card1 as /dev/dri/card1
INFO[0000] Selecting /dev/dri/renderD128 as /dev/dri/renderD128
INFO[0000] Using driver version xxx.xxx.xx
...
```
2. (Optional) Check the names of the generated devices:
```bash
nvidia-ctk cdi list
```
Output
```console
INFO[0000] Found 9 CDI devices
nvidia.com/gpu=all
nvidia.com/gpu=0
```
### Sample Workload
```bash
docker run --rm -ti --runtime=nvidia \
-e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
ubuntu nvidia-smi -L
```
Output
```console
GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU (UUID: GPU-17c2b9a6-6be2-3857-f8e0-88143e2e621b)
```
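With CDI enabled in the Docker daemon (the `--cdi.enabled` step above), the same check can also be written with Docker's native CDI device syntax instead of the `NVIDIA_VISIBLE_DEVICES` variable; this sketch assumes Docker Engine 25 or newer with CDI support turned on:
```bash
# Request the GPU by its CDI device name rather than the legacy environment variable
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L
```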
## Technologies
* Ubuntu 22.04.1 amd64
* Docker Compose version v2.29.7
* Python 3.10.12
* pip 22.0.2
* NVIDIA Container Toolkit CLI version 1.16.2
* NVIDIA Driver 550.120
* CUDA Version 12.4
## How to Run
Ensure variables in the `.env` file are correct.
Install the Python libraries required by the `sim_drone.py` script.
```bash
pip install -r requirements.txt
```
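Before starting anything, it can help to confirm that Docker Compose resolves the `.env` values the way you expect; `docker compose config` prints the fully interpolated configuration without starting any containers:
```bash
# Render the compose file with the .env variables substituted
docker compose --profile uav-sim config
```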
### First Terminal
1. Start the user interface with the following command.
```bash
docker compose --profile ui up
```
2. Click `Launch Gazebo` on the menu.
The simulated world and the drone model should be up and running in a Gazebo instance.
### Second Terminal
This will launch the ardupilot, mavproxy, and mavros services, scaled up according to the `SIM_DRONE_COUNT` env variable.
1. Start the Docker services with the following Python script.
```bash
python3 sim_drone.py
```
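Because `sim_drone.py` loads `.env` with `python-dotenv`, which does not override variables already set in the shell, the drone count can also be raised for a single run without editing `.env`; a sketch:
```bash
# Simulate two vehicles for this run only; .env keeps its default afterwards
SIM_DRONE_COUNT=2 python3 sim_drone.py
```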
### QGroundControl
The simulated vehicle(s) should connect to QGroundControl. The number of vehicles detected by the GCS should match the `SIM_DRONE_COUNT` value.
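If QGroundControl isn't at hand, the same link can be sanity-checked with a host-side MAVProxy listening on the GCS UDP port (this assumes MAVProxy is installed on the host and QGroundControl is not already bound to the port):
```bash
# Listen on the GCS port; each simulated vehicle should appear with its own system ID
mavproxy.py --master=udp:127.0.0.1:14550
```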
## Simulation Environment Variables
The `DRONE_SYS_ID` and `INSTANCE` environment variables are incremented by 1 for each additional simulated vehicle. `SERIAL0_PORT`, `SITL_PORT`, `MAVROS2_PORT`, `MAVROS1_PORT`, `FDM_PORT_IN` and `GSTREAMER_UDP_PORT` are incremented by 10. Changing the default values of these ports is not recommended without in-depth knowledge.
| Variable | Type | Default | Description |
| :----------------: | :----: | :--------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------- |
| DRONE_SYS_ID | int | 1 | System-ID for the simulated drone. |
| INSTANCE | int | 0 | Instance of simulator. |
| SERIAL0_PORT | int | 5760 | MAVProxy master port the simulation communicates on. |
| SITL_PORT | int | 5501 | MAVProxy Software In The Loop (SITL) port used to send simulated RC input to the simulator. |
| MAVROS2_PORT | int | 14560 | MAVROS ROS 2 UDP port |
| MAVROS1_PORT | int | 14561 | MAVROS ROS 1 UDP port |
| FDM_PORT_IN | int | 9002 | Gazebo Flight Dynamics Model (FDM) UDP port |
| GSTREAMER_UDP_PORT | int | 5600 | UDP Video Streaming port |
| ROS_MASTER_URI | string | "http://0.0.0.0:11311" | This tells ROS 1 nodes where they can locate the master |
| ARDUPILOT_VEHICLE | string | "-v copter -f gazebo-mu --model=JSON -L CitadelHill" | "-v" is the vehicle type, "-f" the vehicle frame, "--model" the simulation model override, "-L" the start location |
| WORLD_FILE_NAME | string | "citadel_hill_world.sdf" | Name of the file that exists in the worlds folder. |
| WORLD_NAME | string | "citadel_hill" | Name of the world defined in the world file. |
| DRONE_MODEL | string | "spiri_mu" | Drone model that exists in the models folder. |
| SIM_DRONE_COUNT | int | 1 | Number of drones to be simulated. |
| GCS_PORT | int | 14550 | Ground Control Station (GCS) UDP connection port. |
### Understanding Multi-Vehicle Simulation Parameters
For instance, if `SIM_DRONE_COUNT` is 2, each additional vehicle's ports are incremented by 10; the sketch after the tables below shows the same arithmetic.
The first simulated vehicle would have the following values:
| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 1 |
| INSTANCE | 0 |
| SERIAL0_PORT | 5760 |
| SITL_PORT | 5501 |
| MAVROS2_PORT | 14560 |
| MAVROS1_PORT | 14561 |
| FDM_PORT_IN | 9002 |
| GSTREAMER_UDP_PORT | 5600 |
The second vehicle:
| Variable | Value |
| :----------------: | :---: |
| DRONE_SYS_ID | 2 |
| INSTANCE | 1 |
| SERIAL0_PORT | 5770 |
| SITL_PORT | 5511 |
| MAVROS2_PORT | 14570 |
| MAVROS1_PORT | 14571 |
| FDM_PORT_IN | 9012 |
| GSTREAMER_UDP_PORT | 5610 |
After streaming is enabled, the video stream for the second vehicle would be available on UDP port 5610.
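The following shell sketch mirrors the `+10 × instance` offset used by `spawn_drones.sh` and `sim_drone.py`; it only prints the derived values and assumes the defaults from `.env`:
```bash
# Print the per-vehicle port assignments derived from the .env defaults
source .env
for (( i=0; i<SIM_DRONE_COUNT; i++ )); do
  echo "vehicle $((i + 1)):"
  echo "  SERIAL0_PORT=$((SERIAL0_PORT + 10*i))  SITL_PORT=$((SITL_PORT + 10*i))"
  echo "  MAVROS2_PORT=$((MAVROS2_PORT + 10*i))  MAVROS1_PORT=$((MAVROS1_PORT + 10*i))"
  echo "  FDM_PORT_IN=$((FDM_PORT_IN + 10*i))  GSTREAMER_UDP_PORT=$((GSTREAMER_UDP_PORT + 10*i))"
done
```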

(Docker Compose file)

@@ -1,8 +1,9 @@
version: '3.8'
version: "3.8"
services:
gui-tools:
env_file:
- .env
build:
context: ./guiTools/
@@ -14,62 +15,102 @@ services:
# If running with Wayland
- XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
- ROS_MASTER_URI=http://ros-master:11311
# - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
# - NVIDIA_VISIBLE_DEVICES=all
volumes:
# X11 socket
- /tmp/.X11-unix:/tmp/.X11-unix
- ${XAUTHORITY:-~/.Xauthority}:/root/.Xauthority
# Wayland socket
- ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
#- ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
# Allow access to the host's GPU
# - /dev/dri:/dev/dri
# devices:
# # Provide access to GPU devices
# - /dev/dri:/dev/dri
# network_mode: host
- /dev/dri:/dev/dri
devices:
# Provide access to GPU devices
- /dev/dri:/dev/dri
network_mode: host
ipc: host
privileged: true # Allow privileged access if necessary (e.g., for GPU access)
#user: "${UID}:${GID}"
privileged: true # Allow privileged access if necessary (e.g., for GPU access)
# restart: unless-stopped
# command: /bin/bash -c "source /opt/ros/foxy/setup.bash && rvis2" # Replace with the actual command to run Gazebo Ignition
profiles: [ui,]
profiles: [ui]
deploy:
resources:
reservations:
devices:
- driver: cdi
device_ids:
- driver: cdi
device_ids:
- nvidia.com/gpu=all
ardupilot:
env_file:
- .env
image: git.spirirobotics.com/spiri/ardupilot:spiri-master
command: >
./Tools/autotest/sim_vehicle.py -v copter --no-rebuild
--out=udpin:0.0.0.0:5000
--enable-dds
command:
- /bin/bash
- -c
- |
./Tools/autotest/sim_vehicle.py $ARDUPILOT_VEHICLE --no-rebuild \
--no-mavproxy --enable-dds --sysid $DRONE_SYS_ID -I$INSTANCE
profiles:
- uav-sim
stdin_open: true
tty: true
network_mode: host
mavproxy:
env_file:
- .env
image: git.spirirobotics.com/spiri/services-mavproxy:main
command: >
mavproxy.py --non-interactive
--out=tcpin:0.0.0.0:5760
--master=udpout:ardupilot:5000
--master tcp:127.0.0.1:$SERIAL0_PORT
--out udpout:0.0.0.0:$MAVROS2_PORT
--out udpout:0.0.0.0:$MAVROS1_PORT
--sitl 127.0.0.1:$SITL_PORT
--out udp:0.0.0.0:$GCS_PORT
profiles:
- uav-sim
ipc: host
network_mode: host
restart: always
ports:
- 5760:5760
mavros2:
env_file:
- .env
image: git.spirirobotics.com/spiri/services-ros2-mavros:main
command: ros2 launch mavros apm.launch fcu_url:="udp://0.0.0.0:$MAVROS2_PORT@:14555" namespace:="spiri$DRONE_SYS_ID" tgt_system:="$DRONE_SYS_ID"
profiles:
- uav-sim
ipc: host
network_mode: host
restart: always
depends_on:
ardupilot:
condition: service_started
mavproxy:
condition: service_started
deploy:
resources:
limits:
# cpus: '0.01'
memory: 200M
ulimits:
nofile:
soft: 1024
hard: 524288
mavros:
# This service bridges our MAVLink-based robot coprocessor into ROS.
# In this example it connects to a simulated coprocessor.
env_file:
- .env
image: git.spirirobotics.com/spiri/services-ros1-mavros:master
command: roslaunch mavros px4.launch fcu_url:="udp://:14555@mavproxy:14550" tgt_system:="1"
environment:
- "ROS_MASTER_URI=http://ros-master:11311"
depends_on:
ros-master:
condition: service_healthy
mavproxy:
condition: service_started
command: rosrun mavros mavros_node __name:=spiri$DRONE_SYS_ID _fcu_url:="udp://0.0.0.0:$MAVROS1_PORT@:14559" _target_system_id:="$DRONE_SYS_ID"
profiles:
- uav-sim
ipc: host
network_mode: host
restart: always
deploy:
resources:
@@ -82,13 +123,15 @@ services:
hard: 524288
ros-master:
env_file:
- .env
image: git.spirirobotics.com/spiri/services-ros1-core:main
command: stdbuf -o L roscore
environment:
- "ROS_MASTER_URI=http://localhost:11311"
profiles:
- ros-master
ipc: host
network_mode: host
restart: always
ports:
- "127.0.0.1:11311:11311"
deploy:
resources:
limits:
@@ -99,4 +142,3 @@ services:
nofile:
soft: 1024
hard: 524288

(guiTools Dockerfile)

@@ -1,8 +1,27 @@
FROM osrf/ros:jazzy-desktop-full
RUN apt-get update
RUN apt-get install qterminal mesa-utils -y
RUN apt-get -y install qterminal mesa-utils \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
gstreamer1.0-libav \
gstreamer1.0-gl \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly
COPY --from=git.spirirobotics.com/spiri/gazebo-resources:main /plugins /ardupilot_gazebo/plugins
COPY --from=git.spirirobotics.com/spiri/gazebo-resources:main /models /ardupilot_gazebo/models
COPY --from=git.spirirobotics.com/spiri/gazebo-resources:main /worlds /ardupilot_gazebo/worlds
ENV GZ_SIM_SYSTEM_PLUGIN_PATH=/ardupilot_gazebo/plugins
ENV GZ_SIM_RESOURCE_PATH=/ardupilot_gazebo/models:/ardupilot_gazebo/worlds
COPY ./spawn_drones.sh /spawn_drones.sh
RUN chmod +x /spawn_drones.sh
COPY ./launcher.py /launcher.py
CMD python3 /launcher.py

(guiTools launcher.py)

@@ -3,21 +3,24 @@ import subprocess
# Dictionary of applications: key is the button text, value is the command to execute
applications = {
"Terminal": ['qterminal'],
"rqt": ["rqt"],
"rviz2": ["rviz2"],
"Gazebo": "gz sim -v4 -g".split(),
"Gazebo Standalone": "gz sim -v4".split(),
"glxgears (GPU test)": ["glxgears"],
"Launch Terminal": ["qterminal"],
"Launch rqt": ["rqt"],
"Launch rviz2": ["rviz2"],
"Launch Gazebo": ["/spawn_drones.sh"],
"Launch Gazebo Standalone": "gz sim -v4".split(),
# Add more applications here if needed
}
# Function to launch an application
def launch_app(command):
try:
subprocess.Popen(command)
except FileNotFoundError:
print(f"{command[0]} not found. Make sure it's installed and accessible in the PATH.")
print(
f"{command[0]} not found. Make sure it's installed and accessible in the PATH."
)
# Create the main application window
root = tk.Tk()
@@ -28,9 +31,14 @@ label.pack(pady=10)
# Create and place buttons dynamically based on the dictionary
for app_name, command in applications.items():
button = tk.Button(root, text=app_name, command=lambda cmd=command: launch_app(cmd), width=20, height=2)
button = tk.Button(
root,
text=app_name,
command=lambda cmd=command: launch_app(cmd),
width=20,
height=2,
)
button.pack()
# Run the Tkinter main loop
root.mainloop()

32 guiTools/spawn_drones.sh Normal file

@@ -0,0 +1,32 @@
#!/bin/bash
set -e #PR #6
source /opt/ros/$ROS_DISTRO/setup.bash
if [[ -z $SIM_DRONE_COUNT ]]
then
echo "SIM_DRONE_COUNT environment variable is not set."
exit 1
fi
gz sim -v -r $WORLD_FILE_NAME &
while true
do
topics=$(gz topic -l)
if [[ $topics == *"/world/$WORLD_NAME"* ]]
then
break
fi
sleep 1
done
cd /ardupilot_gazebo/models/$DRONE_MODEL
for (( j=0; j<$SIM_DRONE_COUNT; j++ ));
do
xacro -v gstreamer_udp_port:=$(($GSTREAMER_UDP_PORT + ($j * 10))) fdm_port_in:=$(($FDM_PORT_IN + ($j * 10))) model.xacro.sdf -o model.sdf
#! Passing the model as a string is better than using the -file option; the file is not up to date in the next iteration.
value=$(</ardupilot_gazebo/models/$DRONE_MODEL/model.sdf)
gz_drone_name=spiri-$(($j + 1))
#? Maybe read from text file for formation of drones? x y z r p y
ros2 run ros_gz_sim create -world $WORLD_NAME -string "$value" -name $gz_drone_name -x $j -y 0 -z 0.195
done
exec "$@"

4 requirements.txt Normal file

@@ -0,0 +1,4 @@
typer==0.12.5
loguru==0.7.2
sh==2.1.0
python-dotenv==1.0.1

167 sim_drone.py Normal file

@@ -0,0 +1,167 @@
#!/usr/bin/env python3
import typer
import os
import sys
import contextlib
from dotenv import load_dotenv
from typing import List
from loguru import logger
import sh
import atexit
load_dotenv()
logger.remove()
logger.add(
sys.stdout, format="<green>{time}</green> <level>{level}</level> {extra} {message}"
)
# px4Path = os.environ.get("SPIRI_SIM_PX4_PATH", "/opt/spiri-sdk/PX4-Autopilot/")
# logger.info(f"SPIRI_SIM_PX4_PATH={px4Path}")
app = typer.Typer()
# This is a list of processes that we need to .kill and .wait for on exit
processes = []
class outputLogger:
"""
Logs command output to loguru
"""
def __init__(self, name, instance):
self.name = name
self.instance = instance
def __call__(self, message):
with logger.contextualize(cmd=self.name, instance=self.instance):
if message.endswith("\n"):
message = message[:-1]
# ToDo, this doesn't work because the output is coloured
if message.startswith("INFO"):
message = message.lstrip("INFO")
logger.info(message)
elif message.startswith("WARN"):
message = message.lstrip("WARN")
logger.warning(message)
elif message.startswith("ERROR"):
message = message.lstrip("ERROR")
logger.error(message)
elif message.startswith("DEBUG"):
message = message.lstrip("DEBUG")
logger.debug(message)
else:
logger.info(message)
@contextlib.contextmanager
def modified_environ(*remove, **update):
"""
Temporarily updates the ``os.environ`` dictionary in-place.
The ``os.environ`` dictionary is updated in-place so that the modification
is sure to work in all situations.
:param remove: Environment variables to remove.
:param update: Dictionary of environment variables and values to add/update.
"""
env = os.environ
update = update or {}
remove = remove or []
# List of environment variables being updated or removed.
stomped = (set(update.keys()) | set(remove)) & set(env.keys())
# Environment variables and values to restore on exit.
update_after = {k: env[k] for k in stomped}
# Environment variables and values to remove on exit.
remove_after = frozenset(k for k in update if k not in env)
try:
env.update(update)
[env.pop(k, None) for k in remove]
yield
finally:
env.update(update_after)
[env.pop(k) for k in remove_after]
# @app.command()
def start(instance: int = 0, sys_id: int = 1):
"""Starts the simulated drone with a given sys_id,
each drone must have its own unique ID.
"""
if sys_id < 1 or sys_id > 254:
logger.error("sys_id must be between 1 and 254")
raise typer.Exit(code=1)
with logger.contextualize(sys_id=sys_id):
env = os.environ
with modified_environ(
SERIAL0_PORT=str(int(env["SERIAL0_PORT"]) + 10 * instance),
MAVROS2_PORT=str(int(env["MAVROS2_PORT"]) + 10 * instance),
MAVROS1_PORT=str(int(env["MAVROS1_PORT"]) + 10 * instance),
FDM_PORT_IN=str(int(env["FDM_PORT_IN"]) + 10 * instance),
SITL_PORT=str(int(env["SITL_PORT"]) + 10 * instance),
INSTANCE=str(instance),
DRONE_SYS_ID=str(sys_id),
):
logger.info("Starting drone stack, this may take some time")
docker_stack = sh.docker.compose(
"--profile",
"uav-sim",
"-p",
f"spiri-sdk-{sys_id}",
"up",
_out=outputLogger("docker_stack", sys_id),
_err=sys.stderr,
_bg=True,
)
processes.append(docker_stack)
@app.command()
def start_group():
"""Start a group of robots"""
env = os.environ
sim_drone_count = int(env["SIM_DRONE_COUNT"])
start_ros_master()
for i in range(sim_drone_count):
logger.info(f"start robot {i}")
start(instance=i, sys_id=i + 1)
# if i == 0:
# wait_for_gazebo()
def start_ros_master():
docker_stack = sh.docker.compose(
"--profile",
"ros-master",
"up",
_out=outputLogger("docker_stack", "ros-master"),
_err=sys.stderr,
_bg=True,
)
processes.append(docker_stack)
def cleanup():
# Wait for all subprocesses to exit
logger.info("Waiting for commands to exit")
try:
if processes:
print(processes)
for waitable in processes:
waitable.kill()
waitable.wait()
except Exception as e:
print(e)
atexit.register(cleanup)
if __name__ == "__main__":
try:
app()
except KeyboardInterrupt:
logger.info("KeyboardInterrupt caught, exiting...")
cleanup()
sys.exit(0)