Deploy to servers for continuous integration testing
Simulation can provide a virtual world in which to test software. Users often want to test their software against this simulation in a continuous integration context. In this guide, we'll walk through how to build the simulation and deploy the system to a CI tool such as Jenkins, CircleCI, or GitHub Actions, so you can:
- Test software changes against 100s of simulated scenarios on nightly builds.
- Smoke test individual changes against a few simple runs to make sure the obvious stuff works together.
- Identify metrics, and by trying many scenarios in simulation first, promote to real-device testing only those changes likely to produce positive results.
You can optionally distribute the compute across multiple nodes. The servers you deploy to can be your own private servers, or hosted on a public or private cloud.
Assumptions:
- You are already familiar with the CI/CD process
- You are familiar with at least one such tool. In this guide we use Jenkins to walk through a setup; other CI/CD systems work much the same way.
Key steps involved:
Get Jenkins and install it following the official instructions
- Create an admin account, and secure the password!
- Upgrade the plugins
- Get the Docker plugin
- Pipeline plugins are nice to have, as are the Blue Ocean plugins
Recommended: If you have a dedicated DevOps team that can handle all of this for you, leverage that expertise and wait until they have finished setting things up based on these instructions. Then you can simply run the jobs and analyze the results.
Provision and configure the executor agents per your needs (with or without Docker, etc.)
- You could have lightweight nodes, separate from the Unity Editor nodes, just to run the ROS stack and/or the test rigs/harness.
Install the Unity Editor on the nodes you want to use as build nodes. We use and recommend Linux (Ubuntu) nodes for this step. Please follow the instructions to get the Hub and the Editor installed. Any Editor in the 2020.3.x range should be fine.
Once the Editor is installed, add the nodes to the main Jenkins controller node, following the instructions within Jenkins (Manage Jenkins -> Manage Nodes and Clouds).
Create a Unity Editor activation job, so the Editor instance can use your license.
- The following is the command you'd run in this job. If you have multiple Unity nodes, label/tag them and use that label to restrict where this job runs (a Jenkins feature).
unity-license-activator.sh
$BASE_UNITY_PATH/Unity -quit -batchmode -nographics -serial $SERIAL_NUM -username $YOUR_UNITY_ID -password $PASSWORD
Where:
- BASE_UNITY_PATH is the location of the Editor version that you want to use
- SERIAL_NUM is the provided license number for you, found in your unity.com dashboard
- YOUR_UNITY_ID is the ID/email address you use to log in to the above link
- PASSWORD is the password you use for the above link
Note: you only need to run this once per node. The env vars in that command can be set up as parameters for this job, and passed in when you execute it.
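If you later retire a node, you may also want a companion job that frees the license seat using the Editor's -returnlicense argument. A minimal sketch (the script name is our own):
unity-license-returner.sh
# Returns the activated license so the seat can be used elsewhere
$BASE_UNITY_PATH/Unity -quit -batchmode -nographics -returnlicense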
Create the Build Unity Simulation job
Next we are going to actually build the Unity binary that runs our simulation. The 'Authoring' section of the guide walked you through how to set up the Unity project and build out the simulation environment. Here we are going to take that project, build a binary through the CI tool, and run it as part of our testing.
We expect the project codebase to be committed to SCM, like git. In this job, we'll first clone the repo, and then kick off the build.
- In order to clone the repo, we need the Git tool set up on the Jenkins controller node (it should be available; if not, you can manually install a version and configure it under Manage Jenkins -> Global Tool Configuration)
- Create a personal access token in the git repo, and use that as your password for the 'Username and password' option in the Credentials section when entering the repo URL. There should not be any errors as you tab out of this section. If you see any auth errors, read the message and check whether the Jenkins site or the git system you are using has specific instructions for working with Jenkins.
- Now we are ready to put in the build script, which should look like this:
build-unity-linux-player.sh
$BASE_UNITY_PATH/Unity -accept-apiupdate -quit -batchmode -nographics -logFile $LOG_FILE -projectPath path/to/project -buildLinux64Player $OUTPUT_PATH
build-unity-linux-headless-player.sh
$BASE_UNITY_PATH/Unity -accept-apiupdate -quit -batchmode -nographics -logFile $LOG_FILE -projectPath path/to/project -buildLinuxHeadlessSimulation $OUTPUT_PATH
...as before, the environment variables are the parameters to this job; set them per your setup when executing the job.
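In batch mode the Editor generally exits nonzero when the build fails, so you may want to wrap the build command so the job fails fast with useful log output. A minimal sketch, reusing the variables above (the wrapper name and the log tail are our own illustration, not part of the product):
verify-unity-build.sh
# Run the build; the Editor exits nonzero on a failed batch-mode build
$BASE_UNITY_PATH/Unity -accept-apiupdate -quit -batchmode -nographics \
  -logFile $LOG_FILE -projectPath path/to/project -buildLinux64Player $OUTPUT_PATH
STATUS=$?
if [ $STATUS -ne 0 ]; then
  # Show the tail of the Editor log so the Jenkins console output has context
  tail -n 50 "$LOG_FILE" >&2
  exit $STATUS
fi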
Create the Run Unity Simulation job
Once the binary is built, it's time to launch/run it. Here's a sample script that you'd use to run the simulation:
run-unity-simulator.sh
$PATH_TO_SIMULATOR/$EXECUTABLE_NAME --[flag-key] [flag-value] &
Where:
- PATH_TO_SIMULATOR is the directory in which the binary created in the previous step is located
- EXECUTABLE_NAME is the actual file name of the binary, with the executable bit set.
- the optional flags (keys and values) are custom; we don't provide a script that handles such parameters, but such handling can be coded in C# in the Unity Editor and included in your project (a sketch of how you might wrap the launch is shown below). Let us know if you'd rather have us provide such a feature out of the box. Email us: unitysimulationpro@unity3d.com
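As a concrete illustration, a job script might launch the binary in the background (hence the trailing '&'), record its PID, and tear it down when the tests finish. The script name and the '--scenario' flag below are hypothetical:
run-simulator-with-cleanup.sh
# Launch the simulator in the background; '--scenario' is a made-up custom flag
$PATH_TO_SIMULATOR/$EXECUTABLE_NAME --scenario warehouse &
SIM_PID=$!
# ... run your tests against the simulator here ...
# Stop the simulator at the end of the job
kill $SIM_PID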
Create the container images for the ROS application stack
Assumption: you have your ROS stacks containerized and you are using Docker. If not, skip to Step 9.
In this step, we are going to build the Docker images using the Dockerfiles you created as part of your ROS stack.
build-docker-image-for-ros-stack.sh
export DOCKER_BUILDKIT=0
export COMPOSE_DOCKER_CLI_BUILD=0
# To be run from the base directory where the Dockerfile is located in the repo
docker build --platform linux/amd64 -t $PARAM_TAG_NAME .
Where:
- PARAM_TAG_NAME is whatever you want to tag this image as.
Again, this job is expected to clone the git repo that contains the codebase for this stack and execute the script above. The output is a Docker image that's available on the current node for use. If you need to create multiple images for different stacks, you can clone this job to create more.
Bonus tip: you could add a publish step to push the image to a Docker registry of your choice, so you can pull it down from another node; a sketch follows.
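A minimal sketch of such a publish step; PARAM_REGISTRY is our own placeholder for your registry host, and depending on the registry you may also need a 'docker login' step first:
push-docker-image.sh
# Re-tag the locally built image for the registry, then push it
docker tag $PARAM_TAG_NAME $PARAM_REGISTRY/$PARAM_TAG_NAME
docker push $PARAM_REGISTRY/$PARAM_TAG_NAME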
Create the Run ROS containers job
In this step, we'll create the job that launches the containers from the images we created in the previous step.
run-ros-stack-docker.sh
# From the base of the repo...copy necessary files for use by the container
# OPTIONAL: follow this only if you have a config.json or similar file that your scripts expect. Rename as needed.
cp config.json ../docker/config.json
# OPTIONAL: follow this only if you have a test_rigs folder with some test scripts. Rename as needed.
cp -r test_rigs ../docker
# Set the env vars
# used for the ROS-TCP endpoint connections
export TCP_PORT=10000
#
export DOCKER_PORT=5005
export CONTAINER_ID_NAME=$PARAM_CONTAINER_NAME
# Create output folder
mkdir -p $PARAM_DATA_OUTPUT_BASE_DIR/$BUILD_NUMBER
# Set more env vars
MOUNTS="$PARAM_DATA_OUTPUT_BASE_DIR/$BUILD_NUMBER"
TEMPS="$MOUNTS/temps"
WORKSPACE=catkin_ws
USER=rosdev
CONTAINER_NAME=$PARAM_CONTAINER_TAG_NAME
echo "Starting $CONTAINER_NAME. To attach #additional shells to this container use:"
# run the container
docker run \
  -v "$MOUNTS:/home/$USER/$WORKSPACE/src/external" \
  -v "$TEMPS/gazebo:/home/$USER/.gazebo" \
  -v "$TEMPS/cache:/home/$USER/.cache" \
  -v "$TEMPS/ros:/home/$USER/.ros" \
  -v "$TEMPS/history:/home/$USER/.history" \
  --rm -p $TCP_PORT:10000 -p $DOCKER_PORT:5005 \
  -e LIBGL_ALWAYS_INDIRECT=0 \
  --name $CONTAINER_ID_NAME $CONTAINER_NAME
Where:
- the ones with the PARAM_ prefix are configured as job parameters and passed in at execution time.
Note: this assumes the Unity simulator is running on the same node, on port 10000, for the ROS-TCP communication. If that's not the case, your scripts might need to take in the host IP of that node in addition to the port, as sketched below.
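For example, you could introduce a PARAM_SIM_HOST_IP job parameter (our own placeholder) and hand it to the container as an environment variable for your launch scripts to read. The volume mounts from the script above are omitted here for brevity:
# Hypothetical: pass the simulator node's address to the ROS stack inside the container
docker run --rm \
  -e SIM_HOST_IP=$PARAM_SIM_HOST_IP \
  -e SIM_TCP_PORT=$TCP_PORT \
  --name $CONTAINER_ID_NAME $CONTAINER_NAME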
For non-Docker users
If you are not using Docker, you can simply run your launch scripts for the ROS stack on any of the Jenkins nodes. Create a Jenkins job (one that executes a shell script) that clones the repo (as earlier) and launches the scripts in the order you prefer; a sketch follows.
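A minimal sketch of such a shell step, assuming a ROS 1 catkin workspace; the repo URL parameter, package name, and launch-file name are placeholders:
run-ros-stack-native.sh
# Clone the stack (Jenkins' SCM checkout step can also do this part for you)
git clone $PARAM_REPO_URL ros_stack
cd ros_stack
# Build and source the workspace
catkin_make
source devel/setup.bash
# Launch the stack in the background; package/launch names are placeholders
roslaunch my_robot_bringup sim_test.launch &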
Add a reporting job to capture/see the run results
If you want to clearly separate the data collection from the test runs, you could create a job that saves the output of the test runs to a common archive in a cloud storage location, which would be the central place to store all the results.
Of course, this will vary depending on your cloud provider, or on your data center if you run your own. Our recommendation is to use the cloud provider's toolkit to upload to their storage, following their best practices; for example, use a service account with the minimum set of permissions required to push the data from the nodes to the cloud buckets.
Example solution if using GCP:
gather-reports-upload-to-gcs.sh
export GOOGLE_APPLICATION_CREDENTIALS=$PARAM_PATH_TO_CREDENTIALS_FILE
gsutil -m cp -r $PARAM_ROS_OUTPUT_DIR/* $PARAM_GCS_URI
Where:
- PARAM_ prefixed entries are set at execution time to the right values.
- This assumes gsutil is installed on the nodes selected for this run.
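If you are on AWS instead, the equivalent upload with the AWS CLI might look like the following (assuming the CLI is installed and the node's credentials permit S3 writes; the bucket URI is a placeholder):
gather-reports-upload-to-s3.sh
aws s3 cp --recursive $PARAM_ROS_OUTPUT_DIR s3://your-results-bucket/$BUILD_NUMBER/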
Optionally, create a pipeline to string these jobs together (advanced topic); a rough alternative using the Jenkins REST API is sketched below.
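Short of a full Pipeline, a small driver job can queue the jobs via Jenkins' REST API. A rough sketch, assuming API-token authentication; the host and job names are placeholders, and note that each POST merely enqueues a build rather than waiting for it to finish (proper sequencing and fan-out are what the Pipeline plugin is for):
trigger-jobs-in-sequence.sh
JENKINS_URL=https://your-jenkins.example.com
AUTH="$JENKINS_USER:$JENKINS_API_TOKEN"
# Queue the build job, then the run job
curl -fsS -X POST --user "$AUTH" "$JENKINS_URL/job/build-unity-simulation/build"
curl -fsS -X POST --user "$AUTH" "$JENKINS_URL/job/run-unity-simulation/build"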
Optionally, add a notification to let the team know the test status (as a post-build action)
You may configure the run schedule based on your needs; some options:
- Based on a source code commit to your simulation project
- Based on a time schedule (daily/nightly)
- Based on a manual trigger event, for example using a webhook. (A sample nightly schedule is shown below.)
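For the time-based option, Jenkins' 'Build periodically' field takes a cron-style expression; for example, the following schedules a nightly run at some minute within the 2 AM hour (Jenkins' 'H' spreads load across jobs):
H 2 * * *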
For GPU-enabled nodes
If you can provision GPUs for your simulations, great! In this section, we'll quickly cover what it takes to set up the host node so that the Unity Simulation binary can take advantage of the additional compute.
On the host, you'll need to install the required drivers if you didn't use an NVIDIA GPU-ready VM image.
install-nvidia-drivers.sh
sudo apt install nvidia-driver-510
This applies only to nodes with NVIDIA GPUs attached. A restart of the VM is recommended after this installation. Once rebooted, run 'nvidia-smi' to verify that the installation was successful.
Next, if you are using our Linux Headless Simulation build (formerly known as the Cloud Rendering target), we also need to install the Vulkan API.
install-vulkan-tools.sh
# Run to get the library/api installed
sudo apt install vulkan-tools
# Run to verify the installation
vulkaninfo
Once these steps are complete, the Linux Headless Simulation (LHS) binary is ready to take advantage of the hardware in the system. A quick way to sanity-check GPU usage is sketched below.
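A quick, informal check that the binary is actually using the GPU, reusing the placeholder variables from the run job above:
check-gpu-usage.sh
# Start the headless simulation in the background
$PATH_TO_SIMULATOR/$EXECUTABLE_NAME &
SIM_PID=$!
sleep 10
# The simulation process should appear in nvidia-smi's process list if it is on the GPU
nvidia-smi
kill $SIM_PID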