Thunderbots Project
These instructions assume that you have a GitHub account set up.
These instructions assume you have a basic understanding of Linux and the command-line. There are many great tutorials online, such as LinuxCommand. The most important things you'll need to know are how to move around the filesystem, and how to run programs or scripts.
We currently only support Linux, specifically Ubuntu.
If you have an x86_64 machine, we support Ubuntu 22.04 LTS and Ubuntu 24.04 LTS.
If you have an ARM64 (also known as AARCH64) machine, we support Ubuntu 24.04 LTS.
You are welcome to use a different version or distribution of Linux, but may need to make some tweaks in order for things to work.
You can use Ubuntu 22.04 LTS or Ubuntu 24.04 LTS inside Windows through Windows Subsystem for Linux, by following this guide. Running and developing Thunderbots on Windows is experimental and not officially supported.
- Install git: `sudo apt-get install git`
- Click the Fork button in the top-right to fork the repository (click here to learn about Forks)
- Clone your fork: `git clone git@github.com:<your_username>/Software.git`. The URL can be found under the Clone or Download button on the main page of the Software repository, under the SSH tab. (This should now be available after adding your SSH key to GitHub successfully.)
- You will have a remote named `origin` that points to your fork of the repository. Git will have set this up automatically when you cloned your fork in the previous step.
- Add a second remote, named `upstream`, that points to our main Software repository, which is where you created your fork from. (Note: this is not your fork.)
  - `cd path/to/the/repository/Software`
  - `git remote add upstream <the url>` (without the angle brackets)
  - For example: `git remote add upstream https://github.com/UBC-Thunderbots/Software.git`
- Check that everything is set up correctly by running `git remote -v` from your terminal (at the base of the repository folder again). You should see two entries: `origin` with the url for your fork of the repository, and `upstream` with the url for the main repository.

See our workflow for how to use git to make branches, submit Pull Requests, and track issues.
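As a concrete sketch of the remote setup described above (the `cd` path is a placeholder; use wherever you cloned your fork):

```shell
# Placeholder path: use wherever you cloned your fork.
cd path/to/the/repository/Software

# 'origin' (your fork) was set automatically by 'git clone'.
# Add 'upstream' pointing at the main Thunderbots repository:
git remote add upstream https://github.com/UBC-Thunderbots/Software.git

# Verify: should list 'origin' (your fork) and 'upstream' (main repo).
git remote -v
```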
We have several setup scripts to help you easily install the necessary dependencies to build and run our code. You will want to run the following scripts, which can all be found in Software/environment_setup.
- `cd path/to/the/repository/Software/environment_setup`
- `./setup_software.sh` (sets up the AI software)

For those who prefer working on C/C++ with an IDE, we provide two options: CLion for an integrated experience and VSCode for a more lightweight setup. Both support our build system, bazel.
CLion is the most full-featured IDE, with code completion, code navigation, and integrated building, testing, and debugging.
CLion is free for students, and you can use your UBC alumni email address to create a student account. If you already have a student account with JetBrains, you can skip this step.
- `cd path/to/the/repository/Software/environment_setup`
- `./install_clion.sh` (* DO NOT download CLion yourself unless you know what you're doing. The install_clion.sh script will grab the correct version of CLion and the Bazel plugin to ensure everything is compatible. *)

VSCode is the more lightweight IDE, with support for code navigation, code completion, and integrated building and testing. However, debugging isn't integrated into this IDE.
- `cd path/to/the/repository/Software/environment_setup`
- `./install_vscode.sh` (* DO NOT download VSCode yourself unless you know what you're doing. The install_vscode.sh script will grab the most stable version of VSCode. *)
- Launch `vscode`. You can type `vscode` in the terminal (append `&` to run it in the background), or click the icon on your Desktop.
- Click Open Folder and navigate to where you cloned Software. So if I cloned the repo to /home/my_username/Downloads/Software, I would select /home/my_username/Downloads/Software.
- Click Install when prompted; this installs the necessary plugins to work on the codebase (Bazel, C++, Python, etc.).
- Enable the Bazel: Enable Code Lens option.

When editing with Vim or NeoVim, it's helpful to use plugins such as COC or LSP for find-references, go-to-definition, autocompletion, and more. These tools require a compile_commands.json file, which can be generated by following these instructions:
- Symlink `src/external` to `bazel-out/../../../external`: run `ln -s bazel-out/../../../external .` from the `src` folder
- Generate the `compile_commands.json` file: `bazel run //:refresh_compile_commands`

All Bazel commands should be run from the `src` folder. For example:

- `bazel build //software/geom:angle_test`
- `bazel run //software/geom:angle_test`
- `bazel test //software/geom:angle_test`
- `bazel build //...`
- `bazel test //...`

See the Bazel command-line docs for more info. Note: the targets are defined in the BUILD files in our repo.
We have a ./tbots.py test runner script in the src folder that will fuzzy find for targets. For example,
- `./tbots.py build angletest`
- `./tbots.py run goalietactictest -t`
- `./tbots.py test goalietactictest -t`

where the `-t` flag indicates whether Thunderscope should be launched. Run `./tbots.py --help` for more info.
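The fuzzy lookup can be pictured roughly like this. This is an illustrative sketch only, NOT the actual tbots.py implementation, and the target list below is hypothetical; the real script resolves targets from the Bazel workspace:

```shell
# Illustrative sketch only - NOT the actual tbots.py implementation.
# Hypothetical target list; tbots.py resolves the real one from Bazel.
targets="//software/geom:angle_test
//software/geom:point_test
//software/ai/hl/stp/tactic/goalie:goalie_tactic_test"

query="angletest"
echo "$targets" | while read -r t; do
  # Normalize: keep only the target name and drop underscores.
  name=$(echo "$t" | sed 's/.*://; s/_//g')
  if [ "$name" = "$query" ]; then
    echo "match: $t"   # prints: match: //software/geom:angle_test
  fi
done
```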
First, we need to set up CLion:
- Select Import Bazel Project
- Set Workspace to wherever you cloned the repository, plus `/src`. So if I cloned the repo to /home/my_username/Downloads/Software, my workspace would be /home/my_username/Downloads/Software/src.
- Select Import project view file, and select the file `.bazelproject` (which will be under the src folder)
- Click Next
- Click Finish and you're good to go! Give CLion some time to find everything in your repo.

Now that you're set up, if you can run it on the command line, you can run it in CLion. There are two main ways of doing so.
- Open any BUILD file and right-click in a cc_library() call. This will give you the option to Run or Debug that specific target. Try it by opening Software/src/software/geom/BUILD and right-clicking on the cc_library for angle_test!
- Alternatively, select Add Configuration from the drop-down in the top-right of CLion:
  - Click +, and choose Bazel Command.
  - For Target Expression, you can put anything that comes after a build, run, test, etc. call on the command line. For example: //software/geom:angle_test.
  - For Bazel Command you can put any Bazel command, like build, run, test, etc.
  - Click Ok, then there should be a green arrow in the top-right corner by the drop-down menu. Click it and the test will run!

In VSCode, with the Code Lens option enabled:

- Open Software/src/software/geom/BUILD
- Above every cc_test, cc_library and cc_binary there should be a Test ..., Build ... or Run ... option for the respective target.
- Click Test //software/geom:angle_test to run the angle_test.

`./tbots.py run thunderscope_main --enable_autoref` will start Thunderscope with a Simulator, a blue FullSystem, a yellow FullSystem and a headless Autoref, enabled via the --enable_autoref flag. If you run thunderscope_main with the --enable_autoref --show_autoref_gui flags, an additional TigersAutoref window shows information about Tigers's filtered vision, obstacles that could result in rules violations, and ball speed history.

Navigate to localhost:8081 in your browser. Here, we should see the GameController page with two columns of buttons on the left: one representing commands for the Yellow team and one for the Blue team. We can control gameplay by issuing RefereeCommands.

- `./tbots.py run thunderscope_main [--run_blue | --run_yellow] [--run_diagnostics]` will start Thunderscope
- `[--run_blue | --run_yellow]` indicate which FullSystem to run
- `[--run_diagnostics]` indicates if diagnostics should be loaded as well

To find your network interface, cd into Software/src and run ifconfig. For example, on a sample machine, the output may look like this:
```
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    ... [omitted] ...

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    ... [omitted] ...
```
In this sample output, the WiFi interface is wlp3s0. If you are using an Ethernet connection, the interface will likely start with e-; if you are using a WiFi connection, the interface will likely start with w-.

- `./tbots.py run thunderscope_main --interface=[interface_here] --run_blue`
- `./tbots.py run thunderscope_main --interface=[interface_here] --run_yellow`
- `[interface_here]` corresponds to the ifconfig interfaces seen in the previous step. For example: `./tbots.py run thunderscope_main --interface=wlp3s0 --run_blue`. This will start Thunderscope and set up communication with robots over the wifi interface. It will also listen for referee and vision messages on the same interface.
- The `--interface=[interface_here]` argument is optional! You can run Thunderscope without it and use the dynamic configuration widget to set the interfaces for communication to send and receive robot, vision and referee messages.
- If you provide the `--interface=[interface_here]` argument, Thunderscope will listen for and send robot messages on this interface as well as receive vision and referee messages.
- If launched with `--run_blue` or `--run_yellow`, navigate to the "Parameters" widget. In "ai_config" > "ai_control_config" > "network_config", you can set the appropriate interface using the dropdowns for robot, vision and referee message communication.

To run Thunderscope with robot diagnostics:

- `./tbots.py run thunderscope_main [--run_blue | --run_yellow] --run_diagnostics --interface=[interface_here]` will start Thunderscope
- `[--run_blue | --run_yellow]` indicate which FullSystem to run
- `--run_diagnostics` indicates if diagnostics should be loaded as well
- `--interface=[interface_here]` corresponds to the ifconfig interfaces seen in the previous step. For example: `./tbots.py run thunderscope_main --interface=wlp3s0 --run_blue --run_diagnostics`
- The `--interface` flag is optional. If you do not include it, you can set the interface in the dynamic configuration widget.
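If you'd rather not read ifconfig output by eye, on Linux the interface names can also be listed directly from sysfs (the names shown are examples; yours will differ):

```shell
# Network interface names live under /sys/class/net on Linux.
ls /sys/class/net
# e.g. lo  enp0s31f6  wlp3s0  (WiFi interfaces usually start with 'w')
```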
See above for how to set the interface in the dynamic configuration widget.

- `./tbots.py run thunderscope --run_diagnostics --interface <network_interface>`

To run simulated tests:

- `./tbots.py run thunderscope_main --visualize_cpp_test`
- `./tbots.py test [some_target_here] --run_sim_in_realtime`
- `./tbots.py test [some_target_here] -t`
- `./tbots.py test [some_target_here]`

Debugging from the command line is certainly possible, but debugging in a full IDE is really nice (plz trust us).
Debugging in CLion is as simple as following the above instructions for building in CLion, but clicking the little green bug in the top-right corner instead of the little green arrow!
To debug from the command line, first you need to build your target with the debugging flag - bazel build -c dbg //some/target:here. When the target builds, you should see a path bazel-bin/<target>. Copy that path, and run gdb <path>. Please see here for a tutorial on how to use gdb if you're not familiar with it. Alternatively, you could do bazel run -c dbg --run_under="gdb" //some/target:here, which will run the target in gdb. While this is taken directly from the Bazel docs, gdb may sometimes hang when using --run_under, so building the target first with debugging flags and running afterwards is preferred.
Profiling is an optimization tool used to identify the time and space used by code, with a detailed breakdown to help identify areas of potential performance improvements. Unfortunately, profiling Bazel targets is not supported in CLion at this time, so profiling must be done via the command line.
Callgrind is a profiling tool that is part of the Valgrind suite, designed for analyzing program execution and performance with a focus on functional calls and cache usage. It is useful for determining specific functions in the code that may bottleneck performance.
This will output the file at the absolute path given via the --callgrind-out-file argument. This file can then be viewed using kcachegrind (example: kcachegrind /tmp/profile.callgrind), giving lots of useful information about where time is being spent in the code.
Callgrind generates its profile by tracking every single instruction executed by the code. This design adds significant overhead and significantly slows down the program. Callgrind is appropriate for getting a general sense of bottlenecks in the code, but it makes it difficult to track issues with blocking code and deadlocks.
Tracy is a lightweight, real-time profiler designed for understanding the performance of a system. It offers insights into CPU usage and memory allocations by adding Tracy's markup API.
To run Tracy:
- `./environment_setup/install_tracy.sh` (Tracy is very particular about its dependencies!)
- `./tbots.py run tracy`
- Run the binary with the `--tracy` flag. This requires Tracy markup symbols to be added to the code:
  - Thunderloop: `./tbots.py build thunderloop_main --tracy`
  - FullSystem: `./tbots.py run thunderscope_main --tracy`

Unlike Callgrind, we can run (and are encouraged to run) Tracy with the binary compiled with any and all compiler optimizations. It can provide a better understanding of the real-time performance of the code.
Warning: be warned, from the Tracy manual (16.10.2023):
The captured data is stored in RAM and only written to the disk when the capture finishes. This can result in memory exhaustion when you capture massive amounts of profile data or even in typical usage situations when the capture is performed over a long time. Therefore, the recommended usage pattern is to perform moderate instrumentation of the client code and limit capture time to the strict necessity.
Tracy also samples call stacks. If the profiled binary is run with root permissions, then Tracy can also inspect the kernel stack trace. By default, Thunderloop is run with root permissions. We can also profile unix_full_system with elevated permissions by following the on-screen instructions after running:
./tbots.py run thunderscope_main --tracy --sudo
To build for the robot computer, build the target with the --platforms=//cc_toolchain:robot flag and the toolchain will automatically build using the ARM toolchain. For example, bazel build --platforms=//cc_toolchain:robot //software/geom/....
We use Ansible to automatically update software running on the robot. More info here.
To update binaries on a working robot, you can run:
`bazel run //software/embedded/ansible:run_ansible --platforms=//cc_toolchain:robot --//software/embedded:host_platform=<platform> -- --playbook deploy_robot_software.yml --hosts <robot_ip> --ssh_pass <robot_password>`
where `<platform>` is the robot platform you are deploying to (PI or NANO), and `<robot_ip>` is the IP address of the robot you are deploying to. `<robot_password>` is the password used to log in to the robot user on the robot.
It is possible to run Thunderloop without having a fully-working robot. Using this mode is useful when testing features that don't require the power board or motors.
- Ensure `redis` is installed. Installation instructions can be found here. These installation directions will likely enable `redis-server` as a service that starts on boot; you may want to run `sudo systemctl disable redis-server` to prevent this.
- Start `redis-server` in a terminal.
- Set the required Redis keys:
  - `redis-cli set /robot_id "{robot_id}"` where `{robot_id}` is the robot's ID (e.g. 1, 2, etc.)
  - `redis-cli set /network_interface "{network_interface}"` where `{network_interface}` is one of the interfaces listed by `ip a`.
  - `redis-cli set /channel_id "{channel_id}"` where `{channel_id}` is the channel id of the robot (e.g. 1, 2, etc.)
  - `redis-cli set /kick_coeff "{kick_coeff}"` where `{kick_coeff}` is a calibrated kicking parameter. When running locally, this parameter doesn't matter so 0 is fine.
  - `redis-cli set /kick_constant "{kick_constant}"` where `{kick_constant}` is a calibrated kicking parameter. When running locally, this parameter doesn't matter so 0 is fine.
  - `redis-cli set /chip_pulse_width "{chip_pulse_width}"` where `{chip_pulse_width}` is a calibrated kicking parameter. When running locally, this parameter doesn't matter so 0 is fine.
- Run Thunderloop locally: `bazel run //software/embedded:thunderloop_main --//software/embedded:host_platform=LIMITED`

To run this limited Thunderloop on a robot instead:

- Build it for the robot platform: `bazel build //software/embedded:thunderloop_main --//software/embedded:host_platform=LIMITED --platforms=//cc_toolchain:robot`
- Find the `<robot_ip>` of the robot you want to run Thunderloop on. This guide may help you find the IP address of the robot: Useful Robot Commands.
- Copy the binary over: `scp bazel-bin/software/embedded/thunderloop_main robot@<robot_ip>:/home/robot/thunderloop_main`
- SSH into the robot: `ssh robot@<robot_ip>`
- Run it: `sudo ./thunderloop_main`

We try to keep our issue and project tracking fairly simple, to reduce the overhead associated with tracking all the information and to make it easier to follow. If you are unfamiliar with GitHub issues, this article gives a good overview.
We use issues to keep track of bugs in our system, and new features or enhancements we want to add. When creating a new issue, we have a simple "Task" template that can be filled out. We strongly recommend using the template since it provides guiding questions/headings to make sure we have all the necessary information in each issue.
It is very important to give lots of detail and context when creating an issue. It is best to pretend you are writing the issue for someone who has not worked on the relevant part of the system before, and to leave a good enough explanation that someone with very little prior knowledge could get started. Sometimes issues get worked on many months after they were created, and we don't want to forget exactly what we wanted to do and why.
In general if you find an issue with the system, first check with others on your team to make sure that this is indeed unintended behavior (you never know), and make sure that an issue has not already been created before you create a new one.
The same goes for feature requests. Just make sure that whatever you want to say doesn't already exist in an issue.
In general, we follow the Forking Workflow.
For each issue or project you are working on, you should have a separate branch. This helps keep work organized and separate.
Branches should always be created from the latest code on the master branch of our main Software repository. If you followed the steps in Installation and Setup, this will be upstream/master. Once this branch is created, you can push it to your fork and update it with commits until it is ready to merge.
- `cd path/to/the/repository/Software`
- Fetch the latest changes from `upstream` by running `git fetch upstream`
- Create a new branch from `upstream/master` by running `git checkout upstream/master`, then `git checkout -b your-branch-name`
- Name branches with the convention `your_name/branch_name` (all lowercase, words separated by underscores). The branch name should be short and descriptive of the work being done on the branch. Example: if you were working on a new navigation system using RRT and your name was "Bob", your branch name might look like: bob/new_rrt_navigator
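The steps above, with a hypothetical developer "Bob" working on an RRT navigator (placeholder path and branch name; `upstream` must already be configured as in Installation and Setup), can be sketched as:

```shell
cd path/to/the/repository/Software     # placeholder: wherever you cloned your fork
git fetch upstream                     # get the latest commits from the main repo
git checkout upstream/master           # start from the latest upstream code
git checkout -b bob/new_rrt_navigator  # your_name/branch_name convention
```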
Push your branch to your fork with `git push origin your_branch_name` (or `git push -u`).

Because we squash our commits when we merge Pull Requests, a new commit with a new hash will be created, containing the multiple commits from the PR branch. Because the hashes are different, git will not recognize that the squashed commit and the series of commits inside the squashed commit contain the same changes, which can result in conflicts.
For example, let's pretend you have branch A, which was originally branched from upstream/master. You make a few commits and open a Pull Request. While you're waiting for the Pull Request to be reviewed and merged, you create a new branch, branch B, from branch A to get a head start on a new feature. Eventually branch A gets merged into upstream/master. Now you want to pull the latest changes from upstream/master into branch B to make sure you have the latest code. git will treat the squashed commit that was merged from branch A's Pull Request as a new change that needs to be merged, since branch B will not have a commit with the same git hash. But branch B already has these changes because it was created from branch A! This will cause massive merge conflicts that are nearly impossible to resolve cleanly.
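You can see the hash divergence for yourself in a throwaway scratch repository (everything below runs in a temporary directory and touches nothing else):

```shell
# A throwaway demo of why squash merges create a brand-new commit hash.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
base=$(git symbolic-ref --short HEAD)        # 'master' or 'main', per your git

git checkout -qb branch-a                    # feature branch A
echo one >  f && git add f && git commit -qm "feature part 1"
echo two >> f && git commit -qam "feature part 2"

git checkout -q "$base"
git merge --squash -q branch-a               # squash-merge, like our PR merges
git commit -qm "feature (squashed)"

# The squashed commit's hash matches neither commit on branch-a, so git
# cannot tell that the base branch already contains branch-a's changes:
git log --oneline "$base"
git log --oneline branch-a
```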
tl;dr Always create new branches from upstream/master. Do not create branches from other feature branches.
We don't impose any rules for how you should be committing code, just keep the following general rules in mind:
As you are working on your code on your branch and making commits, you'll want to update your branch with the latest code on upstream/master to make sure you're working with the latest code. This is important in case someone else merged new code that affects the code you're working on.
To do this, you have two options: rebase or merge. What's the difference?
Merging is generally recommended, because it is easier to handle conflicts and get stuff working. To merge, simply run git pull upstream master.
Rebasing requires more knowledge of git and can cause crazy merge conflicts, so it isn't recommended. You can run `git pull --rebase upstream master` to rebase your branch onto the latest upstream/master.
If you do rebase or merge and get conflicts, you'll need to resolve them manually. See here for a quick tutorial on what conflicts are and how to resolve them. Feel free to do this in your IDE or with whatever tool you are most comfortable with. Updating your branch often helps keep conflicts to a minimum, and when they do appear they are usually smaller. Ask for help if you're really stuck!
We use clang-format to automatically format our code. Using an automatic tool helps keep things consistent across the codebase without developers having to change their personal style as they write. See the code style guide for more information on exactly what it does.
To format the code, from the Software directory run ./scripts/lint_and_format.sh.
We recommend running the formatting script and then committing all your changes, so that your commits can more easily pass CI.
Pull Requests give us a chance to run our automated tests and review the code before it gets merged. This helps us make sure our code on upstream/master always compiles and is as bug-free as possible.
The code-review process gives us a chance to ask questions or suggest improvements regarding a proposed change, so that the code is of the highest possible quality before being merged. It is also a good opportunity for others on the team to see what changes are being made, even if they are not involved in the project.
The Pull Request process usually looks like the following:
- Open a Pull Request to the UBC-Thunderbots/Software repository with branch master