In the first set of exercises, we will showcase a demo of running a simulated robot and controlling it using ROS 2. You will also learn the very basics of ROS 2 topics and transforms.
- Basic Concepts
- Launching the Andino robot in a Gazebo simulation
- ROS 2 Topics
- RViz
- TFs - The coordinate transforms
- Summary
ROS 2 (Robot Operating System 2) is an open-source framework for building robot software. It provides a set of tools, libraries, and conventions, including a middleware for internal communication. It is designed to support real-time performance and multi-robot systems.
Gazebo is a powerful open-source robotics simulator that allows developers to test and validate robot designs in complex environments, offering realistic physics and sensor models.
RViz is a 3D visualization tool for ROS that enables developers to visualize sensor data, robot models, and environment maps, aiding in debugging and monitoring robot behavior.
Andino is a fully open-source, low-cost educational robot developed by Ekumen. It uses ROS 2 to implement its functionality, and a fully functional Gazebo simulation of it is available.
If you haven't yet, follow the instructions in Exercises 0 - Setup to set up and launch an Andino Gazebo simulation with RViz.
Here is a quick summary of all the required steps for launching the simulation:
cd robotics_essentials_ros2/docker/
docker compose up -d
docker exec -it robotics_essentials_ros2 bash
ros2 launch andino_gz andino_gz.launch.py
Exercise 1:
Open the teleop panel and give commands to move the robot around. Try out all the teleoperation menus, and experiment with all the ways in which you can control Andino. On startup, the Gazebo simulation will most likely be paused. Make sure you first press the play button in the bottom left corner of the Gazebo UI to start the simulation, so that you can see the robot moving once you send teleoperation commands.
ROS 2 topics are a core communication mechanism in ROS 2 that enable data exchange in a publish/subscribe model. Publishers send messages to a named topic, while subscribers listen to that topic to receive relevant data.
By subscribing to a topic, you can read sensor data (lidar, camera) and other useful data (map, odometry) from your robot.
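If you want to do the same from code rather than from the command line, the rclpy client library provides the publish/subscribe API. Below is a minimal subscriber sketch, assuming the simulation publishes sensor_msgs/msg/LaserScan messages on the /scan topic (you can verify the exact type with the ros2 topic info command shown below); the node name scan_listener is just an example.

```python
# minimal_scan_subscriber.py -- a minimal sketch of a ROS 2 subscriber.
# Assumes the Andino simulation publishes sensor_msgs/msg/LaserScan on /scan.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanListener(Node):
    def __init__(self):
        super().__init__('scan_listener')
        # Subscribe to /scan; the callback fires for every received message.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # Report the range measured at the middle of the scan.
        middle = len(msg.ranges) // 2
        self.get_logger().info(f'Range at scan center: {msg.ranges[middle]:.2f} m')


def main():
    rclpy.init()
    node = ScanListener()
    rclpy.spin(node)  # Runs until interrupted with Ctrl+C.
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Running this inside the container with python3 prints one line per received scan, which is the programmatic counterpart of the ros2 topic echo /scan command below.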
Open a new terminal inside the Docker container and run the following commands (How to open terminal in Docker container):
- List all the available ROS 2 topics:
  ros2 topic list
- Read the sensor data from the laser scanner (press CTRL+C to stop after a while):
  ros2 topic echo /scan
- Get more info about the /scan topic to learn its message type:
  ros2 topic info /scan
- Move the robot by publishing to the /cmd_vel topic:
  ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.2}}"
- Send a zero-velocity command to stop the robot:
  ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.0}}"
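The publishing side looks similar in code. The following is a minimal sketch of a Python equivalent of the two ros2 topic pub commands above: assuming /cmd_vel expects geometry_msgs/msg/Twist (Exercise 2 below shows how to confirm this), it drives the robot forward briefly and then publishes a zero-velocity message to stop it.

```python
# minimal_cmd_vel_publisher.py -- a sketch of publishing velocity commands.
# Assumes /cmd_vel expects geometry_msgs/msg/Twist messages (see Exercise 2).
import time

import rclpy
from geometry_msgs.msg import Twist


def main():
    rclpy.init()
    node = rclpy.create_node('cmd_vel_publisher')
    publisher = node.create_publisher(Twist, '/cmd_vel', 10)

    # Drive forward at 0.2 m/s for roughly two seconds.
    forward = Twist()
    forward.linear.x = 0.2
    end_time = time.time() + 2.0
    while time.time() < end_time:
        publisher.publish(forward)
        time.sleep(0.1)

    # Publish a zero-velocity message so the robot stops.
    publisher.publish(Twist())

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```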
Exercise 2:
Publish a message to rotate the robot in place. First, check the message type of the /cmd_vel topic using the ros2 topic info command, and then check the possible message contents with ros2 interface show <msg_type>.
Solution:
- ros2 topic info /cmd_vel shows us that the message type is geometry_msgs/msg/Twist.
- ros2 interface show geometry_msgs/msg/Twist shows us that the message has an angular field of type Vector3, whose z component makes the robot rotate.
- We can rotate the robot with the command:
  ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{angular: {z: 0.5}}"
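If you prefer to inspect message definitions from Python rather than with ros2 interface show, the generated message classes expose their own structure. A small sketch, run with python3 inside the container where the ROS 2 Python packages are installed:

```python
# Inspect the structure of geometry_msgs/msg/Twist from Python.
from geometry_msgs.msg import Twist

# Maps each field name to its type,
# e.g. {'linear': 'geometry_msgs/Vector3', 'angular': 'geometry_msgs/Vector3'}.
print(Twist.get_fields_and_field_types())

# A default-constructed message shows the nested linear/angular fields, all zero.
print(Twist())
```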
RViz is a useful visualization tool that allows us to display data from ROS 2 topics. The following examples show you how to do that.
Our robot is constantly publishing images from the simulated camera. Let's see what those images look like!
- Click the "Add" button in the bottom left corner of RViz.
- Choose to create the visualization "By topic".
- Choose Camera under the /image_raw topic and press OK.
Tip: Set overlay alpha to 1 to hide the artifacts on top of the image:
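The camera images you just visualized in RViz can also be read programmatically. The sketch below subscribes to the raw image stream and prints its resolution; it assumes the camera publishes sensor_msgs/msg/Image on /image_raw, which you can confirm with ros2 topic info /image_raw.

```python
# minimal_image_subscriber.py -- a sketch of reading the simulated camera.
# Assumes the camera publishes sensor_msgs/msg/Image on /image_raw.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class ImageListener(Node):
    def __init__(self):
        super().__init__('image_listener')
        self.create_subscription(Image, '/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        # Each message carries the raw pixel buffer plus its dimensions and encoding.
        self.get_logger().info(
            f'Received a {msg.width}x{msg.height} image (encoding: {msg.encoding})')


def main():
    rclpy.init()
    node = ImageListener()
    rclpy.spin(node)  # Runs until interrupted with Ctrl+C.
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

To turn the raw pixel buffer into an OpenCV image for actual processing, you would typically convert it with the cv_bridge package.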
In ROS 2, transforms describe the spatial relationships between the different coordinate frames in a robotic system. You can think of TFs as coordinate frames placed at the most important locations on your robot and in its environment: at the center of the robot, at the center of each sensor, at the joints, and also at fixed reference points such as the "Map" and "Odometry" frames. Transforms allow you to convert positions and orientations from one frame to another, and essentially keep track of how each part of your robot moves in relation to the other parts. This is crucial for tasks like navigation, sensor fusion, and manipulation.
The main component for handling transforms in ROS 2 is the tf2 library. It provides:
- Coordinate Frames: Each sensor or part of a robot has its own coordinate frame (e.g., the robot's base, sensors, end effectors).
- Transformations: These include translations (movement along axes) and rotations (changes in orientation) between frames.
By using transforms, robots can effectively understand their position in the world and how their sensors and motors are located in relation to their body.
The relationship between these coordinate frames is described by the tf-tree: a tree-like structure that tells what each child frame's position is in relation to its parent frame.
Image source: wiki.ros.org
map
The map frame provides a global reference point for the robot's environment, allowing it to understand its position within a larger context. Typically, the coordinates in the map frame represent the robot's coordinates on a 2D map.
odom
The odom frame represents the robot's position based on its odometry data. It tracks the robot's movement from its starting point, but is subject to drift and inaccuracies over time.
base_footprint
The base_footprint frame is a 2D representation of the robot's footprint on the ground, typically used for planning and movement calculations without considering the robot's height.
base_link
The base_link frame represents the robot's main body and is used as a reference for other components, such as sensors and arms.
laser_link
The laser_link frame denotes the position of a laser sensor on the robot. It is essential for interpreting the data collected by the laser for tasks like mapping and obstacle detection, providing a reference for where the sensor is located in relation to other frames.
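These relationships can also be queried from code through the tf2_ros Python bindings, which provide a transform buffer and listener. The sketch below looks up the latest pose of base_footprint expressed in the odom frame; the frame names match the Andino frames described above, but treat this as an illustrative sketch rather than part of the exercise.

```python
# minimal_tf_lookup.py -- a sketch of querying a transform with tf2_ros.
# Assumes the simulation broadcasts an odom -> base_footprint transform.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import TransformException
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class TfLookup(Node):
    def __init__(self):
        super().__init__('tf_lookup')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        # Try the lookup once per second.
        self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            # Latest transform of base_footprint expressed in the odom frame.
            t = self.tf_buffer.lookup_transform('odom', 'base_footprint', Time())
        except TransformException as ex:
            self.get_logger().warn(f'Transform not yet available: {ex}')
            return
        p = t.transform.translation
        self.get_logger().info(f'base_footprint in odom: x={p.x:.2f}, y={p.y:.2f}')


def main():
    rclpy.init()
    rclpy.spin(TfLookup())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Drive the robot around while this runs and the reported x and y values will change, mirroring what you will see in the Fixed Frame steps below.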
When working with RViz, you need to set the "Fixed Frame" to determine from which frame's perspective you are visualizing the data. This is an important feature to know about, as the data you want to visualize might not show up if the wrong frame is selected.
- On Andino, the default frame is set to "base_footprint". This means that the RViz coordinate origin (0, 0) is at the robot's footprint. Move the robot around with teleoperation in Gazebo. You will see that the robot always stays at the center of the grid that RViz visualizes.
- Change the "Fixed Frame" under "Global Options" from "base_footprint" to "odom" to use odometry as the reference frame instead of the robot's base_footprint frame.
- Drive the robot around. You will now see the robot moving in relation to the "odom" frame.
Sometimes it might be useful to check the robot's tf-tree for debugging purposes. You can do this by opening the "Tree" option under the TF menu.
Tip: To ensure the odom frame appears correctly at the top of the tree, you may need to press the reset button on the bottom left of RViz. The odom frame tracks the robot's movement in the environment, making it the logical parent frame for accurately tracking the motion of all other frames.
By the end of these exercises, you have learned:
- What ROS 2, Gazebo, and RViz are
- How to launch the Andino simulation and control the robot from Gazebo
- What ROS 2 topics are
- How to publish to a topic
- How to subscribe to a topic
- What tf-frames are
Next exercises: Exercises 2: SLAM and Navigation Demo