ros_sandbox

This repository can be used to generate custom test data in the form of "ROS bags". A ROS bag records the ROS topic data transmitted by a live ROS stack, accumulating the messages passed on any number of topics and saving them in a database. You can then play the bag back to reproduce the recorded data. More documentation can be found here.

This is extremely useful for testing specific nodes in our ROS stack -- we can create a ROS bag of the direct inputs to the node we want to test, which creates a very controlled testing environment. Add your own nodes to produce artificial test data!

Repository structure

This repository has a number of existing data publishers to use for creating ROS bags. These packages can be found in workspace/src/. Each of the packages has its own associated launch file that can be found in workspace/launch/. The existing packages so far are as follows:

  • image_lidar_roi_publisher: publishes an image, a LiDAR point cloud (specified by a .pkl file), and an array of ROIs (specified in YOLO format as a .txt file). This has been used to generate ROS bags for testing the sensor fusion node using the Pixset dataset.
  • image_roi_publisher: publishes an image and an array of ROIs (specified in YOLO format as a .txt file). At the time of writing this, it only publishes ROIs with one classification that is hard-coded in the node (more info in the example below). This has been used to generate ROS bags for testing the traffic light state node and the traffic sign classification node.
  • multi_image_publisher: publishes to as many image topics as the launch file specifies. This has been used to generate ROS bags for testing multi-image perception (i.e. to mimic the camera images coming from our sensor suite).
  • video_publisher: publishes a sequence of images to a single image topic that are read from a video. This has been used to generate ROS bags for testing the YOLO object detection node on videos taken at MCity.

These packages should cover a lot of use cases for generating test ROS bags. That said, it is very possible that your specific testing scenario will require some new functionality. Please create a new package for new use cases! That way, the old use cases remain supported (they were used at one point, so who's to say they won't be used again) while new testing cases keep being accommodated.

image_roi_publisher example

To test the traffic_light_state_determination node, artificial test data (in the form of a ROS bag) was created that consisted of an image topic and a bounding box array (a.k.a. Region of Interest, or ROI) topic -- the two inputs to the traffic_light_state_determination node. This was done using the image_roi_publisher node. This node is largely the same as the basic tutorial publisher node here.

Node walkthrough

The image topic and the bounding boxes topic each get their data from files in the data/traffic_lights directory of the image_roi_publisher package. The image is just a .jpg file, and the bounding boxes are stored in YOLOv5 label format in a .txt file. Three ROS parameters are used to tell image_roi_publisher where these files are.

self.declare_parameter('package_path', '')
self.declare_parameter('image_path', '')
self.declare_parameter('roi_path', '')

  • package_path is the absolute path to the package directory.
  • image_path is the path to the image file within the package directory.
  • roi_path is the path to the label file within the package directory.
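
If you just want to experiment, these parameters can also be set from the command line instead of through a launch file; a hypothetical invocation (the executable name is an assumption) would be:

ros2 run image_roi_publisher image_roi_publisher --ros-args \
    -p package_path:=/home/ros_sandbox/ros_sandbox/workspace/src/image_roi_publisher \
    -p image_path:=data/traffic_lights/sample_many.jpg \
    -p roi_path:=data/traffic_lights/sample_many.txt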

Inside the timer_callback (which fires every 0.5 seconds), the node loads the data from the image and label files if it hasn't already.

For the image:

if self.image is None:
    img_path = os.path.join(self.get_parameter('package_path').get_parameter_value().string_value,
                            self.get_parameter('image_path').get_parameter_value().string_value)
    print(img_path)
    # cv2.imread returns BGR, so convert to RGB to match the "rgb8" encoding used when publishing
    self.image = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)

For the bounding boxes:

if self.rois is None:
    roi_path = os.path.join(self.get_parameter('package_path').get_parameter_value().string_value,
                            self.get_parameter('roi_path').get_parameter_value().string_value)
    self.rois = np.loadtxt(roi_path)
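
For reference, each line of a YOLO-format label file is class cx cy w h, with the box center and size normalized to [0, 1] by the image dimensions. A label file with two traffic lights might look like this (values illustrative):

9 0.2150 0.3127 0.0231 0.0614
9 0.7030 0.2980 0.0198 0.0550

np.loadtxt then returns an (N, 5) array, so roi[1] through roi[4] in the loop below are the normalized center and size of each box. (Note that a file with a single label line would load as a 1-D array, which that loop would not handle.)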

The image and bounding boxes now have to be packaged into ROS messages before they can be published to ROS topics. For the image topic, we use the sensor_msgs Image, as this is what is used in the WAutoDrive ROS stack. For the bounding boxes topic, we use a custom ROS message called RoiArray. We copied over the wauto_perception_msgs package from WAutoDrive, which has all of the custom ROS messages used by the perception stack.

For the image message:

image_msg = self.bridge.cv2_to_imgmsg(self.image, encoding="rgb8")
image_msg.header.stamp = self.get_time_msg()
image_msg.header.frame_id = "sample traffic light image"
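
Here, get_time_msg is a small helper on the node that isn't shown in this walkthrough; presumably it just wraps the node clock, something like:

def get_time_msg(self):
    # Convert the node's current ROS clock time into a builtin_interfaces/Time message
    return self.get_clock().now().to_msg()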

For the bounding boxes message:

roi_array_msg = RoiArray()
for i, roi in enumerate(self.rois):
    # YOLO labels are normalized, so scale back up to pixel coordinates
    cx = roi[1] * self.image.shape[1]
    cy = roi[2] * self.image.shape[0]
    w = roi[3] * self.image.shape[1]
    h = roi[4] * self.image.shape[0]
    roi_msg = Roi()
    roi_msg.id = i
    roi_msg.classification.classification = ObjectClassification.WA_OBJECT_CLASSIFICATION_TRAFFIC_LIGHT
    roi_msg.bottom_left.x = cx - w/2
    roi_msg.bottom_left.y = cy + h/2
    roi_msg.top_right.x = cx + w/2
    roi_msg.top_right.y = cy - h/2
    roi_array_msg.rois.append(roi_msg)  # collect each ROI into the array (field name assumed)

Finally, we can publish these messages.

self.image_publisher.publish(image_msg)
self.roi_publisher.publish(roi_array_msg)
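
Putting these pieces together, the surrounding node presumably looks something like the sketch below. The class name, queue sizes, and publisher topic names (derived from the ros2 topic list output shown later, assuming the launch file sets a matching namespace) are assumptions rather than verbatim repository code:

import os
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from wauto_perception_msgs.msg import ObjectClassification, Roi, RoiArray

class ImageRoiPublisher(Node):
    def __init__(self):
        super().__init__('image_roi_publisher')
        self.declare_parameter('package_path', '')
        self.declare_parameter('image_path', '')
        self.declare_parameter('roi_path', '')
        self.image_publisher = self.create_publisher(Image, 'image_roi_publisher/output/image', 10)
        self.roi_publisher = self.create_publisher(RoiArray, 'image_roi_publisher/output/rois', 10)
        self.bridge = CvBridge()
        self.image = None  # loaded lazily in timer_callback
        self.rois = None   # loaded lazily in timer_callback
        self.timer = self.create_timer(0.5, self.timer_callback)  # fires every 0.5 seconds

    def timer_callback(self):
        # Load the data files, build the Image and RoiArray messages,
        # and publish them, as shown in the snippets above
        ...

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(ImageRoiPublisher())
    rclpy.shutdown()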

Launch file

A simple launch file was used to run the node executable. It can be found in workspace/launch/ and can be run with the following command (from the workspace directory):

ros2 launch launch/image_roi_publisher_launch.py

The most notable aspect of the launch file is the parameters section, where we tell the node where to look for the data files that we are going to publish.

parameters=[
    {"package_path": "/home/ros_sandbox/ros_sandbox/workspace/src/image_roi_publisher",
     "image_path": "data/traffic_lights/sample_many.jpg",
     "roi_path": "data/traffic_lights/sample_many.txt"}
],

Everything else is pretty standard, inspired by the basic launch file tutorial here.
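
For completeness, a minimal version of this launch file might look like the following sketch (the package and executable names are assumptions):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='image_roi_publisher',     # package name assumed
            executable='image_roi_publisher',  # executable name assumed
            parameters=[
                {"package_path": "/home/ros_sandbox/ros_sandbox/workspace/src/image_roi_publisher",
                 "image_path": "data/traffic_lights/sample_many.jpg",
                 "roi_path": "data/traffic_lights/sample_many.txt"}
            ],
        ),
    ])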

Recording and using ROS bags

Once the node has been launched (using the launch file), it will start publishing the information to ROS topics. We can record this data using ROS bags. In a separate terminal window inside the Docker container (i.e. use tmux to create a new tab and then use atk dev -a to attach to the running container), run the following command:

ros2 topic list

You should see a list of all the topics that are being published. These will include image_roi_publisher/image_roi_publisher/output/image and image_roi_publisher/image_roi_publisher/output/rois, which are the two topics that we are publishing from the image_roi_publisher node. To record a ROS bag of these topics, use the following command:

ros2 bag record --all

This will record all topics until you press Ctrl+C to stop it. It will save the data, along with a metadata.yaml file, in a folder named after the current date and time. Congrats, you just recorded a ROS bag!
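
If you would rather record only the two publisher topics (and pick the output folder name yourself), you can list them explicitly instead:

ros2 bag record -o traffic_light_bag \
    /image_roi_publisher/image_roi_publisher/output/image \
    /image_roi_publisher/image_roi_publisher/output/rois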

Now, we can transfer this folder over to the WAutoDrive ROS stack and play the ROS bag to replay the data that was captured:

ros2 bag play <folder_name> -l

The -l option tells it to loop indefinitely. When we do this, we should be able to list the ROS topics and see the topics that we captured in ros_sandbox. This data can then be used as input to the nodes we want to test by remapping the bagged topics onto the node's input topics in its launch file. For example, in traffic_light_state_determination: https://github.com/WisconsinAutonomous/WAutoDrive/blob/f7e803d08a13c75095ae7cb9cf59cfc619950276/workspace/src/common/launch/wauto_perception_launch/launch/traffic_light_state_determination.launch.py#L28-L30
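
In a launch file, such a remapping looks roughly like this sketch (the package, executable, and input topic names here are placeholders; see the linked launch file for the real ones):

Node(
    package='wauto_perception_launch',               # placeholder
    executable='traffic_light_state_determination',  # placeholder
    remappings=[
        # Feed the bagged topics into the node's input topics
        ('~/input/image', '/image_roi_publisher/image_roi_publisher/output/image'),
        ('~/input/rois', '/image_roi_publisher/image_roi_publisher/output/rois'),
    ],
)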