merge or differentiate ProfilingSonar and ImagingSonar #2
Profiling sonars measure range. They can be multibeam (several ranges at a time) or mechanically scanning sonars (i.e., they mechanically rotate and provide a range at each angle). Imaging sonars provide an image, meaning that for each beam they return hundreds of measurements.
@narcispr, yeah, this can be difficult because some sonars even provide phase information (e.g. each 'cell' is a complex number). I think writing a message that is generic "enough" in ROS is always going to be hard, given how the ROS msg IDL is limited to very simple patterns (only composition, no sense of inheritance). Limiting the options to ranging vs. imaging systems is a pragmatic approach. (NB that in some ways the profiling sonar is a down-sampling of the imaging sonar, e.g. a MB return looks like an image but can be processed into (range, intensity) tuples vs. beam angle.)
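The down-sampling mentioned above (reducing an imaging-style beam to a profiling-style range) can be sketched in a few lines. This is a minimal illustration, not any existing driver's code; the function name and parameters are hypothetical, and it assumes each beam is an array of intensities at fixed range increments:

```python
def beam_to_range(intensities, range_increment, range_min=0.0, threshold=0.0):
    """Reduce an imaging-style beam (one intensity per range bin) to a
    single profiling-style (range, intensity) tuple by picking the
    strongest return.  Returns (None, None) if nothing exceeds the threshold."""
    peak = max(range(len(intensities)), key=lambda i: intensities[i])
    if intensities[peak] <= threshold:
        return None, None
    return range_min + peak * range_increment, intensities[peak]

# Synthetic beam: strongest echo in bin 3, with 0.5 m per range bin
rng, amp = beam_to_range([0.1, 0.2, 0.3, 5.0, 0.4], range_increment=0.5)
```

Real systems would use proper bottom/target detection rather than a plain argmax, but the point stands: the imaging representation is a superset of the profiling one.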
The suggested message type with unified field names can be split back into two by removing line 32 for the profiling sonar. I believe all other information remains valuable for both cases. However, I don't see an advantage in the split. While more complex sonar data exists, in my experience single- or multi-beam intensity data [0] is common enough to warrant being considered in the generic message. My concern with splitting this into two message types is that packages using only the profiling data would likely also use only the profiling message. This means a typical sensor driver for an intensities-capable sonar would have to publish both, leading to a lot of information duplication. [0] What I call multi-beam intensity data seems to be what @narcispr refers to as imaging. As far as I can tell, his distinction does not seem to consider single-beam sonars (scanning or fixed) that can provide both ranges and intensities, or devices that would be called 'imaging sonars' but also provide a device-internal range detection.
How much detail are we looking to get into? Some multibeams (Kongsberg EM series, Reson 7125) make it possible to get per-beam TX time, RX time, and steering angles. This can be especially important with deep-water multibeam from the surface, as the multibeam can roll/pitch quite a lot in the two seconds it takes to make the round trip to 1500 m. Multi-sector systems also have a time offset between transmitting different beams. Also, virtually all of these higher-end multibeams support at least equidistant or equiangular beam spacing. Does the broader community have a sense of how important it is for the standard multibeam message to support raytracing a new sound velocity profile?
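For context on why travel time (rather than range) matters here: the shallow-water shortcut is a straight-ray conversion at an assumed constant sound speed, which is exactly what deep-water and survey-grade processing cannot do. A minimal sketch of that naive conversion, with hypothetical names and a constant sound speed assumption:

```python
import math

SOUND_SPEED = 1500.0  # m/s, assumed constant; survey-grade processing
                      # raytraces a measured sound velocity profile instead

def twoway_time_to_range(t, c=SOUND_SPEED):
    """Straight-ray conversion of two-way travel time [s] to slant range [m]."""
    return c * t / 2.0

def beam_point(t, steering_angle, c=SOUND_SPEED):
    """Project one beam into the instrument's (across-track y, down z) plane.
    steering_angle is in radians, 0 = nadir (straight down)."""
    r = twoway_time_to_range(t, c)
    return r * math.sin(steering_angle), r * math.cos(steering_angle)

r = twoway_time_to_range(2.0)  # 2 s round trip -> 1500 m slant range
y, z = beam_point(2.0, 0.0)    # nadir beam
```

Publishing travel time plus steering angles leaves the choice between this shortcut and a full raytrace to the consumer, which is the argument for keeping them in the message.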
It sounds like the SonarBeam msg could be enhanced with more optional fields, giving the details for individual beams that @ivaughn suggested, which can be left empty if unused (did I miss any?):
A further suggestion for a field in the main message is
With all the extra optional fields, the message becomes a bit overwhelming for new users; however, I am unsure where to draw the line for a simpler message. A possible split would be removing the fields
Rather than differentiating ranging and intensity data, this approach differentiates single and multiple beams. This principle could also be further extended by a third message type for sonars with beam positions rotated around two axes (though I don't know enough about those to know if this would indeed work). Any other split suggestions, arguments for/against splitting, or extra field suggestions?
Can I ask what our goal is for multibeam data? In particular, what's the role for a sonar message vs. a more general ROS pointcloud message? As a rule, multibeam data is stored in one of three formats:

1. Beamformed RX and TX angles + travel time (usually two-way). This is generally not converted to a range because in deep water, or when trying to meet International Hydrographic Office standards, the conversion involves solving a PDE rather than simply dividing by sound velocity. This format is of interest here for two reasons: first, it's what survey-grade multibeams spit out, and second, there are some folks trying to do IHO-style processing. I just can't get them to chime in (C'mon Val!). Reson's s7k datagrams don't actually include a range.

2. Processed point data is typically stored in a pointcloud, admittedly in a pretty confusing coordinate frame (usually local-level but with heading + lat/lon defined in the body frame). Without a compelling reason to do otherwise, I'd suggest we use pointclouds for multibeam data. With a sensible choice of instrument coordinate frame, this becomes a decent way to handle low-end planar multibeams. Note that we have to use full XYZ pointclouds because the points from yaw-compensating multibeams are not coplanar. For details on how this has traditionally been handled, check out Leidos' Generic Sensor Format (GSF -- see http://www3.mbari.org/products/mbsystem/formatdoc/gsf_lib.pdf) and MB-System's native FBT format (http://www3.mbari.org/products/mbsystem/batheditintegration/index.html). Conventional multibeam processing almost always uses indexed pointclouds: if your multibeam has 256 beams, every datagram should have 256 beams. Users, filtering algorithms, and the sonar itself can then flag beams as invalid.

3. Gridded data. This should ALSO be a thing, but will vary a fair bit based on whether people are using octomap, conventional grids, etc. Another topic for another issue.
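The indexed-pointcloud convention described in (2) can be sketched quickly: every datagram carries a slot for every beam, and invalid beams are flagged rather than dropped. This is a toy illustration with hypothetical names, using NaN coordinates as the invalidity flag:

```python
import math

NUM_BEAMS = 256  # a 256-beam sonar emits 256 slots in every datagram

def make_datagram(points, valid):
    """Fixed-size, indexed beam list: invalid beams keep their slot but are
    emitted as NaN so users and filters can flag them without reindexing."""
    assert len(points) == len(valid) == NUM_BEAMS
    nan_point = (math.nan, math.nan, math.nan)
    return [p if ok else nan_point for p, ok in zip(points, valid)]

points = [(float(i), 0.0, 10.0) for i in range(NUM_BEAMS)]
valid = [i != 5 for i in range(NUM_BEAMS)]  # beam 5 flagged invalid
cloud = make_datagram(points, valid)
```

Keeping the index stable is what lets a later filtering pass (or the sonar's own quality flags) refer to "beam 5" consistently across pings.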
Interesting! The proposed message is based on the existing suggestions (which in turn both seem to be based on LaserScan) and my (limited) experience using multibeam data for obstacle avoidance and mapping. It sounds like we need more message options for multibeam data, depending on the further processing and use of the data. It seems to me that the proposed message is useful, but cannot cover all use cases. I would suggest opening a new issue discussing further multibeam sonar messages (#9), and focusing this issue back on the suggested message, how/if it should be split, and whether it is missing any information. When to use existing message types and when to define our own is a general question that we will likely run into repeatedly. I added some general notes to #4.
(Apologies if this is not the right issue/place for this comment.) @ivaughn IMO the multibeam sonar message should be closer to what you're describing in (1): data containing the intensity over time/range for each beam, as this would not prevent the user from running whatever processing is needed on the raw data. Additionally, a ROS driver for the multibeam could publish a separate range/point-cloud message in parallel (something similar to LaserScan). Regarding your point on reference frames, and maybe overlapping with #7, I would go in the same direction as @smaria's proposed message specification, where a multibeam message contains an array of single-beam Sonar messages, each carrying its own header (and reference frame). I would also suggest changing SonarBeam to be more "self-contained", so that it can be used with single-beam sonars (e.g. depth sounders or mechanically scanned profiling sonars), as well as accommodating beams that are not coplanar and/or equally spaced.
Another advantage of moving some of the complexity to this single-beam message would be that you would potentially no longer need separate messages for profiling and imaging multibeams.
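A self-contained single-beam message along these lines might look like the following sketch. This is purely illustrative: the field names are hypothetical and this is not the actual SonarBeam.msg from the repository.

```
# Hypothetical single-beam sonar message sketch (not the repository's file)
std_msgs/Header header    # per-beam stamp and frame, so beams need not be coplanar
float32 azimuth           # beam direction in the header frame [rad]
float32 elevation         # [rad]
float32 range_min         # range of the first intensity bin [m]
float32 range_increment   # distance between consecutive intensity bins [m]
float32[] intensities     # empty for range-only (profiling) beams
float32 range             # device-internal range detection, NaN if unavailable
```

With the geometry and timing carried per beam, a multibeam message reduces to an array of these, covering profiling, imaging, and non-coplanar cases with one type.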
I struggle to identify the difference between the two sonar messages. Both seem to be based on the LaserScan message from sensor_msgs.
Extra fields in ProfilingSonar:
Extra fields in Imaging Sonar:
It appears the two are slightly different implementations of the same concept:
range_resolution <-> range_increment
intensities <-> beams
With my current knowledge of sonar sensing I cannot clearly assign a sonar device to either of these messages, and I know of no argument for why both are needed. As a merge of the two I would suggest https://github.com/smaria/marine_ros_conventions_discussion/blob/master/sensor_msgs/SonarScan.msg