Descriptor for noisy data #6
Comments
Hi gawela,

Yes, FPFH is not perfect and suffers on noisy 3D point clouds. Under severe noise, the general advice is to increase the radii for surface normal estimation and feature extraction. This is equivalent to increasing the histogram bin size in 3D space, which makes the descriptor more resilient to noise. Please keep the relative ratio between the two radii. Our approach works best if the correspondences contain more than 30% inliers.

What concerns me is the case you are dealing with. If my understanding is correct, it is a global localization problem: aligning a local map (query map) to the global map (base map). Please note that FPFH is a local descriptor. One drawback of many local 3D features, including FPFH, is limited distinctiveness: there may be many similar features in the base map, and they will produce inaccurate correspondences between the base map and the query map. This issue becomes more pronounced as the base map grows. I am not sure keypoint selection will help, as it is just one way of selecting local features.

Instead, I recommend starting with a toy example. Crop the base map to obtain a subset and try to match it against the query map. If they match well after tweaking the FPFH parameters, you are at a good starting point. If matching begins to fail as the base map grows, it implies you need to consider a scene-level descriptor rather than a local one. In my view, once a scene-level descriptor roughly localizes the position, the search space for local features in the base map can be limited to that area, and our approach can then be applied.

Hope this comment is helpful.

Thanks,
Hi Jaesik, thank you for the response. I am now testing volumetric grid features that do not rely on normal estimation and am seeing better performance so far. However, I have not yet tried limiting the search space; for now I am scaling the experiment down to simpler data (less noise, more overlap) as a first shot. Best,
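To illustrate why normal-free volumetric features can be more robust here: a descriptor built from raw point occupancy skips the normal-estimation step that noise corrupts. The sketch below is a hypothetical minimal occupancy-grid descriptor in NumPy, not gawela's actual feature; the radius and grid resolution are illustrative assumptions.

```python
import numpy as np

def occupancy_descriptor(points, center, radius=0.5, grid=4):
    """Hypothetical normal-free local descriptor: an L2-normalized,
    flattened occupancy histogram of the cube neighborhood around `center`."""
    local = points - center
    mask = np.all(np.abs(local) < radius, axis=1)
    # Map neighborhood coordinates into integer cells in [0, grid).
    idx = ((local[mask] + radius) / (2 * radius) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    hist = np.zeros((grid, grid, grid))
    np.add.at(hist, tuple(idx.T), 1.0)  # count points per voxel cell
    desc = hist.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

pts = np.random.rand(5000, 3)
d = occupancy_descriptor(pts, pts[0])
print(d.shape)  # (64,) for a 4x4x4 grid
```

Because it only counts points, its stability degrades gracefully with point jitter, whereas a covariance-based normal can flip direction entirely on thin or noisy surfaces.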
Hi Abel, Best,
Hello all,
First of all thank you very much for open sourcing your work!
I am having trouble getting an alignment between 2 pointclouds. I am trying to globally localize in a larger map (base map) based on a smaller excerpt (query map).
The data, however, have rather high noise between the base map and the query map. I have been using FPFH features as suggested in the README, but I receive either no final transformation or a wrong one. I believe the low performance I am experiencing is due to the descriptor choice / parametrization.
My question is: Do you have an intuition on how to parametrize the descriptors, or which descriptors to use, on very noisy data? For example, I believe the normal estimation is not very reliable on my data. Furthermore, does it make sense to select keypoints, or should the matching always be done densely?
Thank you very much for your help!