
Using Live PCL clouds? #7

Open

antithing opened this issue Oct 13, 2017 · 9 comments

Comments

@antithing

Hi, and thank you for making this code available. I am trying to integrate it into my GICP application to compare speed/accuracy, but I am having trouble.

With live cloud input, what is the workflow?

Read clouds.
Extract features.

This gives me my original cloud pcl::PointCloud<pcl::PointXYZI> and my features pcl::PointCloud<pcl::FPFHSignature33>.
I do this for both clouds. What do I need to do to pass this data into your lib?

thank you!

@syncle
Collaborator

syncle commented Oct 14, 2017

Hi. What you need to do is write an interface function that replaces the 'ReadFeature' function. The function should copy your data into

typedef vector<Vector3f> Points;
typedef vector<VectorXf> Feature;

The FGR application just requires vectors of xyz points and FPFH features. You need to stack Points and Feature into

vector<Points> pointcloud_;
vector<Feature> features_;

Push the source point cloud first and the target point cloud second.

Overall, you need to call the following:

	CApp app;
	app.YOUR_INTERFACE_FUNCTION;
	app.NormalizePoints();
	app.AdvancedMatching();
	app.OptimizePairwise(true, ITERATION_NUMBER);

and use Eigen::Matrix4f CApp::GetTrans() to retrieve the transformation. Note that the transformation matrix you get transforms the target point cloud into the source point cloud's domain. Apply the inverse operation to the matrix depending on your needs.
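For example, a minimal sketch of retrieving the result and flipping its direction (assuming the call sequence above has already run):

	// GetTrans() maps the target cloud into the source frame;
	// invert it if you need source -> target instead.
	Eigen::Matrix4f target_to_source = app.GetTrans();
	Eigen::Matrix4f source_to_target = target_to_source.inverse();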

BTW, this answer reflects the current version; the code structure may change in the future.

@antithing
Author

antithing commented Oct 15, 2017

Thank you! I am trying the following:
//cloudQuery is the XYZ point cloud
//fpfhs_src is the pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfhs_src(new pcl::PointCloud<pcl::FPFHSignature33>)

	typedef vector<Vector3f> Points;
	typedef vector<VectorXf> Feature;

	//cloud1
	Points pts;
	Feature feat;
	for (int v = 0; v < cloudQuery->size(); v++) {                       
		const pcl::PointXYZ &pt = cloudQuery->points[v];
		float xyz[3] = { pt.x, pt.y, pt.z };

		//push back pnt	
		pts.push_back(Vector3f(pt.x, pt.y, pt.z));

		const pcl::FPFHSignature33 feature = fpfhs_src->points[v];

		// push back feature
	
		float* hist = new float[33];
		for (int i = 0; i < 33; i++)
		{
			hist[i] = feature.histogram[i];
		}
		
		Map<VectorXf> feat_v(hist, 1);		
		feat.push_back(feat_v);
	}
	

	//cloud2
	Points pts2;
	Feature feat2;
	for (int v = 0; v < cloudRef->size(); v++) {
		const pcl::PointXYZ &pt = cloudRef->points[v];
		float xyz[3] = { pt.x, pt.y, pt.z };

		//push back pnt	
		pts2.push_back(Vector3f(pt.x, pt.y, pt.z));

		const pcl::FPFHSignature33 feature = fpfhs_tgt->points[v];

		// push back feature

		float* hist = new float[33];
		for (int i = 0; i < 33; i++)
		{
			hist[i] = feature.histogram[i];
		}

		Map<VectorXf> feat_v(hist, 1);
		feat2.push_back(feat_v);
	}



	CApp app;

	app.LoadFeature(pts, feat);
	app.LoadFeature(pts2, feat2);

	app.NormalizePoints();
	app.AdvancedMatching();
	app.OptimizePairwise(true, ITERATION_NUMBER);

Using the following LoadFeature function:

void CApp::LoadFeature(const Points& pts, const Feature& feat)
{
    pointcloud_.push_back(pts);
    features_.push_back(feat);
}

But I get a crash. The output is:

scale: 0.841799
normalize points :: mean[0] = [-2.759973 1.376408 21.239971]
normalize points :: mean[1] = [-2.774145 1.364353 21.240294]
normalize points :: global scale : 21.488535
Advanced matching : [0 - 1]

Then the program crashes. If you have a chance, could you help me out here? Thank you again.

@syncle
Collaborator

syncle commented Oct 15, 2017

Based on the result you got, I think your xyz vectors are set correctly, but you may need to check your feature vectors. The erroneous lines seem to be

		float* hist = new float[33];
		for (int i = 0; i < 33; i++)
		{
			hist[i] = feature.histogram[i];
		}

		Map<VectorXf> feat_v(hist, 1);
		feat.push_back(feat_v);

as the VectorXf pushed into feat will not carry any information about the feature dimension (the Map is constructed with size 1, not 33), and the new float[33] buffer is never freed.

How about this?

		VectorXf feat_v(33);
		for (int i = 0; i < 33; i++)
			feat_v[i] = feature.histogram[i];
		feat.push_back(feat_v);
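For reference, the whole conversion loop with that fix spliced in would look something like this (a sketch using the variable names from your snippet; it also drops the leaked heap buffer):

	Points pts;
	Feature feat;
	for (size_t v = 0; v < cloudQuery->size(); v++) {
		const pcl::PointXYZ &pt = cloudQuery->points[v];
		pts.push_back(Vector3f(pt.x, pt.y, pt.z));

		const pcl::FPFHSignature33 &feature = fpfhs_src->points[v];
		VectorXf feat_v(33);
		for (int i = 0; i < 33; i++)
			feat_v[i] = feature.histogram[i];
		feat.push_back(feat_v);
	}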

@antithing
Author

Thank you! This is now working, but in my test application I am seeing a result that is slower and slightly less accurate than the PCL GICP implementation. I am loading a cloud, transforming it manually to get a ground truth, then running both algorithms. Please see the results below. Are you able to give me any pointers to improve the result? Thank you again for your help.

Real Transformation:
 0.975528 -0.154509  0.156434       0.1
 0.178679    0.9717 -0.154509      0.05
-0.128135  0.178679  0.975528      0.32
        0         0         0         1

scale: 0.839005
normalize points :: mean[0] = [-3.030331 1.432236 21.487417]
normalize points :: mean[1] = [0.283906 -2.419738 21.925747]
normalize points :: global scale : 21.747261
Advanced matching : [0 - 1]
points are remained : 5904
        [cross check] points are remained : 839
        [tuple constraint] 1000 tuples (83900 trial, 13104 actual).
        [final] matches 3000.

Pairwise rigid pose optimization
  0.975858   0.177786  -0.126857 -0.0912495
 -0.153879   0.971867   0.178315 -0.0789199
   0.15499  -0.154489   0.975762  -0.327972
         0          0          0          1

Fast global took:  530 ms

gicp:  
0.975527 -0.154508  0.156446 0.0997643
 0.178679  0.971701 -0.154501  0.049852
-0.128148  0.178673  0.975528   0.31975
        0         0         0         1

gicp took 250ms.

@syncle
Collaborator

syncle commented Oct 15, 2017

I think you can tune the parameters for point cloud downsampling and feature extraction. We recommend a 1:2:5 ratio for the downsampling voxel size, the normal estimation radius, and the feature estimation radius: for example, a 0.05 voxel size with a 0.1 normal radius and a 0.25 feature radius.

For performance, you can reduce the point clouds by downsampling a bit more. You can also reduce TUPLE_MAX_CNT to 300, as that step takes some time.

And don't judge by a single example. This example looks very easy to me, almost an identity transformation. Our method is global: you would get a stable result regardless of the initial pose.

@antithing
Author

That ratio has improved the results, but no matter what the cloud resolution, it runs half as fast as GICP. Is there anything else I can look at to speed it up? Thanks again!

@syncle
Collaborator

syncle commented Oct 15, 2017

The question is not entirely clear to me, as I don't have any information about your GICP application, but I can give you the following comments:

  • Note that our code is single-threaded. The bottleneck here would be feature matching. You can parallelize it using OpenMP and could get roughly a 10x speedup if your machine has 10 physical cores; see the sketch after this list.
  • You can reduce ITERATION_NUMBER by half, but this will sacrifice a bit of accuracy.
  • Try profiling your code and your interface function. You may find another bottleneck.
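A minimal sketch (not the FGR source) of what an OpenMP-parallelized brute-force matching loop looks like; each iteration writes only its own slot of corres, so no locking is needed:

	#include <Eigen/Dense>
	#include <vector>
	#include <limits>

	std::vector<int> MatchFeatures(const std::vector<Eigen::VectorXf>& src,
	                               const std::vector<Eigen::VectorXf>& dst)
	{
		std::vector<int> corres(src.size(), -1);
		// Compile with -fopenmp; each thread writes a disjoint corres[i].
		#pragma omp parallel for
		for (int i = 0; i < (int)src.size(); i++) {
			float best = std::numeric_limits<float>::max();
			for (int j = 0; j < (int)dst.size(); j++) {
				float d = (src[i] - dst[j]).squaredNorm();
				if (d < best) { best = d; corres[i] = j; }
			}
		}
		return corres;
	}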

@antithing
Author

Thank you again! Using OpenMP has sped it up nicely. I am playing with the settings for normal and feature estimation, but I just cannot seem to get a great result. Sorry to ask for even more of your time, but could you take a look for me if you have a minute? The relevant code is:

//vars

	float filterRadius = 0.6;
	float NormalsRadius = filterRadius * 2;
	float FeatureRadius = filterRadius * 5;

//filtering

	pcl::VoxelGrid<pcl::PointXYZ> sor;
	sor.setInputCloud(cloudQueryRaw);
	sor.setLeafSize(filterRadius ,filterRadius ,filterRadius );
	sor.filter(*cloudQuery);

//normal estimation

	pcl::gpu::Feature::PointCloud cloud_d(src->width * src->height);
	cloud_d.upload(src->points);
	pcl::gpu::NormalEstimation ne_d;
	ne_d.setInputCloud(cloud_d);
	ne_d.setViewPoint(0, 0, 0);
	ne_d.setRadiusSearch(NormalsRadius, 4); //radius, max results
	pcl::gpu::Feature::Normals normals_d(src->width * src->height);
	ne_d.compute(normals_d); //compute normals on the GPU

//feature estimation

	pcl::gpu::FPFHEstimation fe_gpu;
	fe_gpu.setInputCloud(cloud_gpuSrc);
	fe_gpu.setInputNormals(normals_gpuSrc);
	fe_gpu.setRadiusSearch(FeatureRadius, 4); //radius, max elements
	pcl::gpu::DeviceArray2D<pcl::FPFHSignature33> fpfhs_d;
	fe_gpu.compute(fpfhs_d); //compute FPFH features on the GPU

and the result is:

Real Transformation:
 0.975528 -0.154509  0.156434       0.1
 0.178679    0.9717 -0.154509      0.05
-0.128135  0.178679  0.975528      0.32
        0         0         0         1
normalize points :: mean[0] = [-3.044734 1.417674 21.510321]
normalize points :: mean[1] = [0.275692 -2.440011 21.947401]
normalize points :: global scale : 21.770979
NormalizePoints:  1 ms
Advanced matching : [0 - 1]
points are remained : 4059
        [cross check] points are remained : 414
        [tuple constraint] 179 tuples (41400 trial, 41400 actual).
        [final] matches 537.
AdvancedMatching:  124 ms
Pairwise rigid pose optimization
OptimizePairwise:  46 ms
 0.973382  0.184938 -0.135372 0.0150535
-0.158216  0.969554  0.186906 -0.296592
 0.165816 -0.160513  0.973006 -0.283104
        0         0         0         1

Thank you yet again for your code, and your time.

@syncle
Collaborator

syncle commented Oct 16, 2017

Based on the log you provided, you are using point cloud scale normalization, but you are using too large a radius for downsampling. As we mentioned in README.md, you will need to tune this parameter, but keep the ratio for the other radii. I cannot say much more about this, but you can play with it. For example, FPFH is a bit sensitive to the radius, and you can visualize the correspondences to check how well your features work; see the sketch below.
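A rough sketch (not part of FGR) of eyeballing matches with pcl::visualization::PCLVisualizer, assuming corres is a hypothetical vector of (source index, target index) pairs from your matching step:

	// draw one line per correspondence between the two clouds
	pcl::visualization::PCLVisualizer vis("correspondences");
	vis.addPointCloud<pcl::PointXYZ>(cloudQuery, "src");
	vis.addPointCloud<pcl::PointXYZ>(cloudRef, "tgt");
	for (size_t k = 0; k < corres.size(); k++)
		vis.addLine(cloudQuery->points[corres[k].first],
		            cloudRef->points[corres[k].second],
		            "line" + std::to_string(k));
	vis.spin();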

In addition, since OptimizePairwise only takes ~46 ms, I think you can use ITERATION_NUMBER = 64, as it will only add roughly another 46 ms in total while increasing accuracy.

Plus, since you parallelized the matching procedure, please double-check that your OpenMP code does not corrupt the correspondence vectors.

We will create a QnA.md with the answers from this thread.
