redkitchen incomplete #10
Comments
Hello! Have you tried increasing the size of the volumetric voxel grid? With the default parameters (…)
Thanks for your reply. With a larger grid I get `terminate called after throwing an instance of 'std::bad_alloc'`, and with a grid smaller than 1200 the reconstruction is still incomplete. I also had another query: is it possible to directly use the binary TSDF volume from the 7-Scenes dataset (the .raw file) with your tsdf2mesh function to generate a mesh?
The error occurs because the voxel grid is too large to fit into your memory. Since the code creates two voxel grids (one saving distance values and another saving weights for computing running averages), a voxel grid of size 1200x1200x1200 will take 12+GB of RAM and GPU memory. You'll have to play around with the parameters (…). The tsdf2mesh script in this repository is not out-of-the-box compatible with 7-Scenes .raw files; you will need to modify tsdf2mesh to support them.
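As a quick sanity check on the memory figure above, here is a back-of-the-envelope sketch. It assumes two float32 (4-byte) grids, as described in the comment; the function name is just for illustration:

```python
# Rough memory footprint of the two float32 voxel grids (TSDF values
# + fusion weights) for a dim^3 volume. Names here are illustrative,
# not from the repository's code.
def tsdf_memory_gb(dim, bytes_per_voxel=4, num_grids=2):
    """Approximate RAM/GPU memory needed for a dim^3 voxel volume."""
    return dim ** 3 * bytes_per_voxel * num_grids / 1024 ** 3

print(round(tsdf_memory_gb(1200), 1))  # ~12.9 GB, matching the 12+GB figure
print(round(tsdf_memory_gb(500), 1))   # ~0.9 GB, which fits comfortably
```

This is why a 1200-voxel grid triggers `std::bad_alloc` on most machines: the allocation grows cubically with the grid dimension.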
Thanks, but I was trying to use the info in the .mhd file of redkitchen in the 7-Scenes dataset to make a mesh using tsdf2mesh. The .mhd file says that the offset is 0 0 3000 and the element spacing is 11.718750, with units in mm, so I converted to meters and reconstructed the scene. The problem is that the mesh now spans 0.48-5.82 in y, 0.0117-6.00 in x, and 3.01-7.96 in z, but the poses in the dataset do not fall inside this model. Could this be because the model is in camera coordinates and has to be converted to global coordinates? Also, when you swap the two columns while converting the model from voxel coordinates to camera coordinates, don't you have to change the sign?
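For reference, the voxel-to-metric conversion described above can be sketched as follows. The Offset and ElementSpacing values are the ones quoted from the .mhd header (in mm); the function itself is a hypothetical helper, not code from the repository:

```python
import numpy as np

# Map voxel indices to metric coordinates using the MetaImage header
# fields quoted in the discussion: Offset = 0 0 3000 (mm) and
# ElementSpacing = 11.718750 (mm). Illustrative sketch only.
offset_mm = np.array([0.0, 0.0, 3000.0])
spacing_mm = 11.718750

def voxel_to_metric(ijk):
    """Convert voxel indices (x, y, z order) to coordinates in meters."""
    return (np.asarray(ijk, dtype=float) * spacing_mm + offset_mm) / 1000.0

print(voxel_to_metric([0, 0, 0]))  # volume origin at z = 3.0 m
print(voxel_to_metric([511, 511, 511]))
```

Note the z range starting at ~3 m matches the 3.01–7.96 m z extent reported above, which is consistent with the volume living in a camera frame (z pointing away from the sensor) rather than in world coordinates.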
Correct. The fused model created by demo.cu lies in the camera coordinates of the base frame that you specify (see …).
No. The transformation between voxel coordinates and the base camera coordinates (that the model was fused in) should only amount to a translation and a scaling. The swap in tsdf2mesh.m is only there to account for Matlab's y-first indexing.
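Since the fused model lies in the base frame's camera coordinates, bringing it into the dataset's world frame should amount to applying that frame's camera-to-world pose (7-Scenes provides a 4x4 pose matrix per frame). A minimal sketch, with a made-up pose matrix:

```python
import numpy as np

# Hypothetical camera-to-world pose of the base frame (7-Scenes-style
# 4x4 matrix). The values below are placeholders, not dataset values.
base_pose = np.array([[1.0, 0.0, 0.0, 0.5],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 1.2],
                      [0.0, 0.0, 0.0, 1.0]])

def cam_to_world(points_cam, pose):
    """Transform Nx3 points from base-camera to world coordinates."""
    pts_h = np.c_[points_cam, np.ones(len(points_cam))]  # homogeneous coords
    return (pose @ pts_h.T).T[:, :3]

verts = np.array([[0.0, 0.0, 3.0]])
print(cam_to_world(verts, base_pose))  # -> [[0.5 0.  4.2]]
```

Applying this to the mesh vertices (or, equivalently, its inverse to the poses) is what should make the trajectory and the model line up.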
Now I can read the .raw file in the redkitchen dataset and create a mesh using tsdf2mesh, but it's not compatible with the poses in the redkitchen dataset: when I plot the poses, they don't line up with the model. Do you have any idea why?
The Python version of TSDF fusion seems to implement automatic estimation of the voxel volume bounds, along with an optional GPU_MODE.
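The usual way such automatic bounds estimation works is to back-project each frame's view frustum into world space and grow a min/max bounding box. A hedged sketch of that idea (intrinsics and pose below are placeholders, not the Python port's actual code):

```python
import numpy as np

# Estimate world-space volume bounds from one frame by back-projecting
# the depth image corners at the maximum depth. Running this over all
# frames and taking the union box yields the fusion volume extent.
def frustum_corners(depth_max, K, cam_to_world, w, h):
    """World-space corners of a frame's view frustum at depth_max."""
    px = np.array([[0, 0], [w, 0], [0, h], [w, h]], dtype=float)
    rays = np.c_[(px[:, 0] - K[0, 2]) / K[0, 0],
                 (px[:, 1] - K[1, 2]) / K[1, 1],
                 np.ones(4)] * depth_max
    pts_h = np.c_[rays, np.ones(4)]
    return (cam_to_world @ pts_h.T).T[:, :3]

# Placeholder Kinect-like intrinsics and an identity pose for illustration.
K = np.array([[585.0, 0.0, 320.0], [0.0, 585.0, 240.0], [0.0, 0.0, 1.0]])
pose = np.eye(4)
corners = frustum_corners(4.0, K, pose, 640, 480)
bounds = np.c_[corners.min(0), corners.max(0)]  # per-axis [min, max]
print(bounds)
```

This removes the need to hand-tune the volume origin and grid dimensions, which is exactly the parameter-fiddling discussed earlier in this thread.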
Hi,
I tried reconstructing the whole redkitchen sequence, but the reconstruction doesn't look like the one provided with the dataset: some parts of it are cropped/incomplete. Do you have any idea why?
The first picture below is my reconstruction; the second is the original one from the dataset.