[BUG] Splat positions rendering differently after compression #356
Love this scan btw 😍 |
Thank you! I hope to build more of an interactive experience around it using playcanvas! Let me know if you have any bandwidth to help, I am an extreme novice in this kind of programming (just barely getting acquainted with the playcanvas editor platform). |
This is really weird. I looked into the compressed splat a bit but couldn't find a clue. Maybe it is related to some precision loss/rounding. Have you tried rescaling the scene a bit and compressing it? Will it create the same artifacts? |
Yes, I tried rescaling from supersplat and exporting the compressed ply both at 1/10th and 10x scale. At 1/10th there was much worse artifacting near the top (all geometry appeared weirdly thin), and at 10x it was the same. |
Also maybe interesting - if I cut just that part of the model and save it separately it does not seem to exhibit the problem. It seems like it arises in situ with the rest of the splats too. Maybe to do with the way the compression chunks the scene? I do also notice some issues in other less prominent areas, like trees. |
Hi @vincentwoo , I think I know what the issue might be, but will need some time to investigate. The compression works by:
1. sorting the splats in Morton order
2. grouping the sorted list into chunks of 256 splats
3. quantizing each splat's position against its chunk's min/max bounds
There are a few possibilities that might explain the issue you're seeing. Sorry, I hope some of that makes sense - basically the compress stage needs some love. Thanks! |
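A minimal sketch of that per-chunk quantization scheme, assuming chunks of 256 Morton-sorted splats (the names and the 8-bit depth here are illustrative, not SuperSplat's actual serializer):

```ts
const CHUNK_SIZE = 256;

// quantize one chunk of a single coordinate axis against its own bounds;
// a single distant outlier stretches [min, max] and costs every other
// splat in the chunk precision - the failure mode discussed below
function quantizeChunk(xs: Float32Array, start: number, bits = 8) {
    const end = Math.min(start + CHUNK_SIZE, xs.length);
    let min = Infinity, max = -Infinity;
    for (let i = start; i < end; i++) {
        min = Math.min(min, xs[i]);
        max = Math.max(max, xs[i]);
    }
    const norm = (1 << bits) - 1;
    const scale = max > min ? norm / (max - min) : 0;
    const q = new Uint32Array(end - start);
    for (let i = start; i < end; i++) {
        q[i - start] = Math.round((xs[i] - min) * scale);
    }
    return { min, max, q }; // per-chunk bounds plus per-splat indices
}
```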
No, that makes perfect sense. Basically, for the Morton order issue, you would try a different space-filling curve (e.g. a Hilbert curve) through the ordering space, with the aim of minimizing the bounds of each cube defined by 256 splats? Sounds like a fun puzzle - if you point me at the code for that bit I'd love to take a look. |
Awesome! Here's the sorter https://github.com/playcanvas/supersplat/blob/main/src/splat-serialize.ts#L569 |
```ts
const Part1By2 = (x: number) => {
    x &= 0x000003ff;
    x = (x ^ (x << 16)) & 0xff0000ff;
    x = (x ^ (x << 8)) & 0x0300f00f;
    x = (x ^ (x << 4)) & 0x030c30c3;
    x = (x ^ (x << 2)) & 0x09249249;
    return x;
};
```

hahaha, should be a good time |
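For context, Part1By2 spreads a 10-bit coordinate so its bits land in every third position; interleaving three such values gives the Morton index the sorter orders by. A hedged sketch of that composition:

```ts
// interleave three 10-bit grid coordinates into a single 30-bit
// Morton index; sorting by this index groups spatially nearby splats
const encodeMorton3 = (x: number, y: number, z: number) =>
    (Part1By2(z) << 2) + (Part1By2(y) << 1) + Part1By2(x);
```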
Playing with it a bit:
Splitting (padding) those chunks makes sense to alleviate this but I think it would require an engine change too, no? You'd need variable length chunks or a way to communicate that the chunk has dead space. I think that one might be a bit beyond me. |
I wrote a node CLI tool for creating compressed.ply yesterday and noticed a bug which might explain this. The very last chunk in the scene is almost always part-full (since chunks are 256 in size) and I realised that the remaining out-of-bounds gaussians in the last chunk will keep the values from the preceding chunk. The out-of-bounds gaussians do not get rendered or used at runtime, but their presence in the last chunk will impact the calculated bounds, which may result in lower-than-necessary precision for the remaining gaussians. I think it's a long shot, but I will fix this bug today and retest. |
Interesting. I just now tried writing out the compressed ply leaving off the last chunk, but still saw my oddly placed gaussians there. |
Yep didn't fix anything. The largest buckets in the scene are bigger than I thought:
So I'm going to try recursively performing morton order on buckets larger than |
I wonder if it would be helpful to color the splats by the chunk they came in with? There must be something strange going on with that particular bucket. |
Yeah, looks like the tips are sort of in-plane with some very far away splats that just happen to be on the Morton curve coming in, stretching the XZ quantization for that chunk. I suppose this is one downside of having a splat that is too free of aerial floaters - the Morton curve may have to jump over a lot of empty space. |
But I think especially the green splats are all very close together and still off. Maybe you can create something like an ~80th-percentile bounding box, look for splats that are orders of magnitude outside that box, and eliminate them (compress them separately). Edit: Or implement some logarithmic position encoding starting from the median position of a bounding box. |
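A sketch of that percentile idea (a hypothetical helper, not SuperSplat code): take a robust per-axis range and flag splats far outside it for separate handling:

```ts
// flag splats whose coordinate falls far outside the 10th-90th
// percentile range of one axis; run once per axis and OR the masks
function outlierMask(xs: Float32Array, factor = 10): boolean[] {
    const sorted = Float32Array.from(xs).sort();
    const lo = sorted[Math.floor(sorted.length * 0.1)];
    const hi = sorted[Math.floor(sorted.length * 0.9)];
    const pad = (hi - lo) * factor;
    return Array.from(xs, x => x < lo - pad || x > hi + pad);
}
```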
BTW we could be padding chunks with alpha 0 gaussians in these situations. |
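A sketch of that padding idea, assuming the serializer could emit extra records (the `Splat` record type is hypothetical):

```ts
type Splat = { x: number; y: number; z: number; alpha: number };

// pad a part-full or split chunk back to 256 entries with invisible
// gaussians at the chunk centre, so dead space neither renders nor
// skews the computed bounds
function padChunk(chunk: Splat[], cx: number, cy: number, cz: number): Splat[] {
    const padded = chunk.slice();
    while (padded.length < 256) {
        padded.push({ x: cx, y: cy, z: cz, alpha: 0 });
    }
    return padded;
}
```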
I tried throwing a Hilbert curve at it after banging my head on TypeScript for half an hour, but the result turns out pretty much exactly the same, fwiw. |
Hi all, I'm working on a decoder for the Self Organizing Gaussians format (https://fraunhoferhhi.github.io/Self-Organizing-Gaussians/). I've built a proof-of-concept decoder in JS:

```js
import { decode } from 'fast-png'
import npyjs from 'npyjs'
const dataPaths = [
    ['means_l', 'means_l.png'],
    ['means_u', 'means_u.png'],
    ['opacities', 'opacities.png'],
    ['quats', 'quats.png'],
    ['scales', 'scales.png'],
    ['sh0', 'sh0.png'],
    ['centroids', 'shN_centroids.npy'],
    ['labels', 'shN_labels.npy'],
]
// expand quantized integers back to floats using the per-channel
// min/max ranges stored in meta.json
function rescaleData(data, meta, bits = 8) {
    const len = meta.shape[0]
    const dim = meta.shape.length > 1 ? meta.shape[1] : 1
    const ret = new Float32Array(len * dim)
    const scales = new Float32Array(dim)
    const norm = (2 ** bits) - 1
    for (let j = 0; j < dim; j++) {
        scales[j] = meta.maxs[j] - meta.mins[j]
    }
    for (let i = 0; i < len * dim; i += dim) {
        for (let j = 0; j < dim; j++) {
            ret[i + j] = (data[i + j] / norm) * scales[j] + meta.mins[j]
        }
    }
    return ret
}
// recombine the means that were split across two 8-bit images into
// 16-bit values; note the parens - << binds looser than +
function mergeMeans(upper, lower, meta) {
    const temp = new Uint16Array(upper.length)
    for (let i = 0; i < upper.length; i++) {
        temp[i] = (upper[i] << 8) | lower[i]
    }
    return rescaleData(temp, meta, 16)
}
// reverse the k-means palette compression of SH coefficients: dequantize
// the centroid table, then expand each splat's label to its centroid
function decompressKmeans(centroids, labels, meta) {
    const scaledCentroids = new Float32Array(centroids.length)
    const scale = meta.maxs - meta.mins
    const norm = 2 ** meta.quantization - 1
    for (let i = 0; i < centroids.length; i++) {
        scaledCentroids[i] = (centroids[i] / norm) * scale + meta.mins
    }
    const dim = meta.shape[1] * meta.shape[2]
    const ret = new Float32Array(labels.length * dim)
    for (let i = 0; i < labels.length; i++) {
        for (let j = 0; j < dim; j++) {
            // index the centroid table by label * dim, not label + j
            ret[i * dim + j] = scaledCentroids[labels[i] * dim + j]
        }
    }
    return ret
}
export async function loadFromURL(path) {
    const meta = await fetch(path + '/meta.json').then(response => response.json())
    return load(path, meta, (fullPath) => fetch(fullPath).then(response => response.arrayBuffer()))
}
// export function loadFromFS(path) {
// return load(path, (fullPath) => readFile(fullPath))
// }
// fetch all parameter planes in parallel, then dequantize each one
async function load(path, meta, getter) {
    const n = new npyjs()
    const data = {}
    return Promise.all(
        dataPaths.map(([param, file]) => {
            return getter(path + '/' + file).then(buffer =>
                data[param] = file.endsWith('.png') ?
                    decode(buffer).data :
                    n.parse(buffer).data
            )
        })
    ).then(() => {
        return {
            means: mergeMeans(data.means_u, data.means_l, meta.means),
            opacities: rescaleData(data.opacities, meta.opacities),
            quats: rescaleData(data.quats, meta.quats),
            scales: rescaleData(data.scales, meta.scales),
            sh0: rescaleData(data.sh0, meta.sh0),
            shN: decompressKmeans(data.centroids, data.labels, meta.shN)
        }
    })
}
```

I'm trying to integrate this from outside the playcanvas engine, and I was wondering if there was anywhere you could point me that illustrates how to fill the buffers of a gsplat resource and add it to the scene. The examples I'm seeing rely on the internal loader. Is there a codepath for constructing a resource, attaching it to an asset, and adding it as a child of the scene? If this all ends up working well I'd be happy to work on adding it as a modularized loader to playcanvas. The size savings are really good! |
I think I've managed to new up a |
Hi @vincentwoo , This is very interesting! You can see the implementation for .splat loading here. I think you could do something very similar. Let me know if I can help with anything! |
I think the resource system you're pointing me at works by intercepting URLs for assets and then attaching their resources on load? Is there a way to externally add a resourced component to the scene without patching the asset loading system? I'm trying to do something like this, but no dice so far:

```js
loadGsplatDataFromURL('test_data').then(async splatData => {
    const gSplatData = new GSplatData(splatData)
    const splat = new Entity();
    const asset = new Asset('gsplat-filename', 'gsplat', { url: 'https://test-url.com/gsplat.bundle' }); // todo
    asset.resource = new GSplatResource(app.graphicsDevice, gSplatData);
    splat.addComponent('gsplat', { asset: asset })
    splat.setLocalPosition(0, 0, 0);
    splat.setLocalEulerAngles(180, 90, 0);
    splat.setLocalScale(1, 1, 1);
    app.root.addChild(splat);
    entity.script.create(FrameScene);
})
```

I'm basically trying to hack this functionality into the templated viewer app. Any pointers would be extremely helpful |
Ah, doing something like

```js
const resource = new GSplatResource(app.graphicsDevice, new GSplatData(splatData));
app.root.addChild(resource.instantiate())
```

was sufficient - I just had to work out a bunch of bugs in camera position and shN and suchlike. Thank you for your patience. |
I have a lil demo up: https://pier90-preview.netlify.app/sogs/. The implementation of the decoder is at https://pier90-preview.netlify.app/sogs/sogs-decoder.js. Any thoughts? I'm not sure this should be a part of supersplat, but having the ability to use a new compression method with it is really nice. Also, I still have to tune it - it's really slow to load (aside from the network savings). |
This is so cool @vincentwoo ! I haven't looked into this paper or technique at all yet, but presumably you could also render directly from the data instead of decompressing first? I imagine loading those images as textures directly and rendering them would be a huge win. I would love to add support for this to the engine. It's a lot like our compressed.ply format - perfect for loading and rendering scenes directly, but in order to edit the scene in SuperSplat, you must first decompress the data (like you're doing here). |
It is similar to the playcanvas compression technique in that it does quantize the params. The biggest jumping-off point is the ML sorting technique, which optimizes for adjacency across all dimensions and then applies 2D image compression to the sorted splats. I haven't got the full pipeline working yet, but I should be able to get another 3-5x savings by training with a neighbor-smoothness regularizer and using lossy compression. I'm not sure how one builds the textures directly - can you direct me? I would have assumed you'd still need to translate the uint8s into floats? I'm gonna take a crack at skipping one buffer scan and decompressing straight into the buffer format playcanvas expects - that might be "fast enough". Also, communicating in this bug report is sort of funny; if you'd like to get in touch, a lot of us splat enthusiasts hang out on https://discord.gg/tdap466E, or you can email me at me@vincentwoo.com. |
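On the render-directly idea, a hedged sketch of uploading one decoded SOGS plane straight into a PlayCanvas texture (API usage from memory, untested here; a custom shader would then dequantize per texel as value / 255 * (max - min) + min):

```ts
// upload one raw 8-bit SOGS plane into a texture without float conversion;
// `pc` is the PlayCanvas namespace, `device` the graphics device
function planeToTexture(device, width: number, height: number, bytes: Uint8Array) {
    const tex = new pc.Texture(device, {
        width,
        height,
        format: pc.PIXELFORMAT_RGBA8,
        mipmaps: false
    });
    tex.lock().set(bytes); // bytes: the Uint8Array from the PNG decode
    tex.unlock();          // upload to the GPU
    return tex;
}
```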
I improved the deserialization (no double buffer allocation) and spread the work out as the data comes in. As you note, loading the finalized data is not quite as quick as the playcanvas impl. On this dataset the network savings just about even out the loading speed for me (you can compare against loading a normal compressed splat at the root URL, https://pier90-preview.netlify.app). I think another sticking point is that I'm using a png library where I could probably just rely on the browser. If you wanna take a crack at it, the latest code is at https://pier90-preview.netlify.app/sogs/sogs-decoder.js. |
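One way to lean on the browser instead of a png library, sketched under the assumption that premultiplication and color management are disabled so the raw data-carrying bytes survive the round trip:

```ts
// decode a PNG via the browser's native decoder rather than fast-png
async function decodePngNative(buffer: ArrayBuffer): Promise<Uint8ClampedArray> {
    const bitmap = await createImageBitmap(new Blob([buffer]), {
        premultiplyAlpha: 'none',     // don't premultiply data channels
        colorSpaceConversion: 'none'  // don't color-manage data channels
    });
    const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(bitmap, 0, 0);
    return ctx.getImageData(0, 0, bitmap.width, bitmap.height).data;
}
```

Canvas readback can still round-trip alpha lossily in some browsers, so this is worth validating against the library decoder before relying on it.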
Thanks so much, I've joined the discord channel. Creating the textures directly is actually straightforward, but the rendering internals and shaders for GS assume a data layout which isn't so easy to update. TBH, regarding the data locality of this technique, ultimately the best option will be ordering the data at runtime based on camera position. This will aid GPU memory caches, which is super important for rendering speed on large scenes. |
A quick update on the original bug filing: after a very circuitous route, I've managed to get SOGS-based compression working for that scene too (though I did have to bump up SH quantization): https://sutro-tower.netlify.app. It weighs about half as much as the SuperSplat default compression, and it does in fact solve the means quantization issue, so long as the high-detail area is properly centered on the origin. I think a good interim fix for SuperSplat could be to just give a bit more quantization juice to the means - getting positions right is essential for perceptual quality, and you can slip a bit on the rest. |
@vincentwoo This is extremely cool! Have you considered switching from PNGs to WebP? quats.png is 5,579KB. A lossless quats.webp is 5,355KB. So only 4% smaller - not a massive saving - but better than nothing. How would SOGS cope with WebP compression applied? Also, I'd love to tweet that link. Would that be OK? Are you on X? |
I am, but I'm still refining a bit, if you don't mind waiting a week or so. I actually want to annotate the model in 3D and would love some help from people with more 3D / playcanvas experience. For instance, I want to add clickable tooltips and an automatic camera track. Would you be down to consult with me on a call or something? I'm gonna just release this all permissively - I just think it's so cool. Oh, and to your webp question: yeah, me and Wieland of SOGS fame are cooking up some stuff, should be able to do even better. |
@vincentwoo I've DMd you on ✖️ |
I have a splat that, when compressed by supersplat, mostly looks great except for (as far as I can tell) one specific area of interest, wherein the means of the splats themselves seem to have moved. Here's a GIF comparison:
Here I'm toggling between the compressed/uncompressed plys in a scene in SuperSplat with both loaded. The tips of the towers seem to "detach" in the compressed ply.
Here are the files if you want to try to repro/inspect more closely:
https://sutro-tower.netlify.app/scene.ply
https://sutro-tower.netlify.app/scene.compressed.ply
and you can check out the viewer I rigged up at https://sutro-tower.netlify.app itself.
(I erroneously reported previously that this issue only happened in the viewer, please ignore that version of the issue)