It's quite possible that they make heavy use of instancing, which might reduce the memory footprint by a lot. Not every pebble or rock needs an individual geometry.
But with instancing, they could have a few dozen grain types at 5K tris each and get the total count from instances; there's no need to load 5 billion triangles into memory.
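To make that arithmetic concrete, here's a rough back-of-envelope sketch. The per-triangle and per-instance byte counts are my own assumptions (unindexed float32 vertices, one 4x4 transform per instance), not anything from the engine in question:

```python
# Back-of-envelope comparison: storing every triangle uniquely vs. instancing
# a few dozen grain meshes. Byte counts per triangle/instance are assumptions.

BYTES_PER_TRIANGLE = 3 * 3 * 4     # 3 vertices x (x, y, z) float32, no index reuse
BYTES_PER_INSTANCE = 16 * 4        # one 4x4 float32 transform per instance

unique_tris = 5_000_000_000        # "5 billion triangles" stored individually
unique_mem = unique_tris * BYTES_PER_TRIANGLE

grain_types = 36                   # "a few dozen grain types"
tris_per_type = 5_000              # "5K tris each"
instances = unique_tris // tris_per_type   # instances needed to reach the same triangle count
instanced_mem = (grain_types * tris_per_type * BYTES_PER_TRIANGLE
                 + instances * BYTES_PER_INSTANCE)

print(f"unique geometry:              {unique_mem / 2**30:8.1f} GiB")
print(f"instanced (meshes + xforms):  {instanced_mem / 2**30:8.1f} GiB")
```

With those assumed sizes the unique version lands in the hundreds of GiB while the instanced version is well under one, which is the whole point: the per-instance transform is tiny compared to the geometry it references.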
Ah, good point, memory consumption might indeed be a factor here. If that's the case, I'd perhaps expect higher variance in the measurements; looking for that in the raw data could be interesting.
They're using specialized hardware to accelerate their development feedback loop. Without a doubt, researchers and hackers will find ways to cut down model sizes and complexity to run on consumer hardware soon enough. Just take Stable Diffusion as an example: 4GB for the whole model. Even if text models end up at 16GB, that'd be great.
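For a sense of where such figures come from, here's the usual parameter-count arithmetic as a rough sketch. The parameter counts and precisions below are illustrative assumptions, not the actual models' specs:

```python
# Rough model-size arithmetic: size ~= parameter count x bytes per parameter.
# Parameter counts and precisions below are illustrative assumptions.

def model_size_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

# ~1B parameters at fp32 lands in the same ~4GB ballpark quoted for Stable Diffusion.
print(f"1B params @ fp32:  {model_size_gb(1e9, 4):.1f} GB")

# A text model that fits in 16GB at fp16 would be roughly 8B parameters;
# quantizing the same model to 4-bit would shrink it to ~4GB.
print(f"8B params @ fp16:  {model_size_gb(8e9, 2):.1f} GB")
print(f"8B params @ 4-bit: {model_size_gb(8e9, 0.5):.1f} GB")
```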
> TSDF memory isn’t an issue since Niessner et al. (2013).
I would strongly disagree. This paper uses TSDF and runs into memory issues, and ATLAS uses TSDF and runs into memory issues. So for practical applications, TSDF is still too memory-hungry.
Perhaps a better response would be to outline how much memory it actually takes? That way people can decide for themselves (e.g. if they care deeply about the memory footprint).
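In that spirit, a back-of-envelope sketch of what a dense TSDF grid costs. The voxel layout (float32 distance plus float32 weight) and the volume/resolution choices are my assumptions, not numbers from the paper or from ATLAS:

```python
# Rough dense-TSDF memory estimate: one truncated signed distance plus one
# integration weight per voxel. Volume and voxel sizes are assumptions.

BYTES_PER_VOXEL = 4 + 4   # float32 distance + float32 weight

def dense_tsdf_gib(volume_m: float, voxel_size_m: float) -> float:
    res = round(volume_m / voxel_size_m)      # voxels per axis
    return res ** 3 * BYTES_PER_VOXEL / 2**30

for voxel in (0.04, 0.02, 0.01):              # 4cm, 2cm, 1cm voxels
    print(f"8m cube @ {voxel*100:.0f}cm voxels: {dense_tsdf_gib(8.0, voxel):6.2f} GiB")
```

The cubic scaling is the problem: a single 8m cube at 1cm voxels is already several GiB, before you get to larger scenes or finer detail, which is why sparse schemes like the voxel hashing in Niessner et al. (2013) exist in the first place.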