Gaussian Splatting Alternative: WebGL Implementation of Nvidia's SVRaster

Original link: https://github.com/samuelm2/svraster-webgl

This WebGL viewer allows interactive exploration of sparse voxel scenes generated with Nvidia's sparse voxel rasterization technique. Accessible at vid2scene.com/voxel, it provides camera controls for orbiting, panning, and zooming via mouse or touch, and displays performance metrics (FPS). To use it locally, clone the GitHub repository, install Node.js and the dependencies, then start the development server. The current implementation uses distance-based sorting, and there is room for performance improvements, particularly in the fragment shader and in memory usage. Performance ranges from 60-80 FPS on a laptop GPU to 12-20 FPS on mobile devices. Only scenes trained with spherical harmonics of degree 1 are currently supported. You can generate compatible scenes with a forked SVRaster repository, training models with `--sh_degree 1` and converting existing models to PLY format with `convert_to_ply.py`. PLY files can be loaded into the viewer via a URL parameter or file upload. The project made heavy use of AI assistance to generate boilerplate code, which sped up initial development, though complex graphics problems required manual intervention.

Samuelm2 has released a WebGL implementation of Nvidia's SVRaster, a voxel-based radiance field rendering technique that serves as an alternative to Gaussian Splatting. The project is available on GitHub and at vid2scene.com/voxel, letting users render SVRaster voxel scenes in the browser. Samuelm2 notes that SVRaster has distinct advantages and disadvantages compared to Gaussian Splatting. Performance is currently around 60 FPS at 2K resolution on a laptop with a 3080 GPU and roughly 10-15 FPS on an iPhone 13 Pro Max, with room for further optimization. The code is MIT-licensed, and exploration and contributions are encouraged. Samuelm2 also discussed the experience of AI-assisted development, finding it very effective for writing boilerplate but much less helpful for complex graphics debugging. Another user asked whether SVRaster could be used for skinned animation; Samuelm2 replied that it is currently limited mainly to static geometry.

Original text

A WebGL-based viewer for visualizing sparse voxel scenes from the Nvidia Sparse Voxels Rasterization paper. This viewer provides an interactive way to explore and visualize the voxel radiance field from the web. You can try the viewer at vid2scene.com/voxel.

The rendering isn't exactly the same as the reference CUDA implementation, but it's pretty similar.

  • Interactive camera controls (a minimal orbit sketch follows this list):
    • Left-click + drag: Orbit camera
    • Right-click + drag: Pan camera
    • Mouse wheel: Zoom
    • WASD/Arrow keys: Move camera
    • Q/E: Rotate scene around view direction
    • Space/Shift: Move up/down
  • Touch controls for mobile devices:
    • 1 finger drag: Orbit
    • 2 finger drag: Pan/zoom
  • Performance metrics display (FPS counter)
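
To make the orbit control concrete, here is a minimal TypeScript sketch of how a left-click drag and the mouse wheel might map to spherical camera angles and zoom. The `OrbitCamera` class and its method names are hypothetical, not the viewer's actual code.

    // Hypothetical sketch of a drag-to-orbit control: spherical angles around a target.
    // Names and structure are illustrative, not this viewer's implementation.
    class OrbitCamera {
      yaw = 0;      // horizontal angle around the target, radians
      pitch = 0.3;  // vertical angle, radians
      distance = 5; // zoom: distance from the target
      target: [number, number, number] = [0, 0, 0];

      // Called with mouse movement deltas from a left-click drag.
      onMouseDrag(dx: number, dy: number): void {
        const sensitivity = 0.005;
        this.yaw += dx * sensitivity;
        // Clamp pitch so the camera never flips over the poles.
        this.pitch = Math.max(-1.5, Math.min(1.5, this.pitch + dy * sensitivity));
      }

      // Mouse wheel: move the camera toward/away from the target.
      onWheel(deltaY: number): void {
        this.distance = Math.max(0.1, this.distance * (1 + deltaY * 0.001));
      }

      // Camera position in world space, derived from the spherical angles.
      position(): [number, number, number] {
        const [tx, ty, tz] = this.target;
        return [
          tx + this.distance * Math.cos(this.pitch) * Math.sin(this.yaw),
          ty + this.distance * Math.sin(this.pitch),
          tz + this.distance * Math.cos(this.pitch) * Math.cos(this.yaw),
        ];
      }
    }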

Before running the project, you need to have Node.js and NPM (Node Package Manager) installed:

  1. Install Node.js and NPM (download from https://nodejs.org/; NPM is bundled with Node.js)

  2. Verify installation:

    node --version
    npm --version

To run this project locally:

  1. Clone the repository:

    git clone https://github.com/samuelm2/svraster-webgl.git
    cd svraster-webgl
  2. Install dependencies:

    npm install
  3. Start the development server:

    npm run dev

    This will start the Vite development server, typically at http://localhost:5173

Implementation and Performance Notes

  • This viewer uses a distance-based sorting approach rather than the ray direction-dependent Morton ordering described in the paper (a minimal sorting sketch follows this list)
  • The current implementation has only the most basic optimizations applied - there's significant room for performance improvements. Right now, the fragment shader is the bottleneck. Memory usage could also be lowered because nothing is quantized right now. If you have a perf improvement suggestion, please feel free to submit a PR!
  • It runs at ~60-80 FPS on my laptop with an RTX 3080 Laptop GPU
  • It runs at ~12-20 FPS on my iPhone 13 Pro Max
  • Right now, only scenes trained with spherical harmonic degree 1 are supported (so 12 total SH coefficients per voxel). See the command below to train your SVRaster scene with SH degree 1.
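
As an illustration of the distance-based sorting mentioned above, the following TypeScript sketch orders voxels by squared distance to the camera before compositing. The `Voxel` interface and `sortVoxelsByDistance` function are hypothetical names, and whether the sort runs back-to-front or front-to-back depends on the blending scheme.

    // Hypothetical sketch: order voxels by distance to the camera so the
    // compositing in the fragment shader blends them in a consistent order.
    interface Voxel {
      center: [number, number, number];
    }

    function sortVoxelsByDistance(
      voxels: Voxel[],
      camera: [number, number, number],
    ): Voxel[] {
      const dist2 = (v: Voxel): number => {
        const dx = v.center[0] - camera[0];
        const dy = v.center[1] - camera[1];
        const dz = v.center[2] - camera[2];
        return dx * dx + dy * dy + dz * dz; // squared distance avoids a sqrt per voxel
      };
      // Sort far-to-near here; flip the comparator for front-to-back compositing.
      return voxels.slice().sort((a, b) => dist2(b) - dist2(a));
    }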

You can pass ?samples=X as a URL param, which adjusts the number of density samples per ray in the fragment shader. The default is 3; decreasing this value gives a pretty good performance increase at the cost of slightly less accurate rendering.
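
Reading such a parameter in the browser is straightforward with the standard URLSearchParams API. This minimal sketch, with a hypothetical `getSamplesPerRay` helper, shows one way it might be done; it is not the viewer's actual code.

    // Minimal sketch: read the ?samples=X URL parameter, defaulting to 3.
    // Uses the standard URLSearchParams API; not the viewer's actual code.
    function getSamplesPerRay(defaultSamples = 3): number {
      const params = new URLSearchParams(window.location.search);
      const raw = params.get("samples");
      const parsed = raw === null ? NaN : Number.parseInt(raw, 10);
      // Fall back to the default when the parameter is missing or not a positive number.
      return Number.isFinite(parsed) && parsed > 0 ? parsed : defaultSamples;
    }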

The viewer supports a few URL parameters, such as ?samples above, to customize its behavior.

How to Generate Your Own Scenes

If you have your own SVRaster scenes that you'd like to visualize in this WebGL viewer, you can use this forked version of SVRaster that supports PLY export:

  1. Clone the forked SVRaster repository: github.com/samuelm2/svraster

  2. Run SVRaster with SH degree 1 to create a model compatible with this viewer:

    python train.py --source_path /PATH/TO/COLMAP/SFM/DIR --model_path outputs/pumpkin/ --sh_degree 1 --sh_degree_init 1 --subdivide_max_num 600000

    The PLY file will be saved automatically in your model_path output directory.

  3. For existing SVRaster models that were previously trained, you can use the convert_to_ply.py script:

    python convert_to_ply.py outputs/pumpkin/ outputs/pumpkin/pumpkin.ply
  4. Open the WebGL viewer and use the URL parameter or file upload UI (or modify the code itself) to load your custom PLY file (a minimal upload sketch follows the note below)

Note: The PLY files generated by this process are very unoptimized and uncompressed, so they can get very large quickly. I usually keep the number of voxels in the 600k to 1M range using the subdivide_max_num flag above.
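
For the file-upload path mentioned in step 4, here is a minimal TypeScript sketch that reads a selected PLY file into an ArrayBuffer. The "ply-upload" element id and the `loadPlyScene` function are hypothetical, not the viewer's actual API.

    // Hypothetical sketch: read a user-selected .ply file into memory.
    // The "ply-upload" element id and loadPlyScene callback are illustrative only.
    declare function loadPlyScene(buffer: ArrayBuffer): void;

    const input = document.getElementById("ply-upload") as HTMLInputElement;

    input.addEventListener("change", async () => {
      const file = input.files?.[0];
      if (!file) return;
      // Read the whole file; PLY scenes here are uncompressed, so this can be large.
      const buffer: ArrayBuffer = await file.arrayBuffer();
      loadPlyScene(buffer); // hand off to the viewer's (hypothetical) PLY parser
    });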

This project was made with heavy use of AI assistance ("vibe coded"). I wanted to see how it would go for something graphics related. My brief thoughts: it is super good for the boilerplate (defining/binding buffers, uniforms, etc). I was able to get simple rendering within hours. But when it comes to solving the harder graphics bugs, the benefits are a lot lower. There were multiple times where it would go in the complete wrong direction and I would have to rewrite portions manually. But overall, I think it is definitely a net positive for smaller projects like this one. In a more complex graphics engine / production environment, the benefits might be less clear for now. I'm interested in what others think.
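
For readers unfamiliar with the kind of WebGL boilerplate meant here, the sketch below shows the sort of buffer and uniform setup involved. It is generic illustrative code under assumed names (`setupQuad`, `uSamples`), not code from this project.

    // Generic WebGL boilerplate sketch: create a buffer, bind it to an attribute,
    // and set a uniform. Illustrative only; not this project's code.
    function setupQuad(gl: WebGL2RenderingContext, program: WebGLProgram): void {
      // Define and fill a vertex buffer with a fullscreen quad.
      const positions = new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]);
      const buffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

      // Bind the buffer to the shader's "position" attribute.
      const loc = gl.getAttribLocation(program, "position");
      gl.enableVertexAttribArray(loc);
      gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

      // Set a uniform, e.g. a (hypothetical) density-samples-per-ray count.
      gl.useProgram(program);
      const samplesLoc = gl.getUniformLocation(program, "uSamples");
      gl.uniform1i(samplesLoc, 3);
    }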
