A WebGL-based viewer for visualizing sparse voxel scenes from the Nvidia Sparse Voxels Rasterization paper. This viewer provides an interactive way to explore and visualize the voxel radiance field from the web. You can try the viewer at vid2scene.com/voxel
The rendering isn't exactly the same as the reference CUDA implementation, but it's pretty similar.
- Interactive camera controls (see the orbit sketch after this list):
  - Left-click + drag: Orbit camera
  - Right-click + drag: Pan camera
  - Mouse wheel: Zoom
  - WASD/Arrow keys: Move camera
  - Q/E: Rotate scene around view direction
  - Space/Shift: Move up/down
- Touch controls for mobile devices:
  - 1-finger drag: Orbit
  - 2-finger drag: Pan/zoom
- Performance metrics display (FPS counter)
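For the curious, drag-to-orbit is typically implemented by moving the camera on a sphere around a target point and letting the drag deltas update the spherical angles. A generic sketch of the idea (illustrative only, not the viewer's actual code; all names are mine):

```typescript
// Illustrative orbit-camera sketch: a mouse drag updates spherical angles
// around a fixed target; the camera position is re-derived from them each frame.
interface OrbitState {
  target: [number, number, number]; // point the camera orbits
  radius: number;                   // distance from target (changed by zoom)
  theta: number;                    // azimuth angle, radians
  phi: number;                      // polar angle, radians
}

function applyOrbitDrag(s: OrbitState, dxPixels: number, dyPixels: number): void {
  const speed = 0.005; // radians of rotation per pixel of drag
  s.theta -= dxPixels * speed;
  // Clamp away from the poles so the view never flips over the top.
  s.phi = Math.min(Math.PI - 0.01, Math.max(0.01, s.phi - dyPixels * speed));
}

function orbitCameraPosition(s: OrbitState): [number, number, number] {
  return [
    s.target[0] + s.radius * Math.sin(s.phi) * Math.cos(s.theta),
    s.target[1] + s.radius * Math.cos(s.phi),
    s.target[2] + s.radius * Math.sin(s.phi) * Math.sin(s.theta),
  ];
}
```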
Before running the project, you need to have Node.js and NPM (Node Package Manager) installed:
1. Install Node.js and NPM (e.g. from https://nodejs.org).

2. Verify the installation:

   ```bash
   node --version
   npm --version
   ```
To run this project locally:
1. Clone the repository:

   ```bash
   git clone https://github.com/samuelm2/svraster-webgl.git
   cd svraster-webgl
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Start the development server:

   ```bash
   npm run dev
   ```

   This will start the Vite development server, typically at http://localhost:5173.
- This viewer uses a distance-based sorting approach rather than the ray-direction-dependent Morton ordering described in the paper (see the sketch after this list)
- The current implementation has only the most basic optimizations applied - there's significant room for performance improvements. The fragment shader is the current bottleneck, and memory usage could also be lowered since nothing is quantized yet. If you have a perf improvement suggestion, please feel free to submit a PR!
- It runs at ~60-80 FPS on my laptop with an RTX 3080 Laptop GPU
- It runs at ~12-20 FPS on my iPhone 13 Pro Max
- Right now, only scenes trained with spherical harmonic degree 1 are supported: degree 1 gives (1 + 1)² = 4 SH coefficients per color channel, so 4 × 3 = 12 coefficients per voxel. See the command below to train your SVRaster scene with SH degree 1.
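To illustrate the first note above, here's a minimal sketch of distance-based back-to-front sorting, which the viewer uses in place of the paper's ray-direction-dependent Morton ordering (the `Voxel` shape and function name are my own, not the viewer's actual code):

```typescript
// Illustrative sketch: sort voxels by squared distance from the camera so
// farther voxels are drawn first (back-to-front alpha blending).
interface Voxel {
  x: number; y: number; z: number; // voxel center in world space
}

function sortBackToFront(voxels: Voxel[], cam: [number, number, number]): Voxel[] {
  return voxels
    .map(v => {
      const dx = v.x - cam[0], dy = v.y - cam[1], dz = v.z - cam[2];
      return { v, d2: dx * dx + dy * dy + dz * dz }; // squared distance avoids a sqrt
    })
    .sort((a, b) => b.d2 - a.d2) // farthest first
    .map(({ v }) => v);
}
```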
The viewer supports a few URL parameters to customize its behavior:

- `?samples=X` adjusts the number of density samples per ray in the fragment shader. The default is 3; you can get a pretty good performance increase by decreasing this value, at the cost of slightly less accurate rendering.
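For example, https://vid2scene.com/voxel?samples=2 renders noticeably faster with slightly less accurate density integration. Reading such a parameter is typically a one-liner with `URLSearchParams`; a minimal sketch (variable names are illustrative, not the viewer's actual code):

```typescript
// Illustrative sketch: read ?samples=X from the URL, defaulting to 3
// and rejecting non-numeric or out-of-range values.
const params = new URLSearchParams(window.location.search);
const raw = Number(params.get("samples") ?? "3");
const samplesPerRay = Number.isFinite(raw) && raw >= 1 ? Math.floor(raw) : 3;
```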
If you have your own SVRaster scenes that you'd like to visualize in this WebGL viewer, you can use this forked version of SVRaster that supports PLY export:
1. Clone the forked SVRaster repository: https://github.com/samuelm2/svraster

2. Run SVRaster training with SH degree 1 to create a model compatible with this viewer:

   ```bash
   python train.py --source_path /PATH/TO/COLMAP/SFM/DIR --model_path outputs/pumpkin/ --sh_degree 1 --sh_degree_init 1 --subdivide_max_num 600000
   ```

   The PLY file will be automatically saved in your `model_path` directory (`outputs/pumpkin/` in this example).
3. For SVRaster models that were already trained, you can use the `convert_to_ply.py` script instead:

   ```bash
   python convert_to_ply.py outputs/pumpkin/ outputs/pumpkin/pumpkin.ply
   ```
4. Open the WebGL viewer and use the URL parameter or the file upload UI (or modify the code itself) to load your custom PLY file.
Note: The PLY files generated by this process are unoptimized and uncompressed, so they get very large quickly. I usually keep the voxel count in the 600k-1M range using the `--subdivide_max_num` flag above. A quick way to check a file's voxel count is shown below.
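The count lives in the `element vertex` line of the PLY header, which is plain text even in binary PLY files. A minimal Node.js sketch (the helper name is mine):

```typescript
// Illustrative sketch: read a PLY header and report the vertex (voxel) count.
import { readFileSync } from "node:fs";

function plyVertexCount(path: string): number {
  // The header is text terminated by "end_header"; 64 KB is more than enough.
  const head = readFileSync(path).subarray(0, 65536).toString("latin1");
  const match = head.match(/^element vertex (\d+)/m);
  if (!match) throw new Error("no 'element vertex' line found in PLY header");
  return Number(match[1]);
}

console.log(plyVertexCount("outputs/pumpkin/pumpkin.ply")); // e.g. 600000
```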
This project was made with heavy use of AI assistance ("vibe coded"). I wanted to see how it would go for something graphics related. My brief thoughts: it is super good for the boilerplate (defining/binding buffers, uniforms, etc). I was able to get simple rendering within hours. But when it comes to solving the harder graphics bugs, the benefits are a lot lower. There were multiple times where it would go in the complete wrong direction and I would have to rewrite portions manually. But overall, I think it is definitely a net positive for smaller projects like this one. In a more complex graphics engine / production environment, the benefits might be less clear for now. I'm interested in what others think.