Pose-free 3D Gaussian splatting via shape-ray estimation

Original link: https://arxiv.org/abs/2505.22978

## SHARE: Pose-free 3D Gaussian splatting

This post introduces SHARE, a new 3D Gaussian splatting framework that removes the need for precise camera poses, a common limitation in real-world applications. Existing methods struggle with inaccurate pose estimates, which cause geometric distortions. SHARE addresses this by jointly estimating shape *and* camera rays, building a pose-aware representation without relying on explicit 3D transformations. This allows multi-view information to be integrated seamlessly and reduces misalignment. The method further improves reconstruction quality with anchor-aligned Gaussian prediction, refining local geometry around predefined anchors for more accurate Gaussian placement. Experiments show that SHARE performs strongly across diverse datasets, producing high-quality renderings even when pose information is noisy or missing. Code is publicly available.
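
SHARE conditions reconstruction on estimated camera rays rather than explicit camera poses. The summary above does not spell out the ray parameterization, so the following is only a minimal sketch under one common assumption: pose-free pipelines often encode each pixel's viewing ray in Plücker coordinates and feed the resulting ray map to the network alongside image features. The function below is illustrative, not SHARE's actual code:

```python
import torch
import torch.nn.functional as F

def plucker_ray_map(origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
    """Encode per-pixel camera rays as 6-D Pluecker coordinates (d, o x d).

    origins, dirs: (..., 3) tensors of ray origins and directions.
    Returns a (..., 6) ray embedding a feed-forward network can consume
    directly, with no explicit camera-to-world transformation.
    """
    d = F.normalize(dirs, dim=-1)          # unit ray directions
    m = torch.cross(origins, d, dim=-1)    # moment vector o x d
    return torch.cat([d, m], dim=-1)
```

For an H × W image this yields an (H, W, 6) ray map that can be concatenated channel-wise with image features before multi-view aggregation, which is one way to make a representation "pose-aware" without ever forming a camera matrix.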

Hacker News: Pose-free 3D Gaussian splatting via shape-ray estimation (arxiv.org), 8 points, posted by PaulHoule 1 hour ago | 1 comment

lawlessone (1 minute ago): I missed our daily Gaussian splatting discussion. Glad to finally see one.

Original article

Pose-free 3D Gaussian splatting via shape-ray estimation, by Youngju Na and 5 other authors

Abstract: While generalizable 3D Gaussian splatting enables efficient, high-quality rendering of unseen scenes, it depends heavily on precise camera poses for accurate geometry. In real-world scenarios, obtaining accurate poses is challenging, leading to noisy pose estimates and geometric misalignments. To address this, we introduce SHARE, a pose-free, feed-forward Gaussian splatting framework that overcomes these ambiguities through joint shape and camera-ray estimation. Instead of relying on explicit 3D transformations, SHARE builds a pose-aware canonical volume representation that seamlessly integrates multi-view information, reducing misalignment caused by inaccurate pose estimates. Additionally, anchor-aligned Gaussian prediction enhances scene reconstruction by refining local geometry around coarse anchors, allowing for more precise Gaussian placement. Extensive experiments on diverse real-world datasets show that our method achieves robust performance in pose-free generalizable Gaussian splatting. Code is available at this https URL.
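
The anchor-aligned Gaussian prediction described in the abstract refines local geometry around coarse anchors. The following is a hedged sketch of that general idea, not the paper's implementation; the module name, feature dimension, per-anchor Gaussian count `k`, and the 0.05 offset bound are all assumptions:

```python
import torch
import torch.nn as nn

class AnchorGaussianHead(nn.Module):
    """Hypothetical sketch: predict k Gaussians as bounded offsets around
    each coarse anchor. Shapes and names are illustrative only."""

    def __init__(self, feat_dim: int = 128, k: int = 4):
        super().__init__()
        self.k = k
        # Per anchor and per Gaussian: 3 offset + 3 scale + 4 rotation
        # quaternion + 3 color + 1 opacity = 14 parameters.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, k * 14),
        )

    def forward(self, anchor_xyz: torch.Tensor, anchor_feat: torch.Tensor):
        # anchor_xyz: (N, 3) coarse anchor centers; anchor_feat: (N, feat_dim)
        params = self.mlp(anchor_feat).view(-1, self.k, 14)
        offsets = torch.tanh(params[..., :3]) * 0.05   # bounded local refinement (assumed radius)
        centers = anchor_xyz.unsqueeze(1) + offsets    # (N, k, 3) Gaussian means
        scales = torch.exp(params[..., 3:6])           # positive scales
        rot = nn.functional.normalize(params[..., 6:10], dim=-1)  # unit quaternions
        rgb = torch.sigmoid(params[..., 10:13])
        opacity = torch.sigmoid(params[..., 13:14])
        return centers, scales, rot, rgb, opacity
```

Bounding the offsets keeps each predicted Gaussian near its anchor, which is the point of anchor-aligned refinement: the coarse anchors fix the neighborhood, and the head only adjusts geometry locally.
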
From: Youngju Na
[v1] Thu, 29 May 2025 01:34:40 UTC (1,331 KB)
[v2] Fri, 26 Sep 2025 06:08:53 UTC (1,331 KB)
[v3] Tue, 21 Oct 2025 11:48:43 UTC (1,331 KB)