We show more results on out-of-distribution input videos, such as model-generated videos and videos with extremely large object or camera motion.
[Back to main page]
Given a monocular video, naively fitting 3DGS to it produces broken results because the video covers only a small portion of the scene.
Instead, we first run 360Anything to synthesize the full 360° view as a panorama video, and then train 3DGS on that.
This yields much better geometry and enables fly-through exploration of the 3D scene.
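To train 3DGS on a panorama video, perspective training views must be sampled from each equirectangular frame. The helper below is a minimal NumPy sketch of that standard spherical-projection step, not the paper's actual code; `pano_to_perspective` and its camera conventions (z forward, x right, y down) are our assumptions for illustration.

```python
import numpy as np

def pano_to_perspective(pano, yaw, pitch, fov_deg, out_h, out_w):
    """Sample a perspective view from an equirectangular panorama.

    Hypothetical helper (not the paper's implementation): pano is an
    (H, W, C) equirectangular image; yaw/pitch are in radians.
    """
    H, W = pano.shape[:2]
    # Focal length from the horizontal field of view.
    f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))
    # Pixel grid in camera coordinates (z forward, x right, y down).
    xs = np.arange(out_w) - (out_w - 1) / 2
    ys = np.arange(out_h) - (out_h - 1) / 2
    x, y = np.meshgrid(xs, ys)
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by pitch (about x), then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    # Ray direction -> (longitude, latitude) -> panorama pixel coords.
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))      # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) * 0.5 * (W - 1)).astype(int) % W
    v = ((lat / np.pi + 0.5) * (H - 1)).round().astype(int)
    # Nearest-neighbor lookup; a real pipeline would interpolate.
    return pano[np.clip(v, 0, H - 1), u]
```

Sampling many such views with different yaw/pitch from each panorama frame provides the full-coverage supervision that the raw input video lacks.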
| Input Video | Outpainted Panorama Video | 3DGS on Input Video | 3DGS on Panorama Video (Ours) |
|---|---|---|---|
360Anything also applies to perspective-to-panorama image generation.
We fine-tune FLUX on synthetic 360° renderings of 3D scenes, and the resulting model generalizes to in-the-wild and out-of-distribution images.
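For perspective-to-panorama generation, the input image first has to be placed on an equirectangular canvas so the model can outpaint the unseen region. The sketch below shows this standard geometric paste in NumPy; `perspective_to_pano_canvas`, its zero-yaw camera pose, and the returned mask are illustrative assumptions, not 360Anything's actual preprocessing.

```python
import numpy as np

def perspective_to_pano_canvas(img, fov_deg, pano_h, pano_w):
    """Paste a perspective image onto an equirectangular canvas.

    Hypothetical sketch: returns the canvas plus a mask of known
    pixels; an outpainting model would fill the masked-out region.
    """
    h, w = img.shape[:2]
    f = 0.5 * w / np.tan(0.5 * np.radians(fov_deg))
    # Viewing direction for every panorama pixel (camera at yaw=pitch=0).
    lon = (np.arange(pano_w) / (pano_w - 1) * 2 - 1) * np.pi
    lat = (np.arange(pano_h) / (pano_h - 1) - 0.5) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    # Project onto the image plane; only directions with z > 0 can hit it.
    valid = z > 1e-6
    z_safe = np.where(valid, z, 1.0)
    u = f * x / z_safe + (w - 1) / 2
    v = f * y / z_safe + (h - 1) / 2
    inside = valid & (u >= 0) & (u <= w - 1) & (v >= 0) & (v <= h - 1)
    canvas = np.zeros((pano_h, pano_w) + img.shape[2:], img.dtype)
    ui = np.clip(u.round().astype(int), 0, w - 1)
    vi = np.clip(v.round().astype(int), 0, h - 1)
    canvas[inside] = img[vi[inside], ui[inside]]
    return canvas, inside
```

The known-pixel mask makes the task a conditional inpainting problem on the panorama domain, which is one natural way to pose fine-tuning data for a model like FLUX.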