Enhance dataset card: Add paper, code, project page, abstract, task categories, tags, and sample usage
#1
by nielsr (HF Staff) - opened

README.md CHANGED

---
license: cc-by-4.0
task_categories:
- image-to-3d
- text-to-3d
tags:
- 3d-generation
- 3d-reconstruction
- gaussian-splatting
- video-diffusion
- synthetic-data
---

# Lyra: Generative 3D Scene Reconstruction Training Dataset

This repository contains the training datasets used in the paper **Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation**. The data is used to train 3D Gaussian Splatting (3DGS) representations, enabling 3D scene synthesis from a text prompt or a single image for real-time rendering, and extending to dynamic 3D scene generation from monocular input videos.

**Paper:** [Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation](https://huggingface.co/papers/2509.19296)
**Project Page:** [https://research.nvidia.com/labs/toronto-ai/lyra/](https://research.nvidia.com/labs/toronto-ai/lyra/)
**Code:** [https://github.com/nv-tlabs/lyra](https://github.com/nv-tlabs/lyra)

## Paper Abstract

The ability to generate virtual environments is crucial for applications ranging from gaming to physical AI domains such as robotics, autonomous driving, and industrial AI. Current learning-based 3D reconstruction methods rely on the availability of captured real-world multi-view data, which is not always readily available. Recent advancements in video diffusion models have shown remarkable imagination capabilities, yet their 2D nature limits the applications to simulation where a robot needs to navigate and interact with the environment. In this paper, we propose a self-distillation framework that aims to distill the implicit 3D knowledge in the video diffusion models into an explicit 3D Gaussian Splatting (3DGS) representation, eliminating the need for multi-view training data. Specifically, we augment the typical RGB decoder with a 3DGS decoder, which is supervised by the output of the RGB decoder. In this approach, the 3DGS decoder can be purely trained with synthetic data generated by video diffusion models. At inference time, our model can synthesize 3D scenes from either a text prompt or a single image for real-time rendering. Our framework further extends to dynamic 3D scene generation from a monocular input video. Experimental results show that our framework achieves state-of-the-art performance in static and dynamic 3D scene generation.

## Dataset Description

This Hugging Face repository hosts the synthetic training data (referred to as "Lyra-SDG training data") used to train the 3D Gaussian Splatting (3DGS) decoders within the Lyra framework. The data is generated by video diffusion models and enables 3D scene generation from text prompts or single images, as well as dynamic 3D scene generation from monocular input videos.

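To give a rough intuition for how this synthetic data is consumed, the paper's self-distillation setup supervises a trainable 3DGS decoder with the output of the frozen RGB decoder on diffusion-generated samples. The conceptual PyTorch toy below illustrates only that training signal; all module names, shapes, and the loss are stand-ins and are not taken from the Lyra codebase.

```python
# Conceptual toy sketch of self-distillation: a trainable "3DGS" branch is
# supervised by a frozen RGB decoder's output on synthetic latents.
# Everything here (modules, shapes, loss) is illustrative, not Lyra's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

rgb_decoder = nn.Conv2d(8, 3, kernel_size=1)   # stand-in for the frozen RGB decoder
gs_decoder = nn.Conv2d(8, 3, kernel_size=1)    # stand-in for the trainable 3DGS decoder
rgb_decoder.requires_grad_(False)

optimizer = torch.optim.Adam(gs_decoder.parameters(), lr=1e-4)

latents = torch.randn(2, 8, 32, 32)            # synthetic latents from a video diffusion model
with torch.no_grad():
    target_rgb = rgb_decoder(latents)          # RGB decoder output acts as the supervision signal

pred_rgb = gs_decoder(latents)                 # stands in for rendering the predicted 3DGS to RGB
loss = F.mse_loss(pred_rgb, target_rgb)        # distillation loss between the two branches

optimizer.zero_grad()
loss.backward()
optimizer.step()
```
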
## Sample Usage

You can download the Lyra training datasets from Hugging Face for use in your Lyra model training setup.

```bash
# Download our training datasets from Hugging Face and untar them into a static/dynamic folder
huggingface-cli download nvidia/PhysicalAI-SpatialIntelligence-Lyra-SDG --repo-type dataset --local-dir lyra_dataset/tar
```

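If you prefer to script the download and extraction, the sketch below uses the `huggingface_hub` Python API. The `lyra_dataset/tar` location matches the command above; the `*.tar` archive names and the `static`/`dynamic` output folders are assumptions about how you organize the extracted data.

```python
# Minimal sketch: download the dataset snapshot, then extract any tar archives.
# The static/dynamic destination folders are assumptions based on the comment in
# the CLI command above; adjust to the archives actually present in the repo.
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

tar_dir = Path(
    snapshot_download(
        repo_id="nvidia/PhysicalAI-SpatialIntelligence-Lyra-SDG",
        repo_type="dataset",
        local_dir="lyra_dataset/tar",
    )
)

out_dir = Path("lyra_dataset/static")  # or lyra_dataset/dynamic, per archive
out_dir.mkdir(parents=True, exist_ok=True)

for archive in sorted(tar_dir.rglob("*.tar")):
    with tarfile.open(archive) as tf:
        tf.extractall(out_dir)
    print(f"Extracted {archive.name} -> {out_dir}")
```
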
After downloading, make sure to update the paths in `src/models/data/registry.py` for `lyra_static` / `lyra_dynamic` to wherever your data is stored.

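The actual structure of `src/models/data/registry.py` is defined by the Lyra codebase; purely as a hypothetical illustration, the edit amounts to pointing the `lyra_static` / `lyra_dynamic` entries at your extracted data (the field names below are assumptions, not the real file):

```python
# Hypothetical illustration only -- not the actual contents of
# src/models/data/registry.py. Point the entries at wherever you
# extracted the downloaded data.
DATASET_REGISTRY = {
    "lyra_static": {
        "data_root": "/path/to/lyra_dataset/static",
    },
    "lyra_dynamic": {
        "data_root": "/path/to/lyra_dataset/dynamic",
    },
}
```
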
You can then run the progressive training script:

```bash
bash train.sh
```

You can also visualize outputs during training using:

```bash
bash inference.sh
```

## Citation

If you use this dataset or build on the Lyra work in your research, please cite the associated paper:

```bibtex
@inproceedings{bahmani2025lyra,
  title={Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation},
  author={Bahmani, Sherwin and Shen, Tianchang and Ren, Jiawei and Huang, Jiahui and Jiang, Yifeng and
          Turki, Haithem and Tagliasacchi, Andrea and Lindell, David B. and Gojcic, Zan and Fidler, Sanja and
          Ling, Huan and Gao, Jun and Ren, Xuanchi},
  booktitle={arXiv preprint arXiv:2509.19296},
  year={2025}
}
```

## License

The training data provided in this Hugging Face repository is licensed under the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).

Please note that the Lyra source code is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0), and Lyra models are released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Review the license terms of these associated projects before use.