Add pipeline tag: robotics
#2
by nielsr (HF Staff), opened

README.md CHANGED
@@ -1,36 +1,38 @@
---
license: apache-2.0
pipeline_tag: robotics
---

# CoIRL-AD: Collaborative–Competitive Imitation–Reinforcement Learning in Latent World Models for Autonomous Driving

<div align="center">
<a href="https://seu-zxj.github.io/">Xiaoji Zheng</a>*,
<a href="https://ziyuan-yang.github.io">Yangzi Yuan</a>*,
<a href="https://github.com/Ian-cyh">Yanhao Chen</a>,
<a href="https://github.com/doraemonaaaa">Yuhang Peng</a>,
<a href="https://github.com/TANGXTONG1">Yuanrong Tang</a>,
<a href="https://openreview.net/profile?id=~Gengyuan_Liu1">Gengyuan Liu</a>,
<a href="https://scholar.google.com/citations?user=_Wrx_yEAAAAJ">Bokui Chen</a>‡ and
<a href="https://scholar.google.com/citations?user=AktmI14AAAAJ">Jiangtao Gong</a>‡.
<div>
*: Equal contribution.
‡: Corresponding authors.
</div>

<div>
<a href="https://seu-zxj.github.io/CoIRL-AD"><img alt="Static Badge" src="https://img.shields.io/badge/_-page-blue?style=flat&logo=githubpages&logoColor=white&logoSize=auto&labelColor=gray"></a>
<a href="https://arxiv.org/abs/2510.12560"><img alt="Static Badge" src="https://img.shields.io/badge/arxiv-paper-red?logo=arxiv"></a>
<a href="https://github.com/SEU-zxj/CoIRL-AD"><img alt="Static Badge" src="https://img.shields.io/badge/github-code-white?logo=github"></a>
<a href="https://huggingface.co/Student-Xiaoji/CoIRL-AD-models"><img alt="Static Badge" src="https://img.shields.io/badge/hf-models-yellow?logo=huggingface"></a>
</div>

</div>

**CoIRL-AD** introduces a dual-policy framework that unifies imitation learning (IL) and reinforcement learning (RL) through a collaborative–competitive mechanism within a latent world model.
The framework enhances generalization and robustness in end-to-end autonomous driving without relying on external simulators.



Here we provide our model checkpoints (see `/ckpts`) and the info files (see `/info-files`) used by the dataloader, so that you can download them and reproduce our experimental results.
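
For example, the whole model repository (including `/ckpts` and `/info-files`) can be mirrored locally with the standard `huggingface_hub` client; this is a generic sketch, and the target directory is an arbitrary choice rather than something the repo prescribes:

```python
# Generic download sketch using huggingface_hub.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="Student-Xiaoji/CoIRL-AD-models",  # this model repository
    local_dir="CoIRL-AD-models",               # arbitrary local target
)
print(path)  # checkpoints under .../ckpts, info files under .../info-files
```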

## For more details, please refer to our [GitHub repo](https://github.com/SEU-zxj/CoIRL-AD).