🥇 UniGenBench Leaderboard (Chinese Long)
📚 UniGenBench is a unified benchmark for T2I generation that integrates diverse prompt themes with a comprehensive suite of fine-grained evaluation criteria.
🔧 You can use the official GitHub repo to evaluate your model on UniGenBench.
😊 We release all generated images from the T2I models evaluated on UniGenBench at UniGenBench-Eval-Images. Feel free to use whichever evaluation model suits you to assess and compare the performance of your models (see the sketch below).
📝 To add your own model to the leaderboard, please send an email to Yibin Wang, and we will help with the evaluation and update the leaderboard.
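As an illustration of the workflow above, here is a minimal sketch that loads the released images with 🤗 `datasets` and scores them with an off-the-shelf CLIP model via `transformers`. The dataset repo id and the column names (`image`, `prompt`) are assumptions about the UniGenBench-Eval-Images schema, and CLIP similarity is only one possible choice of evaluation model; swap in whatever scorer fits your setup.

```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Hypothetical repo id -- replace with the actual UniGenBench-Eval-Images dataset path.
DATASET_ID = "your-org/UniGenBench-Eval-Images"

dataset = load_dataset(DATASET_ID, split="train")

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

scores = []
for example in dataset:
    # Column names ("image", "prompt") are assumptions about the dataset schema.
    inputs = processor(
        text=[example["prompt"]],
        images=example["image"],
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image is the scaled image-text cosine similarity.
    scores.append(outputs.logits_per_image.item())

print(f"Mean CLIP score over {len(scores)} samples: {sum(scores) / len(scores):.2f}")
```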
2025-11 | ✗ | 95.42 | 99.42 | 98.84 | 91.91 | 92.93 | 97.14 | 89.36 | 94.31 | 98.27 | 96.78 | 94.23 | 99.16 | 92.97 | 93.59 | 92.81 | 97.46 | 91.52 | 91.95 | 92.26 | 95.64 | 95.79 | 93.91 | 97.99 | 94.66 | 95.85 | 95.87 | 95.79 | 93.27 | 99.21 | 90.87 | 90.14 | 96.27 | 96.47 | 96.01 |