---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10K
---

## Overview

MIRAGE is a benchmark for multimodal information-seeking and reasoning in agricultural expert-guided conversations. It consists of two main components:

- **MMST (Multi-Modal Single-Turn)**: Single-turn multimodal reasoning tasks.
- **MMMT (Multi-Modal Multi-Turn)**: Multi-turn conversational tasks with visual context.

## Sample Usage

You can load each configuration of the dataset with the `datasets` library:

```python
from datasets import load_dataset

# Load the MMST configurations
ds_standard = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Standard")
ds_contextual = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Contextual")

# Load the MMMT configurations
ds_mmmt_direct = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Direct")
ds_mmmt_decomp = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Decomp")
```

## Citation

If you use our benchmark in your research, please cite our paper:

```bibtex
@article{dongre2025mirage,
  title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
  author={Dongre, Vardhan and Gui, Chi and Garg, Shubham and Nayyeri, Hooshang and Tur, Gokhan and Hakkani-T{\"{u}}r, Dilek and Adve, Vikram S},
  journal={arXiv preprint arXiv:2506.20100},
  year={2025}
}
```

## License

This project is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).