---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
dataset_info:
- config_name: MMST_Standard
  description: |
    MIRAGE-MMST Standard Configuration: standard benchmark (train + test).
  citation: |
    @misc{mirage2025,
      title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
      author={},
      year={2025},
    }
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: category
    dtype: string
  - name: entity_type
    dtype: string
  - name: entity_name
    dtype: string
  - name: entity_scientific_name
    dtype: string
  - name: entity_common_names
    list:
      dtype: string
  - name: meta_data_state
    dtype: string
  - name: meta_data_county
    dtype: string
  - name: meta_data_asked_time
    dtype: string
  splits:
  - name: train
    num_examples: 17537
  - name: test
    num_examples: 8188
- config_name: MMST_Contextual
  description: |
    MIRAGE-MMST Contextual Configuration: contextual benchmark (test only).
  citation: |
    @misc{mirage2025,
      title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
      author={},
      year={2025},
    }
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: category
    dtype: string
  - name: entity_type
    dtype: string
  - name: entity_name
    dtype: string
  - name: entity_scientific_name
    dtype: string
  - name: entity_common_names
    list:
      dtype: string
  - name: meta_data_state
    dtype: string
  - name: meta_data_county
    dtype: string
  - name: meta_data_asked_time
    dtype: string
  - name: location_related
    dtype: bool
  - name: time_related
    dtype: bool
  - name: location_related_analysis
    dtype: string
  - name: time_related_analysis
    dtype: string
  splits:
  - name: test
    num_examples: 3934
- config_name: MMMT_Direct
  description: >
    MIRAGE-MMMT Direct Configuration: direct-response dialog benchmark with
    three splits (train, dev, test).
  citation: |
    @misc{mirage2025,
      title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
      author={},
      year={2025},
    }
  features:
  - name: id
    dtype: string
  - name: dialog_context
    dtype: string
  - name: decision
    dtype: string
  - name: utterance
    dtype: string
  - name: dialog_turns
    dtype: int32
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  splits:
  - name: train
    num_examples: 3876
  - name: dev
    num_examples: 878
  - name: test
    num_examples: 861
- config_name: MMMT_Decomp
  description: >
    MIRAGE-MMMT Decomp Configuration: decomposed-dialog benchmark, with
    known/missing goals.
  citation: |
    @misc{mirage2025,
      title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
      author={},
      year={2025},
    }
  features:
  - name: id
    dtype: string
  - name: dialog_context
    dtype: string
  - name: decision
    dtype: string
  - name: utterance
    dtype: string
  - name: dialog_turns
    dtype: int32
  - name: known_goal
    list:
      dtype: string
  - name: missing_goal
    list:
      dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  splits:
  - name: train
    num_examples: 3876
  - name: dev
    num_examples: 878
  - name: test
    num_examples: 861
configs:
- config_name: MMST_Standard
  data_files:
  - split: train
    path: MMST_Standard/train/*.arrow
  - split: test
    path: MMST_Standard/test/*.arrow
- config_name: MMST_Contextual
  data_files:
  - split: test
    path: MMST_Contextual/test/*.arrow
- config_name: MMMT_Direct
  data_files:
  - split: train
    path: MMMT_Direct/train/*.arrow
  - split: dev
    path: MMMT_Direct/dev/*.arrow
  - split: test
    path: MMMT_Direct/test/*.arrow
- config_name: MMMT_Decomp
  data_files:
  - split: train
    path: MMMT_Decomp/train/*.arrow
  - split: dev
    path: MMMT_Decomp/dev/*.arrow
  - split: test
    path: MMMT_Decomp/test/*.arrow
modalities:
- Image
- Text
tags:
- biology
- agriculture
- Long-Form Question Answering
---
# MIRAGE Benchmark
Project Page | Paper | GitHub
MIRAGE is a benchmark for multimodal expert-level reasoning and decision-making in consultative interaction settings, specifically designed for the agriculture domain. It captures the complexity of expert consultations by combining natural user queries, expert-authored responses, and image-based context.
The benchmark spans diverse crop health, pest diagnosis, and crop management scenarios, including more than 7,000 unique biological entities.
## Overview
The benchmark consists of two main components:
- **MMST (Multi-Modal Single-Turn)**: Single-turn multimodal reasoning tasks.
- **MMMT (Multi-Modal Multi-Turn)**: Multi-turn conversational tasks with visual context.
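
Concretely, the feature lists in the metadata above give the two components different record shapes. A toy sketch of one record of each kind (the field names follow the `dataset_info` features; every value below is invented purely for illustration):

```python
# Toy records mirroring the declared feature lists.
# All values are made up for illustration; only the field names are real.

mmst_record = {
    "id": "mmst-0001",
    "question": "What is causing the yellow spots on my tomato leaves?",
    "answer": "The pattern is consistent with early blight; remove affected leaves...",
    "image_1": None,  # a PIL image in the real dataset; None in this sketch
    "image_2": None,
    "image_3": None,
    "category": "plant-disease",
    "entity_type": "plant",
    "entity_name": "tomato",
    "entity_scientific_name": "Solanum lycopersicum",
    "entity_common_names": ["tomato"],
    "meta_data_state": "Illinois",
    "meta_data_county": "Champaign",
    "meta_data_asked_time": "2023-06-14",
}

mmmt_record = {
    "id": "mmmt-0001",
    "dialog_context": "User: My apple tree's leaves are curling...",
    "decision": "respond",  # the decision label set is task-specific; illustrative only
    "utterance": "Leaf curl at this stage is often aphid damage; check the undersides...",
    "dialog_turns": 3,
    "image_1": None,
    "image_2": None,
    "image_3": None,
}
```

The MMST_Contextual and MMMT_Decomp configurations extend these shapes with the extra fields listed in their feature blocks (`location_related`/`time_related` flags and analyses, and `known_goal`/`missing_goal` lists, respectively).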
## Sample Usage
You can load each configuration of the dataset with the `datasets` library:

```python
from datasets import load_dataset

# Load the MMST configurations
ds_standard = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Standard")
ds_contextual = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Contextual")

# Load the MMMT configurations
ds_mmmt_direct = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Direct")
ds_mmmt_decomp = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Decomp")
```
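
Each MMST_Contextual test example carries `location_related` and `time_related` boolean flags. A minimal, library-free sketch of partitioning examples by those flags, using toy records with invented ids as stand-ins for the real test split:

```python
# Toy stand-ins for MMST_Contextual test examples; only the two boolean
# flag fields (which exist in the real schema) are shown here.
examples = [
    {"id": "c1", "location_related": True,  "time_related": False},
    {"id": "c2", "location_related": False, "time_related": True},
    {"id": "c3", "location_related": True,  "time_related": True},
]

location_dependent = [ex["id"] for ex in examples if ex["location_related"]]
time_dependent = [ex["id"] for ex in examples if ex["time_related"]]

print(location_dependent)  # ['c1', 'c3']
print(time_dependent)      # ['c2', 'c3']
```

The same list comprehensions work on the real split after loading it, since each example is a dict with these keys.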
## Citation
If you use our benchmark in your research, please cite our paper:
```bibtex
@article{dongre2025mirage,
  title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
  author={Dongre, Vardhan and Gui, Chi and Garg, Shubham and Nayyeri, Hooshang and Tur, Gokhan and Hakkani-T{\"{u}}r, Dilek and Adve, Vikram S},
  journal={arXiv preprint arXiv:2506.20100},
  year={2025}
}
```
## License
This project is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA 4.0).