language:
  - en
size_categories:
  - n<1K
task_categories:
  - question-answering
  - visual-question-answering
  - multiple-choice
dataset_info:
  - config_name: Chemistry
    features:
      - name: pid
        dtype: string
      - name: question
        dtype: string
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: image_1
        dtype: image
      - name: image_2
        dtype: image
      - name: image_3
        dtype: image
      - name: image_4
        dtype: image
      - name: image_5
        dtype: image
      - name: solution
        dtype: string
      - name: subject
        dtype: string
      - name: task
        dtype: string
      - name: category
        dtype: string
      - name: source
        dtype: string
      - name: type
        dtype: string
      - name: context
        dtype: string
    splits:
      - name: test
        num_examples: 8
    download_size: 415466
  - config_name: Coding
    features:
      - name: pid
        dtype: string
      - name: question
        dtype: string
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: image_1
        dtype: image
      - name: image_2
        dtype: image
      - name: image_3
        dtype: image
      - name: image_4
        dtype: image
      - name: image_5
        dtype: image
      - name: solution
        dtype: string
      - name: subject
        dtype: string
      - name: task
        dtype: string
      - name: category
        dtype: string
      - name: source
        dtype: string
      - name: type
        dtype: string
      - name: context
        dtype: string
    splits:
      - name: test
        num_examples: 8
    download_size: 1693180
  - config_name: Math
    features:
      - name: pid
        dtype: string
      - name: question
        dtype: string
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: image_1
        dtype: image
      - name: image_2
        dtype: image
      - name: image_3
        dtype: image
      - name: image_4
        dtype: image
      - name: image_5
        dtype: image
      - name: solution
        dtype: string
      - name: subject
        dtype: string
      - name: task
        dtype: string
      - name: category
        dtype: string
      - name: source
        dtype: string
      - name: type
        dtype: string
      - name: context
        dtype: string
    splits:
      - name: test
        num_examples: 8
    download_size: 857062
  - config_name: Physics
    features:
      - name: pid
        dtype: string
      - name: question
        dtype: string
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: image_1
        dtype: image
      - name: image_2
        dtype: image
      - name: image_3
        dtype: image
      - name: image_4
        dtype: image
      - name: image_5
        dtype: image
      - name: solution
        dtype: string
      - name: subject
        dtype: string
      - name: task
        dtype: string
      - name: category
        dtype: string
      - name: source
        dtype: string
      - name: type
        dtype: string
      - name: context
        dtype: string
    splits:
      - name: test
        num_examples: 8
    download_size: 566203
  - config_name: All
    features:
      - name: pid
        dtype: string
      - name: question
        dtype: string
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: image_1
        dtype: image
      - name: image_2
        dtype: image
      - name: image_3
        dtype: image
      - name: image_4
        dtype: image
      - name: image_5
        dtype: image
      - name: solution
        dtype: string
      - name: subject
        dtype: string
      - name: task
        dtype: string
      - name: category
        dtype: string
      - name: source
        dtype: string
      - name: type
        dtype: string
      - name: context
        dtype: string
    splits:
      - name: test
        num_examples: 32
    download_size: 3534939
configs:
  - config_name: Chemistry
    data_files:
      - split: test
        path: Chemistry/test-*
  - config_name: Coding
    data_files:
      - split: test
        path: Coding/test-*
  - config_name: Math
    data_files:
      - split: test
        path: Math/test-*
  - config_name: Physics
    data_files:
      - split: test
        path: Physics/test-*
  - config_name: All
    data_files:
      - split: test
        path: All/test-*
tags:
  - chemistry
  - physics
  - math
  - coding

# EMMA Clone Dataset (Small Version)

EMMA Stone is a reduced version of the EMMA (Enhanced MultiModal reAsoning) benchmark, containing 8 samples per subject category and designed for quick testing and development.

This dataset contains:

- Chemistry: 8 samples
- Coding: 8 samples
- Math: 8 samples
- Physics: 8 samples
- All: 32 samples (8 from each category)

## Usage

### Loading with the `datasets` library

```python
from datasets import load_dataset

# Load a specific subject
chemistry_data = load_dataset("winvswon78/emma_stone", "Chemistry", split="test")
math_data = load_dataset("winvswon78/emma_stone", "Math", split="test")
coding_data = load_dataset("winvswon78/emma_stone", "Coding", split="test")
physics_data = load_dataset("winvswon78/emma_stone", "Physics", split="test")

# Load all subjects combined
all_data = load_dataset("winvswon78/emma_stone", "All", split="test")

# Verify the dataset
print(f"Chemistry samples: {len(chemistry_data)}")
print(f"Math samples: {len(math_data)}")
print(f"Coding samples: {len(coding_data)}")
print(f"Physics samples: {len(physics_data)}")
print(f"All samples: {len(all_data)}")
print(f"Subject distribution in All: {all_data['subject']}")
```
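Once loaded, each example can be turned into a model prompt by joining the question text with lettered answer choices. A minimal sketch; the `build_prompt` helper and the sample record below are illustrative assumptions, not part of the dataset or the `datasets` library:

```python
def build_prompt(example):
    """Combine the question text and lettered options into one prompt string."""
    prompt = example["question"]
    options = example.get("options")
    if options:  # multiple-choice: append the choices as lettered lines
        for letter, option in zip("ABCDEFGH", options):
            prompt += f"\n{letter}. {option}"
    return prompt


# Hypothetical sample mirroring the dataset schema
sample = {
    "question": "Which structure is shown in image_1?",
    "options": ["Benzene", "Cyclohexane", "Toluene", "Phenol"],
}
print(build_prompt(sample))
```

Open-ended examples pass through unchanged, since their `options` field is empty.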

### Alternative loading method

If you encounter issues with the config names, you can also load the data directly:

```python
from datasets import Dataset
import pandas as pd

# Load a specific subject directly from its Parquet shard
chemistry_df = pd.read_parquet("hf://datasets/winvswon78/emma_stone/Chemistry/test-00000-of-00001.parquet")
chemistry_dataset = Dataset.from_pandas(chemistry_df)

# Load all subjects
all_df = pd.read_parquet("hf://datasets/winvswon78/emma_stone/All/test-00000-of-00001.parquet")
all_dataset = Dataset.from_pandas(all_df)
```
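Every config follows the same single-shard layout, so the path can be computed rather than hard-coded. A small sketch; the `parquet_path` helper is a hypothetical convenience, not part of any library:

```python
def parquet_path(config: str) -> str:
    """Build the hf:// path to a config's single test shard."""
    return f"hf://datasets/winvswon78/emma_stone/{config}/test-00000-of-00001.parquet"


print(parquet_path("Physics"))
# hf://datasets/winvswon78/emma_stone/Physics/test-00000-of-00001.parquet
```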

## Original EMMA Information

This is a sampled version of the original EMMA benchmark, which targets organic multimodal reasoning across mathematics, physics, chemistry, and coding. EMMA tasks demand advanced cross-modal reasoning that cannot be solved by reasoning independently within each modality.

## Data Format

The dataset is provided in Parquet format and each record contains the following attributes:

```
{
    "pid": [string] Problem ID, e.g., "math_1",
    "question": [string] The question text,
    "options": [list] Choice options for multiple-choice problems; for free-form problems, this may be a 'none' value,
    "answer": [string] The correct answer to the problem,
    "image_1": [image],
    "image_2": [image],
    "image_3": [image],
    "image_4": [image],
    "image_5": [image],
    "solution": [string] The detailed reasoning steps required to solve the problem,
    "subject": [string] The subject of the data, e.g., "Math", "Physics", ...,
    "task": [string] The task of the problem, e.g., "Code Choose Vis",
    "category": [string] The category of the problem, e.g., "2D Transformation",
    "source": [string] The original source dataset of the data, e.g., "math-vista"; for handmade data, this may be "Newly annotated",
    "type": [string] The type of question, e.g., "Multiple Choice", "Open-ended",
    "context": [string] Background knowledge required for the question; for problems without context, this may be a 'none' value,
}
```
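Because `options` and `context` can hold a `'none'` sentinel rather than a real value, it can help to normalize records before processing. A minimal sketch under that assumption; `normalize_record` and the sample record are hypothetical, not part of the dataset tooling:

```python
def normalize_record(record):
    """Return a copy of the record with 'none' sentinels mapped to Python None."""
    cleaned = dict(record)
    for field in ("options", "context"):
        value = cleaned.get(field)
        if value is None or value == "none" or value == []:
            cleaned[field] = None
    return cleaned


# Hypothetical open-ended record following the schema above
raw = {"pid": "math_1", "question": "Compute the area...", "options": "none",
       "context": "none", "type": "Open-ended"}
rec = normalize_record(raw)
print(rec["options"], rec["context"])  # prints: None None
```

After normalization, a simple `if record["options"]:` check cleanly separates multiple-choice from open-ended problems.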

## Citation

```bibtex
@misc{hao2025mllmsreasonmultimodalityemma,
      title={Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark},
      author={Yunzhuo Hao and Jiawei Gu and Huichen Will Wang and Linjie Li and Zhengyuan Yang and Lijuan Wang and Yu Cheng},
      year={2025},
      eprint={2501.05444},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.05444},
}
```