ELAIPBench Dataset
Description
This dataset contains academic questions with evidence passages extracted from research papers. Each question is paired with a relevant passage from the source paper that provides evidence for answering it. The dataset was officially adopted for the CCKS 2025 Academic Paper Question Answering Challenge.
Dataset Structure
The dataset contains 403 questions with the following fields:
- paper_id: ID of the source paper (corresponds to the PDF filename in papers.zip)
- question_type: Type of question (SA-MCQ, MA-MCQ, etc.)
- question: The question text
- answer: The correct answer
- relevant_passage: Evidence passage extracted from the paper
- paper_content: Full text content of the source paper
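To make the schema concrete, here is a minimal sketch of one record as a plain Python dict (the field values are abbreviated placeholders, not real dataset rows), together with a small helper for splitting an MA-MCQ answer string such as "ABC" into option letters. The helper name `parse_answer` is our own illustration, not part of the dataset or the `datasets` library.

```python
# Hypothetical record mirroring the fields listed above (values abbreviated).
record = {
    "paper_id": "0",
    "question_type": "MA-MCQ",
    "question": "Which of the following factors ...",
    "answer": "ABC",  # MA-MCQ answers concatenate the correct option letters
    "relevant_passage": "Next, we formulate a conceptual model ...",
    "paper_content": "Do Llamas Work in English? ...",
}

def parse_answer(answer: str) -> set[str]:
    """Split an answer string like 'ABC' into a set of option letters."""
    return set(answer)

print(parse_answer(record["answer"]))  # e.g. {'A', 'B', 'C'}
```

SA-MCQ rows carry a single letter (e.g. "B"), so the same helper returns a one-element set for them.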
Usage
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("KangKang625/ELAIPBench")
# Access the data
data = dataset['test']
print(f"Number of questions: {len(data)}")
print(f"First question: {data[0]['question']}")
print(f"Paper content length: {len(data[0]['paper_content'])}")
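The card does not specify an official evaluation metric, so the following is only a sketch of one plausible choice: order-insensitive exact match over the answer letters, which treats "BA" and "AB" as the same prediction. The function name `exact_match` and the sample predictions are our own assumptions for illustration.

```python
def exact_match(pred: str, gold: str) -> bool:
    # Compare option letters as sets, so ordering and case do not matter.
    return set(pred.strip().upper()) == set(gold.strip().upper())

# Hypothetical model predictions against gold answers.
preds = ["B", "ACB", "BD"]
golds = ["B", "ABC", "AB"]

accuracy = sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(golds)
print(f"Accuracy: {accuracy:.2f}")  # prints "Accuracy: 0.67" (2 of 3 correct)
```

For SA-MCQ rows this reduces to a plain single-letter comparison, so one metric can cover both question types.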
Citation
If you use this dataset, please cite the original ELAIPBench paper.
License
MIT License