Dataset Viewer
Auto-converted to Parquet
| Column | Type | Stats |
|---|---|---|
| _id | string | lengths 24–24 |
| id | string | lengths 4–121 |
| author | string | lengths 2–42 |
| cardData | string | lengths 2–1.09M |
| disabled | bool | 1 class |
| gated | string | 3 classes |
| lastModified | timestamp[ns] | 2021-02-05 16:03:35 to 2026-01-30 13:34:35 |
| likes | int64 | 0–9.58k |
| trendingScore | float64 | 0–75 |
| private | bool | 1 class |
| sha | string | lengths 40–40 |
| description | string | lengths 0–6.67k |
| downloads | int64 | 0–1.79M |
| downloadsAllTime | int64 | 0–143M |
| tags | list | lengths 1–7.92k |
| createdAt | timestamp[ns] | 2022-03-02 23:29:22 to 2026-01-30 13:33:58 |
| paperswithcode_id | string | 687 classes |
| citation | string | lengths 0–10.7k |
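Since the listing is exposed as a Parquet export, it can be inspected programmatically. Below is a minimal sketch using the `datasets` library; the repo id is a placeholder, as the repository backing this viewer is not named here.

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the dataset actually backing this viewer.
listing = load_dataset("some-org/hub-datasets-listing", split="train")

print(listing.features)  # column names and dtypes, matching the schema table above

# likes is an int64 column, so the listing can be ranked directly.
top = listing.sort("likes", reverse=True)
print(top[0]["id"], top[0]["likes"])
```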

_id: 696ddc1ba806b4bfbcfc0224
id: opendatalab/ChartVerse-SFT-1800K
author: opendatalab
cardData:
{"license": "apache-2.0", "language": ["en"], "task_categories": ["visual-question-answering", "image-text-to-text"], "tags": ["chart", "reasoning", "vision-language", "multimodal", "chart-understanding", "CoT", "SFT", "large-scale"], "size_categories": ["1M<n<10M"]}
disabled: false
gated: False
lastModified: 2026-01-30T08:01:50
likes: 86
trendingScore: 75
private: false
sha: 86fd98bdfac3e7fa2120748e7d6c597e7ee26cf8
description:
ChartVerse-SFT-1800K is an extended large-scale chart reasoning dataset with Chain-of-Thought (CoT) annotations, developed as part of the opendatalab/ChartVerse project. For more details about our method, datasets, and full model series, please visit our Project Page. This dataset contains all verified correct samples without failure rate filtering. Unlike SFT-600K which excludes easy samples (r=0), SFT-1800K includes the complete set of truth-anchored QA pairs for maximum coverage and scale.… See the full description on the dataset page: https://huggingface.co/datasets/opendatalab/ChartVerse-SFT-1800K.
downloads: 1,969
downloadsAllTime: 1,969
tags:
[ "task_categories:visual-question-answering", "task_categories:image-text-to-text", "language:en", "license:apache-2.0", "size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.13606", "region:us", "chart", "reasoning", "vision-language", "multimodal", "chart-understanding", "CoT", "SFT", "large-scale" ]
createdAt: 2026-01-19T07:24:11
paperswithcode_id: null
citation: null
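Note that `cardData` is stored as a single JSON string rather than a struct, so it has to be parsed before its fields can be read. A small sketch; the literal below is abridged from the ChartVerse-SFT-1800K record above.

```python
import json

# Literal abridged from the opendatalab/ChartVerse-SFT-1800K record above.
card_data = '{"license": "apache-2.0", "language": ["en"], "size_categories": ["1M<n<10M"]}'

card = json.loads(card_data)
print(card["license"])          # -> apache-2.0
print(card["size_categories"])  # -> ['1M<n<10M']
```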

_id: 69524c8ad001e56220ced9bc
id: Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b
author: Alibaba-Apsara
cardData:
{"license": "cc-by-4.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["code", "math", "scientific-qa", "instruction-following", "reasoning", "thinking", "gpt-oss-120b", "distill"], "size_categories": ["435K"], "configs": [{"config_name": "stage1", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage1-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}, {"config_name": "stage2", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage2-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}]}
disabled: false
gated: False
lastModified: 2026-01-15T06:39:55
likes: 303
trendingScore: 66
private: false
sha: e9d54e2a3f376fd5c62cafd3c4c99b304cdda698
description:
Superior-Reasoning-SFT-gpt-oss-120b           🚀 Overview The Superior-Reasoning-SFT-gpt-oss-120b dataset is a high-quality, open-source collection containing 435K samples designed to democratize the training of high-performance Long Chain-of-Thought (Long-CoT) models. Unlike standard distilled datasets that rely on random sampling or heuristic filtering, Superior-Reasoning-SFT-gpt-oss-120b is constructed using a principled Distribution-Aligned Sequence… See the full description on the dataset page: https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b.
downloads: 29,428
downloadsAllTime: 29,429
tags:
[ "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:2601.09088", "arxiv:2512.20908", "region:us", "code", "math", "scientific-qa", "instruction-following", "reasoning", "thinking", "gpt-oss-120b", "distill" ]
createdAt: 2025-12-29T09:40:26
paperswithcode_id: null
citation: null

_id: 696b2406e6c69ff4f49745f4
id: sojuL/RubricHub_v1
author: sojuL
cardData:
{"license": "apache-2.0", "language": ["zh", "en"], "tags": ["medical", "science", "wirting", "isntruction", "chat", "general"], "pretty_name": "RubricHub", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "reinforcement-learning", "question-answering"]}
disabled: false
gated: False
lastModified: 2026-01-20T07:16:51
likes: 142
trendingScore: 63
private: false
sha: bec50742963ed3672391fecbcc4b60067b9fa8bc
description:
RubricHub_v1 RubricHub is a large-scale (approximately 110K), multi-domain dataset that provides high-quality rubric-based supervision for open-ended generation tasks. It is constructed via an automated coarse-to-fine rubric generation framework, which integrates principle-guided synthesis, multi-model aggregation, and difficulty evolution to produce comprehensive and highly discriminative evaluation criteria, overcoming the supervision ceiling of coarse or static rubrics.… See the full description on the dataset page: https://huggingface.co/datasets/sojuL/RubricHub_v1.
downloads: 732
downloadsAllTime: 732
tags:
[ "task_categories:text-generation", "task_categories:reinforcement-learning", "task_categories:question-answering", "language:zh", "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.08430", "region:us", "medical", "science", "wirting", "isntruction", "chat", "general" ]
createdAt: 2026-01-17T05:54:14
paperswithcode_id: null
citation: null

_id: 6965b354f2c297a7078582d4
id: Qwen/DeepPlanning
author: Qwen
cardData:
{"language": ["en", "zh"], "license": "apache-2.0", "viewer": false, "task_categories": ["text-generation"], "tags": ["planning", "llm-benchmark", "reasoning", "autonomous-agents"], "pretty_name": "DeepPlanning", "size_categories": ["1k<n<10k"]}
disabled: false
gated: False
lastModified: 2026-01-27T05:22:17
likes: 66
trendingScore: 62
private: false
sha: 4769c4974f6a2ac026a725a9e99320727454ead8
description:
DeepPlanning: Benchmarking Long-Horizon Agentic Planning with Verifiable Constraints DeepPlanningBench is a challenging benchmark for evaluating long-horizon agentic planning capabilities of large language models (LLMs) with verifiable constraints. It features realistic multi-day travel planning and multi-product shopping tasks that require proactive information acquisition, local constrained reasoning, and global constrained optimization. 🌐 Website:… See the full description on the dataset page: https://huggingface.co/datasets/Qwen/DeepPlanning.
downloads: 82
downloadsAllTime: 82
tags:
[ "task_categories:text-generation", "language:en", "language:zh", "license:apache-2.0", "size_categories:1K<n<10K", "format:webdataset", "modality:text", "library:datasets", "library:webdataset", "library:mlcroissant", "arxiv:2601.18137", "region:us", "planning", "llm-benchmark", "reasoning", "autonomous-agents" ]
createdAt: 2026-01-13T02:52:04
paperswithcode_id: null
citation: null

_id: 69660562d230db5333514344
id: FOMO-MRI/FOMO300K
author: FOMO-MRI
cardData:
{"license": "other", "license_name": "license", "tags": ["brain", "mri", "ssl", "foundation_model", "3d", "image"], "pretty_name": "FOMO-300K", "size_categories": ["100K<n<1M"], "task_categories": ["image-feature-extraction", "zero-shot-classification"], "viewer": false, "extra_gated_prompt": "\nThis collection of datasets is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Each individual dataset within the collection retains its original license, which is reported in the corresponding dataset folder. Some datasets are additionally subject to Data Use Agreements (DUAs), which are reported below and in the relevant dataset folders. Users must comply with the applicable license terms and any associated DUAs.\n\nYou are free to:\nShare \u2014 copy and redistribute the material in any medium or format\nAdapt \u2014 remix, transform, and build upon the material\nThe licensor cannot revoke these freedoms as long as you follow the license terms.\n\nUnder the following terms:\nAttribution \u2014 You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\nNonCommercial \u2014 You may not use the material for commercial purposes.\nShareAlike \u2014 If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\nNo additional restrictions \u2014 You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\nNotices:\nYou do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.\n\nNo warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.\n\nFull license: https://creativecommons.org/licenses/by-nc-sa/4.0/\n\nDUAs:\n\nOASIS Data Use Agreement\n\nThe OASIS data are distributed to the greater scientific community under the following terms:\n1. User will not use the OASIS datasets, either alone or in concert with any other information, to make any effort to identify or contact individuals who are or may be the sources of the information in the dataset. If User inadvertently receives identifiable information or otherwise identifies a subject, User will immediately notify OASIS and follow OASISs reasonable written instructions, which may include the return or destruction of identifiable information.\n2. User is strictly prohibited from generating or using images or comparable representations of the face, head, or body for facial recognition, re-identification, or other purposes that could allow the identities of research participants to be readily ascertained.\n3. User will not use or further disclose the OASIS-3 or OASIS-4 except as required by law. User shall not share, distribute, or otherwise make available the OASIS data, in whole or in part, to any third party, including collaborators, without prior written permission from OASIS. All collaborators must independently apply for access and agree to these terms. 
Additionally, User will not use or further disclose any derivative works or derivative data of the OASIS datasets, in any case in whole or in part, that could be used to reconstruct a facial image. User shall report to OASIS immediately upon Users discovery of any unauthorized use or disclosure not permitted by this Data Use Agreement. User shall provide the following information: (1) the nature of the use or disclosure; (2) the information used or disclosed; (3) the identity of the persons and/or entities that made the use or disclosure; and (4) what corrective action will be taken by User as a result of the use or disclosure. User shall take any other reasonable actions available to it to mitigate any detrimental effects of the use or disclosure.\n4. User agrees to implement appropriate administrative, physical, and technical safeguards to protect the OASIS data from unauthorized access, use or disclosure. OASIS data must be stored on secure, access-controlled systems, and only the User authorized under this Data Use Agreement may access the data.\n5. OASIS data are provided for non-commercial, academic research purposes only. Any commercial use, including but not limited to the sale of data or commercial consulting, is strictly prohibited without explicit, prior written authorization from OASIS.\n6. User agrees to retain OASIS data only for as long as necessary to fulfill the research purposes described in Users application. Upon completion of the research or upon request by OASIS, User will securely destroy or return all copies of the data.\n7. User will acknowledge the use of OASIS data and data derived from OASIS data when publicly presenting any results or algorithms that benefitted from their use. Papers, book chapters, books, posters, oral presentations, and all other printed\nand digital presentations of results derived from OASIS data should contain the following: \n - Acknowledgments: Data were provided [in part] by OASIS [insert appropriate OASIS source info]\n (a) OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R, Buckner, J, Csernansky J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (b) OASIS-2: Longitudinal: Principal Investigators: D. Marcus, R, Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, R01 EB009352. AV-45 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (d) OASIS-3_AV1451: Principal Investigators: T. Benzinger, J. Morris; NIH P30 AG066444, AW00006993. AV-1451 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (e) OASIS-4: Clinical Cohort: Principal Investigators: T. Benzinger, L. Koenig, P. LaMontagne\n - Citation: The specific publications that are appropriate to cite in any given study will depend on what OASIS data were used and for what purposes. 
An annotated and current list of OASIS publications is available at http://www.oasis- brains.org.\n (a) OASIS-1: Cross-Sectional: https://doi.org/10.1162/jocn.2007.19.9.1498\n (b) OASIS-2: Longitudinal: https://doi.org/10.1162/jocn.2009.21407\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: https://doi.org/10.1101/2019.12.13.19014902\n (d) OASIS-4: Clinical Cohort: https://doi.org/10.1016/j.nicl.2020.102248\n - All proposed publications or presentations using Florbetapir F18 (AV45) or Flortaucipir F18 (AV1451) PET data must be submitted to Avid Radiopharmaceuticals for review and comment thirty days prior to such presentation or publication for review of intellectual property interests. See Imaging data dictionary for contact information and details.\n8. User agree to provide the Knight ADRC with information on Users use of OASIS data, upon request.\n9. Failure to abide by these data use terms may result in termination of your right to access and use OASIS data. In the event of breach of this Data Use Agreement, OASIS reserves the right to pursue all remedies available at law or in equity, including but not limited to termination of access, notification of the Users institution, and legal action.\n\nBraTS-GEN Data Use Agreement\n\nYou are free to use and/or refer to the BraTS datasets in your own research, provided that you always cite the flagship manuscript (published or pre-published) resulting from the challenge, as well as the following challenge-specific manuscripts:\n\nDataset:\n- Any dataset and/or Med-Perf client\n - Citations Needed\n \u2022 A. Karargyris, R. Umeton, M.J. Sheller, A. Aristizabal, J. George, A. Wuest, S. Pati, et al. \"Federated benchmarking of medical artificial intelligence with MedPerf\". Nature Machine Intelligence. 5:799810 (2023).\n \u2022 DOI: https://doi.org/10.1038/s42256-023-00652-2\n- BraTS-GLI\n - Citations Needed\n 1 U.Baid, et al., The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification, arXiv:2107.02314, 2021.\n 2 B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. \"The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)\", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: https://doi.org/10.1109 TMI.2014.2377694\n 3 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., \"Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features\", Nature Scientific Data, 4:170117 (2017) DOI: https://doi.org/10.1038/sdata.2017.117\n In addition, if there are no restrictions imposed from the journal/conference you submit your paper about citing \"Data Citations\", please be specific and also cite the following:\n 4 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection\", The Cancer Imaging Archive, 2017. DOI: https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q\n 5 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection\", The Cancer Imaging Archive, 2017. 
DOI: https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF\n- BraTS-MEN\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.07642\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.07642\n- BraTS-MET\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2306.00838\n \u2022 DOI: https://doi.org/10.48550/arXiv.2306.00838\n- BraTS-PED\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.17033\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.17033\n- BraTS-SSA\n - Citations Needed\n 1 Adewole M, Rudie JD, Gbadamosi A, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa). arXiv:2305.19369 [eess.IV] (2023).\n \u2022 arXiv: https://arxiv.org/abs/2305.19369\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.19369\n \nNote: Challenge participants agree to cite the initial challenge pre publication manuscript (or the final publication manuscript). You will be contacted through your Synapse affiliated email when the manuscript has been released for citation. Note: Use of the BraTS datasets for creating and submitting benchmark results for publication on MLPerf.org is considered non-commercial use. It is further acceptable to republish results published on MLPerf.org, as well as to create unverified benchmark results consistent with the MLPerf.org rules in other locations. Please note that you should always adhere to the BraTS data usage guidelines and cite appropriately the aforementioned publications, as well as to the terms of use required by MLPerf.org.\n\nGSP Open Access Data Use Terms\n\nI request access to data collected as part of the Brain Genomics Superstruct Project (GSP) of Harvard University and the Massachusetts General Hospital, and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. I will not attempt to link any of the distributed data to any other data that might contain information about the included human subjects.\n3. I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n4. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects e.g., my Internal Review Board or Ethics Committee. Different committees operate under different national, state, and local laws and may interpret regulations differently, so it is important to ask about this.\n5. I may redistribute original GSP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n6. I will acknowledge the use of GSP data and data derived from GSP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from GSP data should contain the following wording in the acknowledgments section: Data were provided [in part] by the Brain Genomics Superstruct Project of Harvard University and the Massachusetts General Hospital, (Principal Investigators: Randy Buckner, Joshua Roffman, and Jordan Smoller), with support from the Center for Brain Science Neuroinformatics Research Group, the Athinoula A. 
Martinos Center for Biomedical Imaging, and the Center for Human Genetic Research. 20 individual investigators at Harvard and MGH generously contributed data to the overall project.\n (b) Authors of publications or presentations using GSP data should cite relevant publications describing the methods used by the GSP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what GSP data were used and for what purposes. An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://neuroinformatics.harvard.edu/gsp/\n (c) The GSP as a consortium should not be included as an author of publications or presentations if this authorship would be based solely on the use of GSP data.\n7. Failure to abide by these guidelines will result in termination of my privileges to access GSP data.\n\nHCP WU-Minn and Test-Retest Data Use Terms\n\nI request access to data collected by the Washington University - University of Minnesota Consortium of the Human Connectome Project (WU-Minn HCP), and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n3. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects, e.g. my IRB or Ethics Committee. The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals. Different committees operate under different national, state and local laws and may interpret regulations differently, so it is important to ask about this. If needed and upon request, the HCP will provide a certificate stating that you have accepted the HCP Open Access Data Use Terms.\n4. I may redistribute original WU-Minn HCP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n5. I will acknowledge the use of WU-Minn HCP data and data derived from WU-Minn HCP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from HCP data should contain the following wording in the acknowledgments section: \"Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.\"\n (b) Authors of publications or presentations using WU-Minn HCP data should cite relevant publications describing the methods used by the HCP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what HCP data were used and for what purposes. 
An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://www.humanconnectome.org/about/acknowledgehcp.html\n (c) The WU-Minn HCP Consortium as a whole should not be included as an author of publications or presentations if this authorship would be based solely on the use of WU-Minn HCP data.\n6. Failure to abide by these guidelines will result in termination of my privileges to access WU-Minn HCP data.\n\nBy requesting access, you agree to the above terms.\n", "extra_gated_fields": {"I agree to these terms": "checkbox", "Name": "text", "Email": "text"}}
disabled: false
gated: auto
lastModified: 2026-01-25T09:25:23
likes: 77
trendingScore: 59
private: false
sha: 580083cd4f33b145d5ffdc57265915128e541ffe
description:
FOMO300K: Brain MRI Dataset for Large-Scale Self-Supervised Learning with Clinical Data Dataset paper preprint: A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning. https://arxiv.org/pdf/2506.14432v2. Description FOMO-300K is a large-scale dataset of brain MRI scans, including both clinical and research-grade scans. The dataset includes a wide range of sequences, including T1, MPRAGE, T2, T2*, FLAIR, SWI, T1c, PD, DWI… See the full description on the dataset page: https://huggingface.co/datasets/FOMO-MRI/FOMO300K.
downloads: 21,863
downloadsAllTime: 21,863
tags:
[ "task_categories:image-feature-extraction", "task_categories:zero-shot-classification", "license:other", "size_categories:100K<n<1M", "modality:3d", "modality:image", "arxiv:2506.14432", "region:us", "brain", "mri", "ssl", "foundation_model", "3d", "image" ]
createdAt: 2026-01-13T08:42:10
paperswithcode_id: null
citation: null

_id: 6967b2da7b115954f1c9327c
id: mercor/apex-agents
author: mercor
cardData:
{"license": "cc-by-4.0", "language": ["en"], "tags": ["agents", "benchmarking", "finance", "legal", "management-consulting", "tool-use", "long-horizon"], "pretty_name": "apex-agents", "size_categories": ["n<1K"]}
disabled: false
gated: False
lastModified: 2026-01-22T00:33:03
likes: 66
trendingScore: 58
private: false
sha: 602aae289ba9f4b74c27635e6f3a1738b000e5be
description:
APEX–Agents APEX–Agents is a benchmark from Mercor for evaluating whether AI agents can execute long-horizon, cross-application professional services tasks. Tasks were created by investment banking analysts, management consultants, and corporate lawyers, and require agents to navigate realistic work environments with files and tools (e.g., docs, spreadsheets, PDFs, email, chat, calendar). Tasks: 480 total (160 per job category) Worlds: 33 total (10 banking, 11 consulting, 12 law)… See the full description on the dataset page: https://huggingface.co/datasets/mercor/apex-agents.
downloads: 5,270
downloadsAllTime: 5,270
tags:
[ "language:en", "license:cc-by-4.0", "size_categories:n<1K", "arxiv:2601.14242", "region:us", "agents", "benchmarking", "finance", "legal", "management-consulting", "tool-use", "long-horizon" ]
createdAt: 2026-01-14T15:14:34
paperswithcode_id: null
citation: null

_id: 6938038933eda94c0094c844
id: raidium/RadImageNet-VQA
author: raidium
cardData:
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10M"], "task_categories": ["visual-question-answering"], "tags": ["medical"], "pretty_name": "RadImageNet-VQA", "dataset_info": [{"config_name": "alignment", "features": [{"name": "image", "dtype": "image"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "null"}, {"name": "is_abnormal", "dtype": "bool"}, {"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 29401649909, "num_examples": 750009}, {"name": "val", "num_bytes": 3175441830, "num_examples": 83668}], "download_size": 38405331105, "dataset_size": 32577091739}, {"config_name": "benchmark", "features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "choices", "list": "string"}, {"name": "answer", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "string"}, {"name": "is_abnormal", "dtype": "bool"}, {"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 414947216, "num_examples": 9000}], "download_size": 361133763, "dataset_size": 414947216}, {"config_name": "instruct", "features": [{"name": "image", "dtype": "image"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "metadata", "struct": [{"name": "content_type", "dtype": "string"}, {"name": "correct_text", "dtype": "string"}, {"name": "is_abnormal", "dtype": "bool"}, {"name": "location", "dtype": "string"}, {"name": "modality", "dtype": "string"}, {"name": "pathology", "dtype": "string"}, {"name": "question_id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 29904541796, "num_examples": 750009}, {"name": "val", "num_bytes": 3231558586, "num_examples": 83668}], "download_size": 38424398344, "dataset_size": 33136100382}], "configs": [{"config_name": "alignment", "data_files": [{"split": "train", "path": "alignment/train-*"}, {"split": "val", "path": "alignment/val-*"}]}, {"config_name": "instruct", "data_files": [{"split": "train", "path": "instruct/train-*"}, {"split": "val", "path": "instruct/val-*"}]}, {"config_name": "benchmark", "data_files": [{"split": "test", "path": "benchmark/test-*"}]}], "extra_gated_prompt": "### RADIMAGENET LLC Dataset Research Use Agreement\n \n1. RadImageNet grants you permission, upon your agreeing to the terms of the Research Use Agreement, to view and use the Dataset for personal, non-commercial (e.g., academic) research purposes only. Any commercial use, sale, or other monetization, by you or your affiliates, is strictly prohibited under any and all circumstances.\n2. Other than any limited rights expressly granted herein to you, RadImageNet retains all rights, title, and interest in the Dataset.\n3. You may make a verbatim copy of the Dataset for non-commercial research use as permitted in the Research Use Agreement. You may not alter this verbatim copy for any reason. 
If another user within your organization wishes to use the Dataset, they must register as an individual user and comply with all the terms of the Research Use Agreement.\n4. YOU MAY NOT DISTRIBUTE, PUBLISH, OR REPRODUCE A COPY of any portion, including the entirety, of the Dataset to anyone without express and specific prior written permission from RadImageNet.\n5. YOU MAY NOT SHARE THE DOWNLOAD LINK to the Dataset with others. For example, if someone other than you within your organization wishes to use or view the Dataset, they must register as an individual user and agree to and comply with all the terms of the Research Use Agreement.\n6. You must not modify, reverse engineer, decompile, or create derivative works from the Dataset. You must not remove or alter any copyright or other proprietary notices in the Dataset.\n7. The Dataset has not been reviewed or approved by the Food and Drug Administration, or any other regulatory agency of the United States of America. The Dataset is being provided to you strictly and only for non-clinical, research use. In no event shall data or images generated through the use, directly or indirectly, in whole or in part, of the Dataset be used or relied upon in the diagnosis or provision of patient care. This Research Use Agreement expressly forbids the use, directly or indirectly, in whole or in part, of the Dataset in the diagnosis or provision of patient care.\n8. THE DATASET IS PROVIDED \u201cAS IS,\u201d AND RADIMAGENET AND ITS COLLABORATORS MAKE NO WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE,2 NOR DO THEY ASSUME ANY LIABILITY OR RESPONSIBILITY FOR THE USE OF THE DATASET.\n9. You will not attempt to identify or re-identify any of the individual data subjects (e.g., patients). Identification or re-identification of individuals is strictly prohibited. Any identification or re-identification of any individual data subject shall be immediately reported to RadImageNet and may be subject to immediate termination of the use of the Dataset.\n\n10. Any violation of the Research Use Agreement or other impermissible use shall be grounds for immediate termination of use of the Dataset. It is your duty to promptly report to RadImageNet any knowledge of any violation at any time. In the event that RadImageNet determines that you have violated this Research Use Agreement or made other impermissible use of the Dataset, RadImageNet may direct that you immediately return all copies of the Dataset and retain no copies thereof. RadImageNet may do this even if you did not cause the violation or impermissible use.\n\nIn consideration for your agreement to the terms and conditions contained in the Research Use Agreement, RadImageNet grants you limited permission to view and use the Dataset for personal, non-commercial research, as described herein. You may not otherwise copy, reproduce, retransmit, distribute, publish, commercially exploit or otherwise transfer any material from or related to the Dataset.\n#### Limitation of Use\nYou may use the Dataset for legal purposes only.\n#### Indemnification\nYou agree to indemnify and hold RadImageNet harmless from and not liable in any way for any claims, losses or damages, including legal fees, arising out of or resulting from your use of the Dataset or your violation or role in violation of the Research Use Agreement. You agree to fully cooperate in RadImageNet\u2019s defense against any such claims. 
These terms and all other terms of the Research Use Agreement shall be governed by and interpreted in accordance with the laws of New York State.", "extra_gated_fields": {"Name": "text", "Title": "text", "Date": "date_picker", "By clicking Submit below I accept the terms of this RADIMAGENET LLC Dataset Research Use Agreement (hereinafter \u201cthe Research Use Agreement\u201d), as well as to the Terms of Use of the RADIMAGENET LLC (hereinafter \u201cRadImageNet\u201d) website as posted and updated periodically": "checkbox"}, "extra_gated_button_content": "Submit"}
disabled: false
gated: auto
lastModified: 2025-12-19T10:06:57
likes: 61
trendingScore: 49
private: false
sha: fe2154107adfd74f5b8218be6d2b3b127b668d32
description:
RadImageNet-VQA: A Large-Scale CT and MRI Dataset for Radiologic Visual Question Answering 📖 Paper Dataset Details We introduce RadImageNet-VQA, a large-scale dataset designed for training and benchmarking radiologic VQA on CT and MRI exams. Built from the CT/MRI subset of RadImageNet and its expert-curated anatomical and pathological annotations, RadImageNet-VQA provides 750K images with 7.5M generated samples, including 750K medical captions for visual-text… See the full description on the dataset page: https://huggingface.co/datasets/raidium/RadImageNet-VQA.
downloads: 1,579
downloadsAllTime: 1,798
tags:
[ "task_categories:visual-question-answering", "language:en", "license:apache-2.0", "size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "medical" ]
createdAt: 2025-12-09T11:10:01
paperswithcode_id: null
citation: null
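The record above is gated (`gated: auto`), so loading it requires accepting the Research Use Agreement on the dataset page and authenticating. A hedged sketch, assuming the `benchmark` config and `test` split declared in its card:

```python
from datasets import load_dataset
from huggingface_hub import login

login()  # or set HF_TOKEN; the gate must also be accepted on the dataset page

bench = load_dataset("raidium/RadImageNet-VQA", "benchmark", split="test")
print(bench[0]["question"])  # features per the card: question, choices, answer, ...
```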

_id: 69645867fd167898fdec27e6
id: moonworks/lunara-aesthetic
author: moonworks
cardData:
{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2953317713, "num_examples": 2000}], "download_size": 2970387971, "dataset_size": 2953317713}, "task_categories": ["text-to-image"], "tags": ["art"], "size_categories": ["1K<n<10K"]}
disabled: false
gated: False
lastModified: 2026-01-22T08:40:29
likes: 74
trendingScore: 48
private: false
sha: fcf45a62e226560ae63e60eb01c4d40372457965
description:
Dataset Card for Moonworks Lunara Aesthetic Dataset Sample Images Dataset Summary paper: https://arxiv.org/abs/2601.07941 The Lunara Aesthetic Dataset is a curated collection of 2,000 high-quality image–prompt pairs designed for controlled research on prompt grounding, style conditioning, and aesthetic alignment in text-to-image generation. All images are generated using the Moonworks Lunara, a sub-10B parameter… See the full description on the dataset page: https://huggingface.co/datasets/moonworks/lunara-aesthetic.
downloads: 5,911
downloadsAllTime: 5,911
tags:
[ "task_categories:text-to-image", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.07941", "region:us", "art" ]
createdAt: 2026-01-12T02:11:51
paperswithcode_id: null
citation: null

_id: 6969078587ce326016ddda46
id: lightonai/LightOnOCR-mix-0126
author: lightonai
cardData:
{"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "page_idx", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "element_counts", "struct": [{"name": "formulas", "dtype": "int64"}, {"name": "images", "dtype": "int64"}, {"name": "tables", "dtype": "int64"}]}, {"name": "token_length", "dtype": "int64"}]}], "splits": [{"name": "pdfa_train", "num_bytes": 38584453222, "num_examples": 16428833}, {"name": "pdfa_validation", "num_bytes": 4689687, "num_examples": 2000}], "download_size": 21111271721, "dataset_size": 38589142909}, "configs": [{"config_name": "default", "data_files": [{"split": "pdfa_train", "path": "data/pdfa_train-*"}, {"split": "pdfa_validation", "path": "data/pdfa_validation-*"}]}], "license": "other", "task_categories": ["image-to-text"], "language": ["en", "fr", "de", "es", "it", "ja", "ru", "pl", "nl", "zh", "pt", "bg", "tr", "ur", "hi", "th", "ar", "sw", "el", "vi"], "tags": ["ocr"], "size_categories": ["10M<n<100M"], "pretty_name": "LightOnOCR-mix"}
disabled: false
gated: False
lastModified: 2026-01-26T16:29:46
likes: 106
trendingScore: 46
private: false
sha: af0218b88fc337468d91f9c107ae33453f65cf30
description:
LightOnOCR-mix-0126 LightOnOCR-mix-0126 is a large-scale OCR training dataset built via distillation: a strong vision–language model is prompted to produce naturally ordered full-page transcriptions (Markdown with LaTeX math spans and HTML tables) from rendered document pages. The dataset is designed as supervision for end-to-end OCR / document-understanding models that aim to output clean, human-readable text in a consistent format. This repository releases the PDFA-derived… See the full description on the dataset page: https://huggingface.co/datasets/lightonai/LightOnOCR-mix-0126.
downloads: 1,384
downloadsAllTime: 1,384
tags:
[ "task_categories:image-to-text", "language:en", "language:fr", "language:de", "language:es", "language:it", "language:ja", "language:ru", "language:pl", "language:nl", "language:zh", "language:pt", "language:bg", "language:tr", "language:ur", "language:hi", "language:th", "language:ar", "language:sw", "language:el", "language:vi", "license:other", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.14251", "region:eu", "ocr" ]
createdAt: 2026-01-15T15:28:05
paperswithcode_id: null
citation: null

_id: 696a53dfe8359277ca69b28a
id: rootsautomation/pubmed-ocr
author: rootsautomation
cardData:
{"language": ["en"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text", "image-text-to-text"], "pretty_name": "PubMed-OCR", "arxiv": 2601.11425, "dataset_info": {"features": [{"name": "basename", "dtype": "string"}, {"name": "page", "dtype": "int32"}, {"name": "license", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "accession_id", "dtype": "string"}, {"name": "article_citation", "dtype": "string"}, {"name": "pdf_bytes", "dtype": "binary"}, {"name": "ocr_json", "dtype": "string"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train-*.parquet"}]}], "license_name": "pubmed-ocr-multiple-cc-licenses", "tags": ["biology", "medical", "ocr", "multimodal"]}
disabled: false
gated: False
lastModified: 2026-01-22T19:58:29
likes: 61
trendingScore: 44
private: false
sha: d03682f1b9e4d1c2a4d48657063cc467a464363d
description:
PubMed-OCR: PMC Open Access OCR Annotations PubMed-OCR is an OCR-centric corpus of scientific articles derived from PubMed Central Open Access PDFs. Each page is rendered to an image and annotated with Google Cloud Vision OCR, released in a compact JSON schema with word-, line-, and paragraph-level bounding boxes. Scale (release): 209.5K articles ~1.5M pages ~1.3B words (OCR tokens) This dataset is intended to support layout-aware modeling, coordinate-grounded QA, and evaluation… See the full description on the dataset page: https://huggingface.co/datasets/rootsautomation/pubmed-ocr.
downloads: 2,632
downloadsAllTime: 2,632
tags:
[ "task_categories:image-to-text", "task_categories:image-text-to-text", "language:en", "license:other", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.11425", "region:us", "biology", "medical", "ocr", "multimodal" ]
createdAt: 2026-01-16T15:06:07
paperswithcode_id: null
citation: null

_id: 67a404bc8c6d42c5ec097433
id: Anthropic/EconomicIndex
author: Anthropic
cardData:
{"language": "en", "pretty_name": "EconomicIndex", "tags": ["AI", "LLM", "Economic Impacts", "Anthropic"], "viewer": true, "license": "mit", "configs": [{"config_name": "release_2026_01_15", "data_files": [{"split": "raw_claude_ai", "path": "release_2026_01_15/data/intermediate/aei_raw_claude_ai_2025-11-13_to_2025-11-20.csv"}, {"split": "raw_1p_api", "path": "release_2025_09_15/data/intermediate/aei_raw_1p_api_2025-11-13_to_2025-11-20.csv"}]}]}
disabled: false
gated: False
lastModified: 2026-01-15T23:52:53
likes: 444
trendingScore: 43
private: false
sha: f7f2edfbbcf28329dd621fc8e3cc83d0d99b72eb
description:
The Anthropic Economic Index Overview The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy. Data Releases This repository contains multiple data releases, each with its own documentation: 2026-01-15 Release: Updated analysis with economic primitives and Sonnet 4.5 2025-09-15 Release: Updated analysis with geographic and first-party API data using Sonnet 4 2025-03-27 Release: Updated… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex.
downloads: 6,836
downloadsAllTime: 41,131
tags:
[ "language:en", "license:mit", "arxiv:2503.04761", "region:us", "AI", "LLM", "Economic Impacts", "Anthropic" ]
createdAt: 2025-02-06T00:39:24
paperswithcode_id: null
citation: null

_id: 67fce65dd1ec7d15ba6a2da3
id: zwhe99/DeepMath-103K
author: zwhe99
cardData:
{"license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "final_answer", "dtype": "string"}, {"name": "difficulty", "dtype": "float64"}, {"name": "topic", "dtype": "string"}, {"name": "r1_solution_1", "dtype": "string"}, {"name": "r1_solution_2", "dtype": "string"}, {"name": "r1_solution_3", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4959744761.05883, "num_examples": 103022}], "download_size": 2136106260, "dataset_size": 4959744761.05883}, "task_categories": ["text-generation", "text2text-generation"], "language": ["en"], "tags": ["math", "reasoning", "rl"], "pretty_name": "deepmath-103k", "size_categories": ["100K<n<1M"]}
disabled: false
gated: False
lastModified: 2025-05-29T03:37:07
likes: 341
trendingScore: 43
private: false
sha: 5cf055d1fe3d7a2eb19719ac020211469736ae44
description:
DeepMath-103K 🔥 News May 8, 2025: We found that 48 samples contained hints that revealed the answers. The relevant questions have now been revised to remove the leaked answers. April 14, 2025: We release DeepMath-103K, a large-scale dataset featuring challenging, verifiable, and decontaminated math problems tailored for RL and SFT. We open source:… See the full description on the dataset page: https://huggingface.co/datasets/zwhe99/DeepMath-103K.
downloads: 7,836
downloadsAllTime: 92,236
tags:
[ "task_categories:text-generation", "language:en", "license:mit", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2504.11456", "region:us", "math", "reasoning", "rl" ]
createdAt: 2025-04-14T10:41:33
paperswithcode_id: null
citation: null

_id: 696e2528357a40707550b1c4
id: google/WaxalNLP
author: google
cardData:
{"license": ["cc-by-sa-4.0", "cc-by-4.0"], "annotation_creators": ["human-annotated", "crowdsourced"], "language_creators": ["creator_1"], "tags": ["audio", "automatic-speech-recognition", "text-to-speech"], "language": ["ach", "aka", "dag", "dga", "ewe", "fat", "ful", "hau", "ibo", "kpo", "lin", "lug", "mas", "mlg", "nyn", "sna", "sog", "swa", "twi", "yor"], "multilinguality": ["multilingual"], "pretty_name": "Waxal NLP Datasets", "task_categories": ["automatic-speech-recognition", "text-to-speech"], "source_datasets": ["UGSpeechData", "DigitalUmuganda/AfriVoice", "original"], "configs": [{"config_name": "ach_asr", "data_files": [{"split": "train", "path": "data/ASR/ach/ach-train-*"}, {"split": "validation", "path": "data/ASR/ach/ach-validation-*"}, {"split": "test", "path": "data/ASR/ach/ach-test-*"}, {"split": "unlabeled", "path": "data/ASR/ach/ach-unlabeled-*"}]}, {"config_name": "ach_tts", "data_files": [{"split": "train", "path": "data/TTS/ach/ach-train-*"}, {"split": "validation", "path": "data/TTS/ach/ach-validation-*"}, {"split": "test", "path": "data/TTS/ach/ach-test-*"}]}, {"config_name": "aka_asr", "data_files": [{"split": "train", "path": "data/ASR/aka/aka-train-*"}, {"split": "validation", "path": "data/ASR/aka/aka-validation-*"}, {"split": "test", "path": "data/ASR/aka/aka-test-*"}, {"split": "unlabeled", "path": "data/ASR/aka/aka-unlabeled-*"}]}, {"config_name": "dag_asr", "data_files": [{"split": "train", "path": "data/ASR/dag/dag-train-*"}, {"split": "validation", "path": "data/ASR/dag/dag-validation-*"}, {"split": "test", "path": "data/ASR/dag/dag-test-*"}, {"split": "unlabeled", "path": "data/ASR/dag/dag-unlabeled-*"}]}, {"config_name": "dga_asr", "data_files": [{"split": "train", "path": "data/ASR/dga/dga-train-*"}, {"split": "validation", "path": "data/ASR/dga/dga-validation-*"}, {"split": "test", "path": "data/ASR/dga/dga-test-*"}, {"split": "unlabeled", "path": "data/ASR/dga/dga-unlabeled-*"}]}, {"config_name": "ewe_asr", "data_files": [{"split": "train", "path": "data/ASR/ewe/ewe-train-*"}, {"split": "validation", "path": "data/ASR/ewe/ewe-validation-*"}, {"split": "test", "path": "data/ASR/ewe/ewe-test-*"}, {"split": "unlabeled", "path": "data/ASR/ewe/ewe-unlabeled-*"}]}, {"config_name": "fat_tts", "data_files": [{"split": "train", "path": "data/TTS/fat/fat-train-*"}, {"split": "validation", "path": "data/TTS/fat/fat-validation-*"}, {"split": "test", "path": "data/TTS/fat/fat-test-*"}]}, {"config_name": "ful_asr", "data_files": [{"split": "train", "path": "data/ASR/ful/ful-train-*"}, {"split": "validation", "path": "data/ASR/ful/ful-validation-*"}, {"split": "test", "path": "data/ASR/ful/ful-test-*"}, {"split": "unlabeled", "path": "data/ASR/ful/ful-unlabeled-*"}]}, {"config_name": "ful_tts", "data_files": [{"split": "train", "path": "data/TTS/ful/ful-train-*"}, {"split": "validation", "path": "data/TTS/ful/ful-validation-*"}, {"split": "test", "path": "data/TTS/ful/ful-test-*"}]}, {"config_name": "hau_tts", "data_files": [{"split": "train", "path": "data/TTS/hau/hau-train-*"}, {"split": "validation", "path": "data/TTS/hau/hau-validation-*"}, {"split": "test", "path": "data/TTS/hau/hau-test-*"}]}, {"config_name": "ibo_tts", "data_files": [{"split": "train", "path": "data/TTS/ibo/ibo-train-*"}, {"split": "validation", "path": "data/TTS/ibo/ibo-validation-*"}, {"split": "test", "path": "data/TTS/ibo/ibo-test-*"}]}, {"config_name": "kpo_asr", "data_files": [{"split": "train", "path": "data/ASR/kpo/kpo-train-*"}, {"split": "validation", "path": 
"data/ASR/kpo/kpo-validation-*"}, {"split": "test", "path": "data/ASR/kpo/kpo-test-*"}, {"split": "unlabeled", "path": "data/ASR/kpo/kpo-unlabeled-*"}]}, {"config_name": "lin_asr", "data_files": [{"split": "train", "path": "data/ASR/lin/lin-train-*"}, {"split": "validation", "path": "data/ASR/lin/lin-validation-*"}, {"split": "test", "path": "data/ASR/lin/lin-test-*"}, {"split": "unlabeled", "path": "data/ASR/lin/lin-unlabeled-*"}]}, {"config_name": "lug_asr", "data_files": [{"split": "train", "path": "data/ASR/lug/lug-train-*"}, {"split": "validation", "path": "data/ASR/lug/lug-validation-*"}, {"split": "test", "path": "data/ASR/lug/lug-test-*"}, {"split": "unlabeled", "path": "data/ASR/lug/lug-unlabeled-*"}]}, {"config_name": "lug_tts", "data_files": [{"split": "train", "path": "data/TTS/lug/lug-train-*"}, {"split": "validation", "path": "data/TTS/lug/lug-validation-*"}, {"split": "test", "path": "data/TTS/lug/lug-test-*"}]}, {"config_name": "mas_asr", "data_files": [{"split": "train", "path": "data/ASR/mas/mas-train-*"}, {"split": "validation", "path": "data/ASR/mas/mas-validation-*"}, {"split": "test", "path": "data/ASR/mas/mas-test-*"}, {"split": "unlabeled", "path": "data/ASR/mas/mas-unlabeled-*"}]}, {"config_name": "mlg_asr", "data_files": [{"split": "train", "path": "data/ASR/mlg/mlg-train-*"}, {"split": "validation", "path": "data/ASR/mlg/mlg-validation-*"}, {"split": "test", "path": "data/ASR/mlg/mlg-test-*"}, {"split": "unlabeled", "path": "data/ASR/mlg/mlg-unlabeled-*"}]}, {"config_name": "nyn_asr", "data_files": [{"split": "train", "path": "data/ASR/nyn/nyn-train-*"}, {"split": "validation", "path": "data/ASR/nyn/nyn-validation-*"}, {"split": "test", "path": "data/ASR/nyn/nyn-test-*"}, {"split": "unlabeled", "path": "data/ASR/nyn/nyn-unlabeled-*"}]}, {"config_name": "nyn_tts", "data_files": [{"split": "train", "path": "data/TTS/nyn/nyn-train-*"}, {"split": "validation", "path": "data/TTS/nyn/nyn-validation-*"}, {"split": "test", "path": "data/TTS/nyn/nyn-test-*"}]}, {"config_name": "sna_asr", "data_files": [{"split": "train", "path": "data/ASR/sna/sna-train-*"}, {"split": "validation", "path": "data/ASR/sna/sna-validation-*"}, {"split": "test", "path": "data/ASR/sna/sna-test-*"}, {"split": "unlabeled", "path": "data/ASR/sna/sna-unlabeled-*"}]}, {"config_name": "sog_asr", "data_files": [{"split": "train", "path": "data/ASR/sog/sog-train-*"}, {"split": "validation", "path": "data/ASR/sog/sog-validation-*"}, {"split": "test", "path": "data/ASR/sog/sog-test-*"}, {"split": "unlabeled", "path": "data/ASR/sog/sog-unlabeled-*"}]}, {"config_name": "swa_tts", "data_files": [{"split": "train", "path": "data/TTS/swa/swa-train-*"}, {"split": "validation", "path": "data/TTS/swa/swa-validation-*"}, {"split": "test", "path": "data/TTS/swa/swa-test-*"}]}, {"config_name": "twi_tts", "data_files": [{"split": "train", "path": "data/TTS/twi/twi-train-*"}, {"split": "validation", "path": "data/TTS/twi/twi-validation-*"}, {"split": "test", "path": "data/TTS/twi/twi-test-*"}]}, {"config_name": "yor_tts", "data_files": [{"split": "train", "path": "data/TTS/yor/yor-train-*"}, {"split": "validation", "path": "data/TTS/yor/yor-validation-*"}, {"split": "test", "path": "data/TTS/yor/yor-test-*"}]}], "dataset_info": [{"config_name": "ach_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, 
{"config_name": "ach_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "aka_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dag_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "dga_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ewe_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "fat_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ful_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "hau_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "ibo_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "kpo_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lin_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": 
"gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "lug_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mas_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "mlg_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "nyn_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sna_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "sog_asr", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "swa_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "twi_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}, {"config_name": "yor_tts", "features": [{"name": "id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "audio", "dtype": "audio"}]}]}
false
False
2026-01-26T17:30:43
47
43
false
75c875b3ec0731682ad9a68dd1a784856eae1378
Waxal Datasets Dataset Description The Waxal project provides datasets for both Automated Speech Recognition (ASR) and Text-to-Speech (TTS) for African languages. The goal of this dataset's creation and release is to facilitate research that improves the accuracy and fluency of speech and language technology for these underserved languages, and to serve as a repository for digital preservation. The Waxal datasets are collections acquired through partnerships with Makerere… See the full description on the dataset page: https://huggingface.co/datasets/google/WaxalNLP.
1,509
1,509
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "language_creators:creator_1", "multilinguality:multilingual", "source_datasets:UGSpeechData", "source_datasets:DigitalUmuganda/AfriVoice", "source_datasets:original", "language:ach", "language:aka", "language:dag", "language:dga", "language:ewe", "language:fat", "language:ful", "language:hau", "language:ibo", "language:kpo", "language:lin", "language:lug", "language:mas", "language:mlg", "language:nyn", "language:sna", "language:sog", "language:swa", "language:twi", "language:yor", "license:cc-by-sa-4.0", "license:cc-by-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us", "audio", "automatic-speech-recognition", "text-to-speech" ]
2026-01-19T12:35:52
null
null
6978a37bcc1cd38620f46bbc
MiniMaxAI/role-play-bench
MiniMaxAI
{"language": ["zh", "en"], "license": "apache-2.0", "task_categories": ["text-generation"], "size_categories": ["1K<n<10K"], "configs": [{"config_name": "seeds_zh", "data_files": [{"split": "test", "path": "data/zh/seeds.parquet"}]}, {"config_name": "seeds_en", "data_files": [{"split": "test", "path": "data/en/seeds.parquet"}]}, {"config_name": "dialogues_zh", "data_files": [{"split": "test", "path": "data/zh/dialogues.parquet"}]}, {"config_name": "dialogues_en", "data_files": [{"split": "test", "path": "data/en/dialogues.parquet"}]}, {"config_name": "evaluations_zh", "data_files": [{"split": "test", "path": "data/zh/evaluations.parquet"}]}, {"config_name": "evaluations_en", "data_files": [{"split": "test", "path": "data/en/evaluations.parquet"}]}]}
false
False
2026-01-28T04:01:11
40
40
false
3c1be2a56afbcaab19ae6b40b8a24429eae792f5
Role-play Benchmark A comprehensive benchmark for evaluating Role-play Agents in Chinese and English scenarios. Dataset Summary Role-play Benchmark is designed to evaluate Role-play Agents' ability to deliver immersive role-play experiences through Situated Reenactment. Unlike traditional benchmarks with verifiable answers, Role-play is fundamentally non-verifiable, e.g., there's no single "correct" response when a tsundere character is asked "Do you like me?". Instead… See the full description on the dataset page: https://huggingface.co/datasets/MiniMaxAI/role-play-bench.
117
117
[ "task_categories:text-generation", "language:zh", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-01-27T11:37:31
null
null
68ba0ffd343a84103b603c45
Pageshift-Entertainment/LongPage
Pageshift-Entertainment
{"pretty_name": "LongPage", "dataset_name": "LongPage", "library_name": "datasets", "language": ["en"], "license": ["cc-by-4.0", "other"], "task_categories": ["text-generation"], "task_ids": ["language-modeling", "text2text-generation"], "size_categories": ["n<1K"], "source_datasets": ["original"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "multilinguality": ["monolingual"], "tags": ["long-context", "cot", "reasoning", "creative-writing", "Cold start reasoning data"], "pretty_visual": "assets/cover_image.png"}
false
False
2026-01-20T14:01:26
139
37
false
27d907b6a9f92682110e68ef91f001b4812698d6
Overview 🚀📚 The first comprehensive dataset for training AI models to write complete novels with sophisticated reasoning. 🧠 Hierarchical Reasoning Architecture — Multi-layered planning traces including character archetypes, story arcs, world rules, and scene breakdowns. A complete cognitive roadmap for long-form narrative construction. 📖 Complete Novel Coverage — From 40,000 to 600,000+ tokens per book, spanning novellas to epic series with consistent quality throughout. ⚡… See the full description on the dataset page: https://huggingface.co/datasets/Pageshift-Entertainment/LongPage.
7,006
18,323
[ "task_categories:text-generation", "task_ids:language-modeling", "task_ids:text2text-generation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "license:other", "size_categories:1K<n<10K", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us", "long-context", "cot", "reasoning", "creative-writing", "Cold start reasoning data" ]
2025-09-04T22:17:33
null
null
6976521d67df645b2b063143
nvidia/Nemotron-Personas-Brazil
nvidia
{"license": "cc-by-4.0", "task_categories": ["text-generation"], "language": ["pt"], "tags": ["synthetic", "personas", "NVIDIA", "datadesigner"], "size_categories": ["1M<n<10M"], "dataset_info": {"features": [{"name": "uuid", "dtype": "string"}, {"name": "professional_persona", "dtype": "string"}, {"name": "sports_persona", "dtype": "string"}, {"name": "arts_persona", "dtype": "string"}, {"name": "travel_persona", "dtype": "string"}, {"name": "culinary_persona", "dtype": "string"}, {"name": "persona", "dtype": "string"}, {"name": "cultural_background", "dtype": "string"}, {"name": "skills_and_expertise", "dtype": "string"}, {"name": "skills_and_expertise_list", "dtype": "string"}, {"name": "hobbies_and_interests", "dtype": "string"}, {"name": "hobbies_and_interests_list", "dtype": "string"}, {"name": "career_goals_and_ambitions", "dtype": "string"}, {"name": "sex", "dtype": "string"}, {"name": "age", "dtype": "int64"}, {"name": "marital_status", "dtype": "string"}, {"name": "education_level", "dtype": "string"}, {"name": "occupation", "dtype": "string"}, {"name": "municipality", "dtype": "string"}, {"name": "state", "dtype": "string"}, {"name": "country", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5395470286, "num_examples": 1000000}], "download_size": 2514627068, "dataset_size": 5395470286}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
false
False
2026-01-26T23:09:43
37
37
false
441be2bd83a829020452ba9242efd31d212ae602
Nemotron-Personas-Brazil Abordagem de IA composta para geração de personas baseada em distribuições do mundo real Visão Geral do Conjunto de Dados (Dataset Overview): Nemotron-Personas-Brazil é um conjunto de dados (dataset) de código aberto (CC BY 4.0) composto por personas geradas sinteticamente e fundamentadas em distribuições demográficas, geográficas e traços de personalidade reais do Brasil, visando capturar a diversidade e a riqueza da população. Trata-se de… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/Nemotron-Personas-Brazil.
588
588
[ "task_categories:text-generation", "language:pt", "license:cc-by-4.0", "size_categories:1M<n<10M", "format:parquet", "format:optimized-parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "library:datadesigner", "region:us", "synthetic", "personas", "NVIDIA", "datadesigner" ]
2026-01-25T17:25:49
null
null
68e91c24e825003e1c2aec1a
SWE-Arena/leaderboard_data
SWE-Arena
nan
false
False
2026-01-24T19:43:10
36
35
false
8f7931b7b18b553f5a5d4d695d7a4fb0dfd08d81
null
844
1,827
[ "region:us" ]
2025-10-10T14:45:56
null
null
6960b100448a2a7a83c8f3fb
nyuuzyou/google-code-archive
nyuuzyou
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["code", "en"], "license": "other", "multilinguality": ["multilingual"], "pretty_name": "Google Code Archive Dataset", "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "tags": ["code", "google-code", "archive"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*.parquet"}], "default": true}], "dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "size", "dtype": "int64"}]}}
false
False
2026-01-09T08:00:30
42
34
false
242084cfa56acf4af01fb76858ccfb8294ee2406
Google Code Archive Dataset Dataset Description This dataset was compiled from the Google Code Archive, a preserved snapshot of projects hosted on Google Code, Google's open-source project hosting service that operated from 2006 to 2016. Google Code was one of the major code hosting platforms of its era, hosting hundreds of thousands of open-source projects before its shutdown. The archive provides a unique historical record of open-source development during a formative… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/google-code-archive.
885
885
[ "task_categories:text-generation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:code", "language:en", "license:other", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "region:us", "code", "google-code", "archive" ]
2026-01-09T07:40:48
null
null
695bd296a9cec4f412019cd2
DeepGlint-AI/DanQing100M
DeepGlint-AI
{"license": "cc-by-4.0", "task_categories": ["zero-shot-image-classification", "image-to-text"], "language": ["zh"], "arxiv_id": 2601.10305, "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "alt_text", "dtype": "string"}, {"name": "recaption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19795236, "num_examples": 99892381}], "download_size": 19795236, "dataset_size": 99892381}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
false
False
2026-01-20T07:14:23
46
33
false
508ac59f20e90f80b7bb3765c6741759e115c741
100M Chinese image-text pairs | 12TB dataset | 2024-2025 web data DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset Project Page | Paper | Code Hengyu Shen∗, Tiancheng Gu∗, Bin Qin, Lan Wu, Yuling Wu, Shuo Tan, Zelong Sun, Jun Wang, Nan Wu, Xiang An, Weidong Cai, Ziyong Feng‡, Kaicheng Yang† ∗ Equal Contribution | ‡ Team Leader | † Project Leader 📣 News [2026/01/16] ✨ We release the paper of DanQing. [2026/01/15] 🔥 We release the… See the full description on the dataset page: https://huggingface.co/datasets/DeepGlint-AI/DanQing100M.
4,798
4,798
[ "task_categories:zero-shot-image-classification", "task_categories:image-to-text", "language:zh", "license:cc-by-4.0", "size_categories:10M<n<100M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.10305", "region:us" ]
2026-01-05T15:02:46
null
null
633a585e593f7e38374056ec
bigcode/the-stack
bigcode
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "pretty_name": "The-Stack", "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "extra_gated_prompt": "## Terms of Use for The Stack\n\nThe Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.\n ", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}}
false
auto
2023-04-13T12:15:50
945
31
false
349a71353fd5868fb90b593ef09e311379da498a
Dataset Card for The Stack Changelog Release Description v1.0 Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 3TB in size. v1.1 The three copyleft licenses ((MPL/EPL/LGPL) were excluded and the list of permissive licenses extended to 193 licenses in total. The list of programming languages… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack.
16,631
339,242
[ "task_categories:text-generation", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "language:code", "license:other", "size_categories:100M<n<1B", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2211.15533", "arxiv:2107.03374", "arxiv:2207.14157", "region:us" ]
2022-10-03T03:34:54
null
null
695df55a4e351abe5277cca5
UniParser/OmniScience
UniParser
{"license": "cc-by-nc-sa-4.0", "task_categories": ["image-to-text"], "extra_gated_heading": "Request Access to This Dataset", "extra_gated_description": "Please complete the required fields below to request access. Access will be automatically granted upon submission.", "extra_gated_fields": {"Full Name": {"type": "text"}, "Email": {"type": "text"}, "Affiliation (Company / University)": {"type": "text"}, "I agree this dataset is for non-commercial use ONLY": {"type": "checkbox"}}, "extra_gated_button_content": "Submit Access Request"}
false
auto
2026-01-22T02:55:43
104
31
false
9c9fdac9ea87b36e3889330463cd4aee2e81ce95
OmniScience: A Large-scale Dataset for Scientific Image Understanding 🚀 2026-01-21: The OmniScience dataset ranked Top 8 on Hugging Face Datasets Trending (Top 1 on Image Caption Filed). 🚀 2026-01-17: The OmniScience dataset surpassed 5,000 downloads within 5 days of its release. 🚀 2026-01-12: Official release of the OmniScience dataset. 🚀 2025-06-01: Completion of the original dataset collection. 📘 Dataset Summary OmniScience is an ultra-large-scale… See the full description on the dataset page: https://huggingface.co/datasets/UniParser/OmniScience.
8,935
8,951
[ "task_categories:image-to-text", "license:cc-by-nc-sa-4.0", "size_categories:1M<n<10M", "format:parquet", "format:optimized-parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2512.15098", "region:us" ]
2026-01-07T05:55:38
null
null
650a9248d26103b6eee3ea7b
lmsys/lmsys-chat-1m
lmsys
{"size_categories": ["1M<n<10M"], "task_categories": ["conversational"], "extra_gated_prompt": "You agree to the [LMSYS-Chat-1M Dataset License Agreement](https://huggingface.co/datasets/lmsys/lmsys-chat-1m#lmsys-chat-1m-dataset-license-agreement).", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Country": "text"}, "extra_gated_button_content": "I agree to the terms and conditions of the LMSYS-Chat-1M Dataset License Agreement.", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "conversation_id", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "turn", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": "openai_moderation", "list": [{"name": "categories", "struct": [{"name": "harassment", "dtype": "bool"}, {"name": "harassment/threatening", "dtype": "bool"}, {"name": "hate", "dtype": "bool"}, {"name": "hate/threatening", "dtype": "bool"}, {"name": "self-harm", "dtype": "bool"}, {"name": "self-harm/instructions", "dtype": "bool"}, {"name": "self-harm/intent", "dtype": "bool"}, {"name": "sexual", "dtype": "bool"}, {"name": "sexual/minors", "dtype": "bool"}, {"name": "violence", "dtype": "bool"}, {"name": "violence/graphic", "dtype": "bool"}]}, {"name": "category_scores", "struct": [{"name": "harassment", "dtype": "float64"}, {"name": "harassment/threatening", "dtype": "float64"}, {"name": "hate", "dtype": "float64"}, {"name": "hate/threatening", "dtype": "float64"}, {"name": "self-harm", "dtype": "float64"}, {"name": "self-harm/instructions", "dtype": "float64"}, {"name": "self-harm/intent", "dtype": "float64"}, {"name": "sexual", "dtype": "float64"}, {"name": "sexual/minors", "dtype": "float64"}, {"name": "violence", "dtype": "float64"}, {"name": "violence/graphic", "dtype": "float64"}]}, {"name": "flagged", "dtype": "bool"}]}, {"name": "redacted", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 2626438904, "num_examples": 1000000}], "download_size": 1488850250, "dataset_size": 2626438904}}
false
auto
2024-07-27T09:28:42
825
30
false
200748d9d3cddcc9d782887541057aca0b18c5da
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset This dataset contains one million real-world conversations with 25 state-of-the-art LLMs. It is collected from 210K unique IP addresses in the wild on the Vicuna demo and Chatbot Arena website from April to August 2023. Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag. User consent is obtained through the "Terms of use"… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/lmsys-chat-1m.
4,913
292,462
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2309.11998", "region:us" ]
2023-09-20T06:33:44
null
null
696a3aa73b9cc2d063e34382
DAGroup-PKU/RoVid-X
DAGroup-PKU
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["n>1T"], "task_categories": ["image-to-video"], "tags": ["robotics video generation", "text-to-video", "image-to-video", "video-generation", "large-scale", "benchmark", "evaluation"]}
false
False
2026-01-25T11:47:36
36
30
false
446d496e440683271f943ffd65fc1fb761818883
Rethinking Video Generation Model for the Embodied World If you like our project, please give us a star ⭐ on GitHub for the latest update. Key features 4M robotic video clips(10K+ hours) for large-scale video generation training. 1300+ fine-grained robotic skills, covering diverse actions and task primitives. Multi-modal physical annotations, including RGB, depth, and optical flow. Multi-robot and multi-task diversity… See the full description on the dataset page: https://huggingface.co/datasets/DAGroup-PKU/RoVid-X.
1,925
1,925
[ "task_categories:image-to-video", "language:en", "license:cc-by-4.0", "size_categories:n<1K", "modality:video", "library:datasets", "library:mlcroissant", "arxiv:2601.15282", "region:us", "robotics video generation", "text-to-video", "image-to-video", "video-generation", "large-scale", "benchmark", "evaluation" ]
2026-01-16T13:18:31
null
null
67bdc389c748a39392fe7bd7
Neph0s/CoSER
Neph0s
{"license": "mit", "language": ["en"], "size_categories": ["100M<n<1000M"], "viewer": true, "default_split": "test"}
false
False
2025-07-13T16:00:50
69
28
false
7cc80430f92532cda85df45015a4aca8ecc068d0
CoSER Dataset Overview CoSER is a high-quality dataset for role-playing LLMs, sourced from 771 renowned novels. The dataset contains authentic multi-turn, multi-character dialogues extracted from acclaimed literary works. Key Features Authentic Content: Unlike synthetic datasets, CoSER extracts real dialogues from literature, maintaining high fidelity to the original works. The dialogues are inherently multi-turn and multi-character, exhibiting natural… See the full description on the dataset page: https://huggingface.co/datasets/Neph0s/CoSER.
1,429
12,778
[ "language:en", "license:mit", "arxiv:2502.09082", "region:us" ]
2025-02-25T13:20:09
null
null
69676b65aeecdadc87f8da8e
facebook/action100m-preview
facebook
{"language": ["en"], "license": "fair-noncommercial-research-license", "size_categories": ["10M<n<100M"], "task_categories": ["video-classification", "video-text-to-text"], "tags": ["video", "action"], "arxiv": 2601.10592}
false
False
2026-01-29T17:20:17
128
28
false
128d3edb9449334f89e65c806b16f35279ee50c9
Action100M: A Large-scale Video Action Dataset Paper | GitHub Action100M is a large-scale dataset constructed from 1.2M Internet instructional videos (14.6 years of duration), yielding ~100 million temporally localized segments with open-vocabulary action supervision and rich captions. It serves as a foundation for scalable research in video understanding and world modeling. Load Action100M Annotations Our data can be loaded from the 🤗 huggingface repo at… See the full description on the dataset page: https://huggingface.co/datasets/facebook/action100m-preview.
4,495
4,495
[ "task_categories:video-classification", "task_categories:video-text-to-text", "language:en", "license:fair-noncommercial-research-license", "size_categories:100K<n<1M", "format:parquet", "modality:text", "modality:video", "library:datasets", "library:dask", "library:polars", "library:mlcroissant", "arxiv:2601.10592", "region:us", "video", "action" ]
2026-01-14T10:09:41
null
null
67510376ab059ce38f2caaa3
SWE-Arena/conversation_data
SWE-Arena
nan
false
False
2026-01-25T17:18:10
28
27
false
fb2d49c4f9667206dc37622e52af466f45a29d46
null
80
1,405
[ "region:us" ]
2024-12-05T01:35:50
null
null
68072cc4cce05035af98207e
nvidia/OpenMathReasoning
nvidia
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["question-answering", "text-generation"], "pretty_name": "OpenMathReasoning", "tags": ["math", "nvidia"], "configs": [{"config_name": "default", "data_files": [{"split": "cot", "path": "data/cot-*"}, {"split": "tir", "path": "data/tir-*"}, {"split": "genselect", "path": "data/genselect-*"}, {"split": "additional_problems", "path": "data/additional_problems-*"}]}], "dataset_info": {"features": [{"name": "expected_answer", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}, {"name": "problem_source", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "pass_rate_72b_tir", "dtype": "string"}, {"name": "problem", "dtype": "string"}, {"name": "generated_solution", "dtype": "string"}, {"name": "inference_mode", "dtype": "string"}, {"name": "used_in_kaggle", "dtype": "bool"}], "splits": [{"name": "cot", "num_bytes": 71639174648, "num_examples": 3201061}, {"name": "tir", "num_bytes": 35746562996, "num_examples": 1718466}, {"name": "genselect", "num_bytes": 6981124435, "num_examples": 565620}, {"name": "additional_problems", "num_bytes": 66328865, "num_examples": 193170}], "download_size": 49585391985, "dataset_size": 114433190944}}
false
False
2025-05-27T18:43:44
424
26
false
d3d08664755704f422af97d43a7ff0ded4bd95df
OpenMathReasoning OpenMathReasoning is a large-scale math reasoning dataset for training large language models (LLMs). This dataset contains 306K unique mathematical problems sourced from AoPS forums with: 3.2M long chain-of-thought (CoT) solutions 1.7M long tool-integrated reasoning (TIR) solutions 566K samples that select the most promising solution out of many candidates (GenSelect) Additional 193K problems sourced from AoPS forums (problems only, no solutions) We used… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenMathReasoning.
11,573
153,538
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "license:cc-by-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2504.16891", "region:us", "math", "nvidia" ]
2025-04-22T05:44:36
null
null
675103d813aa765a04ace26f
SWE-Arena/vote_data
SWE-Arena
nan
false
False
2026-01-25T17:18:14
26
25
false
2efc84d5525a3fea718043670158dc0f1eb4b2e7
null
81
2,182
[ "size_categories:n<1K", "modality:text", "region:us" ]
2024-12-05T01:37:28
null
null
68f119703c5910443df36569
SWE-Arena/bot_data
SWE-Arena
nan
false
False
2026-01-22T04:55:09
26
25
false
b7cc227d7f5ef4a237a25eb1f1f0c0f03c6721c4
null
148
1,417
[ "size_categories:n<1K", "modality:text", "region:us" ]
2025-10-16T16:12:32
null
null
621ffdd236468d709f181d5e
allenai/ai2_arc
allenai
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "multiple-choice-qa"], "pretty_name": "Ai2Arc", "language_bcp47": ["en-US"], "dataset_info": [{"config_name": "ARC-Challenge", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}]}, {"name": "answerKey", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 349760, "num_examples": 1119}, {"name": "test", "num_bytes": 375511, "num_examples": 1172}, {"name": "validation", "num_bytes": 96660, "num_examples": 299}], "download_size": 449460, "dataset_size": 821931}, {"config_name": "ARC-Easy", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}]}, {"name": "answerKey", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 619000, "num_examples": 2251}, {"name": "test", "num_bytes": 657514, "num_examples": 2376}, {"name": "validation", "num_bytes": 157394, "num_examples": 570}], "download_size": 762935, "dataset_size": 1433908}], "configs": [{"config_name": "ARC-Challenge", "data_files": [{"split": "train", "path": "ARC-Challenge/train-*"}, {"split": "test", "path": "ARC-Challenge/test-*"}, {"split": "validation", "path": "ARC-Challenge/validation-*"}]}, {"config_name": "ARC-Easy", "data_files": [{"split": "train", "path": "ARC-Easy/train-*"}, {"split": "test", "path": "ARC-Easy/test-*"}, {"split": "validation", "path": "ARC-Easy/validation-*"}]}]}
false
False
2023-12-21T15:09:48
310
21
false
210d026faf9955653af8916fad021475a3f00453
Dataset Card for "ai2_arc" Dataset Summary A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to… See the full description on the dataset page: https://huggingface.co/datasets/allenai/ai2_arc.
277,277
10,820,969
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:multiple-choice-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "arxiv:1803.05457", "region:us" ]
2022-03-02T23:29:22
null
null
696674bae19dee6669d689d8
AvitoTech/BAT
AvitoTech
{"configs": [{"config_name": "fpa_campaigns", "data_files": "fpa/campaigns.csv"}, {"config_name": "fpa_stats", "data_files": "fpa/stats.csv"}, {"config_name": "vcg_campaigns", "data_files": "vcg/campaigns.csv"}, {"config_name": "vcg_stats", "data_files": "vcg/stats.csv"}]}
false
False
2026-01-13T16:37:15
21
21
false
cd15e02054d26a6f1534cab5a7897a7f1bd974b7
BAT Dataset This dataset provides an alternative way to access the data from the BAT (BAT: Benchmark for Auto-bidding Task) autobidding benchmark. Related Resources GitHub Repository: avito-tech/bat-autobidding-benchmark Paper: BAT: Benchmark for Auto-bidding Task Dataset Description This dataset contains auction data for First-Price Auction (FPA) and Vickrey-Clarke-Groves (VCG) mechanisms, used for benchmarking autobidding algorithms. Configurations… See the full description on the dataset page: https://huggingface.co/datasets/AvitoTech/BAT.
44
44
[ "size_categories:10M<n<100M", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:polars", "library:mlcroissant", "region:us" ]
2026-01-13T16:37:14
null
null
End of preview.
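The preview rows above come from the datasets split. As a minimal sketch, they can be read with the datasets library; note that the "datasets" config name here is an assumption inferred from the split names in the changelog below, not a documented API.

from datasets import load_dataset

# Stream the split so the full dump is never downloaded; the
# "datasets" config name is assumed to mirror the split name.
rows = load_dataset("cfahlgren1/hub-stats", "datasets", split="train", streaming=True)

first = next(iter(rows))
print(sorted(first.keys()))   # the columns shown in the preview above
print(first["id"], first["likes"])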

Changelog

NEW Changes July 25th

  • Added a baseModels field to the models split, listing the models that the author tagged as base models for that model (see the example and the loading sketch below)

Example:

{
  "models": [
    {
      "_id": "687de260234339fed21e768a",
      "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
    }
  ],
  "relation": "quantized"
}
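A minimal sketch of reading these relations back out, assuming the models split is exposed as a config of the same name (illustrative only):

from datasets import load_dataset

# Stream the models split; "models" as a config name is an assumption
# mirroring the split name used in this changelog.
models = load_dataset("cfahlgren1/hub-stats", "models", split="train", streaming=True)

for row in models:
    base = row.get("baseModels")
    if base:
        # e.g. {"models": [{"_id": "...", "id": "..."}], "relation": "quantized"}
        print(row["id"], "->", base)
        break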

NEW Changes July 9th

  • Fixed an integer overflow in the gguf column that had broken the import pipeline for a few weeks ✅

NEW Changes Feb 27th

  • Added new fields to the models split: downloadsAllTime, safetensors, gguf

  • Added a new field to the datasets split: downloadsAllTime

  • Added a new split: papers, containing all of the Daily Papers (see the sketch below)
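A short sketch of touching these additions, under the same assumption that split names double as config names:

from datasets import load_dataset

# One Daily Papers record from the new papers split.
papers = load_dataset("cfahlgren1/hub-stats", "papers", split="train", streaming=True)
print(next(iter(papers)))

# The new model fields; safetensors/gguf may be empty for models
# without those artifacts.
models = load_dataset("cfahlgren1/hub-stats", "models", split="train", streaming=True)
row = next(iter(models))
print(row["downloadsAllTime"], row.get("safetensors"), row.get("gguf"))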

Updated Daily
