---
license: mit
task_categories:
- feature-extraction
language:
- en
tags:
- code
pretty_name: MNIST Visual Curation
size_categories:
- 10K<n<100K
---

# Curation of the famous MNIST Dataset

The curation was done through qualitative analysis of the dataset, combining visualization techniques such as **PCA** and **UMAP** with score-based categorization of the samples using metrics such as **hardness**, **mistakenness**, and **uniqueness**.

The curation code can be found on GitHub:  
πŸ‘‰ https://github.com/Conscht/MNIST_Curation_Repo/tree/main  

This curated version of MNIST introduces an additional **IDK (β€œI Don’t Know”)** label for digits that are ambiguous, noisy, or of low quality. It is intended for experiments on robust classification, dataset curation, and handling uncertain or hard-to-classify examples.

---

## πŸ” Overview

Compared to the original MNIST dataset, this curated version:

- keeps the original digit classes **0–9**
- adds an **11th class: `IDK`**
- moves visually ambiguous or questionable digits into the `IDK` class

Questionable digits include:

- distorted or spaghetti-like shapes  
- digits that are hard even for humans to classify  
- strong outliers in the embedding space  
- samples often misclassified by the baseline model  

---

## 🧠 How the Curation Was Done

The curation process combined **qualitative inspection** and **quantitative metrics**:

1. Train a **LeNet-5** classifier on the original MNIST digits.  
2. Extract **embeddings** from the penultimate layer of the network.  
3. Visualize these embeddings with **PCA** and **UMAP** in **FiftyOne** to identify clusters, outliers, and ambiguous regions.  
4. Compute several **FiftyOne Brain metrics**:
   - `hardness`
   - `mistakenness`
   - `uniqueness`
   - `representativeness`
5. Use these metrics to surface suspicious samples:
   - highly mistaken or hard examples  
   - high-uniqueness outliers  
   - misclassified samples  
6. Inspect these subsets inside the **FiftyOne App** and manually decide which samples should be relabeled as **IDK**.  
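Steps 4–6 can be sketched as follows. This is a sketch, not the repository's actual script: the dataset name `mnist-curation`, the field names `predictions`, `ground_truth`, and `embeddings`, and the triage thresholds are all assumptions.

```python
def select_review_candidates(hardness, mistakenness, h_min=0.9, m_min=0.5):
    """Pure triage rule (step 5): flag sample IDs whose hardness or
    mistakenness score exceeds an (illustrative) threshold."""
    return sorted(
        sid for sid in hardness
        if hardness[sid] >= h_min or mistakenness.get(sid, 0.0) >= m_min
    )


def score_with_fiftyone(dataset_name="mnist-curation"):
    """Compute the Brain metrics from step 4 and open the App (step 6).

    Requires FiftyOne; imported lazily so the triage helper above stays
    dependency-free. The dataset and field names are assumed.
    """
    import fiftyone as fo
    import fiftyone.brain as fob

    dataset = fo.load_dataset(dataset_name)
    fob.compute_hardness(dataset, "predictions")
    fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")
    fob.compute_uniqueness(dataset)
    # UMAP projection of the stored LeNet-5 embeddings (step 3)
    fob.compute_visualization(dataset, embeddings="embeddings", method="umap")

    # Surface the hardest samples first for manual IDK relabeling
    review = dataset.sort_by("hardness", reverse=True)
    return fo.launch_app(review)
```

In practice the thresholds only narrow the pool of candidates; the final IDK decision is made by eye in the FiftyOne App.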


Example of visualized embedding space:
![UMAP](https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/673e7e7c4dca9bce31534dd7/bHTbMCZ0QL3ETONxoUT4S.png)

---

## πŸ“ Dataset Structure

The dataset is exported in **ImageClassificationDirectoryTree** format:

```text
root/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ 0/
β”‚   β”œβ”€β”€ 1/
β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ 9/
β”‚   └── IDK/
└── test/
    β”œβ”€β”€ 0/
    β”œβ”€β”€ 1/
    β”œβ”€β”€ ...
    β”œβ”€β”€ 9/
    └── IDK/

@article{lecun1998gradient,
  title={Gradient-based learning applied to document recognition},
  author={LeCun, Yann and Bottou, L{\'e}on and Bengio, Yoshua and Haffner, Patrick},
  journal={Proceedings of the IEEE},
  volume={86},
  number={11},
  pages={2278--2324},
  year={1998},
  publisher={IEEE}
}