
The Neural Pile (rodent)

This dataset contains 453 billion tokens of curated spiking neural activity data recorded from rodents. The code and detailed instructions for creating this dataset from scratch can be found at this GitHub repository. The dataset takes up about 453 GB on disk when stored as memory-mapped .arrow files (the format used by the Hugging Face datasets library's local cache). The dataset comes with separate train and test splits. You can load, e.g., the train split as follows:

from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-rodent", num_proc=32, split='train')

and display the first data row:

>>> print(ds[0])
{
'spike_counts': ...,
'source_dataset': 'giocomo',
'subject_id': 'sub-npI3',
'session_id': 'sub-npI3_ses-20190420_behavior+ecephys',
'segment_id': 'segment_4'
}

where:

  • spike_counts is a serialized array containing the spike count data. Its shape is (n, t), where n is the number of simultaneously recorded neurons in that session and t is the number of 20 ms time bins (see the sketch after this list).
  • source_dataset is an identifier string indicating the source dataset from which that particular row of data came.
  • subject_id is an identifier string indicating the subject the data were recorded from.
  • session_id is an identifier string indicating the recording session.
  • segment_id is a segment (or chunk) identifier, useful when a session was split into smaller chunks: we split long recording sessions (>10M tokens) into equal-sized chunks of at most 10M tokens each, so that the whole session can be reproduced from its chunks if desired (see the sketch after this list).

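For concreteness, here is a minimal sketch of working with these fields, assuming spike_counts comes back as a nested list of integers; the segment ordering and time-axis concatenation below are our own assumptions about how a session can be reassembled, not guarantees made by the dataset:

import numpy as np
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-rodent", num_proc=32, split='train')

# Convert one row's spike counts into an (n, t) NumPy array:
# n simultaneously recorded neurons x t time bins (20 ms each).
row = ds[0]
counts = np.asarray(row['spike_counts'])
print(counts.shape)

# Reassemble a full session from its chunks, assuming the segments
# partition the session along the time axis and the numeric suffix of
# 'segment_id' (e.g. 'segment_4') gives their temporal order.
session = ds.filter(lambda r: r['session_id'] == row['session_id'])
segments = sorted(session, key=lambda r: int(r['segment_id'].rsplit('_', 1)[1]))
full_session = np.concatenate([np.asarray(r['spike_counts']) for r in segments], axis=1)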
The dataset rows are pre-shuffled, so users do not have to re-shuffle them.
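
Since the full train split occupies hundreds of gigabytes, you may prefer to stream the data instead of caching it locally; a minimal sketch using the datasets library's streaming mode:

from datasets import load_dataset

# Stream rows on demand rather than downloading the full ~453 GB first.
ds = load_dataset("eminorhan/neural-pile-rodent", split='train', streaming=True)
row = next(iter(ds))
print(row['source_dataset'], row['session_id'], row['segment_id'])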
