Today I happened to notice that when I call bart_large.save_pretrained(x), the model is 1.6 GB on disk, but the pretrained checkpoint (both in the local cache and on the model Hub) is 972 MB. For bart-base, however, the saved and pretrained sizes are the same. Comparing the model sizes, I think 1.6 GB is probably the "right" size. Does anyone know how bart-large is compressed down to 972 MB for storage on the Hub, and whether there is a way to do the same thing to save space for my bigger trained models?
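For context, here is the back-of-the-envelope arithmetic behind my guess that 1.6 GB is the expected size (assumption on my part: bart-large has roughly 406M parameters, stored as float32 at 4 bytes each):

```python
# Rough size estimate for a bart-large checkpoint.
# Assumption: ~406M parameters (the commonly cited count for bart-large).
n_params = 406_000_000

fp32_bytes = n_params * 4  # 4 bytes per float32 weight
print(f"fp32: {fp32_bytes / 2**30:.2f} GiB")  # ~1.5 GiB, close to the 1.6 GB I see from save_pretrained

fp16_bytes = n_params * 2  # 2 bytes per float16 weight
print(f"fp16: {fp16_bytes / 2**20:.0f} MiB")  # ~774 MiB, which does NOT match the 972 MB Hub file either
```

So the 972 MB Hub file is neither a plain fp32 nor a plain fp16 dump of all parameters, which is what makes me curious how it was produced.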