Instructions to use WinKawaks/vit-tiny-patch16-224 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use WinKawaks/vit-tiny-patch16-224 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="WinKawaks/vit-tiny-patch16-224")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-tiny-patch16-224")
```

- Inference
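The "load model directly" snippet stops after loading; a minimal sketch of a complete inference pass might look like the following. It assumes `torch`, `Pillow`, and `requests` are installed alongside `transformers`, and reuses the parrots image URL from the pipeline example above.

```python
# Sketch: run one image through the directly-loaded model and decode the label.
# Assumes torch, Pillow, and requests are installed; the image URL is the same
# sample image used in the pipeline example.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-tiny-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Preprocess to the model's expected tensor format, then run a forward pass.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to a human-readable class name.
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```

The pipeline in the first snippet performs these same steps (preprocessing, forward pass, argmax, label lookup) internally; loading the processor and model directly is useful when you need the raw logits or custom batching.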
- Notebooks
- Google Colab
- Kaggle
- Xet hash: 9b15f445f45f5d2ec0b9a708b613e0c44b6ba216ad3e31e21f15016df5d781ad
- Size of remote file: 33.6 MB
- SHA256: 7a1d85957bc2aec480b393968779d567f7f548f7418fad700c94b9e129aac5b8
Xet efficiently stores large files inside Git by intelligently splitting files into unique chunks, accelerating uploads and downloads.