Instructions for using google/vit-base-patch16-384 with libraries, Inference Providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/vit-base-patch16-384 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="google/vit-base-patch16-384")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```

```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-384")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-384")
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
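The "load model directly" snippet above stops after loading the processor and model; running the model yields raw classification logits. A minimal, dependency-free sketch of how those logits map to a predicted label (the `id2label` mapping mirrors `model.config.id2label`; the helper name is illustrative, not part of the Transformers API):

```python
import math

def predict_label(logits, id2label):
    """Turn raw classification logits into a (label, probability) pair.

    Applies a numerically stable softmax, then picks the argmax class.
    `id2label` maps class indices to label strings, as in
    model.config.id2label for the snippet above.
    """
    m = max(logits)                               # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return id2label[idx], probs[idx]
```

With the real model, `logits` would come from `model(**processor(images=image, return_tensors="pt")).logits[0].tolist()`.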
- Xet hash: c8bbe3db433e597a6a506d74f00f20d4a45e9f685ded230c37c19457f8e88d76
- Size of remote file: 348 MB
- SHA256: c593287e3774ee37983d3083dac47cf1fc26509c403a4e9a952bc31d03610ea4
Xet efficiently stores large files inside Git by splitting them into unique chunks, deduplicating repeated data and accelerating uploads and downloads.