Instructions to use subset-data/test-model-roberta with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use subset-data/test-model-roberta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="subset-data/test-model-roberta")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("subset-data/test-model-roberta")
model = AutoModelForQuestionAnswering.from_pretrained("subset-data/test-model-roberta")
```
- Notebooks
- Google Colab
- Kaggle
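A question-answering pipeline returns a dict with `score`, `start`, `end`, and `answer`, where `start`/`end` are character offsets into the context. A minimal sketch of consuming that output (the dict values below are hypothetical stand-ins, not real model output):

```python
# Sketch: how a QA pipeline's result maps back into the context string.
# `result` mimics the shape returned by transformers' question-answering
# pipeline; its values are hypothetical, not produced by the model.
context = "RoBERTa is a robustly optimized BERT pretraining approach."
result = {"score": 0.91, "start": 0, "end": 7, "answer": "RoBERTa"}

# The answer span can always be recovered by slicing the context
# with the returned character offsets.
assert context[result["start"]:result["end"]] == result["answer"]
print(result["answer"])
```

In practice you would obtain `result` by calling `pipe(question=..., context=context)` on the pipeline created above.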
- Xet hash: 7e7d792aca10fe8002be04e13314fa8d5008b32fc86f05a3949d2911615adbfa
- Size of remote file: 496 MB
- SHA256: 50b4b197010597a3bf53c4ec676b6b917d5dec1be49e5fcf0f7321fbbb62b6f6
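The SHA256 above can be used to verify a downloaded checkpoint. A minimal sketch using only the standard library (the file path is a placeholder; point it at the file you actually downloaded):

```python
# Sketch: verify a downloaded file against a published SHA256 digest.
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large checkpoints never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare `sha256_of("path/to/downloaded/file")` against the digest listed on the page; a mismatch means the download is corrupt or incomplete.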
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.