Instructions to use dongyangyang/uie_torch with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use dongyangyang/uie_torch with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, UIE

tokenizer = AutoTokenizer.from_pretrained("dongyangyang/uie_torch")
model = UIE.from_pretrained("dongyangyang/uie_torch")
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash: `8ff83262400b03586d2cd20f7297f5120526d5f5530a31b36ba90728f2719676`
- Size of remote file: 472 MB
- SHA256: `37a687621ff9917415b6f758ed3a7ae5ee0320e4b5699f855246398b3f80171b`
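To confirm a downloaded copy of the weights matches the SHA256 listed above, you can hash the local file and compare. This is a generic sketch using Python's standard library; the file path is a placeholder, not part of this repository.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in 1 MiB blocks
    so large model files are not read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

# Example (path is hypothetical): compare against the SHA256 shown above.
# sha256_of("pytorch_model.bin") == "37a687621ff9917415b6f758ed3a7ae5..."
```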
Xet efficiently stores large files inside Git by splitting them into unique chunks, which deduplicates storage and accelerates uploads and downloads.
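The general idea behind this kind of deduplicating store is content-defined chunking: boundaries are placed where a rolling hash of the bytes matches a pattern, so identical content produces identical chunks even when its offset shifts. The sketch below illustrates the technique in miniature; it is not Xet's actual algorithm, and the hash, mask, and minimum size are illustrative choices.

```python
def chunk(data: bytes, mask: int = 0x1FF, min_size: int = 64) -> list[bytes]:
    """Split data at positions where a toy rolling-style hash matches a bit
    mask, after a minimum chunk size, so boundaries depend on content rather
    than fixed offsets (illustrative only, not Xet's real chunker)."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # cheap stand-in for a rolling hash
        if i - start + 1 >= min_size and (h & mask) == mask:
            chunks.append(data[start:i + 1])  # cut a chunk at this boundary
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing remainder
    return chunks

# Reassembling the chunks always reproduces the original bytes.
data = bytes(range(256)) * 16
assert b"".join(chunk(data)) == data
```

Because boundaries are chosen from the bytes themselves, inserting data near the start of a file only changes the chunks around the edit, and unchanged chunks can be stored once and reused.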