---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert_model
  results: []
---
# biobert_model
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on a dataset that is not documented in this card.
It achieves the following results on the evaluation set:
- Loss: 0.9645
- Accuracy: 0.8711
- F1: 0.8475
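How these metrics were computed is not recorded by the Trainer template. A minimal sketch of a `compute_metrics` callback that reproduces the two metrics with scikit-learn is shown below; the F1 averaging mode (`weighted` here) is an assumption, since the card does not specify it.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair supplied by transformers.Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Averaging mode is an assumption; the card does not document it.
        "f1": f1_score(labels, preds, average="weighted"),
    }
```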
## Model description
[emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) with a sequence-classification head, fine-tuned for text classification. The label set and task details are not otherwise documented.
## Intended uses & limitations
The model is intended for text classification and loads through the standard Transformers API, as shown below. Its target domain, label definitions, and known limitations are not documented.
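Both the high-level `pipeline` helper and direct loading through `AutoTokenizer`/`AutoModelForSequenceClassification` work; the example input string is illustrative only, since the expected input format is not documented.

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="fffffly/biobert_model")
# Illustrative clinical-style input; the real input format is undocumented.
print(pipe("Patient presents with chest pain and shortness of breath."))

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("fffffly/biobert_model")
model = AutoModelForSequenceClassification.from_pretrained("fffffly/biobert_model")
```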
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
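These settings map directly onto `transformers.TrainingArguments`. A minimal sketch is below; `output_dir` and `evaluation_strategy` are assumptions not stated in the card (the per-epoch rows in the results table suggest epoch-level evaluation), and the Adam betas/epsilon match the values listed above.

```python
from transformers import TrainingArguments

# Hyperparameters exactly as listed in the card; output_dir and
# evaluation_strategy are assumptions, not documented there.
training_args = TrainingArguments(
    output_dir="biobert_model",       # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",      # assumed from the per-epoch results table
)
```

These arguments would then be passed to a `Trainer` together with the (undocumented) model, datasets, and the `compute_metrics` callback sketched earlier.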
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 334 | 0.6463 | 0.6897 | 0.7129 |
| 0.4503 | 2.0 | 668 | 0.3590 | 0.8651 | 0.8269 |
| 0.2715 | 3.0 | 1002 | 0.4549 | 0.8711 | 0.8252 |
| 0.2715 | 4.0 | 1336 | 0.6012 | 0.8681 | 0.8434 |
| 0.1335 | 5.0 | 1670 | 0.6307 | 0.8576 | 0.8313 |
| 0.0746 | 6.0 | 2004 | 0.7658 | 0.8636 | 0.8366 |
| 0.0746 | 7.0 | 2338 | 0.8658 | 0.8666 | 0.8436 |
| 0.0307 | 8.0 | 2672 | 0.8312 | 0.8711 | 0.8453 |
| 0.0148 | 9.0 | 3006 | 0.8922 | 0.8651 | 0.8421 |
| 0.0148 | 10.0 | 3340 | 0.8761 | 0.8726 | 0.8490 |
| 0.0128 | 11.0 | 3674 | 0.9329 | 0.8681 | 0.8462 |
| 0.0105 | 12.0 | 4008 | 0.9512 | 0.8666 | 0.8441 |
| 0.0105 | 13.0 | 4342 | 0.9553 | 0.8711 | 0.8475 |
| 0.0069 | 14.0 | 4676 | 0.9731 | 0.8681 | 0.8445 |
| 0.0046 | 15.0 | 5010 | 0.9645 | 0.8711 | 0.8475 |
### Framework versions
- Transformers 4.29.2
- PyTorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3