asr-baatonou

This model is a fine-tuned version of openai/whisper-large-v3-turbo; the training dataset is not documented in this card. It achieves the following results on the evaluation set:

  • Loss: 0.4710
  • Wer: 47.1082
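
A minimal transcription sketch using the transformers pipeline API. This is an assumption-laden example, not part of the original card: the repo id bivariant/asr-baatonou is taken from this card's metadata, and the audio file name is a placeholder.

```python
# Hypothetical usage sketch: load the checkpoint for automatic speech recognition.
# Assumes transformers and torch versions compatible with those listed under
# "Framework versions" below.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bivariant/asr-baatonou",  # repo id from this card's metadata
)

# Transcribe a local audio file (placeholder path; any ffmpeg-readable format works).
result = asr("sample.wav")
print(result["text"])
```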

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (restated as a runnable sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
  • mixed_precision_training: Native AMP
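
A hedged restatement of the list above as a Seq2SeqTrainingArguments configuration. The argument names are the standard transformers ones; output_dir is a placeholder, and anything not listed above is assumed to stay at its default.

```python
# Sketch only: reconstructs the listed hyperparameters, not the author's exact script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="asr-baatonou",          # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,      # effective train batch size: 16 * 2 = 32
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=10,
    fp16=True,                          # "Native AMP" mixed-precision training
)
```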

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.9171        | 0.1265 | 100  | 0.7232          | 56.8144 |
| 0.705         | 0.2530 | 200  | 0.6174          | 51.8920 |
| 0.6686        | 0.3795 | 300  | 0.5594          | 56.5408 |
| 0.647         | 0.5060 | 400  | 0.5373          | 50.7973 |
| 0.5975        | 0.6325 | 500  | 0.5198          | 46.9513 |
| 0.5446        | 0.7590 | 600  | 0.5040          | 47.5862 |
| 0.5714        | 0.8855 | 700  | 0.4922          | 46.8345 |
| 0.4901        | 1.0114 | 800  | 0.4785          | 52.0416 |
| 0.4949        | 1.1379 | 900  | 0.4715          | 46.1595 |
| 0.4987        | 1.2644 | 1000 | 0.4640          | 42.8535 |
| 0.489         | 1.3909 | 1100 | 0.4582          | 44.4919 |
| 0.4498        | 1.5174 | 1200 | 0.4508          | 46.3018 |
| 0.5076        | 1.6439 | 1300 | 0.4522          | 44.3386 |
| 0.4795        | 1.7704 | 1400 | 0.4449          | 48.8706 |
| 0.4703        | 1.8969 | 1500 | 0.4419          | 43.8059 |
| 0.4461        | 2.0228 | 1600 | 0.4423          | 52.4685 |
| 0.4418        | 2.1493 | 1700 | 0.4375          | 46.2580 |
| 0.4383        | 2.2758 | 1800 | 0.4327          | 44.8093 |
| 0.4169        | 2.4023 | 1900 | 0.4317          | 45.1925 |
| 0.4143        | 2.5288 | 2000 | 0.4299          | 43.4081 |
| 0.4187        | 2.6553 | 2100 | 0.4295          | 43.3534 |
| 0.4213        | 2.7818 | 2200 | 0.4278          | 44.1160 |
| 0.401         | 2.9083 | 2300 | 0.4217          | 45.0502 |
| 0.44          | 3.0342 | 2400 | 0.4242          | 44.6999 |
| 0.3967        | 3.1607 | 2500 | 0.4256          | 48.4109 |
| 0.3607        | 3.2872 | 2600 | 0.4245          | 42.7221 |
| 0.3863        | 3.4137 | 2700 | 0.4211          | 46.0427 |
| 0.3773        | 3.5402 | 2800 | 0.4187          | 44.4663 |
| 0.402         | 3.6667 | 2900 | 0.4180          | 43.9445 |
| 0.3634        | 3.7932 | 3000 | 0.4205          | 43.2403 |
| 0.401         | 3.9197 | 3100 | 0.4183          | 44.9589 |
| 0.3289        | 4.0455 | 3200 | 0.4233          | 45.3713 |
| 0.3503        | 4.1720 | 3300 | 0.4210          | 45.4406 |
| 0.3712        | 4.2985 | 3400 | 0.4195          | 44.6086 |
| 0.3443        | 4.4250 | 3500 | 0.4222          | 45.0246 |
| 0.3641        | 4.5515 | 3600 | 0.4193          | 42.2149 |
| 0.3789        | 4.6781 | 3700 | 0.4200          | 43.9628 |
| 0.3561        | 4.8046 | 3800 | 0.4187          | 42.9119 |
| 0.3744        | 4.9311 | 3900 | 0.4140          | 46.7615 |
| 0.3287        | 5.0569 | 4000 | 0.4231          | 43.1272 |
| 0.3278        | 5.1834 | 4100 | 0.4221          | 48.1664 |
| 0.3425        | 5.3099 | 4200 | 0.4183          | 45.0392 |
| 0.3353        | 5.4364 | 4300 | 0.4233          | 46.9878 |
| 0.3497        | 5.5629 | 4400 | 0.4170          | 46.4477 |
| 0.3045        | 5.6894 | 4500 | 0.4190          | 45.1487 |
| 0.3205        | 5.8159 | 4600 | 0.4206          | 46.0573 |
| 0.3386        | 5.9424 | 4700 | 0.4182          | 43.0505 |
| 0.2765        | 6.0683 | 4800 | 0.4286          | 44.7035 |
| 0.2935        | 6.1948 | 4900 | 0.4269          | 45.5574 |
| 0.3174        | 6.3213 | 5000 | 0.4305          | 47.0425 |
| 0.3003        | 6.4478 | 5100 | 0.4300          | 44.6962 |
| 0.287         | 6.5743 | 5200 | 0.4292          | 45.0867 |
| 0.2898        | 6.7008 | 5300 | 0.4280          | 45.3311 |
| 0.2914        | 6.8273 | 5400 | 0.4296          | 46.5061 |
| 0.2807        | 6.9538 | 5500 | 0.4289          | 45.4662 |
| 0.2535        | 7.0797 | 5600 | 0.4376          | 44.7874 |
| 0.2348        | 7.2062 | 5700 | 0.4428          | 46.4441 |
| 0.2756        | 7.3327 | 5800 | 0.4407          | 45.8055 |
| 0.2521        | 7.4592 | 5900 | 0.4443          | 45.5720 |
| 0.2815        | 7.5857 | 6000 | 0.4410          | 46.8564 |
| 0.2501        | 7.7122 | 6100 | 0.4440          | 45.9369 |
| 0.256         | 7.8387 | 6200 | 0.4435          | 45.8128 |
| 0.283         | 7.9652 | 6300 | 0.4450          | 44.5685 |
| 0.2254        | 8.0911 | 6400 | 0.4547          | 45.5756 |
| 0.2392        | 8.2176 | 6500 | 0.4541          | 46.1485 |
| 0.2278        | 8.3441 | 6600 | 0.4567          | 47.8964 |
| 0.2319        | 8.4706 | 6700 | 0.4576          | 46.9987 |
| 0.2427        | 8.5971 | 6800 | 0.4581          | 47.2031 |
| 0.2307        | 8.7236 | 6900 | 0.4604          | 45.8858 |
| 0.2229        | 8.8501 | 7000 | 0.4596          | 46.8418 |
| 0.2361        | 8.9766 | 7100 | 0.4596          | 47.8453 |
| 0.1967        | 9.1025 | 7200 | 0.4677          | 47.6555 |
| 0.2068        | 9.2290 | 7300 | 0.4688          | 47.1775 |
| 0.2138        | 9.3555 | 7400 | 0.4697          | 47.5753 |
| 0.2036        | 9.4820 | 7500 | 0.4697          | 47.0826 |
| 0.205         | 9.6085 | 7600 | 0.4714          | 46.5061 |
| 0.2192        | 9.7350 | 7700 | 0.4712          | 47.3381 |
| 0.2054        | 9.8615 | 7800 | 0.4709          | 47.0316 |
| 0.2176        | 9.9880 | 7900 | 0.4710          | 47.1082 |
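
The Wer column above is the word error rate scaled to a percentage. A minimal sketch of computing it with the evaluate library; the example strings are placeholders, not data from this card.

```python
# Illustrative WER computation; the table values would come from comparing model
# transcripts against reference transcripts in the same way.
import evaluate

wer_metric = evaluate.load("wer")

references = ["a reference transcript"]   # ground-truth text (placeholder)
predictions = ["a reference transcription"]  # model output (placeholder)

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```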

Framework versions

  • Transformers 4.57.3
  • PyTorch 2.9.1+cu128
  • Datasets 3.6.0
  • Tokenizers 0.22.1
