whisper-tiny-he-acft

Hebrew Whisper Tiny with ACFT (Audio-Context Fine-Tuning) for optimized short-audio performance, compatible with FUTO Keyboard and whisper.cpp.

Training

Two-stage pipeline:

  1. Fine-tune: openai/whisper-tiny on ivrit-ai/whisper-training (~400 h of Hebrew) → amitkot/whisper-tiny-he
  2. ACFT: further train the fine-tuned model on google/fleurs (he_il) using FUTO-aligned ACFT (partial encoder training with truncated positional embeddings, 8 epochs, batch_size=1)
  • Hardware: Apple M4 (MPS)
  • Method: distillation-based; the model learns to handle short audio contexts without repeating itself
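The "truncated positional embeddings" idea above can be illustrated with a small sketch. Whisper's encoder carries fixed positional embeddings for 1500 frames (30 s of audio at 50 encoder frames per second); keeping only the leading positions matches the encoder's context to a shorter clip. The function below is illustrative only, not the pipeline's actual API:

```python
import numpy as np

FRAMES_PER_SECOND = 50  # 3000 mel frames / conv stride 2, over 30 s

def truncate_positional_embeddings(pos_emb: np.ndarray, audio_seconds: float) -> np.ndarray:
    """Keep the positional embeddings covering the first `audio_seconds` of audio."""
    n_frames = int(audio_seconds * FRAMES_PER_SECOND)
    return pos_emb[:n_frames]

# whisper-tiny: 1500 positions, d_model = 384
full = np.random.randn(1500, 384)
short = truncate_positional_embeddings(full, 10.0)
print(short.shape)  # (500, 384)
```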

Usage

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("amitkot/whisper-tiny-he-acft")
model = WhisperForConditionalGeneration.from_pretrained("amitkot/whisper-tiny-he-acft")

# audio: a 16 kHz mono float waveform (e.g. loaded with soundfile or librosa)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features, language="he", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```

To reproduce the full two-stage training pipeline:

```shell
uv run python scripts/pipeline.py \
  --finetune-config configs/hebrew_tiny_finetune.yaml \
  --config configs/hebrew_tiny_acft.yaml
```

For FUTO Keyboard / whisper.cpp, convert the resulting model to ggml.
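One way to do the ggml conversion is whisper.cpp's converter script. This is a sketch: the local paths are assumptions, and the converter also needs a checkout of the original openai/whisper repo for its mel-filter and tokenizer assets.

```shell
# Assumed local checkouts; adjust paths to your setup.
git clone https://github.com/openai/whisper
git clone https://github.com/ggerganov/whisper.cpp
huggingface-cli download amitkot/whisper-tiny-he-acft --local-dir whisper-tiny-he-acft

# convert-h5-to-ggml.py <hf-model-dir> <openai-whisper-repo> <output-dir>
python whisper.cpp/models/convert-h5-to-ggml.py ./whisper-tiny-he-acft ./whisper .
```

The converter writes a `ggml-model.bin` into the output directory, which whisper.cpp and FUTO Keyboard can load directly.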

Training pipeline

Trained using whisper-acft-pipeline.
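For orientation, the ACFT stage's config might look something like the fragment below. The key names are hypothetical (the repo's actual schema may differ); the values come from the training summary above.

```yaml
# Illustrative sketch of configs/hebrew_tiny_acft.yaml; key names are assumptions.
base_model: amitkot/whisper-tiny-he
dataset: google/fleurs
dataset_config: he_il
epochs: 8
batch_size: 1
device: mps
```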

Model size: 37.8M params (F32, Safetensors)
