whisper-small-hindi-transcribe

This model is a fine-tuned version of openai/whisper-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3134
  • Wer: 21.0323
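
Wer above is the word error rate expressed as a percentage. As a minimal sketch (not taken from the original card), such a score is conventionally computed with the Hugging Face `evaluate` library; the transcript strings below are placeholders:

```python
# Minimal sketch of a word-error-rate computation with the `evaluate` library.
# The prediction/reference strings are placeholders, not real model output.
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["namaste duniya"],        # hypothetical model transcription
    references=["namaste duniya doston"],  # hypothetical reference transcript
)
print(f"WER: {wer:.2f}%")
```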

Model description

More information needed

Intended uses & limitations

More information needed
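
As a hedged sketch (not part of the original card), the checkpoint can presumably be used for Hindi speech transcription via the standard transformers automatic-speech-recognition pipeline; the audio file path below is a placeholder:

```python
# Hedged sketch: loading this checkpoint with the transformers ASR pipeline.
# "audio.wav" is a placeholder path for a Hindi speech recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="shadabsayd/whisper-small-hindi-transcribe",
)

# Whisper operates on 30-second windows; chunk_length_s lets the pipeline
# handle longer recordings by chunking.
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```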

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 8100
  • mixed_precision_training: Native AMP
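
For reference, a hedged sketch of how these hyperparameters map onto transformers Seq2SeqTrainingArguments; the output directory and any setting not listed above are assumptions, not details from the original training run:

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir and anything not in the list above are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hindi-transcribe",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=8100,
    fp16=True,  # native AMP mixed-precision training
)
```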

Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|---------------|---------|------|-----------------|---------|
| 0.1415        | 1.8473  | 750  | 0.1538          | 22.5756 |
| 0.0787        | 3.6946  | 1500 | 0.1400          | 20.1356 |
| 0.039         | 5.5419  | 2250 | 0.1561          | 20.1668 |
| 0.0163        | 7.3892  | 3000 | 0.1859          | 20.6361 |
| 0.0057        | 9.2365  | 3750 | 0.2193          | 21.0532 |
| 0.0018        | 11.0837 | 4500 | 0.2519          | 20.9906 |
| 0.0011        | 12.9310 | 5250 | 0.2728          | 21.1470 |
| 0.0006        | 14.7783 | 6000 | 0.2896          | 21.3139 |
| 0.0002        | 16.6256 | 6750 | 0.3043          | 21.0636 |
| 0.0002        | 18.4729 | 7500 | 0.3134          | 21.0323 |

Framework versions

  • Transformers 4.48.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0