---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: w_large
    results: []
---

# w_large

This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.6144
- Wer: 69.9421
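
Pending a fuller model description, here is a minimal inference sketch. The repo id `johnatanebonilla/w_large` and the audio path are assumptions, not confirmed by this card:

```python
# Minimal inference sketch; the repo id and audio file are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="johnatanebonilla/w_large",  # assumed repo id
)

# Transcribe a local audio file (placeholder path). Whisper expects 16 kHz
# audio; the pipeline resamples file inputs via ffmpeg when needed.
result = asr("sample.wav")
print(result["text"])
```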

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
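
For orientation, the sketch below shows how these values map onto `transformers.Seq2SeqTrainingArguments`. Only the hyperparameters listed above come from this card; the output directory and anything not listed are assumptions:

```python
# Sketch of the configuration above; values not listed on this card
# (e.g. output_dir) are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./w_large",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # Native AMP mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the library default,
    # so no optimizer arguments need to be passed explicitly.
)
```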

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7439        | 0.4548 | 1000 | 0.7228          | 107.9816 |
| 0.6638        | 0.9095 | 2000 | 0.6496          | 82.4336  |
| 0.413         | 1.3643 | 3000 | 0.6292          | 76.3384  |
| 0.4303        | 1.8190 | 4000 | 0.6144          | 69.9421  |
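
The Wer column appears to be reported as a percentage (values above 100 are possible when the model inserts extra words). Below is a minimal sketch of how such a score is typically computed with the Hugging Face `evaluate` library; the strings are placeholders:

```python
# Minimal WER sketch with placeholder transcripts.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]  # model output (placeholder)
references = ["the cat sat on a mat"]     # ground truth (placeholder)

# evaluate returns a fraction; multiply by 100 to match the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```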

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1