---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: MMS_Quechua_finetuned
    results: []
---

# MMS_Quechua_finetuned

This model is a fine-tuned version of facebook/mms-1b-all on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.4424
- Wer: 0.3950
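MMS fine-tunes are standard `Wav2Vec2ForCTC` checkpoints, so transcription follows the usual CTC recipe: run 16 kHz audio through the processor, take the argmax over the logits, and decode. A minimal sketch (the repo id `cportoca/MMS_Quechua_finetuned` is assumed from the model name; replace the silent dummy audio with a real recording):

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "cportoca/MMS_Quechua_finetuned"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One second of silence as a placeholder; use a real 16 kHz mono waveform here.
audio = np.zeros(16000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, time, vocab)

ids = torch.argmax(logits, dim=-1)      # greedy CTC decoding
text = processor.batch_decode(ids)[0]
print(text)
```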

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
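The hyperparameters above map directly onto `transformers.TrainingArguments`. A sketch of the equivalent configuration (the `output_dir` is hypothetical; `fp16` is gated on GPU availability since Native AMP requires one):

```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mms-quechua",          # hypothetical output directory
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",               # AdamW, betas=(0.9, 0.999), eps=1e-08 by default
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=1,
    fp16=torch.cuda.is_available(),    # Native AMP mixed precision on GPU
)
```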

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7203        | 0.1355 | 100  | 0.7301          | 0.6115 |
| 0.731         | 0.2710 | 200  | 0.5409          | 0.4553 |
| 0.5859        | 0.4065 | 300  | 0.5124          | 0.4280 |
| 0.549         | 0.5420 | 400  | 0.4815          | 0.4170 |
| 0.5341        | 0.6775 | 500  | 0.4720          | 0.4051 |
| 1.0055        | 0.8130 | 600  | 0.4601          | 0.4095 |
| 0.4943        | 0.9485 | 700  | 0.4424          | 0.3950 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3