mlouala committed · commit 707f881 (verified) · parent: 5dbc574

End of training
Files changed (2):

  1. README.md (+6 -8)
  2. generation_config.json (+1 -1)
README.md CHANGED

@@ -2,12 +2,10 @@
 library_name: transformers
 language:
 - fr
-license: mit
-base_model: mlouala/whisper-diin-v2
+license: apache-2.0
+base_model: deepdml/whisper-large-v3-turbo
 tags:
 - generated_from_trainer
-datasets:
-- amphion/Emilia-Dataset
 model-index:
 - name: Whisper Diin - Part 3
   results: []
@@ -18,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Whisper Diin - Part 3
 
-This model is a fine-tuned version of [mlouala/whisper-diin-v2](https://huggingface.co/mlouala/whisper-diin-v2) on the Emilia-Dataset dataset.
+This model is a fine-tuned version of [deepdml/whisper-large-v3-turbo](https://huggingface.co/deepdml/whisper-large-v3-turbo) on the None dataset.
 
 ## Model description
 
@@ -44,7 +42,7 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 5000
+- training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -53,7 +51,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.48.0.dev0
+- Transformers 4.50.0.dev0
 - Pytorch 2.4.0
-- Datasets 3.2.0
+- Datasets 3.3.2
 - Tokenizers 0.21.0
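The hyperparameter hunk above lowers training_steps from 5000 to 2000 while keeping lr_scheduler_warmup_steps at 500 and lr_scheduler_type linear. As a rough sketch of what that change does to the learning-rate curve (assuming the convention of `get_linear_schedule_with_warmup` in transformers: ramp 0→1 over the warmup, then decay linearly to 0 at the final step; the function below is illustrative, not part of the repo):

```python
def linear_schedule_factor(step, warmup_steps=500, training_steps=2000):
    """Learning-rate multiplier at a given optimizer step.

    Ramps linearly from 0 to 1 over `warmup_steps`, then decays
    linearly back to 0 at `training_steps` (the transformers
    linear-with-warmup convention; values taken from the diff).
    """
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (training_steps - step) / max(1, training_steps - warmup_steps))
```

With the new 2000-step budget, peak learning rate is reached a quarter of the way through training instead of a tenth, so proportionally more of the run is spent in warmup than under the old 5000-step setting.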
generation_config.json CHANGED

@@ -160,5 +160,5 @@
     "transcribe": 50360,
     "translate": 50359
   },
-  "transformers_version": "4.48.0.dev0"
+  "transformers_version": "4.50.0.dev0"
 }
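The generation_config.json hunk only bumps the transformers_version stamp that the library writes into the config at save time; the task token ids are unchanged. A minimal sketch of patching such a metadata field with the standard json module (the fragment mirrors only the keys visible in the diff; the `task_to_id` nesting is an assumption about the surrounding file, which has ~160 more lines):

```python
import json

# Fragment of the config as it appears before this commit (keys from the
# diff; the nesting under "task_to_id" is assumed, not shown in the hunk).
old_text = '''{
  "task_to_id": {
    "transcribe": 50360,
    "translate": 50359
  },
  "transformers_version": "4.48.0.dev0"
}'''

def bump_transformers_version(text: str, new_version: str) -> dict:
    """Parse a generation config and update its version stamp."""
    config = json.loads(text)
    config["transformers_version"] = new_version
    return config

new_config = bump_transformers_version(old_text, "4.50.0.dev0")
```

In practice the stamp is rewritten automatically whenever the model is saved with a newer transformers install, which is what this commit reflects.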