---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi-4-mm-inst-asr-turkish-3
  results: []
---

# Phi-4-mm-inst-asr-turkish-3

This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on a 1,300-hour Turkish audio dataset.

## Training Prompt

The model was initially fine-tuned with the original ASR prompt: "Transcribe the audio clip into text."
This prompt is language-agnostic, as described in the model [paper](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/phi_4_mm.tech_report.02252025.pdf):

> The ASR prompt for Phi-4-Multimodal is “Transcribe the audio clip into text.”, which is language agnostic. We notice that the model can learn to recognize in the target language perfectly without providing language information, while Qwen2-audio and Gemini-2.0-Flash require the language information in the prompt to obtain the optimal ASR performance.

However, we found that a language-defining prompt such as "Transcribe the Turkish audio." leads to better performance.
See: [ysdede/Phi-4-mm-inst-asr-turkish](https://huggingface.co/ysdede/Phi-4-mm-inst-asr-turkish)
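
At inference time, the prompt is wrapped in Phi-4's chat format. As a reference, here is a minimal sketch of how the prompt string is assembled; the `<|user|>`, `<|audio_1|>`, `<|end|>`, and `<|assistant|>` markers follow the base model's published usage example:

```python
# Chat markers from the base model's usage example.
user_tag = '<|user|>'
end_tag = '<|end|>'
assistant_tag = '<|assistant|>'

# Language-agnostic prompt used for the initial fine-tune:
asr_prompt = 'Transcribe the audio clip into text.'
# Language-defining variant that performed better in our tests:
# asr_prompt = 'Transcribe the Turkish audio.'

# <|audio_1|> marks where the first audio input is injected.
prompt = f'{user_tag}<|audio_1|>{asr_prompt}{end_tag}{assistant_tag}'
```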

## Training Results

When benchmarked with the original ASR prompt "Transcribe the audio clip into text.", the evaluation results were as follows:

- **Before Fine-Tuning:**
  - WER: 153.84
  - CER: 82.57
- **After Fine-Tuning:**
  - WER: 64.76
  - CER: 29.85
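
WER and CER are reported as percentages. A minimal sketch of how comparable numbers can be computed with the Hugging Face `evaluate` library (assumed tooling; not necessarily the exact evaluation script used here):

```python
import evaluate

# Standard ASR metrics; both require the `jiwer` package.
wer_metric = evaluate.load('wer')
cer_metric = evaluate.load('cer')

references = ['bugün hava çok güzel']     # ground-truth transcripts
predictions = ['bugün hava çok güzeldi']  # model outputs

# `compute` returns a fraction; scale by 100 to match the figures above.
print('WER:', 100 * wer_metric.compute(predictions=predictions, references=references))
print('CER:', 100 * cer_metric.compute(predictions=predictions, references=references))
```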

## Inference

As a quick fix, load the `generation_config` and `processor` from the base model so that the default generation settings are used.

*Note: The new models currently lack high-quality fine-tuning scripts. When a fine-tuned model is saved with `model.save_pretrained()`, the processor configuration, including essential audio parameters, is not saved automatically. Given the model's complex architecture, this omission can lead to errors during inference. Loading these components from the base model ensures that all critical settings are properly included.*
```python
from transformers import AutoProcessor, GenerationConfig

# Default generation settings from the base model.
generation_config = GenerationConfig.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct', 'generation_config.json'
)

# Processor (including the audio feature extractor) from the base model.
processor = AutoProcessor.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct', trust_remote_code=True
)
```
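
With those in place, an end-to-end transcription call can follow the base model's published usage example. The sketch below is adapted from that example rather than a verified script for this checkpoint; the audio file name is hypothetical, and it reuses the `processor` and `generation_config` loaded above:

```python
import soundfile as sf
from transformers import AutoModelForCausalLM

# Load the fine-tuned weights; the architecture comes from remote code.
model = AutoModelForCausalLM.from_pretrained(
    'ysdede/Phi-4-mm-inst-asr-turkish-3',
    torch_dtype='auto',
    trust_remote_code=True,
).to('cuda')

prompt = '<|user|><|audio_1|>Transcribe the audio clip into text.<|end|><|assistant|>'

audio, sample_rate = sf.read('sample_tr.wav')  # hypothetical 16 kHz mono clip
inputs = processor(
    text=prompt, audios=[(audio, sample_rate)], return_tensors='pt'
).to('cuda')

generated_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    generation_config=generation_config,
)
# Drop the prompt tokens and decode only the newly generated ones.
generated_ids = generated_ids[:, inputs['input_ids'].shape[1]:]
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```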

### Framework versions

- Transformers 4.46.1
- PyTorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3