---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi-4-mm-inst-asr-turkish-3
  results: []
---

# Phi-4-mm-inst-asr-turkish-3

This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on a 1300-hour Turkish audio dataset.

## Training Prompt

The model was initially fine-tuned using the original ASR prompt: "Transcribe the audio clip into text." This prompt is language agnostic, as described in the model [paper](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/phi_4_mm.tech_report.02252025.pdf):

> The ASR prompt for Phi-4-Multimodal is "Transcribe the audio clip into text.", which is language agnostic. We notice that the model can learn to recognize in the target language perfectly without providing language information, while Qwen2-audio and Gemini-2.0-Flash require the language information in the prompt to obtain the optimal ASR performance.

However, we found that using a language-defining prompt, such as "Transcribe the Turkish audio.", leads to better performance. See: [ysdede/Phi-4-mm-inst-asr-turkish](https://huggingface.co/ysdede/Phi-4-mm-inst-asr-turkish)

## Training Results

When benchmarked with the original ASR prompt "Transcribe the audio clip into text.", the evaluation results were as follows:

- **Before Fine-Tuning:**
  - WER: 153.84
  - CER: 82.57
- **After Fine-Tuning:**
  - WER: 64.76
  - CER: 29.85

## Inference

Load `generation_config` and `processor` from the base model as a quick fix to use the default generation settings.

*Note: The new models currently lack high-quality fine-tuning scripts. When saving a fine-tuned model using `model.save_pretrained()`, the processor configuration, including essential audio parameters, is not automatically saved.
This omission can lead to errors during inference due to the model's complex architecture. Loading these components from the base model ensures that all critical settings are properly included.*

```python
from transformers import AutoProcessor, GenerationConfig

generation_config = GenerationConfig.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct',
    'generation_config.json'
)
processor = AutoProcessor.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct',
    trust_remote_code=True
)
```

### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3
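The WER and CER figures reported above are standard edit-distance metrics: the Levenshtein distance between reference and hypothesis, normalized by reference length, over words and characters respectively. The stdlib-only sketch below illustrates how such figures are computed; it is a hypothetical helper for reference, not the evaluation script used for this model card (note that WER can exceed 100 when the hypothesis contains many insertions, as in the before-fine-tuning score).

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    # prev[j] holds the distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),   # substitution
            ))
        prev = curr
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage of reference word count."""
    ref_words = reference.split()
    return 100.0 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate as a percentage of reference character count."""
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("merhaba dünya", "merhaba dunya")` is 50.0: one substituted word out of two reference words.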