---
license: mit
language:
- multilingual
tags:
- nlp
- code
- audio
- automatic-speech-recognition
- speech-summarization
- speech-translation
- visual-question-answering
- phi-4-multimodal
- phi
- phi-4-mini
---

## Phi-4 Multimodal Instruct ONNX models

This is the non-quantized version of the Phi-4 Multimodal Instruct ONNX model.

### Introduction

This is an ONNX version of the Phi-4 multimodal model that is quantized to int4 precision to accelerate inference with ONNX Runtime.

## Model Run

For CPU: stay tuned, or follow [this tutorial](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/phi-4-multi-modal.md) to generate your own ONNX models for CPU!
28
+
29
+ <!-- ```bash
30
+ # Download the model directly using the Hugging Face CLI
31
+ huggingface-cli download microsoft/Phi-4-multimodal-instruct-onnx --include cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4/* --local-dir .
32
+
33
+ # Install the CPU package of ONNX Runtime GenAI
34
+ pip install --pre onnxruntime-genai
35
+
36
+ # Please adjust the model directory (-m) accordingly
37
+ curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi4-mm.py -o phi4-mm.py
38
+ python phi4-mm.py -m cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4 -e cpu
39
+ ``` -->

For CUDA:

```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download microsoft/Phi-4-multimodal-instruct-onnx --include gpu/* --local-dir .

# Install the CUDA package of ONNX Runtime GenAI
pip install --pre onnxruntime-genai-cuda

# Please adjust the model directory (-m) accordingly
curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi4-mm.py -o phi4-mm.py
python phi4-mm.py -m gpu/gpu-int4-rtn-block-32 -e cuda
```

For DirectML:

```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download microsoft/Phi-4-multimodal-instruct-onnx --include gpu/* --local-dir .

# Install the DML package of ONNX Runtime GenAI
pip install --pre onnxruntime-genai-directml

# Please adjust the model directory (-m) accordingly
curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi4-mm.py -o phi4-mm.py
python phi4-mm.py -m gpu/gpu-int4-rtn-block-32 -e dml
```
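
After downloading, it can help to sanity-check that the model directory is complete before launching the script. A minimal sketch, assuming only that ONNX Runtime GenAI expects a `genai_config.json` in the directory passed via `-m` (the full file set varies by model, so the default `required` list here is deliberately minimal):

```python
from pathlib import Path

def missing_model_files(model_dir, required=("genai_config.json",)):
    # ONNX Runtime GenAI loads the directory passed via -m and expects at
    # least a genai_config.json there; `required` can be extended with the
    # ONNX graph and tokenizer files shipped by a specific model.
    model_dir = Path(model_dir)
    return [name for name in required if not (model_dir / name).is_file()]
```

For example, `missing_model_files("gpu/gpu-int4-rtn-block-32")` should return an empty list once the download above has finished.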

You will be prompted to provide any images, audio files, and a prompt.
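
Under the hood, the script assembles a chat prompt in which each image or audio clip is referenced by a numbered placeholder. A minimal sketch of that format, assuming the `<|user|>…<|end|><|assistant|>` chat template and the `<|image_N|>`/`<|audio_N|>` placeholders documented for the base model:

```python
def build_prompt(text, num_images=0, num_audios=0):
    # Phi-4 multimodal chat template (per the base model card): media
    # placeholders are numbered from 1 and precede the user text.
    media = "".join(f"<|image_{i}|>" for i in range(1, num_images + 1))
    media += "".join(f"<|audio_{i}|>" for i in range(1, num_audios + 1))
    return f"<|user|>{media}{text}<|end|><|assistant|>"
```

For instance, one image plus the question "Describe this image." yields `<|user|><|image_1|>Describe this image.<|end|><|assistant|>`.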

The performance of the text component is similar to that of the [Phi-4 mini ONNX models](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx/blob/main/README.md).

### Model Description

- Developed by: Microsoft
- Model type: ONNX
- License: MIT
- Model Description: This is a conversion of the Phi-4 multimodal model for ONNX Runtime inference.

Disclaimer: This model is only an optimization of the base model; any risk associated with the model is the responsibility of the user of the model. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied.

### Base Model

Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for the Phi-3.5 and 4.0 models. The model processes text, image, and audio inputs and generates text outputs, and comes with a 128K-token context length. The model underwent an enhancement process incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and safety measures.

See details [here](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/README.md).