Improve model card and correct pipeline tag
This PR improves the model card by:
- Correcting the `pipeline_tag` to `image-text-to-text` (see the usage sketch below).
- Adding a more detailed model description, intended uses, limitations, and training and evaluation data based on the information in the GitHub README.
- Including the citation from the README.
- Removing the auto-generated placeholder comment.
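For context, the corrected tag is what routes the checkpoint to the `image-text-to-text` pipeline on the Hub and in `transformers`. A minimal, untested sketch of that usage, assuming the repo id `THU-KEG/LongWriter-V-7B` and a placeholder image URL:

```python
# Hedged sketch: load the model through the image-text-to-text pipeline that the
# corrected pipeline_tag advertises (requires a recent transformers release).
# The repo id and image URL below are assumptions for illustration only.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="THU-KEG/LongWriter-V-7B", device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/slide_01.png"},  # hypothetical slide
            {"type": "text", "text": "Write a detailed lecture script for this slide."},
        ],
    }
]

# The model targets ultra-long outputs, so allow a generous generation budget.
outputs = pipe(text=messages, max_new_tokens=4096, return_full_text=False)
print(outputs[0]["generated_text"])
```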
README.md CHANGED

````diff
@@ -1,34 +1,45 @@
 ---
+base_model: Qwen/Qwen2.5-VL-7B-Instruct
 library_name: transformers
 license: other
-base_model: Qwen/Qwen2.5-VL-7B-Instruct
 tags:
 - llama-factory
 - full
 - generated_from_trainer
+pipeline_tag: image-text-to-text
 model-index:
 - name: LongWriter-V-7B
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # LongWriter-V-7B
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the LongWriter-V-22K dataset.
+This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the LongWriter-V-22K dataset. It is designed to generate ultra-long, high-fidelity text outputs, and is particularly effective at tasks such as writing lengthy lecture scripts from a series of presentation slides or producing long-form descriptions of visual input.
 
 ## Model description
 
-
+LongWriter-V-7B is a vision-language model fine-tuned to generate extended text outputs from image and text input. It builds on the capabilities of the Qwen2.5-VL-7B-Instruct base model to sustain high-fidelity generation even for outputs exceeding several thousand words, and it excels at tasks that require comprehensive, detailed text grounded in visual context. It was trained on LongWriter-V-22K, a dataset built for ultra-long, high-fidelity vision-language generation.
+
 
 ## Intended uses & limitations
 
-
+**Intended uses:**
+
+* Generating long-form text outputs (e.g., lecture scripts, reports, summaries) from image and text prompts.
+* Summarizing long documents accompanied by visual elements.
+* Creating detailed descriptions of visual scenes.
+
+**Limitations:**
+
+* Performance may degrade with exceptionally long prompts or complex visual inputs.
+* Factual accuracy is limited by the knowledge embedded in the base model and the fine-tuning data (LongWriter-V-22K).
+* Outputs may be factually inaccurate or contain hallucinated information; careful review is necessary.
+
 
 ## Training and evaluation data
 
-
+The model was trained on the LongWriter-V-22K dataset. Evaluation was performed on the MMLongBench-Write and LongWrite-V-Ruler benchmarks.
+
 
 ## Training procedure
 
@@ -59,3 +70,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.5.1+cu124
 - Datasets 3.2.0
 - Tokenizers 0.21.0
+
+## Citation
+
+```
+@misc{tu2025longwriterv,
+      title={LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models},
+      author={Shangqing Tu and Yucheng Wang and Daniel Zhang-Li and Yushi Bai and Jifan Yu and Yuhao Wu and Lei Hou and Huiqin Liu and Zhiyuan Liu and Bin Xu and Juanzi Li},
+      year={2025},
+      eprint={2502.14834},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2502.14834},
+}
+```
````