---
base_model:
- Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
language:
- en
datasets:
- chtmp223/CLIPPER
---

# Qwen2.5-7B-CLIPPER

Qwen2.5-7B-CLIPPER is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), trained with supervised fine-tuning on the [chtmp223/CLIPPER](https://huggingface.co/datasets/chtmp223/CLIPPER) dataset.
Please see [our paper](https://arxiv.org/abs/2502.14854) for more details on the method.

## 📒 Model Details

### Model Description

- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)

### Model Sources

- **Repository:** [GitHub repository](https://github.com/chtmp223/CLIPPER)
- **Paper:** [https://arxiv.org/abs/2502.14854](https://arxiv.org/abs/2502.14854)

## 💻 Training Details

### Training Data

[chtmp223/CLIPPER](https://huggingface.co/datasets/chtmp223/CLIPPER)
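
As a quick way to inspect the data, the dataset loads with 🤗 `datasets`; this is a generic sketch, and the available splits and columns should be checked against the dataset card.

```python
# Minimal sketch: load the CLIPPER dataset and inspect it.
# Split/configuration names are not specified here; check the dataset card.
from datasets import load_dataset

ds = load_dataset("chtmp223/CLIPPER")
print(ds)  # shows available splits and columns
```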

### Training Procedure

| **Configuration**                  | **Value**   |
|------------------------------------|-------------|
| Hardware (training and inference)  | 8×A100      |
| Tracking                           | wandb       |
| batch size                         | 16          |
| gradient_checkpointing             | True        |
| learning_rate                      | 1.0e-6      |
| lr_scheduler_type                  | cosine      |
| max_length                         | 131072      |
| num_train_epochs                   | 1           |
| optim                              | adamw_torch |
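
The table above corresponds roughly to the following Hugging Face `TrainingArguments`; this is an illustrative sketch, not the authors' training script (the actual run used 360-LLaMA-Factory, linked below), and the per-device batch-size split is an assumption.

```python
# Illustrative sketch only; not the authors' actual training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-7b-clipper",   # hypothetical output path
    per_device_train_batch_size=2,     # assumption: global batch size 16 over 8 GPUs
    gradient_checkpointing=True,
    learning_rate=1.0e-6,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    optim="adamw_torch",
    report_to="wandb",
    bf16=True,                         # assumption: mixed precision on A100s
)
# max_length (131072) is enforced at tokenization time (a cutoff-length setting
# in LLaMA-Factory), not through TrainingArguments.
```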

#### Software

Training code is adapted from [360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory/tree/1b5398f539c7d94a530f3f32b53553a3b1928314).

## 🤗 Inference

Inference is done with [vLLM](https://github.com/vllm-project/vllm) on a single A100 80GB GPU.
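
A minimal vLLM sketch is shown below; the repo id `chtmp223/Qwen2.5-7B-CLIPPER` and the sampling settings are assumptions, and `max_model_len` can be lowered if the full 131K context does not fit in memory.

```python
# Minimal vLLM sketch; repo id and sampling settings are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="chtmp223/Qwen2.5-7B-CLIPPER",  # assumed Hugging Face repo id
    max_model_len=131072,                 # matches the training max_length above
)
messages = [{"role": "user", "content": "Your long-context prompt goes here."}]
outputs = llm.chat(messages, SamplingParams(temperature=0.0, max_tokens=512))
print(outputs[0].outputs[0].text)
```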

## 📜 Citation

```
@misc{pham2025clippercompressionenableslongcontext,
      title={CLIPPER: Compression enables long-context synthetic data generation},
      author={Chau Minh Pham and Yapei Chang and Mohit Iyyer},
      year={2025},
      eprint={2502.14854},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14854},
}
```