lapp0 committed on
Commit 2f7da7b · verified · 1 Parent(s): 93c2b84

End of training

README.md ADDED
@@ -0,0 +1,210 @@
+ ---
+ base_model: gpt2
+ datasets:
+ - wikimedia/wikipedia
+ library_name: Distily
+ license: creativeml-openrail-m
+ tags:
+ - generated_from_trainer
+ - Distily
+ base_model_relation: finetune
+ model-index:
+ - name: distily_performance_tests
+   results: []
+ ---
+ 
+ 
+ # Summary
+ 
+ Distilled with the [Distily](https://github.com/lapp0/distily) library
+ using teacher model [gpt2](https://huggingface.co/gpt2)
+ on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
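+ 
+ As a quick usage sketch (the repo id below is an assumption inferred from the model-index name, and the tokenizer is assumed to be the stock GPT-2 tokenizer):
+ 
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ # Hypothetical repo id, inferred from the model-index name above.
+ model = AutoModelForCausalLM.from_pretrained("lapp0/distily_performance_tests")
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed: student keeps the GPT-2 vocab
+ 
+ inputs = tokenizer("Knowledge distillation works by", return_tensors="pt")
+ output = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```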
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment.
+ 
+ # Model description
+ 
+ More information needed
+ 
+ # Intended uses & limitations
+ 
+ More information needed
+ -->
+ 
+ # Model Architecture
+ - **Architecture**: `GPT2LMHeadModel`
+ - **Total Parameters**: 81,912,576
+ - **Data Type (dtype)**: torch.bfloat16
+ - **Model Size**: 0.16 GB
+ 
+ <details>
+ <summary>Student Model Details</summary>
+ 
+ ```
+ GPT2LMHeadModel(
+   (transformer): GPT2Model(
+     (wte): Embedding(50257, 768)
+     (wpe): Embedding(1024, 768)
+     (drop): Dropout(p=0.1, inplace=False)
+     (h): ModuleList(
+       (0-5): 6 x GPT2Block(
+         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
+         (attn): GPT2SdpaAttention(
+           (c_attn): Conv1D()
+           (c_proj): Conv1D()
+           (attn_dropout): Dropout(p=0.1, inplace=False)
+           (resid_dropout): Dropout(p=0.1, inplace=False)
+         )
+         (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
+         (mlp): GPT2MLP(
+           (c_fc): Conv1D()
+           (c_proj): Conv1D()
+           (act): NewGELUActivation()
+           (dropout): Dropout(p=0.1, inplace=False)
+         )
+       )
+     )
+     (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
+   )
+   (lm_head): Linear(in_features=768, out_features=50257, bias=False)
+ )
+ ```
+ 
+ </details>
+ <br/>
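+ 
+ For reference, a student of this shape can be built directly with `transformers` (a sketch only; the actual run derived the architecture from the `distilbert/distilgpt2` config listed under Hyperparameters):
+ 
+ ```python
+ from transformers import GPT2Config, GPT2LMHeadModel
+ 
+ # 6-layer GPT-2 matching the module printout above.
+ config = GPT2Config(n_layer=6, n_embd=768, n_head=12, vocab_size=50257)
+ student = GPT2LMHeadModel(config)
+ print(sum(p.numel() for p in student.parameters()))  # roughly 82M parameters
+ ```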
+ 
+ 
+ 
+ # Resource Usage
+ 
+ - Max Train VRAM Use: 10.4783 GB
+ - Available VRAM: 23.4329 GB
+ - GPUs:
+   - 1x NVIDIA GeForce RTX 4090
+ - CPUs: 64
+ - CPU Memory: 251.7190 GB
+ - CPU Memory Bandwidth: 1600 GB/s
+ 
+ # Distillation (Teacher -> Student) Architecture Difference
+ 
+ - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
+ - **Total Parameters**: 124,439,808 -> 81,912,576
+ - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
+ - **Model Size**: 0.24 GB -> 0.16 GB
+ 
+ <details>
+ <summary>Module Diff Details</summary>
+ 
+ ```diff
+ --- teacher model modules
+ +++ student model modules
+ @@ -4,7 +4,7 @@
+      (wpe): Embedding(1024, 768)
+      (drop): Dropout(p=0.1, inplace=False)
+      (h): ModuleList(
+ -      (0-11): 12 x GPT2Block(
+ +      (0-5): 6 x GPT2Block(
+          (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
+          (attn): GPT2SdpaAttention(
+            (c_attn): Conv1D()
+ 
+ ```
+ 
+ </details>
+ <br/>
+ 
+ # Train Dataset
+ Trained on 3,884,521 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+ 
+ - Num Samples: `4,990`
+ - Subset: `20231101.en`
+ - Split: `train`
+ 
+ 
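+ A sketch of pulling the same data slice with `datasets` (streaming to avoid downloading all of English Wikipedia; Distily's own tokenization and train/test split steps are elided):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Subset and split as listed above; 4,990 is the reported train sample count.
+ ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
+ texts = [row["text"] for _, row in zip(range(4990), ds)]
+ ```
+ 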
125
+ # Training Objective
126
+
127
+ ```
128
+ DistillationObjective(
129
+ logits_loss_component=LossComponent(
130
+ weight=1,
131
+ loss_fn='kl'
132
+ ),
133
+ hs_loss_component=LossComponent(
134
+ weight=0
135
+ ),
136
+ attn_loss_component=LossComponent(
137
+ weight=5.0,
138
+ loss_fn='raw_mse',
139
+ layer_mapper='layer-2',
140
+ norm='layernorm_teacher_only_affine',
141
+ projector='orthogonal'
142
+ )
143
+ )
144
+ ```
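+ 
+ In plain PyTorch terms, this objective amounts to a KL term on the logits plus a weighted MSE term on attention tensors. A minimal sketch (the 'layer-2' mapper, teacher-only-affine layernorm, and orthogonal projector that Distily applies to the attention tensors are elided here):
+ 
+ ```python
+ import torch.nn.functional as F
+ 
+ def distillation_loss(student_logits, teacher_logits, student_attns, teacher_attns):
+     # KL divergence between student and teacher next-token distributions.
+     logits_loss = F.kl_div(
+         F.log_softmax(student_logits, dim=-1),
+         F.softmax(teacher_logits, dim=-1),
+         reduction="batchmean",
+     )
+     # Raw MSE over (already mapped/projected) attention tensors.
+     attn_loss = sum(F.mse_loss(s, t) for s, t in zip(student_attns, teacher_attns))
+     return 1.0 * logits_loss + 5.0 * attn_loss  # weights from the config above
+ ```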
+ 
+ # Hyperparameters
+ The following hyperparameters were used during training:
+ 
+ <details>
+ <summary>Expand</summary>
+ 
+ - learning_rate: `0.0002`
+ - train_batch_size: `8`
+ - eval_batch_size: `8`
+ - seed: `42`
+ - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
+ - lr_scheduler_type: `polynomial`
+ - num_epochs: `1.0`
+ - distillation_objective: `DistillationObjective(
+     logits_loss_component=LossComponent(
+         weight=1,
+         loss_fn='kl'
+     ),
+     hs_loss_component=LossComponent(
+         weight=0
+     ),
+     attn_loss_component=LossComponent(
+         weight=5.0,
+         loss_fn='raw_mse',
+         layer_mapper='layer-2',
+         norm='layernorm_teacher_only_affine',
+         projector='orthogonal'
+     )
+ )`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7efbaca48460>`
+ - student_model_name_or_path: `None`
+ - student_config_name_or_path: `distilbert/distilgpt2`
+ - student_model_config: `None`
+ - reinitialize_weights: `None`
+ - copy_teacher_modules: `[('lm_head', False)]`
+ - student_model_as_bitnet: `False`
+ - student_model_use_liger: `False`
+ - teacher_model_name_or_path: `gpt2`
+ - teacher_load_in_8bit: `False`
+ - teacher_load_in_4bit: `False`
+ - dataset_uri: `wikimedia/wikipedia`
+ - dataset_subset: `20231101.en`
+ - dataset_split: `train`
+ - dataset_column_name: `text`
+ - dataset_sample_size: `5000`
+ - dataset_test_size: `0.002`
+ - dataset_shuffle: `False`
+ - dataset_shuffle_seed: `42`
+ - dataset_trust_remote_code: `False`
+ - gradient_accumulation_steps: `1`
+ - weight_decay: `0.0`
+ - max_grad_norm: `1.0`
+ - warmup_ratio: `0.0`
+ - warmup_steps: `0`
+ - gradient_checkpointing: `True`
+ 
+ </details>
+ <br/>
+ 
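+ The optimizer and schedule can be reproduced with standard `torch`/`transformers` APIs (a sketch; `student` is the model from the earlier sketch, and the step count of 624 is illustrative, i.e. ceil(4,990 samples / batch size 8) for one epoch):
+ 
+ ```python
+ import torch
+ from transformers import get_polynomial_decay_schedule_with_warmup
+ 
+ # Adam with the betas/epsilon listed above, no weight decay.
+ optimizer = torch.optim.Adam(
+     student.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
+ )
+ # Polynomial decay with zero warmup, matching lr_scheduler_type and warmup_steps.
+ scheduler = get_polynomial_decay_schedule_with_warmup(
+     optimizer, num_warmup_steps=0, num_training_steps=624
+ )
+ ```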
+ 
+ # Framework Versions
+ - Distily 0.5.0
+ - Transformers 4.44.2
+ - PyTorch 2.4.1+cu121
+ - Datasets 3.0.0
benchmarks.shelve.bak ADDED
File without changes
benchmarks.shelve.dat ADDED
File without changes
benchmarks.shelve.dir ADDED
File without changes
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.44.2",
+   "use_cache": false
+ }
logs/per_device_train_batch_size=8, run_name=refactor_distillation_objective/events.out.tfevents.1726111539.46d00238c241 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d30982814fe55c42e536a47fc4b3db44d438703d7083fc57d006c82b6e62628
+ size 249