Aivesa committed · verified
Commit cb67b66 · 1 Parent(s): 0e466f0

End of training

Files changed (1): README.md (+19 -16)
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 library_name: peft
 license: apache-2.0
-base_model: unsloth/Qwen2-0.5B-Instruct
+base_model: EleutherAI/pythia-410m-deduped
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b
+- Aivesa/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 model-index:
-- name: 01290299-6ff6-484d-800c-9fc02709045a
+- name: a5f1d39f-30d4-4ab2-93d1-269d93a6989b
   results: []
 ---
 
@@ -21,17 +21,18 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: unsloth/Qwen2-0.5B-Instruct
+base_model: EleutherAI/pythia-410m-deduped
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b
+  path: Aivesa/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
   type:
-    field_instruction: sentence1
-    field_output: sentence2
+    field_input: title
+    field_instruction: query
+    field_output: positive
 system_format: '{system}'
 system_prompt: ''
 debug: null
@@ -47,7 +48,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Aivesa/01290299-6ff6-484d-800c-9fc02709045a
+hub_model_id: Aivesa/a5f1d39f-30d4-4ab2-93d1-269d93a6989b
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -78,6 +79,8 @@ sample_packing: false
 save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
+special_tokens:
+  pad_token: <|endoftext|>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -87,10 +90,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 7c40032f-e667-40ad-9658-3748512bf15b
+wandb_name: f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 7c40032f-e667-40ad-9658-3748512bf15b
+wandb_runid: f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -99,11 +102,11 @@ xformers_attention: null
 
 </details><br>
 
-# 01290299-6ff6-484d-800c-9fc02709045a
+# a5f1d39f-30d4-4ab2-93d1-269d93a6989b
 
-This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the Aivesa/dataset_7c40032f-e667-40ad-9658-3748512bf15b dataset.
+This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the Aivesa/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.1468
+- Loss: 3.4159
 
 ## Model description
 
@@ -137,9 +140,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 3.7746 | 0.0030 | 3 | 3.5979 |
-| 2.9603 | 0.0060 | 6 | 3.4694 |
-| 3.3668 | 0.0090 | 9 | 3.1468 |
+| 13.5865 | 0.0003 | 3 | 3.4376 |
+| 13.3023 | 0.0006 | 6 | 3.4332 |
+| 13.8277 | 0.0008 | 9 | 3.4159 |
 
 
 ### Framework versions
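
For anyone pulling the adapter this run pushed, here is a minimal sketch of loading it with `peft` and `transformers`. It assumes the `hub_model_id` above (`Aivesa/a5f1d39f-30d4-4ab2-93d1-269d93a6989b`) is accessible to you (the config sets `hub_private_repo: true`), and the prompt string is purely illustrative, not taken from the training data.

```python
# Sketch only: load the LoRA adapter from this run on top of its base model.
# Repo IDs are taken from the config above; the prompt is an invented example.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-410m-deduped"
adapter_id = "Aivesa/a5f1d39f-30d4-4ab2-93d1-269d93a6989b"  # hub_model_id in the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = "<|endoftext|>"  # mirrors the special_tokens entry added in this commit

base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Example query", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The pad-token assignment simply mirrors the `special_tokens` block this commit adds to the axolotl config; everything else follows the standard PEFT loading pattern for a causal LM adapter.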