Paladiso committed · Commit 2e5f687 · verified · 1 Parent(s): 8ce0182

End of training

Files changed (1): README.md (+20 -18)
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 library_name: peft
-license: gemma
-base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
+license: apache-2.0
+base_model: EleutherAI/pythia-410m-deduped
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Paladiso/dataset_48ec0ffb-9278-423b-bb92-4d2d7ba40108
+- Paladiso/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 model-index:
-- name: 8ccdbc24-1387-4605-99fc-ca461a35399b
+- name: b0ed7ce7-ba23-491f-ab37-49149ecf7349
   results: []
 ---
 
@@ -21,18 +21,18 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
+base_model: EleutherAI/pythia-410m-deduped
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Paladiso/dataset_48ec0ffb-9278-423b-bb92-4d2d7ba40108
+  path: Paladiso/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
   type:
-    field_input: authors
-    field_instruction: type
-    field_output: title
+    field_input: title
+    field_instruction: query
+    field_output: positive
     system_format: '{system}'
     system_prompt: ''
 debug: null
@@ -48,7 +48,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Paladiso/8ccdbc24-1387-4605-99fc-ca461a35399b
+hub_model_id: Paladiso/b0ed7ce7-ba23-491f-ab37-49149ecf7349
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -79,6 +79,8 @@ sample_packing: false
 save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
+special_tokens:
+  pad_token: <|endoftext|>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -88,10 +90,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 48ec0ffb-9278-423b-bb92-4d2d7ba40108
+wandb_name: f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 48ec0ffb-9278-423b-bb92-4d2d7ba40108
+wandb_runid: f5aaf6b6-8b8a-4d62-a003-0a4288146cc5
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -100,11 +102,11 @@ xformers_attention: null
 
 </details><br>
 
-# 8ccdbc24-1387-4605-99fc-ca461a35399b
+# b0ed7ce7-ba23-491f-ab37-49149ecf7349
 
-This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the Paladiso/dataset_48ec0ffb-9278-423b-bb92-4d2d7ba40108 dataset.
+This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the Paladiso/dataset_f5aaf6b6-8b8a-4d62-a003-0a4288146cc5 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 4.6824
+- Loss: 3.4121
 
 ## Model description
 
@@ -138,9 +140,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 7.3026 | 0.0007 | 3 | 7.5792 |
-| 5.8698 | 0.0014 | 6 | 5.9740 |
-| 5.1532 | 0.0020 | 9 | 4.6824 |
+| 13.5435 | 0.0003 | 3 | 3.4388 |
+| 13.2872 | 0.0006 | 6 | 3.4335 |
+| 13.6718 | 0.0008 | 9 | 3.4121 |
 
 
 ### Framework versions
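
Since the updated card declares `library_name: peft` with `adapter: lora`, the repository `Paladiso/b0ed7ce7-ba23-491f-ab37-49149ecf7349` presumably holds a LoRA adapter for `EleutherAI/pythia-410m-deduped` rather than merged weights. Below is a minimal loading sketch under that assumption; the repo is private (`hub_private_repo: true`), so an authenticated Hugging Face token is required, and the prompt shown is purely illustrative, since training used `chat_template: llama3` with `field_instruction: query`.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Adapter repo produced by this run; it is private, so authenticate first
# (e.g. `huggingface-cli login`) with a token that can read it.
adapter_id = "Paladiso/b0ed7ce7-ba23-491f-ab37-49149ecf7349"

# AutoPeftModelForCausalLM reads the adapter config, downloads the declared
# base model (EleutherAI/pythia-410m-deduped), and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)

# The tokenizer comes from the base model; the training config pads with
# <|endoftext|>, so mirror that setting at inference time.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
tokenizer.pad_token = "<|endoftext|>"

prompt = "Example query text"  # hypothetical input; real prompts should follow the training template
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the adapter was instead merged into the base weights before the final push, loading the repo directly with `transformers.AutoModelForCausalLM.from_pretrained` would be the appropriate path; the diff alone does not indicate which artifact was uploaded.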