Aivesa committed on
Commit 7b40e21 · verified · 1 Parent(s): 281e8b5

End of training

Files changed (1):
  1. README.md  +17 -18
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
 library_name: peft
-base_model: katuni4ka/tiny-random-codegen2
+base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Aivesa/dataset_86c8d5cb-17ec-456e-b894-19c6e5e659a9
+- Aivesa/dataset_a5667e37-9a88-46c1-b8d0-368d1ad49cee
 model-index:
-- name: 286fd9a8-760d-4213-a477-b5ea1e7ce9a3
+- name: 90cc73f1-7712-4110-bc96-10bf0bf9bd01
   results: []
 ---
 
@@ -20,18 +20,17 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: katuni4ka/tiny-random-codegen2
+base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Aivesa/dataset_86c8d5cb-17ec-456e-b894-19c6e5e659a9
+  path: Aivesa/dataset_a5667e37-9a88-46c1-b8d0-368d1ad49cee
   type:
-    field_input: caption2
-    field_instruction: negative_caption
-    field_output: caption
+    field_instruction: question_text
+    field_output: question_title
   system_format: '{system}'
   system_prompt: ''
 debug: null
@@ -47,7 +46,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Aivesa/286fd9a8-760d-4213-a477-b5ea1e7ce9a3
+hub_model_id: Aivesa/90cc73f1-7712-4110-bc96-10bf0bf9bd01
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -79,7 +78,7 @@ save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
 special_tokens:
-  pad_token: <|endoftext|>
+  pad_token: </s>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -89,10 +88,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 86c8d5cb-17ec-456e-b894-19c6e5e659a9
+wandb_name: a5667e37-9a88-46c1-b8d0-368d1ad49cee
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 86c8d5cb-17ec-456e-b894-19c6e5e659a9
+wandb_runid: a5667e37-9a88-46c1-b8d0-368d1ad49cee
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -101,11 +100,11 @@ xformers_attention: null
 
 </details><br>
 
-# 286fd9a8-760d-4213-a477-b5ea1e7ce9a3
+# 90cc73f1-7712-4110-bc96-10bf0bf9bd01
 
-This model is a fine-tuned version of [katuni4ka/tiny-random-codegen2](https://huggingface.co/katuni4ka/tiny-random-codegen2) on the Aivesa/dataset_86c8d5cb-17ec-456e-b894-19c6e5e659a9 dataset.
+This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the Aivesa/dataset_a5667e37-9a88-46c1-b8d0-368d1ad49cee dataset.
 It achieves the following results on the evaluation set:
-- Loss: 10.8422
+- Loss: 10.3727
 
 ## Model description
 
@@ -139,9 +138,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 43.3622 | 0.1143 | 3 | 10.8461 |
-| 43.3138 | 0.2286 | 6 | 10.8447 |
-| 43.3712 | 0.3429 | 9 | 10.8422 |
+| 10.3857 | 0.0080 | 3 | 10.3729 |
+| 10.3718 | 0.0160 | 6 | 10.3729 |
+| 10.3736 | 0.0239 | 9 | 10.3727 |
 
 
 ### Framework versions
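
For reference, below is a minimal sketch of loading the resulting LoRA adapter with `peft` and `transformers`. It assumes the adapter lives at the `hub_model_id` from the config above (the repo is marked private, so an access token may be needed) and that the base checkpoint is reachable; the prompt is a made-up example, since the exact prompt template used during training is not spelled out in the card.

```python
# Hedged sketch: load the base model and attach the LoRA adapter trained in this run.
# Repo ids are taken from the config above; access to the (private) adapter repo is assumed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/tiny-random-LlamaForCausalLM"      # base_model in the config
adapter_id = "Aivesa/90cc73f1-7712-4110-bc96-10bf0bf9bd01"  # hub_model_id in the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base model with the LoRA weights

# The dataset maps question_text -> question_title, so a question body is a plausible input.
# This prompt is a made-up example, not taken from the dataset.
prompt = "How do I merge two dictionaries in Python without modifying the originals?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the base checkpoint is a tiny, randomly initialized test model, the generated text will not be meaningful; the snippet only demonstrates the loading path.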