Model save

Browse files
- README.md +17 -24
- model.safetensors +1 -1

README.md
CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- F1: 0.
-- Accuracy: 0.
-- Precision: 0.
-- Recall: 0.
+- Loss: 0.7434
+- F1: 0.6049
+- Accuracy: 0.5261
+- Precision: 0.7390
+- Recall: 0.5261
 
 ## Model description
 
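The card does not show how these metrics were computed. Below is a minimal sketch of a `compute_metrics` callback that would produce this set of values, assuming weighted averaging (recall equaling accuracy in the results is consistent with weighted-average recall); the averaging mode is a guess, not stated in the card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Assumed Trainer metrics callback; 'weighted' averaging is a guess
    (weighted recall always equals accuracy, which matches the card)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "f1": f1,
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
    }
```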
@@ -44,10 +44,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 1e-06
+- train_batch_size: 44
+- eval_batch_size: 44
 - seed: 42
+- distributed_type: multi-GPU
+- num_devices: 8
+- total_train_batch_size: 352
+- total_eval_batch_size: 352
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
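The new hyperparameter list maps onto `transformers.TrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and anything not listed in the card is omitted or assumed.

```python
from transformers import TrainingArguments

# Sketch of the arguments implied by the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="smollm2-360m-finetune",  # placeholder, not from the card
    learning_rate=1e-6,
    per_device_train_batch_size=44,      # 44 per device x 8 GPUs = 352 total
    per_device_eval_batch_size=44,
    seed=42,
    optim="adamw_torch",                 # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```

The `distributed_type: multi-GPU` and `num_devices: 8` entries come from the launcher (e.g. `torchrun` or `accelerate launch`) rather than from these arguments; that is how a per-device batch size of 44 becomes the total batch size of 352.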
@@ -55,26 +59,15 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
-|:-------------:|:------:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
-| No log | 0 | 0
-| 0.
-| 0.2451 | 0.4680 | 10000 | 0.2443 | 0.8976 | 0.8995 | 0.8968 | 0.8995 |
-| 0.2349 | 0.7020 | 15000 | 0.2383 | 0.8994 | 0.9020 | 0.8989 | 0.9020 |
-| 0.2277 | 0.9360 | 20000 | 0.2363 | 0.9006 | 0.9027 | 0.8999 | 0.9027 |
-| 0.2414 | 1.1700 | 25000 | 0.2352 | 0.9013 | 0.9035 | 0.9007 | 0.9035 |
-| 0.2361 | 1.4040 | 30000 | 0.2349 | 0.9013 | 0.9035 | 0.9007 | 0.9035 |
-| 0.2312 | 1.6380 | 35000 | 0.2348 | 0.9013 | 0.9033 | 0.9007 | 0.9033 |
-| 0.2207 | 1.8720 | 40000 | 0.2348 | 0.9014 | 0.9035 | 0.9007 | 0.9035 |
-| 0.2645 | 2.1060 | 45000 | 0.2347 | 0.9012 | 0.9033 | 0.9005 | 0.9033 |
-| 0.2369 | 2.3399 | 50000 | 0.2347 | 0.9012 | 0.9033 | 0.9005 | 0.9033 |
-| 0.2329 | 2.5739 | 55000 | 0.2347 | 0.9013 | 0.9034 | 0.9006 | 0.9034 |
-| 0.2253 | 2.8079 | 60000 | 0.2347 | 0.9013 | 0.9033 | 0.9006 | 0.9033 |
+| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
+|:-------------:|:------:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|
+| No log | 0 | 0 | 0.7481 | 0.6025 | 0.5231 | 0.7383 | 0.5231 |
+| 0.7489 | 1.5277 | 5000 | 0.7434 | 0.6049 | 0.5261 | 0.7390 | 0.5261 |
 
 
 ### Framework versions
 
 - Transformers 4.46.3
-- Pytorch 2.5.1
+- Pytorch 2.5.1
 - Datasets 3.1.0
 - Tokenizers 0.20.3
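The card does not state the downstream task, but the F1/precision/recall metrics point to a classification head on top of the base model. A minimal loading sketch under that assumption, with a placeholder repo id (the commit page does not name the repository):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id; substitute the actual Hub repository.
repo_id = "your-username/your-finetuned-smollm2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```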
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:9c44bb9ae9194ed10a8af20d6e1b5e91758260ebd3b9a48917d712ab9cf1feba
 size 723684600
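This change swaps the sha256 oid in the Git LFS pointer for the new weights. A quick way to check a downloaded `model.safetensors` against the pointer above (the file path is a placeholder for wherever the file was saved):

```python
import hashlib

# Expected digest, copied from the LFS pointer in this commit.
expected = "9c44bb9ae9194ed10a8af20d6e1b5e91758260ebd3b9a48917d712ab9cf1feba"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:       # placeholder path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected, "sha256 mismatch with the LFS pointer"
print("OK:", h.hexdigest())
```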