salbatarni committed on
Commit 6afab6f · verified · 1 Parent(s): af8206c

Training in progress, step 160

Files changed (3)
  1. README.md +85 -41
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
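Since this commit only records an intermediate checkpoint ("Training in progress, step 160"), it can be useful to pin downloads to exactly this revision. Below is a minimal sketch with `huggingface_hub`; the repo id is an assumption inferred from the model name in the card.

```python
# Minimal sketch: fetch the files touched by this commit, pinned to
# revision 6afab6f. The repo id below is a hypothetical assumption.
from huggingface_hub import hf_hub_download

repo_id = "salbatarni/arabert_cross_organization_task2_fold0"  # hypothetical

weights = hf_hub_download(repo_id, "model.safetensors", revision="6afab6f")
args = hf_hub_download(repo_id, "training_args.bin", revision="6afab6f")
print(weights, args)
```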
README.md CHANGED
@@ -3,20 +3,20 @@ base_model: aubmindlab/bert-base-arabertv02
  tags:
  - generated_from_trainer
  model-index:
- - name: arabert_cross_organization_task2_fold1
+ - name: arabert_cross_organization_task2_fold0
  results: []
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # arabert_cross_organization_task2_fold1
+ # arabert_cross_organization_task2_fold0
 
  This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8825
- - Qwk: 0.1150
- - Mse: 0.8655
+ - Loss: 0.8868
+ - Qwk: 0.5359
+ - Mse: 0.8876
 
  ## Model description
 
@@ -36,48 +36,92 @@
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 64
+ - eval_batch_size: 64
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 1
+ - num_epochs: 10
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
- |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
- | No log | 0.0317 | 2 | 3.0267 | -0.0044 | 3.0446 |
- | No log | 0.0635 | 4 | 1.2891 | 0.0342 | 1.2879 |
- | No log | 0.0952 | 6 | 1.5424 | -0.0019 | 1.5380 |
- | No log | 0.1270 | 8 | 0.8826 | -0.0013 | 0.8720 |
- | No log | 0.1587 | 10 | 0.7922 | 0.0662 | 0.7791 |
- | No log | 0.1905 | 12 | 0.7888 | 0.1226 | 0.7749 |
- | No log | 0.2222 | 14 | 0.8412 | 0.1270 | 0.8242 |
- | No log | 0.2540 | 16 | 0.9694 | 0.0536 | 0.9479 |
- | No log | 0.2857 | 18 | 1.1052 | 0.0 | 1.0806 |
- | No log | 0.3175 | 20 | 1.1465 | 0.0 | 1.1217 |
- | No log | 0.3492 | 22 | 1.1550 | 0.0 | 1.1324 |
- | No log | 0.3810 | 24 | 1.0325 | 0.0155 | 1.0157 |
- | No log | 0.4127 | 26 | 0.9215 | 0.1245 | 0.9088 |
- | No log | 0.4444 | 28 | 0.9192 | 0.0894 | 0.9058 |
- | No log | 0.4762 | 30 | 0.9624 | 0.0819 | 0.9461 |
- | No log | 0.5079 | 32 | 0.9482 | 0.0650 | 0.9309 |
- | No log | 0.5397 | 34 | 0.9962 | 0.0388 | 0.9763 |
- | No log | 0.5714 | 36 | 0.9725 | 0.0 | 0.9522 |
- | No log | 0.6032 | 38 | 0.9242 | 0.0360 | 0.9052 |
- | No log | 0.6349 | 40 | 0.8757 | 0.0806 | 0.8588 |
- | No log | 0.6667 | 42 | 0.8479 | 0.0080 | 0.8325 |
- | No log | 0.6984 | 44 | 0.8319 | 0.0221 | 0.8178 |
- | No log | 0.7302 | 46 | 0.8311 | 0.0185 | 0.8171 |
- | No log | 0.7619 | 48 | 0.8250 | 0.0130 | 0.8119 |
- | No log | 0.7937 | 50 | 0.8256 | 0.0130 | 0.8126 |
- | No log | 0.8254 | 52 | 0.8281 | -0.0220 | 0.8148 |
- | No log | 0.8571 | 54 | 0.8385 | 0.0374 | 0.8242 |
- | No log | 0.8889 | 56 | 0.8565 | 0.0396 | 0.8410 |
- | No log | 0.9206 | 58 | 0.8727 | 0.0642 | 0.8563 |
- | No log | 0.9524 | 60 | 0.8784 | 0.0538 | 0.8616 |
- | No log | 0.9841 | 62 | 0.8825 | 0.1150 | 0.8655 |
+ | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
+ |:-------------:|:------:|:----:|:---------------:|:------:|:------:|
+ | No log | 0.1333 | 2 | 5.2895 | 0.0054 | 5.2856 |
+ | No log | 0.2667 | 4 | 3.0949 | 0.0698 | 3.0914 |
+ | No log | 0.4 | 6 | 1.6236 | 0.2168 | 1.6216 |
+ | No log | 0.5333 | 8 | 1.1930 | 0.2638 | 1.1920 |
+ | No log | 0.6667 | 10 | 1.7940 | 0.2599 | 1.7930 |
+ | No log | 0.8 | 12 | 3.2795 | 0.1504 | 3.2772 |
+ | No log | 0.9333 | 14 | 2.3117 | 0.2122 | 2.3105 |
+ | No log | 1.0667 | 16 | 1.3259 | 0.3185 | 1.3250 |
+ | No log | 1.2 | 18 | 1.1156 | 0.3075 | 1.1147 |
+ | No log | 1.3333 | 20 | 1.0743 | 0.3306 | 1.0736 |
+ | No log | 1.4667 | 22 | 1.2168 | 0.3511 | 1.2161 |
+ | No log | 1.6 | 24 | 1.3139 | 0.3493 | 1.3135 |
+ | No log | 1.7333 | 26 | 1.2219 | 0.3893 | 1.2216 |
+ | No log | 1.8667 | 28 | 0.9789 | 0.4805 | 0.9789 |
+ | No log | 2.0 | 30 | 0.8254 | 0.5525 | 0.8256 |
+ | No log | 2.1333 | 32 | 0.8267 | 0.5514 | 0.8268 |
+ | No log | 2.2667 | 34 | 0.8956 | 0.4976 | 0.8958 |
+ | No log | 2.4 | 36 | 1.2761 | 0.3798 | 1.2761 |
+ | No log | 2.5333 | 38 | 1.5640 | 0.3427 | 1.5638 |
+ | No log | 2.6667 | 40 | 1.3598 | 0.3678 | 1.3596 |
+ | No log | 2.8 | 42 | 1.0347 | 0.4562 | 1.0345 |
+ | No log | 2.9333 | 44 | 0.9138 | 0.4910 | 0.9135 |
+ | No log | 3.0667 | 46 | 0.8461 | 0.5492 | 0.8459 |
+ | No log | 3.2 | 48 | 0.8243 | 0.5507 | 0.8244 |
+ | No log | 3.3333 | 50 | 1.0141 | 0.4952 | 1.0142 |
+ | No log | 3.4667 | 52 | 1.0921 | 0.4640 | 1.0921 |
+ | No log | 3.6 | 54 | 1.0221 | 0.4941 | 1.0220 |
+ | No log | 3.7333 | 56 | 0.8836 | 0.5285 | 0.8836 |
+ | No log | 3.8667 | 58 | 0.7919 | 0.5711 | 0.7921 |
+ | No log | 4.0 | 60 | 0.8281 | 0.5532 | 0.8283 |
+ | No log | 4.1333 | 62 | 0.9534 | 0.5227 | 0.9536 |
+ | No log | 4.2667 | 64 | 0.9598 | 0.5211 | 0.9601 |
+ | No log | 4.4 | 66 | 0.9166 | 0.5419 | 0.9169 |
+ | No log | 4.5333 | 68 | 0.8422 | 0.5565 | 0.8426 |
+ | No log | 4.6667 | 70 | 0.8553 | 0.5458 | 0.8556 |
+ | No log | 4.8 | 72 | 0.9644 | 0.5070 | 0.9646 |
+ | No log | 4.9333 | 74 | 0.9798 | 0.5192 | 0.9801 |
+ | No log | 5.0667 | 76 | 0.9185 | 0.5327 | 0.9189 |
+ | No log | 5.2 | 78 | 0.8475 | 0.5427 | 0.8479 |
+ | No log | 5.3333 | 80 | 0.8340 | 0.5557 | 0.8344 |
+ | No log | 5.4667 | 82 | 0.8878 | 0.5405 | 0.8882 |
+ | No log | 5.6 | 84 | 0.9622 | 0.5199 | 0.9627 |
+ | No log | 5.7333 | 86 | 0.9394 | 0.5278 | 0.9398 |
+ | No log | 5.8667 | 88 | 0.8872 | 0.5334 | 0.8876 |
+ | No log | 6.0 | 90 | 0.8248 | 0.5428 | 0.8252 |
+ | No log | 6.1333 | 92 | 0.8451 | 0.5371 | 0.8454 |
+ | No log | 6.2667 | 94 | 0.8324 | 0.5384 | 0.8326 |
+ | No log | 6.4 | 96 | 0.7920 | 0.5786 | 0.7922 |
+ | No log | 6.5333 | 98 | 0.8023 | 0.5742 | 0.8025 |
+ | No log | 6.6667 | 100 | 0.8667 | 0.5474 | 0.8670 |
+ | No log | 6.8 | 102 | 0.9364 | 0.5182 | 0.9367 |
+ | No log | 6.9333 | 104 | 0.9710 | 0.5093 | 0.9714 |
+ | No log | 7.0667 | 106 | 1.0089 | 0.4947 | 1.0094 |
+ | No log | 7.2 | 108 | 0.9682 | 0.5057 | 0.9687 |
+ | No log | 7.3333 | 110 | 0.8275 | 0.5472 | 0.8281 |
+ | No log | 7.4667 | 112 | 0.7651 | 0.5844 | 0.7658 |
+ | No log | 7.6 | 114 | 0.7873 | 0.5622 | 0.7879 |
+ | No log | 7.7333 | 116 | 0.8290 | 0.5478 | 0.8297 |
+ | No log | 7.8667 | 118 | 0.8907 | 0.5182 | 0.8914 |
+ | No log | 8.0 | 120 | 0.9161 | 0.5170 | 0.9168 |
+ | No log | 8.1333 | 122 | 0.9115 | 0.5170 | 0.9122 |
+ | No log | 8.2667 | 124 | 0.9104 | 0.5170 | 0.9111 |
+ | No log | 8.4 | 126 | 0.8932 | 0.5228 | 0.8940 |
+ | No log | 8.5333 | 128 | 0.8537 | 0.5382 | 0.8545 |
+ | No log | 8.6667 | 130 | 0.8599 | 0.5382 | 0.8607 |
+ | No log | 8.8 | 132 | 0.9062 | 0.5144 | 0.9070 |
+ | No log | 8.9333 | 134 | 0.9172 | 0.5132 | 0.9181 |
+ | No log | 9.0667 | 136 | 0.9365 | 0.5161 | 0.9374 |
+ | No log | 9.2 | 138 | 0.9530 | 0.5138 | 0.9539 |
+ | No log | 9.3333 | 140 | 0.9499 | 0.5138 | 0.9508 |
+ | No log | 9.4667 | 142 | 0.9352 | 0.5134 | 0.9361 |
+ | No log | 9.6 | 144 | 0.9147 | 0.5132 | 0.9155 |
+ | No log | 9.7333 | 146 | 0.8945 | 0.5311 | 0.8953 |
+ | No log | 9.8667 | 148 | 0.8886 | 0.5310 | 0.8894 |
+ | No log | 10.0 | 150 | 0.8868 | 0.5359 | 0.8876 |
 
 
  ### Framework versions
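The card reports Loss, Qwk (quadratic weighted kappa) and Mse on the evaluation set but does not show how the checkpoint is meant to be used. Below is a minimal sketch, assuming the checkpoint carries a single-output regression head (`num_labels=1`) for essay scoring and assuming the hypothetical repo id `salbatarni/arabert_cross_organization_task2_fold0`; how the original evaluation discretizes scores before computing QWK is not stated, so the rounding step is a guess.

```python
# Minimal sketch: score essays with the fine-tuned checkpoint and compute
# the card's metrics. The repo id, the regression-style head (num_labels=1)
# and the rounding used for QWK are assumptions, not taken from the card.
import torch
from sklearn.metrics import cohen_kappa_score, mean_squared_error
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "salbatarni/arabert_cross_organization_task2_fold0"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id).eval()

texts = ["نص المقال الأول", "نص المقال الثاني"]  # evaluation essays
gold = [3.0, 1.0]                                # gold scores on the task's scale

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**batch).logits.squeeze(-1).tolist()

mse = mean_squared_error(gold, preds)            # "Mse" in the card
qwk = cohen_kappa_score(                         # "Qwk" in the card
    [round(g) for g in gold],
    [round(p) for p in preds],
    weights="quadratic",
)
print(f"MSE: {mse:.4f}  QWK: {qwk:.4f}")
```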
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7347f0a131c04150e445745d3b2677bdf2f7b006e10c7d00749780212e1d2a7a
+ oid sha256:7b55c12b37c3d5b4f777e1b28aebf48d592c38c3e288a01310b9ab60623d4615
  size 540799996
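`model.safetensors` is stored through Git LFS, so the diff above only swaps the pointer's `oid sha256` while the size stays at 540799996 bytes. A small sketch for checking that a locally downloaded weights file matches the new pointer; the local path is an assumption.

```python
# Minimal sketch: verify a downloaded model.safetensors against the Git LFS
# pointer committed here (new oid and size taken from the diff above).
import hashlib

expected_oid = "7b55c12b37c3d5b4f777e1b28aebf48d592c38c3e288a01310b9ab60623d4615"
expected_size = 540799996
path = "model.safetensors"  # assumed local path

sha256 = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)
        size += len(chunk)

assert size == expected_size, f"size mismatch: {size}"
assert sha256.hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```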
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:767078d583d75437fdbc7857b1d2c3c9a4f6c9147727e51ef45130930e2bec4e
+ oid sha256:f7ded43c3fb8b00d5ac5f67b4a4e42ecc3bf1e6c19d670a9d381115bddebb631
  size 5240
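`training_args.bin` is the serialized `TrainingArguments` object, which is where the hyperparameters listed in the card live. Below is a minimal sketch of arguments consistent with that list; the `output_dir` is hypothetical, and settings not listed in the card (such as the 2-step evaluation cadence visible in the training table) are left out.

```python
# Minimal sketch of TrainingArguments matching the hyperparameters listed in
# the card; training_args.bin is the serialized form of an object like this.
# output_dir is hypothetical; the remaining values come from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task2_fold0",  # hypothetical
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas/epsilon as listed (transformers defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```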