# arabert_cross_organization_task3_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set (see the metric notes after this list):
- Loss: 0.6478
- Qwk: 0.4607
- Mse: 0.6478
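Qwk denotes quadratic weighted kappa, an agreement statistic commonly reported for ordinal scoring tasks; note that Loss and Mse coincide, which is what a single-output regression head trained with an MSE objective would produce. Below is a minimal sketch of how these metrics could be computed with scikit-learn; the rounding step is an assumption about how continuous predictions were mapped to ordinal labels, not something documented in this card.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def evaluate(predictions: np.ndarray, labels: np.ndarray) -> dict:
    """Compute MSE on raw scores and QWK on rounded ordinal scores."""
    mse = mean_squared_error(labels, predictions)
    # Assumption: continuous regression outputs are rounded to the
    # nearest integer score before computing the kappa statistic.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"mse": mse, "qwk": qwk}
```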
## Model description
More information needed
## Intended uses & limitations
More information needed
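Absent documented usage, the following is a minimal inference sketch. It assumes the checkpoint loads as a standard Transformers sequence-classification model with a single regression output (suggested by Loss equaling Mse above); the input text is a hypothetical placeholder, since the task and domain are not documented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_organization_task3_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical Arabic input; the actual task/domain is not documented.
text = "نص عربي للتقييم"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted score: {score:.3f}")
```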
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
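The hyperparameters above map directly onto `TrainingArguments`; the sketch below reconstructs the likely setup. The model head, toy dataset, and tokenization settings are illustrative assumptions, not taken from the card; the evaluation cadence of every 2 steps is read off the results table.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base)
# Assumption: num_labels=1 selects an MSE regression objective,
# consistent with Loss == Mse in the tables above.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

# Hypothetical toy data; the actual train/eval splits are not documented.
raw = Dataset.from_dict({"text": ["نص تجريبي"] * 8, "label": [0.5] * 8})
ds = raw.map(
    lambda b: tokenizer(b["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

# Trainer's default optimizer (AdamW) uses betas=(0.9, 0.999) and
# epsilon=1e-08, matching the values listed above.
args = TrainingArguments(
    output_dir="arabert_cross_organization_task3_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=2,  # matches the evaluation cadence in the results table
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()
```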
### Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.125 | 2 | 3.2900 | 0.0231 | 3.2900 |
No log | 0.25 | 4 | 1.3894 | 0.0822 | 1.3894 |
No log | 0.375 | 6 | 0.7800 | 0.1877 | 0.7800 |
No log | 0.5 | 8 | 1.0858 | 0.2133 | 1.0858 |
No log | 0.625 | 10 | 1.0129 | 0.2378 | 1.0129 |
No log | 0.75 | 12 | 0.8148 | 0.3122 | 0.8148 |
No log | 0.875 | 14 | 0.8640 | 0.3294 | 0.8640 |
No log | 1.0 | 16 | 0.5808 | 0.5261 | 0.5808 |
No log | 1.125 | 18 | 0.5809 | 0.5472 | 0.5809 |
No log | 1.25 | 20 | 0.6891 | 0.4535 | 0.6891 |
No log | 1.375 | 22 | 0.8095 | 0.4026 | 0.8095 |
No log | 1.5 | 24 | 0.6001 | 0.4892 | 0.6001 |
No log | 1.625 | 26 | 0.5488 | 0.5204 | 0.5488 |
No log | 1.75 | 28 | 0.6546 | 0.4189 | 0.6546 |
No log | 1.875 | 30 | 0.5905 | 0.4711 | 0.5905 |
No log | 2.0 | 32 | 0.5350 | 0.5446 | 0.5350 |
No log | 2.125 | 34 | 0.6019 | 0.4942 | 0.6019 |
No log | 2.25 | 36 | 0.6144 | 0.4795 | 0.6144 |
No log | 2.375 | 38 | 0.5350 | 0.5333 | 0.5350 |
No log | 2.5 | 40 | 0.5122 | 0.5472 | 0.5122 |
No log | 2.625 | 42 | 0.5046 | 0.5160 | 0.5046 |
No log | 2.75 | 44 | 0.5608 | 0.4614 | 0.5608 |
No log | 2.875 | 46 | 0.6094 | 0.4437 | 0.6094 |
No log | 3.0 | 48 | 0.5336 | 0.5068 | 0.5336 |
No log | 3.125 | 50 | 0.5450 | 0.5314 | 0.5450 |
No log | 3.25 | 52 | 0.5818 | 0.5196 | 0.5818 |
No log | 3.375 | 54 | 0.6351 | 0.4649 | 0.6351 |
No log | 3.5 | 56 | 0.6078 | 0.4611 | 0.6078 |
No log | 3.625 | 58 | 0.5257 | 0.4924 | 0.5257 |
No log | 3.75 | 60 | 0.5373 | 0.4903 | 0.5373 |
No log | 3.875 | 62 | 0.6198 | 0.4443 | 0.6198 |
No log | 4.0 | 64 | 0.6554 | 0.4271 | 0.6554 |
No log | 4.125 | 66 | 0.5949 | 0.4804 | 0.5949 |
No log | 4.25 | 68 | 0.5758 | 0.5021 | 0.5758 |
No log | 4.375 | 70 | 0.6050 | 0.4669 | 0.6050 |
No log | 4.5 | 72 | 0.6617 | 0.4443 | 0.6617 |
No log | 4.625 | 74 | 0.5939 | 0.4755 | 0.5939 |
No log | 4.75 | 76 | 0.5429 | 0.5333 | 0.5429 |
No log | 4.875 | 78 | 0.5670 | 0.4951 | 0.5670 |
No log | 5.0 | 80 | 0.6057 | 0.4790 | 0.6057 |
No log | 5.125 | 82 | 0.5475 | 0.5271 | 0.5475 |
No log | 5.25 | 84 | 0.5293 | 0.5556 | 0.5293 |
No log | 5.375 | 86 | 0.5920 | 0.4961 | 0.5920 |
No log | 5.5 | 88 | 0.8172 | 0.3957 | 0.8172 |
No log | 5.625 | 90 | 0.8892 | 0.3757 | 0.8892 |
No log | 5.75 | 92 | 0.7109 | 0.4460 | 0.7109 |
No log | 5.875 | 94 | 0.5766 | 0.5031 | 0.5766 |
No log | 6.0 | 96 | 0.5308 | 0.5421 | 0.5308 |
No log | 6.125 | 98 | 0.5523 | 0.5081 | 0.5523 |
No log | 6.25 | 100 | 0.6890 | 0.4239 | 0.6890 |
No log | 6.375 | 102 | 0.7810 | 0.4232 | 0.7810 |
No log | 6.5 | 104 | 0.6877 | 0.4432 | 0.6877 |
No log | 6.625 | 106 | 0.5825 | 0.4691 | 0.5825 |
No log | 6.75 | 108 | 0.5649 | 0.4830 | 0.5649 |
No log | 6.875 | 110 | 0.5645 | 0.4955 | 0.5645 |
No log | 7.0 | 112 | 0.6053 | 0.4622 | 0.6053 |
No log | 7.125 | 114 | 0.6870 | 0.4274 | 0.6870 |
No log | 7.25 | 116 | 0.7007 | 0.4215 | 0.7007 |
No log | 7.375 | 118 | 0.6387 | 0.4483 | 0.6387 |
No log | 7.5 | 120 | 0.5709 | 0.5018 | 0.5709 |
No log | 7.625 | 122 | 0.5594 | 0.5102 | 0.5594 |
No log | 7.75 | 124 | 0.6003 | 0.4811 | 0.6003 |
No log | 7.875 | 126 | 0.6909 | 0.4242 | 0.6909 |
No log | 8.0 | 128 | 0.7230 | 0.4165 | 0.7230 |
No log | 8.125 | 130 | 0.6705 | 0.4395 | 0.6705 |
No log | 8.25 | 132 | 0.6041 | 0.4682 | 0.6041 |
No log | 8.375 | 134 | 0.5802 | 0.4903 | 0.5802 |
No log | 8.5 | 136 | 0.5853 | 0.4903 | 0.5853 |
No log | 8.625 | 138 | 0.6211 | 0.4634 | 0.6211 |
No log | 8.75 | 140 | 0.6644 | 0.4483 | 0.6644 |
No log | 8.875 | 142 | 0.6824 | 0.4514 | 0.6824 |
No log | 9.0 | 144 | 0.6713 | 0.4472 | 0.6713 |
No log | 9.125 | 146 | 0.6586 | 0.4498 | 0.6586 |
No log | 9.25 | 148 | 0.6406 | 0.4592 | 0.6406 |
No log | 9.375 | 150 | 0.6229 | 0.4711 | 0.6229 |
No log | 9.5 | 152 | 0.6220 | 0.4720 | 0.6220 |
No log | 9.625 | 154 | 0.6319 | 0.4686 | 0.6319 |
No log | 9.75 | 156 | 0.6419 | 0.4623 | 0.6419 |
No log | 9.875 | 158 | 0.6451 | 0.4623 | 0.6451 |
No log | 10.0 | 160 | 0.6478 | 0.4607 | 0.6478 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1