# arabert_no_augmentation_organization_task1_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8071
- Qwk: 0.7786
- Mse: 0.8071
- Rmse: 0.8984
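Qwk here denotes quadratic weighted kappa, the usual agreement metric for ordinal scoring tasks, and Rmse is simply the square root of Mse (0.8984 ≈ √0.8071). A minimal sketch of how these three metrics are typically computed; the scores below are illustrative placeholders, not outputs of this checkpoint:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold scores and continuous model outputs (illustrative only).
y_true = np.array([3, 1, 4, 2, 3, 5])
y_pred = np.array([2.7, 1.2, 4.4, 2.1, 3.6, 4.8])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # Rmse is just the square root of Mse

# Quadratic weighted kappa compares discrete labels, so round predictions first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```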
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
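As a rough guide, these settings map directly onto Transformers' `TrainingArguments`. This is a minimal sketch with a placeholder `output_dir`, not the exact training script; the AdamW betas/epsilon and linear schedule listed above are already the `Trainer` defaults:

```python
from transformers import TrainingArguments

# Placeholder output_dir; the remaining values mirror the hyperparameters above.
training_args = TrainingArguments(
    output_dir="arabert_task1_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # Trainer default, shown for clarity
    num_train_epochs=10,
)
```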
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1818 | 2 | 4.6091 | -0.0084 | 4.6091 | 2.1469 |
| No log | 0.3636 | 4 | 2.7511 | 0.0000 | 2.7511 | 1.6586 |
| No log | 0.5455 | 6 | 1.9010 | -0.1625 | 1.9010 | 1.3788 |
| No log | 0.7273 | 8 | 1.2735 | 0.0930 | 1.2735 | 1.1285 |
| No log | 0.9091 | 10 | 1.3174 | 0.2584 | 1.3174 | 1.1478 |
| No log | 1.0909 | 12 | 1.2159 | 0.3084 | 1.2159 | 1.1027 |
| No log | 1.2727 | 14 | 1.1049 | 0.3923 | 1.1049 | 1.0511 |
| No log | 1.4545 | 16 | 0.9768 | 0.4541 | 0.9768 | 0.9883 |
| No log | 1.6364 | 18 | 1.2066 | 0.4134 | 1.2066 | 1.0985 |
| No log | 1.8182 | 20 | 1.4322 | 0.4085 | 1.4322 | 1.1967 |
| No log | 2.0 | 22 | 1.4533 | 0.3831 | 1.4533 | 1.2055 |
| No log | 2.1818 | 24 | 1.2317 | 0.5312 | 1.2317 | 1.1098 |
| No log | 2.3636 | 26 | 0.9282 | 0.4561 | 0.9282 | 0.9634 |
| No log | 2.5455 | 28 | 1.0926 | 0.5222 | 1.0926 | 1.0453 |
| No log | 2.7273 | 30 | 1.0693 | 0.5222 | 1.0693 | 1.0341 |
| No log | 2.9091 | 32 | 0.9590 | 0.5088 | 0.9590 | 0.9793 |
| No log | 3.0909 | 34 | 1.1455 | 0.5288 | 1.1455 | 1.0703 |
| No log | 3.2727 | 36 | 1.3818 | 0.5073 | 1.3818 | 1.1755 |
| No log | 3.4545 | 38 | 1.3091 | 0.5073 | 1.3091 | 1.1442 |
| No log | 3.6364 | 40 | 0.9953 | 0.5288 | 0.9953 | 0.9976 |
| No log | 3.8182 | 42 | 0.7863 | 0.4938 | 0.7863 | 0.8867 |
| No log | 4.0 | 44 | 0.8394 | 0.5254 | 0.8394 | 0.9162 |
| No log | 4.1818 | 46 | 0.7831 | 0.5399 | 0.7831 | 0.8849 |
| No log | 4.3636 | 48 | 0.7361 | 0.6087 | 0.7361 | 0.8580 |
| No log | 4.5455 | 50 | 1.0454 | 0.7268 | 1.0454 | 1.0224 |
| No log | 4.7273 | 52 | 1.2795 | 0.5743 | 1.2795 | 1.1312 |
| No log | 4.9091 | 54 | 1.2229 | 0.5896 | 1.2229 | 1.1058 |
| No log | 5.0909 | 56 | 1.0233 | 0.7526 | 1.0233 | 1.0116 |
| No log | 5.2727 | 58 | 0.8234 | 0.6087 | 0.8234 | 0.9074 |
| No log | 5.4545 | 60 | 0.7794 | 0.6087 | 0.7794 | 0.8828 |
| No log | 5.6364 | 62 | 0.8013 | 0.6087 | 0.8013 | 0.8952 |
| No log | 5.8182 | 64 | 0.8913 | 0.6182 | 0.8913 | 0.9441 |
| No log | 6.0 | 66 | 0.9996 | 0.6536 | 0.9996 | 0.9998 |
| No log | 6.1818 | 68 | 1.1340 | 0.6585 | 1.1340 | 1.0649 |
| No log | 6.3636 | 70 | 1.1480 | 0.6585 | 1.1480 | 1.0714 |
| No log | 6.5455 | 72 | 1.0158 | 0.7447 | 1.0158 | 1.0079 |
| No log | 6.7273 | 74 | 0.8611 | 0.6026 | 0.8611 | 0.9280 |
| No log | 6.9091 | 76 | 0.7947 | 0.6087 | 0.7947 | 0.8915 |
| No log | 7.0909 | 78 | 0.7768 | 0.6340 | 0.7768 | 0.8814 |
| No log | 7.2727 | 80 | 0.7826 | 0.5997 | 0.7826 | 0.8846 |
| No log | 7.4545 | 82 | 0.8277 | 0.7109 | 0.8277 | 0.9098 |
| No log | 7.6364 | 84 | 0.8633 | 0.7355 | 0.8633 | 0.9291 |
| No log | 7.8182 | 86 | 0.8868 | 0.7704 | 0.8868 | 0.9417 |
| No log | 8.0 | 88 | 0.9068 | 0.7704 | 0.9068 | 0.9523 |
| No log | 8.1818 | 90 | 0.9279 | 0.7704 | 0.9279 | 0.9633 |
| No log | 8.3636 | 92 | 0.8988 | 0.7955 | 0.8988 | 0.9481 |
| No log | 8.5455 | 94 | 0.8985 | 0.7955 | 0.8985 | 0.9479 |
| No log | 8.7273 | 96 | 0.8973 | 0.7955 | 0.8973 | 0.9473 |
| No log | 8.9091 | 98 | 0.8757 | 0.7955 | 0.8757 | 0.9358 |
| No log | 9.0909 | 100 | 0.8457 | 0.7955 | 0.8457 | 0.9196 |
| No log | 9.2727 | 102 | 0.8353 | 0.7786 | 0.8353 | 0.9140 |
| No log | 9.4545 | 104 | 0.8206 | 0.7786 | 0.8206 | 0.9059 |
| No log | 9.6364 | 106 | 0.8107 | 0.7786 | 0.8107 | 0.9004 |
| No log | 9.8182 | 108 | 0.8081 | 0.7786 | 0.8081 | 0.8990 |
| No log | 10.0 | 110 | 0.8071 | 0.7786 | 0.8071 | 0.8984 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
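A minimal inference sketch, assuming the checkpoint exposes a single-logit regression head through `AutoModelForSequenceClassification` (consistent with the MSE/RMSE evaluation above); the input sentence is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/arabert_no_augmentation_organization_task1_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("نص عربي تجريبي", return_tensors="pt")  # placeholder Arabic text
with torch.no_grad():
    logits = model(**inputs).logits
# Assumes a single regression logit; adapt if the head has multiple labels.
print(logits.squeeze().item())
```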