---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_relevance_task1_fold3
  results: []
---

# arabert_cross_relevance_task1_fold3

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset (logged as `None`). It achieves the following results on the evaluation set:

- Loss: 0.2991
- Qwk: 0.5026
- Mse: 0.2991
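
Qwk here is Cohen's quadratic weighted kappa, which scores agreement between predicted and reference labels while penalizing large ordinal disagreements more heavily. The card does not show how the training script computes it, so the following is a minimal pure-Python sketch for integer labels, not the exact evaluation code:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa for integer labels in [0, n_classes)."""
    n = len(y_true)
    # Observed confusion matrix.
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1.0
    # Marginal histograms; the expected matrix assumes independent raters.
    hist_t = [sum(row) for row in observed]
    hist_p = [sum(observed[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic disagreement weight, normalized to [0, 1].
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * observed[i][j]
            den += w * hist_t[i] * hist_p[j] / n
    return 1.0 - num / den
```

Perfect agreement yields 1.0, chance-level agreement 0.0, so the reported 0.5026 indicates moderate agreement on the evaluation set.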

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
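
With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from 2e-05 to 0 over the training run (about 57 optimizer steps here, judging by the epoch/step ratios in the results table). A minimal sketch of that schedule, under those assumptions:

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linearly decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

For example, `linear_lr(0, 57)` returns the full 2e-05 and `linear_lr(57, 57)` returns 0.0; steps past the end are clamped to 0 rather than going negative.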

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0351 | 2    | 1.0137          | 0.0171 | 1.0137 |
| No log        | 0.0702 | 4    | 0.6105          | 0.1259 | 0.6105 |
| No log        | 0.1053 | 6    | 0.6101          | 0.4090 | 0.6101 |
| No log        | 0.1404 | 8    | 0.5978          | 0.3841 | 0.5978 |
| No log        | 0.1754 | 10   | 0.4704          | 0.2193 | 0.4704 |
| No log        | 0.2105 | 12   | 0.3969          | 0.1722 | 0.3969 |
| No log        | 0.2456 | 14   | 0.4062          | 0.2387 | 0.4062 |
| No log        | 0.2807 | 16   | 0.3689          | 0.2173 | 0.3689 |
| No log        | 0.3158 | 18   | 0.3622          | 0.2224 | 0.3622 |
| No log        | 0.3509 | 20   | 0.3775          | 0.2202 | 0.3775 |
| No log        | 0.3860 | 22   | 0.3739          | 0.2294 | 0.3739 |
| No log        | 0.4211 | 24   | 0.3817          | 0.2153 | 0.3817 |
| No log        | 0.4561 | 26   | 0.3802          | 0.2316 | 0.3802 |
| No log        | 0.4912 | 28   | 0.3706          | 0.2936 | 0.3706 |
| No log        | 0.5263 | 30   | 0.3544          | 0.2687 | 0.3544 |
| No log        | 0.5614 | 32   | 0.3300          | 0.2840 | 0.3300 |
| No log        | 0.5965 | 34   | 0.3051          | 0.3286 | 0.3051 |
| No log        | 0.6316 | 36   | 0.2877          | 0.3503 | 0.2877 |
| No log        | 0.6667 | 38   | 0.2875          | 0.3784 | 0.2875 |
| No log        | 0.7018 | 40   | 0.2986          | 0.4873 | 0.2986 |
| No log        | 0.7368 | 42   | 0.3064          | 0.5052 | 0.3064 |
| No log        | 0.7719 | 44   | 0.3065          | 0.5343 | 0.3065 |
| No log        | 0.8070 | 46   | 0.2939          | 0.4736 | 0.2939 |
| No log        | 0.8421 | 48   | 0.2887          | 0.4490 | 0.2887 |
| No log        | 0.8772 | 50   | 0.2881          | 0.4490 | 0.2881 |
| No log        | 0.9123 | 52   | 0.2916          | 0.4637 | 0.2916 |
| No log        | 0.9474 | 54   | 0.2967          | 0.4885 | 0.2967 |
| No log        | 0.9825 | 56   | 0.2991          | 0.5026 | 0.2991 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1