---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_cross_relevance_task2_fold2
    results: []
---

# arabert_cross_relevance_task2_fold2

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.2974
- Qwk: 0.0
- Mse: 0.2974
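A Qwk (quadratic weighted kappa) of 0.0 means the model's predictions agree with the labels no better than chance — for example, a model that always predicts one constant label scores exactly 0. As a minimal, dependency-free sketch of the metric (a hypothetical helper, not part of this repo or of `evaluate`):

```python
def quadratic_weighted_kappa(labels, preds, n_classes):
    """Cohen's kappa with quadratic weights, computed from scratch.

    Disagreements are penalized by the squared distance between classes,
    normalized so kappa is 1 for perfect agreement and 0 at chance level.
    """
    n = len(labels)
    # Observed confusion matrix
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(labels, preds):
        observed[a][b] += 1
    # Marginal histograms for the chance-agreement baseline
    hist_l = [labels.count(k) for k in range(n_classes)]
    hist_p = [preds.count(k) for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            expected = hist_l[i] * hist_p[j] / n
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den

# Perfect agreement scores 1.0; a constant predictor scores 0.0,
# which matches the Qwk column in the training log below.
print(quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], 3))
print(quadratic_weighted_kappa([0, 0, 1, 1], [0, 0, 0, 0], 2))
```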

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
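With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from 2e-05 to 0 over the run (roughly 65 optimizer steps here, inferred from the eval log, where epoch 0.0308 corresponds to step 2). A minimal sketch of that schedule, assuming zero warmup steps:

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear warmup followed by linear decay to zero,
    mirroring the behavior of the HF 'linear' scheduler."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# ~65 total steps inferred from the eval log below
print(linear_lr(0, 65))   # full base_lr at the start
print(linear_lr(65, 65))  # decayed to 0.0 at the end
```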

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.0308 | 2    | 1.8592          | 0.0     | 1.8592 |
| No log        | 0.0615 | 4    | 0.5542          | 0.0521  | 0.5542 |
| No log        | 0.0923 | 6    | 0.2790          | -0.0315 | 0.2790 |
| No log        | 0.1231 | 8    | 0.3268          | 0.0812  | 0.3268 |
| No log        | 0.1538 | 10   | 0.2739          | 0.0068  | 0.2739 |
| No log        | 0.1846 | 12   | 0.2872          | 0.0     | 0.2872 |
| No log        | 0.2154 | 14   | 0.3469          | 0.0     | 0.3469 |
| No log        | 0.2462 | 16   | 0.3134          | 0.0     | 0.3134 |
| No log        | 0.2769 | 18   | 0.2680          | 0.0     | 0.2680 |
| No log        | 0.3077 | 20   | 0.2818          | -0.0034 | 0.2818 |
| No log        | 0.3385 | 22   | 0.3013          | -0.0461 | 0.3013 |
| No log        | 0.3692 | 24   | 0.3036          | -0.0714 | 0.3036 |
| No log        | 0.4    | 26   | 0.3135          | 0.0     | 0.3135 |
| No log        | 0.4308 | 28   | 0.3150          | 0.0     | 0.3150 |
| No log        | 0.4615 | 30   | 0.3118          | 0.0     | 0.3118 |
| No log        | 0.4923 | 32   | 0.3083          | -0.0096 | 0.3083 |
| No log        | 0.5231 | 34   | 0.3294          | 0.0432  | 0.3294 |
| No log        | 0.5538 | 36   | 0.3784          | 0.0432  | 0.3784 |
| No log        | 0.5846 | 38   | 0.4043          | 0.0432  | 0.4043 |
| No log        | 0.6154 | 40   | 0.3829          | 0.0432  | 0.3829 |
| No log        | 0.6462 | 42   | 0.3424          | 0.0     | 0.3424 |
| No log        | 0.6769 | 44   | 0.3208          | 0.0     | 0.3208 |
| No log        | 0.7077 | 46   | 0.3133          | 0.0     | 0.3133 |
| No log        | 0.7385 | 48   | 0.3084          | 0.0     | 0.3084 |
| No log        | 0.7692 | 50   | 0.3050          | 0.0     | 0.3050 |
| No log        | 0.8    | 52   | 0.3061          | 0.0     | 0.3061 |
| No log        | 0.8308 | 54   | 0.3000          | 0.0     | 0.3000 |
| No log        | 0.8615 | 56   | 0.2974          | 0.0     | 0.2974 |
| No log        | 0.8923 | 58   | 0.2961          | 0.0     | 0.2961 |
| No log        | 0.9231 | 60   | 0.2955          | 0.0     | 0.2955 |
| No log        | 0.9538 | 62   | 0.2966          | 0.0     | 0.2966 |
| No log        | 0.9846 | 64   | 0.2974          | 0.0     | 0.2974 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1