gemma7b-coding-gpt4o-100k

This model is a fine-tuned version of google/gemma-7b on the llama-duo/synth_coding_dataset_dedup dataset. It achieves the following results on the evaluation set:

  • Loss: 3.9658
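
The framework versions listed below include PEFT, so this checkpoint appears to be an adapter on top of google/gemma-7b rather than a full model. The snippet that follows is a minimal loading sketch under that assumption, not an official example from this repository; it assumes peft's AutoPeftModelForCausalLM can resolve and download the gated base weights, and that the adapter repository ships tokenizer files.

```python
# Minimal loading sketch (assumed workflow, not an official example).
# Assumes: peft, transformers, and accelerate installed; a GPU with bfloat16
# support; and access to the gated google/gemma-7b base weights on the Hub.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "llama-duo/gemma7b-coding-gpt4o-100k"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model it
# points to, and attaches the adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes the adapter repo includes tokenizer files; otherwise load the
# tokenizer from google/gemma-7b instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The prompt format used during fine-tuning is not documented in this card;
# a plain instruction is used here purely for illustration.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```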

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
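
For illustration, the values above map onto transformers' TrainingArguments roughly as sketched below. This is a reconstruction from the hyperparameter list only, not the actual training script; in particular, the total batch sizes follow from 4 per-device × 4 GPUs × 2 gradient-accumulation steps = 32 for training and 4 × 4 = 16 for evaluation, and the mixed-precision flag is an assumption.

```python
# Hypothetical reconstruction of the training configuration from the listed
# hyperparameters; the actual training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma7b-coding-gpt4o-100k",
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # x 4 GPUs x 2 accumulation steps = 32 effective
    per_device_eval_batch_size=4,    # x 4 GPUs = 16 effective
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",             # Adam-style optimizer; betas=(0.9, 0.999), eps=1e-8 are the defaults
    bf16=True,                       # assumption; precision is not listed above
)
```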

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.5262        | 0.9989 | 470  | 1.3224          |
| 0.4826        | 2.0    | 941  | 1.3435          |
| 0.4369        | 2.9989 | 1411 | 1.4787          |
| 0.3819        | 4.0    | 1882 | 1.7432          |
| 0.3345        | 4.9989 | 2352 | 2.1234          |
| 0.2875        | 6.0    | 2823 | 2.5846          |
| 0.2319        | 6.9989 | 3293 | 3.1057          |
| 0.1968        | 8.0    | 3764 | 3.6609          |
| 0.1809        | 8.9989 | 4234 | 3.9400          |
| 0.1757        | 9.9894 | 4700 | 3.9658          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.40.1
  • PyTorch 2.2.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1