---
base_model: mistralai/Mistral-Small-Instruct-2409
library_name: peft
license: other
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: mistral-small-adventure-qlora
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

# huggingface-cli login --token $hf_key && wandb login $wandb_key
# python -m axolotl.cli.preprocess ms-adventure.yml
# accelerate launch -m axolotl.cli.train ms-adventure.yml
# python -m axolotl.cli.merge_lora ms-adventure.yml

base_model: mistralai/Mistral-Small-Instruct-2409
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false
sequence_len: 16384 # 99% vram
min_sample_len: 128
bf16: true
fp16:
tf32: false
flash_attention: true
special_tokens:

# Data
dataset_prepared_path: last_run_prepared
datasets:
  - path: ColumbidAI/adventure-ms-16k
    type: completion
warmup_steps: 20
shuffle_merged_datasets: true

save_safetensors: true

# WandB
wandb_project: Mistral-Small-Skein
wandb_entity:

# Iterations
num_epochs: 1

# Output
output_dir: ./adventure-workspace
hub_model_id: ToastyPigeon/mistral-small-adventure-qlora
hub_strategy: "all_checkpoints"
saves_per_epoch: 5

# Sampling
sample_packing: true
pad_to_sequence_len: true

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 1
eval_batch_size: 1
gradient_checkpointing: 'unsloth'
gradient_checkpointing_kwargs:
   use_reentrant: true

#unsloth_cross_entropy_loss: true
#unsloth_lora_mlp: true
#unsloth_lora_qkv: true
#unsloth_lora_o: true

# Evaluation
val_set_size: 100
evals_per_epoch: 5
eval_table_size:
eval_max_new_tokens: 256
eval_sample_packing: false

# LoRA
adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 32
lora_dropout: 0.125
lora_target_linear: 
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:

# Optimizer
optimizer: paged_adamw_8bit # adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0001
cosine_min_lr_ratio: 0.1
weight_decay: 0.01
max_grad_norm: 10.0

# Misc
train_on_inputs: false
group_by_length: false
early_stopping_patience:
local_rank:
logging_steps: 1
xformers_attention:
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3.json # previously blank
fsdp:
fsdp_config:

# Checkpoints
resume_from_checkpoint:


plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

mistral-small-adventure-qlora

This model is a fine-tuned version of mistralai/Mistral-Small-Instruct-2409 on the ColumbidAI/adventure-ms-16k dataset. It achieves the following results on the evaluation set (converted to perplexity in the sketch below):

  • Loss: 1.9117
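
Assuming the reported loss is the usual mean per-token cross-entropy, validation perplexity is just its exponential. A quick check (a sketch, not part of the original card):

```python
import math

eval_loss = 1.9117  # final validation loss reported above
# Assuming mean per-token cross-entropy, perplexity = exp(loss).
perplexity = math.exp(eval_loss)
print(f"validation perplexity ≈ {perplexity:.2f}")  # ≈ 6.76
```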

Model description

This is a rank-64 QLoRA adapter (lora_alpha 32, dropout 0.125) for mistralai/Mistral-Small-Instruct-2409, targeting the attention and MLP projection layers listed in the config above.
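
Because only the PEFT adapter is published here, inference means loading the base model (4-bit, as during training) and attaching the adapter. A minimal loading sketch using the standard transformers and peft APIs; the model ids come from the config above, and the prompt is a made-up example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-Small-Instruct-2409"
adapter_id = "ToastyPigeon/mistral-small-adventure-qlora"

# Load the base model in 4-bit, matching the QLoRA setup (load_in_4bit: true, bf16: true).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_id)

# The adapter was trained completion-style, so plain text continuation works.
prompt = "You step into the torchlit corridor and listen."  # made-up example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To bake the adapter into standalone weights instead, load the base model unquantized and call merge_and_unload() on the PeftModel; the merge_lora command in the config header does roughly the same through axolotl.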

Intended uses & limitations

More information needed

Training and evaluation data

Training used the ColumbidAI/adventure-ms-16k dataset in completion format, sample-packed to a 16,384-token sequence length; 100 samples were held out for evaluation (val_set_size: 100).

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8 (see the arithmetic check after this list)
  • total_eval_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 20
  • num_epochs: 1
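
The effective batch-size figures above follow directly from the per-device settings; a quick arithmetic check, using nothing beyond the numbers already listed:

```python
micro_batch_size = 1             # train batch size per device
gradient_accumulation_steps = 4
num_devices = 2
eval_batch_size = 1              # eval batch size per device

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 8
print(total_eval_batch_size)   # 2
```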

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8182        | 0.0035 | 1    | 2.1284          |
| 1.8279        | 0.2043 | 59   | 1.9991          |
| 1.8002        | 0.4087 | 118  | 1.9488          |
| 1.7188        | 0.6130 | 177  | 1.9185          |
| 1.7306        | 0.8173 | 236  | 1.9117          |

Framework versions

  • PEFT 0.13.0
  • Transformers 4.45.0
  • Pytorch 2.3.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.20.0