Collection: Small Model Learnability Gap: Models
This model is a fine-tuned version of meta-llama/Llama-3.3-70B-Instruct on the MATH_training_Qwen2.5-32B-Instruct dataset. It achieves the following results on the evaluation set:

- Loss: 0.1356

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.117         | 1.25  | 200  | 0.1356          |
Base model: meta-llama/Llama-3.1-70B