Arcanum-12b
Arcanum-12b is a merged large language model created by combining TheDrummer/Rocinante-12B-v1.1 and MarinaraSpaghetti/NemoMix-Unleashed-12B using the TIES merging method.
Model Details
- Developed by: Xclbr7
- Model type: Causal Language Model
- Language(s): English (primarily), may support other languages
- License: MIT
- Repository: https://huggingface.co/Xclbr7/Arcanum-12b
Model Architecture
- Base model: MarinaraSpaghetti/NemoMix-Unleashed-12B
- Parameter count: ~12 billion
- Architecture specifics: Transformer-based language model (see the loading sketch below)
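For reference, a minimal loading sketch using the Hugging Face transformers library. It assumes the standard causal-LM interface and float16 weights; the prompt is purely illustrative.

```python
# Minimal loading sketch (assumes the standard transformers causal-LM interface).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xclbr7/Arcanum-12b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge was produced in float16 (see Training & Merging)
    device_map="auto",          # requires the accelerate package
)

prompt = "Describe a hidden library at the edge of the world."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```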
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 20.48 |
| IFEval (0-Shot)      | 29.07 |
| BBH (3-Shot)         | 31.88 |
| MATH Lvl 5 (4-Shot)  | 10.27 |
| GPQA (0-Shot)        | 9.40  |
| MuSR (0-Shot)        | 13.53 |
| MMLU-PRO (5-Shot)    | 28.74 |
Training & Merging
Arcanum-12b was created by merging two existing 12B models (a reconstructed merge configuration is sketched after the parameter list below):
TheDrummer/Rocinante-12B-v1.1
- Density parameters: [1, 0.8, 0.6]
- Weight: 0.7
MarinaraSpaghetti/NemoMix-Unleashed-12B
- Density parameters: [0.5, 0.7, 0.9]
- Weight: 0.8
Merging method: TIES

Additional parameters:
- Normalization: True
- Int8 mask: True
- Data type: float16
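The published card does not include the exact merge configuration, so the sketch below is a hypothetical reconstruction: it assumes the merge was run with mergekit's TIES implementation, fills in the parameters listed above, and uses placeholder file and output paths.

```python
# Hypothetical reconstruction of the merge setup (mergekit TIES format).
# Densities, weights, base model, normalization, int8 mask, and dtype come from
# the parameters listed above; the config filename and output directory are
# arbitrary placeholders.
import subprocess
from pathlib import Path

config = """\
models:
  - model: TheDrummer/Rocinante-12B-v1.1
    parameters:
      density: [1.0, 0.8, 0.6]
      weight: 0.7
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
    parameters:
      density: [0.5, 0.7, 0.9]
      weight: 0.8
merge_method: ties
base_model: MarinaraSpaghetti/NemoMix-Unleashed-12B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
"""

Path("arcanum-ties.yml").write_text(config)
# mergekit-yaml <config> <output_dir> is mergekit's standard command-line entry point.
subprocess.run(["mergekit-yaml", "arcanum-ties.yml", "./Arcanum-12b"], check=True)
```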
Intended Use
Arcanum-12b is intended for conversational use with different personas.
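A minimal persona-chat sketch is shown below. It assumes the tokenizer ships a chat template inherited from the parent models and that the template accepts a system role; the persona and messages are illustrative only.

```python
# Persona-style chat sketch. If the tokenizer has no chat template, format the
# prompt manually instead of calling apply_chat_template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xclbr7/Arcanum-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Mira, a wry archivist who speaks in short, vivid sentences."},
    {"role": "user", "content": "Mira, what is the strangest book in your collection?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```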
Ethical Considerations
As a merged model based on existing language models, Arcanum-12b may inherit biases and limitations from its parent models. Users should be aware of potential biases in generated content and use the model responsibly.
Acknowledgments
We acknowledge the contributions of the original model creators:
- TheDrummer for Rocinante-12B-v1.1
- MarinaraSpaghetti for NemoMix-Unleashed-12B
Their work formed the foundation for Arcanum-12b.