---
base_model:
- beomi/Llama-3-Open-Ko-8B
- aaditya/Llama3-OpenBioLLM-8B
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v2.2-8B
- asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
library_name: transformers
tags:
- mergekit
- merge
---
# U-GO-GIRL-Llama-3-KoEn-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
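
For intuition: DARE takes each fine-tuned model's delta (its difference from the base weights), randomly drops a fraction of it, and rescales the survivors so the expected delta is preserved; TIES then resolves sign conflicts between the surviving deltas before they are summed into the base. The sketch below illustrates that combination for a single weight tensor. It is a minimal, assumed re-implementation for illustration only, not mergekit's actual code, and the function name `dare_ties_merge` is hypothetical.

```python
import torch

def dare_ties_merge(base, finetuned, densities, weights):
    """Illustrative DARE + TIES merge of a single weight tensor.

    base:      weight tensor from the base model
    finetuned: weight tensors from the fine-tuned models
    densities: fraction of each delta kept by DARE (the `density` parameter)
    weights:   mixing weight of each model (the `weight` parameter)
    """
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base
        # DARE: drop each delta entry with probability (1 - density),
        # then rescale by 1 / density so the expected delta is unchanged.
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(w * delta * mask / density)

    stacked = torch.stack(deltas)
    # TIES sign election: keep only components whose sign agrees with
    # the weighted majority, then add the merged delta to the base.
    majority_sign = stacked.sum(dim=0).sign()
    agree = stacked.sign() == majority_sign
    return base + (stacked * agree).sum(dim=0)
```
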
### Models Merged

The following models were included in the merge:
- [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
- [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
- [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
- [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
- [asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B](https://huggingface.co/asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.65
      weight: 0.4
  - model: asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
    parameters:
      density: 0.6
      weight: 0.3
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55
      weight: 0.1
  - model: beomi/Llama-3-Open-Ko-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: MLP-KTLim/llama-3-Korean-Bllossom-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: Locutusque/llama-3-neural-chat-v2.2-8B
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
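
In this configuration, `density` is the fraction of each model's delta parameters that DARE retains, and `weight` scales that model's contribution to the merged delta; the YAML can be replayed with mergekit's `mergekit-yaml` command. Since the card declares `library_name: transformers`, the merged model loads with the standard `transformers` API. A minimal usage sketch follows; the repository ID is an assumption inferred from the model name and should be replaced with the real path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID inferred from the model name; replace with the real one.
model_id = "asiansoul/U-GO-GIRL-Llama-3-KoEn-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# Instruct models are part of the merge, so the Llama-3 chat template applies.
messages = [{"role": "user", "content": "안녕하세요! 간단히 자기소개를 해주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```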