---
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
- Qwen/Qwen2.5-32B
- Qwen/QwQ-32B
library_name: transformers
tags:
- mergekit
- merge
---
# QwQ-Qwen2.5-Coder-Instruct-32B-MW

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as the base.

### Models Merged

The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
* [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Qwen/Qwen2.5-32B
  - model: Qwen/QwQ-32B
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
merge_method: sce
base_model: Qwen/Qwen2.5-32B
parameters:
  select_topk: 0.6
layers_weights:
  - pattern: "transformer.h.([0-9]|1[0-5])"            # Regex for layers 0-15
    value: [0.0, 0.7, 0.3]
  - pattern: "transformer.h.(1[6-9]|[2-4][0-9]|5[0-9])" # Regex for layers 16-59
    value: [0.0, 0.9, 0.1]
  - pattern: "transformer.h.(6[0-3])"                   # Regex for layers 60-63
    value: [0.0, 0.8, 0.2]
  - pattern: "lm_head"
    value: [0.0, 0.8, 0.2]
dtype: bfloat16
```
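
Below is a minimal usage sketch for loading the merged checkpoint with `transformers`. The repo id is a placeholder (swap in the actual Hub repo or a local path), and it assumes the merged tokenizer ships a chat template inherited from the instruct/QwQ parents.

```python
# Minimal sketch: load the merged model and run one chat-style generation.
# The repo id below is hypothetical; point it at the actual merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/QwQ-Qwen2.5-Coder-Instruct-32B-MW"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```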