# QwQ-Qwen2.5-Coder-Instruct-32B-MW
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the SCE merge method, with Qwen/Qwen2.5-32B as the base model.
### Models Merged

The following models were included in the merge:

- Qwen/QwQ-32B
- Qwen/Qwen2.5-Coder-32B-Instruct
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-32B
  - model: Qwen/QwQ-32B
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
merge_method: sce
base_model: Qwen/Qwen2.5-32B
parameters:
  select_topk: 0.6
layers_weights:
  - pattern: "transformer.h.([0-9]|1[0-5])" # Regex for layers 0-15
    value: [0.0, 0.7, 0.3]
  - pattern: "transformer.h.(1[6-9]|[2-4][0-9]|5[0-9])" # Regex for layers 16-59
    value: [0.0, 0.9, 0.1]
  - pattern: "transformer.h.(6[0-3])" # Regex for layers 60-63
    value: [0.0, 0.8, 0.2]
  - pattern: "lm_head"
    value: [0.0, 0.8, 0.2]
dtype: bfloat16
```