# Qwen2.5-14B-MUSR-LoRA-R32
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1 and uses Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4 as a base.
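
To apply the adapter, load the base model and attach the LoRA weights with PEFT. The snippet below is a minimal sketch, assuming the adapter is available on the Hub as `CultriX/Qwen2.5-14B-MUSR-LoRA-R32` and that `transformers`, `peft`, and `accelerate` are installed; adjust the dtype and device mapping to your hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4"
ADAPTER = "CultriX/Qwen2.5-14B-MUSR-LoRA-R32"  # assumed Hub id for this adapter

# Load the base model, then attach the extracted LoRA weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, ADAPTER)

# Quick generation check with the combined (base + adapter) model.
inputs = tokenizer("Briefly explain what a LoRA adapter is.", return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```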
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit/scripts/extract_lora.py --model Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1 --base-model Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4 --out-path Qwen2.5-14B-MUSR-LoRA-R32 --cuda --no-lazy-unpickle --safe-serialization --trust-remote-code --read-to-gpu --copy-tokenizer --allow-crimes
```
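
Because the extracted adapter is a low-rank approximation of the weight difference between the fine-tuned model and the base, it can also be merged back into the base weights to approximately reconstruct the source model. A sketch using PEFT's `merge_and_unload` (the output directory name is illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the extracted adapter.
base = AutoModelForCausalLM.from_pretrained(
    "Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "CultriX/Qwen2.5-14B-MUSR-LoRA-R32")

# Fold the low-rank update into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("Qwen2.5-14B-MUSR-merged")  # illustrative output directory
```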
## Model tree for CultriX/Qwen2.5-14B-MUSR-LoRA-R32

- Base model: Qwen/Qwen2.5-14B
- Finetuned: Qwen/Qwen2.5-14B-Instruct