
Qwen2.5-Gutenberg-Doppel-32B

Qwen/Qwen2.5-32B-Instruct fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
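
Since the repository name ends in exl2_5.0bpw, this copy appears to be an ExLlamaV2 (exl2) quantization at 5.0 bits per weight, so inference goes through an exl2-capable backend rather than plain transformers. Below is a minimal sketch using the exllamav2 Python package; the local path, sampling settings, and prompt are placeholders, and the exact API varies slightly between exllamav2 versions.

```python
# Minimal ExLlamaV2 inference sketch; paths and sampling settings are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./Qwen2.5-Gutenberg-Doppel-32B-exl2_5.0bpw"  # local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "Write the opening paragraph of a gothic short story."
print(generator.generate_simple(prompt, settings, num_tokens=200))
```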

Method

Tuned with ORPO on 2x A100 GPUs for 1.25 epochs.
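
For reference, a comparable ORPO run can be set up with TRL's ORPOTrainer. The sketch below is an assumption-laden illustration rather than the author's exact recipe: only the base model, the two datasets, and the 1.25 epochs come from this card, while hyperparameters, batch sizes, and column handling are placeholders.

```python
# Illustrative ORPO setup with TRL; hyperparameters are assumptions, not the
# exact configuration used to train this model.
import torch
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Both DPO-style datasets provide prompt/chosen/rejected pairs, which is the
# format ORPOTrainer expects; keep only those columns before concatenating.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="qwen2.5-gutenberg-doppel-32b",
    num_train_epochs=1.25,           # the card states 1.25 epochs
    per_device_train_batch_size=1,   # placeholder; the card only mentions 2x A100
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=5e-6,              # placeholder
    beta=0.1,                        # ORPO preference weight; assumed default
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    processing_class=tokenizer,      # older TRL releases take `tokenizer=` instead
)
trainer.train()
```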


Model tree for async0x42/Qwen2.5-Gutenberg-Doppel-32B-exl2_5.0bpw

Base model: Qwen/Qwen2.5-32B
This repository: a quantized (exl2, 5.0 bpw) derivative in that lineage.

Datasets used to train async0x42/Qwen2.5-Gutenberg-Doppel-32B-exl2_5.0bpw

jondurbin/gutenberg-dpo-v0.1
nbeerbower/gutenberg2-dpo