---
base_model: FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- reason
- Chain-of-Thought
- deep thinking
license: apache-2.0
language:
- en
datasets:
- bespokelabs/Bespoke-Stratos-17k
- Daemontatox/Deepthinking-COT
- Daemontatox/Qwqloncotam
- Daemontatox/Reasoning_am
library_name: transformers
new_version: Daemontatox/PathFinderAI4.0
pipeline_tag: text-generation
---

# **PathfinderAI 4.0**
## **WARNING: THIS IS A FAILED FINE-TUNE**
## **THIS IS ONLY A TEST FINE-TUNING ATTEMPT**
## **Model Overview**
This model is a fine-tuned version of **FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview**, built on the **Qwen2** architecture. It was optimized with **Unsloth**, roughly halving training compute while maintaining performance across standard NLP benchmarks.
Fine-tuning was performed with **Hugging Face’s TRL (Transformer Reinforcement Learning) library**, targeting **complex reasoning, natural language generation (NLG), and conversational AI** tasks.
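As a quick-start reference, here is a minimal inference sketch using the standard `transformers` text-generation flow; the repository id below is a placeholder and should be replaced with this model's actual Hub id.

```python
# Minimal sketch: plain text completion with transformers.
# "Daemontatox/PathfinderAI" is a placeholder id; substitute this repository's id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/PathfinderAI"  # placeholder, not a confirmed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires the accelerate package
    torch_dtype="auto",
)

prompt = "Explain why the sum of two even numbers is always even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```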
## **Model Details**
- **Developed by:** Daemontatox
- **Base Model:** [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview)
- **License:** Apache-2.0
- **Model Type:** Qwen2-based large-scale transformer
- **Optimization Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Fine-tuning Methodology:** LoRA (Low-Rank Adaptation) & Full Fine-Tuning
- **Quantization Support:** 4-bit and 8-bit for deployment on resource-constrained devices (see the loading sketch after this list)
- **Training Library:** [Hugging Face TRL](https://huggingface.co/docs/trl/)
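To illustrate the 4-bit support noted in the list above, here is a hedged loading sketch using `bitsandbytes` via `BitsAndBytesConfig`; the quantization settings shown are common defaults, not this card's confirmed configuration.

```python
# 4-bit loading sketch (assumes bitsandbytes and accelerate are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Daemontatox/PathfinderAI"  # placeholder; replace with this repository's id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 is a common default, assumed here
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for the quantized layers
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```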
---
## **Training & Fine-Tuning Details**
### **Optimization with Unsloth**
Unsloth significantly accelerates fine-tuning by reducing memory overhead and improving hardware utilization. This model was fine-tuned roughly **twice as fast** as with conventional pipelines, leveraging **Flash Attention 2** and **PagedAttention** for enhanced performance.
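A sketch of the Unsloth loading path described above; the context length, LoRA rank, and target modules are illustrative assumptions, not the card's actual training settings.

```python
# Unsloth fast-loading sketch; hyperparameter values below are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview",  # base model from this card
    max_seq_length=4096,   # assumed fine-tuning context length
    load_in_4bit=True,     # QLoRA-style 4-bit base weights
)

# Attach LoRA adapters; rank and target modules are typical values, not confirmed.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```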
### **Fine-Tuning Method**
The model was fine-tuned using **parameter-efficient techniques** (see the TRL sketch after this list), including:
- **QLoRA (Quantized LoRA)** for reduced memory usage.
- **Full fine-tuning** on select layers to maintain original capabilities while improving specific tasks.
- **RLHF (Reinforcement Learning with Human Feedback)** for improved alignment with human preferences.
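As referenced above, here is a hedged sketch of the supervised fine-tuning step with TRL's `SFTTrainer`; the dataset pick and hyperparameters are illustrative, and the dataset's column layout may require an extra formatting step.

```python
# Supervised fine-tuning sketch with TRL; all hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One of the datasets listed in this card's metadata; depending on its schema,
# a formatting/conversion step may be needed before training (assumption).
dataset = load_dataset("bespokelabs/Bespoke-Stratos-17k", split="train")

trainer = SFTTrainer(
    model="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview",  # base model from this card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="pathfinder-sft",    # hypothetical output directory
        per_device_train_batch_size=2,  # assumed; tune to available VRAM
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,             # a common LoRA/QLoRA starting point
    ),
)
trainer.train()
```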
---
## **Intended Use & Applications**
### **Primary Use Cases**
- **Conversational AI**: Enhances chatbot interactions with **better contextual awareness** and logical coherence (see the chat sketch after this list).
- **Text Generation & Completion**: Ideal for **content creation**, **report writing**, and **creative writing**.
- **Mathematical & Logical Reasoning**: Can assist in **education**, **problem-solving**, and **automated theorem proving**.
- **Research & Development**: Useful for **scientific research**, **data analysis**, and **language modeling experiments**.
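For the conversational use case above, here is a minimal chat sketch assuming the tokenizer ships a chat template (standard for Qwen2-family models); the repository id is again a placeholder.

```python
# Chat-style generation sketch using the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/PathfinderAI"  # placeholder; replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "user", "content": "A train covers 120 km in 90 minutes. What is its average speed in km/h?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=1024)  # reasoning traces can be long
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```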
---