Daemontatox committed (verified)
Commit 253811a · Parent: 9390c65

Update README.md

Files changed (1): README.md (+8 −8)
README.md CHANGED
@@ -6,9 +6,9 @@ tags:
 - unsloth
 - qwen2
 - trl
-- reason
-- Chain-of-Thought
-- deep thinking
+- reason
+- Chain-of-Thought
+- deep thinking
 license: apache-2.0
 language:
 - en
@@ -18,6 +18,8 @@ datasets:
 - Daemontatox/Qwqloncotam
 - Daemontatox/Reasoning_am
 library_name: transformers
+new_version: Daemontatox/PathFinderAI4.0
+pipeline_tag: text-generation
 ---
 ![image](./image.webp)
 # **PathfinderAI 4.0**
@@ -92,7 +94,7 @@ The model supports **4-bit and 8-bit quantization**, making it **deployable on r
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Daemontatox/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview"
+model_name = "Daemontatox/PathFinderAI4.0"
 
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
@@ -107,8 +109,7 @@ Using with Unsloth (Optimized LoRA Inference)
 
 from unsloth import FastAutoModelForCausalLM
 
-model = FastAutoModelForCausalLM.from_pretrained(
-    "Daemontatox/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview",
+model = FastAutoModelForCausalLM.from_pretrained(model_name,
     load_in_4bit=True  # Efficient deployment
 )
@@ -125,5 +126,4 @@ Special thanks to:
 The open-source AI community for continuous innovation.
 
 
----
-
+---
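For reference, the two code hunks in this commit merge into a single loading path once both snippets share `model_name`. A minimal sketch of the post-commit usage, assuming the renamed repo `Daemontatox/PathFinderAI4.0` is reachable on the Hub; the `load_model` wrapper, the `use_unsloth` switch, and the deferred imports are illustrative additions, not part of the README:

```python
model_name = "Daemontatox/PathFinderAI4.0"  # repo id after this commit's rename


def load_model(name: str = model_name, use_unsloth: bool = False):
    """Fetch tokenizer and weights from the Hugging Face Hub (network required).

    Imports are deferred so the module can be inspected without pulling in
    the heavy transformers/unsloth dependencies.
    """
    if use_unsloth:
        # Unsloth path from the README: 4-bit load for cheaper deployment.
        from unsloth import FastAutoModelForCausalLM
        model = FastAutoModelForCausalLM.from_pretrained(name, load_in_4bit=True)
    else:
        # Plain transformers path from the README.
        from transformers import AutoModelForCausalLM
        model = AutoModelForCausalLM.from_pretrained(name)
    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(name)
    return tokenizer, model
```

Because both paths now read the shared `model_name` constant, a future rename (like the one in this commit) only has to touch one line.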