Update README
README.md
CHANGED
@@ -11,6 +11,15 @@ tags:
 - bfcl
 ---
 
+# GGUF and Quantized versions of watt-ai/watt-tool-8B
+
+This is a fork of [watt-ai/watt-tool-8B](https://huggingface.co/watt-ai/watt-tool-8B) where the safetensors have been converted to GGUF and quantized to BF16, Q8_0 and Q4_K.
+
+This model is one of the top performers on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html).
+
+From the original repo:
+
+
 # watt-tool-8B
 
 watt-tool-8B is a fine-tuned language model based on LLaMa-3.1-8B-Instruct, optimized for tool usage and multi-turn dialogue. It achieves state-of-the-art performance on the Berkeley Function-Calling Leaderboard (BFCL).
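
For anyone who wants to try the quantized files described in the new README text, the sketch below loads the Q4_K GGUF with llama-cpp-python and runs a short chat. The filename, context size, and chat format are assumptions, not something stated in this commit; the conversion itself is typically done with llama.cpp's convert_hf_to_gguf.py followed by llama-quantize, though the exact commands used here aren't recorded.

```python
# Minimal sketch (not from the repo): load the Q4_K quant with llama-cpp-python
# and run a short chat. The GGUF filename and chat format below are assumptions;
# check the repository's file listing for the actual names.
from llama_cpp import Llama

llm = Llama(
    model_path="watt-tool-8B.Q4_K.gguf",  # hypothetical filename for the Q4_K file
    n_ctx=8192,                           # context window; lower this on small machines
    chat_format="llama-3",                # assumed, since the base model is LLaMa-3.1-8B-Instruct
)

messages = [
    {"role": "system", "content": "You are a helpful assistant that can call tools."},
    {"role": "user", "content": "What's the weather in Berkeley right now?"},
]

response = llm.create_chat_completion(messages=messages, max_tokens=256)
print(response["choices"][0]["message"]["content"])
```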