# GGUF and "i-matrix" quantized versions of watt-ai/watt-tool-8B

Using [LLaMA C++](https://github.com/ggerganov/llama.cpp) release [b4585](https://github.com/ggerganov/llama.cpp/releases/tag/b4585) for quantization.

Original model: [watt-ai/watt-tool-8B](https://huggingface.co/watt-ai/watt-tool-8B)

From the model creators:

> watt-tool-8B is a fine-tuned language model based on LLaMa-3.1-8B-Instruct, optimized for tool usage and multi-turn dialogue. It achieves state-of-the-art performance on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html)
>
> The model is specifically designed to excel at complex tool usage scenarios that require multi-turn interactions, making it ideal for empowering platforms like [Lupan](https://lupan.watt.chat/), an AI-powered workflow building tool. By leveraging a carefully curated and optimized dataset, watt-tool-8B demonstrates superior capabilities in understanding user requests, selecting appropriate tools, and effectively utilizing them across multiple turns of conversation.
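
The quantized files in this repository can be run with any llama.cpp-based client. As a quick, illustrative example (not an official recipe), the sketch below loads one of the GGUF files with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings; the file name, context size, and prompt are placeholder assumptions, and it does not reproduce the original model's tool-calling prompt format.

```python
# Minimal sketch, assuming llama-cpp-python is installed (`pip install llama-cpp-python`)
# and one of the quantized files from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="watt-tool-8B-Q4_K_M.gguf",  # placeholder: use whichever quant you downloaded
    n_ctx=8192,                             # context length; lower this to save memory
    n_gpu_layers=-1,                        # offload all layers if built with GPU support
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that can call tools."},
        {"role": "user", "content": "Book a table for two at an Italian restaurant tonight."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```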
All quantized versions were generated using an appropriate imatrix created from datasets available at [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration).
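
For readers who want to reproduce a similar setup, the outline below sketches the usual llama.cpp imatrix workflow: compute activation statistics over a calibration text, then pass the resulting file to the quantizer. It is only an approximation of how these files were produced; binary names and flags are taken from recent llama.cpp releases and the file names are placeholders.

```python
# Rough sketch of the llama.cpp imatrix + quantization pipeline.
# Assumes the llama.cpp binaries are on PATH; all file names are placeholders.
import subprocess

FP16_GGUF = "watt-tool-8B-F16.gguf"        # full-precision GGUF conversion of the model
CALIBRATION = "calibration.txt"            # calibration text (e.g. from eaddario/imatrix-calibration)
IMATRIX = "imatrix.dat"

# 1. Collect activation statistics over the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", FP16_GGUF, "-f", CALIBRATION, "-o", IMATRIX],
    check=True,
)

# 2. Quantize, letting the importance matrix guide which weights keep more precision.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX, FP16_GGUF, "watt-tool-8B-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```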
At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled.
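
To make the idea concrete, here is a deliberately simplified NumPy toy (not llama.cpp's actual implementation): squared activations from a calibration set score how much each weight column matters, and those scores can then weight the quantization error so that important columns are reproduced more faithfully.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))           # weights of one linear layer (toy size)
calib = rng.normal(size=(1024, 64))     # activations collected from calibration text

# Importance score per input column: mean squared activation seen during calibration.
importance = (calib ** 2).mean(axis=0)

def quantize_per_column(w, bits=4):
    """Crude symmetric per-column quantization, purely for illustration."""
    levels = 2 ** (bits - 1) - 1
    scale = np.maximum(np.abs(w).max(axis=0) / levels, 1e-12)
    return np.round(w / scale) * scale

Wq = quantize_per_column(W)

# An imatrix-aware quantizer minimizes the *importance-weighted* error rather
# than the plain one, spending its limited precision where activations are large.
plain_error = ((W - Wq) ** 2).mean()
weighted_error = (((W - Wq) ** 2) * importance).mean()
print(f"plain error: {plain_error:.6f}  importance-weighted error: {weighted_error:.6f}")
```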