Update README.md

README.md CHANGED
@@ -18,7 +18,7 @@ This document presents the evaluation results of `Llama-3.1-8B-Instruct-gptq-4bi
 
 ## Evaluation Summary
 
-| **Metric** | **Value** | **Description** | **Llama-3.1-8B-Instruct** |
+| **Metric** | **Value** | **Description** | **[original](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)** |
 |----------------------|-----------|-----------------|-----------|
 | **Accuracy (acc,none)** | `47.1%` | Raw accuracy - percentage of correct answers. | `53.1%` |
 | **Standard Error (acc_stderr,none)** | `1.46%` | Uncertainty in the accuracy estimate. | `1.45%` |
@@ -73,4 +73,26 @@ This document presents the evaluation results of `Llama-3.1-8B-Instruct-gptq-4bi
 
 ---
 
+## **Citation**
+If you use this model in your research or project, please cite it as follows:
+
+**Dr. Wasif Masood** (2024). *4bit Llama-3.1-8B-Instruct*. Version 1.0.
+Available at: [https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit](https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit)
+
+### **BibTeX:**
+```bibtex
+@dataset{rwmasood2024,
+  author      = {Dr. Wasif Masood and Empirisch Tech GmbH},
+  title       = {Llama-3.1-8B 4 bit quantized},
+  year        = {2024},
+  publisher   = {Hugging Face},
+  url         = {https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit},
+  version     = {1.0},
+  license     = {llama3.1},
+  institution = {Empirisch Tech GmbH}
+}
+```
 
 Let us know if you need further analysis or model tuning!
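The table reports the quantized model's accuracy (47.1%) alongside a standard error of 1.46%. As a quick sanity check, here is a minimal sketch, assuming the standard error is the usual binomial estimate sqrt(p·(1−p)/n) (other estimators, e.g. bootstrap, would differ slightly), which recovers the number of evaluation samples implied by those two figures:

```python
import math

def binomial_stderr(p: float, n: int) -> float:
    """Standard error of a proportion p estimated from n samples."""
    return math.sqrt(p * (1.0 - p) / n)

def implied_samples(p: float, se: float) -> int:
    """Invert the binomial standard-error formula to estimate n."""
    return round(p * (1.0 - p) / se ** 2)

# Figures from the evaluation table above (quantized model).
n = implied_samples(0.471, 0.0146)          # roughly 1.2k samples
print(n)                                     # -> 1169
print(round(binomial_stderr(0.471, n), 4))  # -> 0.0146, consistent
```

The similar standard errors for the quantized (1.46%) and original (1.45%) models suggest both were evaluated on the same benchmark split, so the 6-point accuracy gap is well outside one standard error.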
|