---
license: llama3.1
datasets:
- allenai/c4
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Language Model Evaluation Results

## Overview
This document presents the evaluation results of `Llama-3.1-8B-Instruct-gptq-4bit` using the **Language Model Evaluation Harness** on the **ARC-Challenge** benchmark.
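
For reproducibility, here is a minimal sketch of how such a run can be launched through the harness's Python API. It assumes `lm-eval` (v0.4+) and a GPTQ-capable `transformers` stack are installed; the repository id is the one cited in this card.

```python
# Minimal reproduction sketch (assumes lm-eval >= 0.4 and a GPTQ-capable
# transformers stack). Settings mirror those reported in this card.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",  # Hugging Face backend ("Source: hf" below)
    model_args=(
        "pretrained=empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit,"
        "dtype=float16"
    ),
    tasks=["arc_challenge"],  # ARC-Challenge benchmark
    num_fewshot=0,            # zero-shot, as in this evaluation
    batch_size=1,
)

# acc, acc_norm, and their standard errors
print(results["results"]["arc_challenge"])
```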

---

## 📊 Evaluation Summary

| **Metric**            | **Value**  | **Description**  | **[Original model](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)**  |
|----------------------|-----------|-----------------|-----------|
| **Accuracy (`acc,none`)** | `47.1%`  | Raw accuracy: percentage of correct answers. | `53.1%` |
| **Standard Error (`acc_stderr,none`)** | `1.46%` | Uncertainty in the raw accuracy estimate. | `1.45%` |
| **Normalized Accuracy (`acc_norm,none`)** | `49.9%`  | Accuracy after dataset-specific normalization. | `56.8%` |
| **Standard Error (`acc_norm_stderr,none`)** | `1.46%` | Uncertainty in the normalized accuracy estimate. | `1.45%` |

📌 **Interpretation:**
- The model answered **47.1% of the questions** correctly.
- After **normalization**, accuracy improves slightly to **49.9%**.
- The **standard error (~1.46%)** corresponds to a 95% confidence interval of roughly **±2.9 percentage points**; the sketch below makes this explicit.
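
As a quick sanity check, the reported accuracy and standard error can be turned into an approximate 95% confidence interval with plain normal-approximation arithmetic (the values below are taken from the table above):

```python
# Approximate 95% confidence intervals from the reported accuracy and stderr
# (normal approximation: value +/- 1.96 * stderr).
acc, acc_stderr = 0.471, 0.0146
acc_norm, acc_norm_stderr = 0.499, 0.0146

for name, value, stderr in [
    ("acc", acc, acc_stderr),
    ("acc_norm", acc_norm, acc_norm_stderr),
]:
    lo, hi = value - 1.96 * stderr, value + 1.96 * stderr
    print(f"{name}: {value:.1%} (95% CI: {lo:.1%} to {hi:.1%})")

# acc: 47.1% (95% CI: 44.2% to 50.0%)
# acc_norm: 49.9% (95% CI: 47.0% to 52.8%)
```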

---

## ⚙️ Model Configuration

- **Model:** `Llama-3.1-8B-Instruct-gptq-4bit`
- **Parameters:** `8 billion` (4-bit GPTQ quantization; the harness reports `~1.05B` because packed 4-bit weights are stored as int32 tensors, undercounting by roughly 8x)
- **Source:** Hugging Face (`hf`)
- **Precision:** `torch.float16`
- **Hardware:** `NVIDIA A100 80GB PCIe`
- **CUDA Version:** `12.4`
- **PyTorch Version:** `2.6.0+cu124`
- **Batch Size:** `1`
- **Evaluation Time:** `365.89 seconds (~6 minutes)`

📌 **Interpretation:**
- The evaluation ran on a **high-performance GPU (A100 80GB)**.
- The model is **4-bit quantized**, which reduces memory usage but can cost some accuracy (see the comparison column above).
- A **batch size of 1** was used, which limits throughput and helps explain the ~6-minute runtime; a loading sketch for this checkpoint follows below.
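
To make the configuration concrete, here is a loading sketch for this checkpoint with `transformers` (it assumes a GPTQ backend such as `gptqmodel` or `auto-gptq` is installed; the quantization settings are read from the repository's own config):

```python
# Loading sketch for the 4-bit GPTQ checkpoint (assumes a GPTQ backend such as
# gptqmodel/auto-gptq is installed; quantization config is read from the repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the precision used in this evaluation
    device_map="auto",          # place weights on the available GPU
)

# Packed 4-bit weights make numel()-based counts appear ~8x smaller, which is
# why the harness reports ~1.05B elements for this 8B-parameter model.
print(sum(p.numel() for p in model.parameters()))
```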

---

## 📂 Dataset Information

- **Dataset:** `AI2 ARC-Challenge`
- **Task Type:** `Multiple Choice`
- **Number of Samples Evaluated:** `1,172`
- **Few-shot Examples Used:** `0` (Zero-shot setting)

📌 **Interpretation:**
- This benchmark assesses **grade-school-level scientific reasoning**.
- Since **no few-shot examples** were provided, the model was evaluated in a **pure zero-shot setting**; a sketch for loading this split follows below.
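
For completeness, here is a sketch of pulling the same split with the `datasets` library (the `allenai/ai2_arc` repository hosts the benchmark; the config and split names below are its standard ones):

```python
# Loading the ARC-Challenge test split (1,172 multiple-choice questions).
from datasets import load_dataset

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")
print(len(arc))             # 1172
example = arc[0]
print(example["question"])  # question stem
print(example["choices"])   # {"text": [...], "label": [...]}
print(example["answerKey"])
```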

---

## 📈 Performance Insights

- The `"higher_is_better"` flag confirms that **higher accuracy is preferred**.
- The model's **raw accuracy (47.1%)** is moderate compared to state-of-the-art models (**60–80%** on ARC-Challenge).
- **Quantization Impact:** The **4-bit quantized model** might perform slightly worse than a full-precision version.
- **Zero-shot Limitation:** Performance could improve with **few-shot prompting** (providing examples before testing).
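
As an illustration of that last point, here is a sketch of building a few-shot prompt by prepending solved training examples to a question. This is an illustrative format, not the harness's exact template; in practice the harness's `num_fewshot` argument handles this automatically.

```python
# Sketch: constructing a few-shot ARC prompt from solved training examples
# (illustrative format only, not the harness's exact prompt template).
from datasets import load_dataset

train = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="train")

def format_example(ex, with_answer=True):
    # Render one question with its lettered answer options.
    options = "\n".join(
        f"{label}. {text}"
        for label, text in zip(ex["choices"]["label"], ex["choices"]["text"])
    )
    answer = f"\nAnswer: {ex['answerKey']}" if with_answer else "\nAnswer:"
    return f"Question: {ex['question']}\n{options}{answer}"

few_shot = "\n\n".join(format_example(train[i]) for i in range(5))  # 5-shot
query = format_example(train[5], with_answer=False)  # stand-in test question
prompt = few_shot + "\n\n" + query
print(prompt)
```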

---

📌 Let us know if you need further analysis or model tuning! 🚀

## **Citation**
If you use this model in your research or project, please cite it as follows:

📌 **Dr. Wasif Masood** (2024). *4-bit Llama-3.1-8B-Instruct*. Version 1.0.  
Available at: [https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit](https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit)

### **BibTeX:**
```bibtex
@misc{rwmasood2024,
  author      = {Dr. Wasif Masood and Empirisch Tech GmbH},
  title       = {Llama-3.1-8B 4-bit quantized},
  year        = {2024},
  publisher   = {Hugging Face},
  url         = {https://huggingface.co/empirischtech/Meta-Llama-3.1-8B-Instruct-gptq-4bit},
  version     = {1.0},
  license     = {llama3.1},
  institution = {Empirisch Tech GmbH}
}
```