|
--- |
|
library_name: transformers |
|
license: llama3.1 |
|
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct |
|
tags: |
|
- abliterated |
|
- uncensored |
|
--- |
|
|
|
# 🦙 Meta-Llama-3.1-8B-Instruct-abliterated |
|
|
|
|
|
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about the technique).
|
|
|
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models. |
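At its core, abliteration identifies a "refusal direction" in the model's residual stream and removes the component along that direction. Below is a minimal NumPy sketch of that projection step; the `ablate` helper and the random vectors are illustrative assumptions, not the actual code used to produce this model.

```python
import numpy as np

def ablate(activation: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of `activation` along the (hypothetical) refusal direction."""
    r = refusal_dir / np.linalg.norm(refusal_dir)  # unit vector
    return activation - np.dot(activation, r) * r  # orthogonal projection

# Toy example: a 4096-dim "activation" and a random "refusal direction".
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
r = rng.standard_normal(4096)

x_ablated = ablate(x, r)
# After ablation, x_ablated has (numerically) zero component along r.
```

In the full technique, the same orthogonalization is applied to the model's weight matrices so the refusal direction can never be written into the residual stream.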
|
|
|
## Evaluations |
|
The benchmarks below were re-evaluated, with each score reported as the average across runs for that test.
|
|
|
| Benchmark | Llama-3.1-8B-Instruct | Meta-Llama-3.1-8B-Instruct-abliterated |
|
|-------------|-----------------------|----------------------------------------| |
|
| IF_Eval | **80.0** | 78.98 | |
|
| MMLU Pro | **36.34** | 35.91 | |
|
| TruthfulQA | 52.98 | **55.42** | |
|
| BBH | **48.72** | 47.0 | |
|
| GPQA | 33.55 | **33.93** | |
|
|
|
The script used for evaluation is available in this repository at [eval.sh](https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated/blob/main/eval.sh).
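As a quick-start, here is a minimal loading sketch using the standard `transformers` chat-template API. It assumes a GPU with enough memory for an 8B model in bfloat16; the `generate_reply` helper and the example prompt are illustrative, not part of this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated"

def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a single chat completion (hypothetical helper)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_reply("Hello! Who are you?"))
```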
|
|
|
|