---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- meta
- Llama3
- pytorch
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# SandLogic Technologies - Quantized Meta-Llama-3-8B-Instruct Models

## Model Description

We have quantized the Meta-Llama-3-8B-Instruct model into three variants:

1. Q5_K_M
2. Q4_K_M
3. IQ4_XS

These quantized models reduce memory footprint and improve inference efficiency while preserving performance close to the original model.

Discover our full range of quantized language models in our [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub repository.
To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com).

## Original Model Information

- **Name**: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Developer**: Meta
- **Release Date**: April 18, 2024
- **Model Type**: Auto-regressive language model
- **Architecture**: Optimized transformer with Grouped-Query Attention (GQA)
- **Parameters**: 8 billion
- **Context Length**: 8k tokens
- **Training Data**: New mix of publicly available online data (15T+ tokens)
- **Knowledge Cutoff**: March 2023

## Model Capabilities

Llama 3 is designed for multiple use cases, including:

- Responding to questions in natural language
- Writing code
- Brainstorming ideas
- Content creation
- Summarization

The model understands context and responds in a human-like manner, making it useful for various applications.

## Use Cases

1. **Chatbots**: Enhance customer service automation
2. **Content Creation**: Generate articles, reports, blogs, and stories
3. **Email Communication**: Draft emails and maintain consistent brand tone
4. **Data Analysis Reports**: Summarize findings and create business performance reports
5. **Code Generation**: Produce code snippets, identify bugs, and provide programming recommendations

## Model Variants

We offer three quantized versions of the Meta-Llama-3-8B-Instruct model:

1. **Q5_K_M**: 5-bit quantization using the K_M (K-quant, medium) method
2. **Q4_K_M**: 4-bit quantization using the K_M (K-quant, medium) method
3. **IQ4_XS**: 4-bit quantization using the IQ4_XS (importance-quant, extra small) method

These quantized models aim to reduce model size and improve inference speed while maintaining performance as close to the original model as possible.

## Usage

```bash
pip install llama-cpp-python 
```
Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.

### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct.Q5_K_M.gguf",
    verbose=False,
    # n_gpu_layers=-1, # Uncomment to use GPU acceleration
    # n_ctx=2048, # Uncomment to increase the context window
)

output = llm(
    "Q: Name the planets in the solar system? A: ", # Prompt
    max_tokens=32, # Generate up to 32 tokens
    stop=["Q:", "\n"], # Stop generating just before a new question
    echo=False # Don't echo the prompt in the output
)

print(output["choices"][0]["text"])
```
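
### Chat Completion
Since this is an instruction-tuned model, you may get better results from the chat-style API than from raw text completion. The sketch below uses llama-cpp-python's `create_chat_completion`; the model path is an example and should point to whichever quantized GGUF file you downloaded.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct.Q5_K_M.gguf",  # example path; use your downloaded file
    # chat_format="llama-3",  # usually inferred from the GGUF metadata; set explicitly if needed
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ],
    max_tokens=128,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```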

## Download
You can download `Llama` models in `gguf` format directly from Hugging Face using the `from_pretrained` method. This feature requires the `huggingface-hub` package.

To install it, run: `pip install huggingface-hub`

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/Meta-Llama-3-8B-Instruct-GGUF",
    filename="*Meta-Llama-3-8B-Instruct.Q5_K_M.gguf",
    verbose=False
)
```
By default, `from_pretrained` downloads the model to the Hugging Face cache directory. You can manage downloaded model files with the `huggingface-cli` tool.
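
To fetch one of the other variants, change the `filename` pattern accordingly. The Q4_K_M filename below is an assumption based on the naming pattern of the Q5_K_M file above; check the repository's file list for the exact names.

```python
from llama_cpp import Llama

# Filename assumed to follow the same pattern as the Q5_K_M file; verify it in the repository.
llm_q4 = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/Meta-Llama-3-8B-Instruct-GGUF",
    filename="*Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    verbose=False,
)

output = llm_q4(
    "Q: What does 4-bit quantization trade off? A: ",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```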


## License

A custom commercial license is available at: https://llama.meta.com/llama3/license

## Acknowledgements

We thank Meta for developing and releasing the original Llama 3 model.
Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the entire [llama.cpp](https://github.com/ggerganov/llama.cpp/) development team for their outstanding contributions.

## Contact

For any inquiries or support, please contact us at **[email protected]** or visit our [support page](https://www.sandlogic.com/LingoForge/support).