---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- meta
- Llama3
- pytorch
---
# SandLogic Technology - Quantized Meta-Llama3-8b-Instruct Models
## Model Description
We have quantized the Meta-Llama3-8b-Instruct model into three variants:
- Q5_KM
- Q4_KM
- IQ4_XS
These quantized variants reduce memory and compute requirements while keeping output quality close to the original model.
## Original Model Information
- Name: Meta-Llama3-8b-Instruct
- Developer: Meta
- Release Date: April 18, 2024
- Model Type: Auto-regressive language model
- Architecture: Optimized transformer with Grouped-Query Attention (GQA)
- Parameters: 8 billion
- Context Length: 8k tokens
- Training Data: New mix of publicly available online data (15T+ tokens)
- Knowledge Cutoff: March 2023
## Model Capabilities
Llama 3 is designed for multiple use cases, including:
- Responding to questions in natural language
- Writing code
- Brainstorming ideas
- Content creation
- Summarization
The model understands context and responds in a human-like manner, making it useful for various applications.
## Use Cases
- Chatbots: Enhance customer service automation
- Content Creation: Generate articles, reports, blogs, and stories
- Email Communication: Draft emails and maintain consistent brand tone
- Data Analysis Reports: Summarize findings and create business performance reports
- Code Generation: Produce code snippets, identify bugs, and provide programming recommendations
## Model Variants
We offer three quantized versions of the Meta-Llama3-8b-Instruct model:
- Q5_KM: 5-bit quantization using llama.cpp's K-quant medium (Q5_K_M) method
- Q4_KM: 4-bit quantization using llama.cpp's K-quant medium (Q4_K_M) method
- IQ4_XS: 4-bit quantization using llama.cpp's IQ4_XS method
These quantized models aim to reduce model size and improve inference speed while maintaining performance as close to the original model as possible.
## Usage

Install the `llama-cpp-python` package:

```bash
pip install llama-cpp-python
```

Please refer to the llama-cpp-python documentation to install with GPU support.
### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/llama-model.gguf",
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

output = llm(
    "Q: Name the planets in the solar system? A: ",  # Prompt
    max_tokens=32,       # Generate up to 32 tokens
    stop=["Q:", "\n"],   # Stop generating just before a new question
    echo=False,          # Don't echo the prompt in the output
)

print(output["choices"][0]["text"])
```
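Because this is an instruction-tuned model, the chat completion API is usually a better fit than raw text completion, since it applies the model's chat template automatically. A minimal sketch (the system prompt and question are illustrative, and `model_path` should point at your downloaded GGUF file):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/llama-model.gguf",  # Placeholder path; use your GGUF file
    verbose=False,
)

# create_chat_completion formats the messages with the model's built-in
# chat template, so the Instruct model sees prompts in the layout it expects.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Name the planets in the solar system."},
    ],
    max_tokens=128,
)

print(response["choices"][0]["message"]["content"])
```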
## Download

You can download Llama models in `gguf` format directly from Hugging Face using the `from_pretrained` method. This feature requires the `huggingface-hub` package.

To install it, run:

```bash
pip install huggingface-hub
```
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/Meta-Llama-3-8B-Instruct-GGUF",
    filename="*Meta-Llama-3-8B-Instruct.Q5_K_M.gguf",
    verbose=False,
)
```
By default, `from_pretrained` will download the model to the Hugging Face cache directory. You can manage installed model files using the `huggingface-cli` tool.
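To fetch a different variant, change the `filename` glob to match the corresponding GGUF file; you can also pass `local_dir` to download into a directory of your choice instead of the cache. A sketch, assuming the Q4_KM file follows the same naming pattern as the Q5_KM file above (verify the exact filename on the repository's file listing):

```python
from llama_cpp import Llama

llm_q4 = Llama.from_pretrained(
    repo_id="SandLogicTechnologies/Meta-Llama-3-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # Assumed pattern for the Q4_KM variant
    local_dir="./models",     # Download here instead of the HF cache
    verbose=False,
)
```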
## License
A custom commercial license is available at: https://llama.meta.com/llama3/license
## Acknowledgements
We thank Meta for developing and releasing the original Llama 3 model.
## Contact
For any inquiries or support, please contact us at [email protected] or visit our support page.