---
base_model:
- irlab-udc/Llama-3.1-8B-Instruct-Galician
license: llama3.1
language:
- gl
pipeline_tag: text-generation
library_name: transformers
---

A 4-bit GPTQ-quantized version of [irlab-udc/Llama-3.1-8B-Instruct-Galician](https://huggingface.co/irlab-udc/Llama-3.1-8B-Instruct-Galician), which substantially reduces the GPU memory needed to run the model compared with the original 16-bit weights.

## How to Use

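Loading a GPTQ checkpoint through `transformers` needs a GPTQ backend in addition to `transformers` itself. Depending on your version, this is typically the `gptqmodel` package (or the older `auto-gptq`) together with `optimum`, e.g. `pip install optimum gptqmodel`; treat the exact package choice as an assumption about your environment rather than a requirement of this model.
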
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "irlab-udc/Llama-3.1-8B-Instruct-Galician-GPTQ-Int4"

# Load the tokenizer and the quantized model; device_map="auto" places the
# weights on the available GPU(s) automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a conversational AI that responds in Galician."},
    {"role": "user", "content": "Cal é a principal vantaxe de Scrum?"},  # "What is the main advantage of Scrum?"
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Generate up to 512 new tokens with sampling enabled.
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
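
`batch_decode` above returns the full conversation, prompt included. If you only want the model's reply, a minimal sketch (reusing the `inputs` and `outputs` variables from the snippet above) is to slice off the prompt tokens before decoding:

```python
# Keep only the tokens generated after the prompt; assumes `inputs` and
# `outputs` from the previous snippet are still in scope.
prompt_length = inputs["input_ids"].shape[-1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)
```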