---
license: apache-2.0
datasets:
- irlab-udc/alpaca_data_galician
language:
- gl
- en
---
# Galician Fine-Tuned LLM Model
This repository contains a large language model (LLM) fine-tuned with the LLaMA Factory library on the Finisterrae III supercomputer at CESGA. The base model used for fine-tuning was Meta's LLaMA 3.
## Model Description
This model has been fine-tuned specifically to understand and generate text in Galician. It was trained on a modified version of the irlab-udc/alpaca_data_galician dataset, enriched with synthetic data to enhance its text generation and comprehension capabilities in specific contexts.
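The base dataset is publicly available on the Hugging Face Hub and can be inspected with the `datasets` library. A minimal sketch (the split name and the Alpaca-style fields `instruction` and `output` are assumptions about the dataset schema):

```python
from datasets import load_dataset

# Load the public base dataset (without the synthetic enrichment described above).
dataset = load_dataset("irlab-udc/alpaca_data_galician", split="train")

# Inspect one example; the Alpaca-style field names are assumed, not confirmed.
example = dataset[0]
print(example["instruction"])
print(example["output"])
```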
## Technical Details
- Base Model: Meta's LLaMA 3
- Fine-Tuning Platform: LLaMA Factory
- Infrastructure: Finisterrae III, CESGA
- Dataset: irlab-udc/alpaca_data_galician (with modifications)
- Fine-Tuning Objective: To improve text comprehension and generation in Galician.
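For context, a LoRA fine-tuning run of this kind can be launched through LLaMA Factory's `run_exp` API. The sketch below is illustrative only: the hyperparameters and the dataset registration name are assumptions, not the exact configuration used to train this model.

```python
from llmtuner import run_exp

# Illustrative SFT + LoRA (QLoRA) configuration. Hyperparameters and the
# dataset name are assumptions, not the values used to train this model.
run_exp(dict(
    stage="sft",                          # supervised fine-tuning
    do_train=True,
    model_name_or_path="unsloth/llama-3-8b-Instruct-bnb-4bit",
    dataset="alpaca_data_galician",       # must be registered in LLaMA Factory's dataset_info.json
    template="llama3",
    finetuning_type="lora",
    lora_target="all",
    quantization_bit=4,                   # train LoRA adapters on the 4-bit base model
    use_unsloth=True,
    output_dir="model",                   # the adapters loaded in the example below
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=5e-5,
    num_train_epochs=3.0,
    fp16=True,
))
```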
## How to Use the Model
To use this model, follow the example code provided below. Ensure you have the necessary libraries installed (e.g., Hugging Face's `transformers`).
### Installation
```bash
pip install transformers
pip install bitsandbytes
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install llmtuner
```
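Since both the 4-bit quantized base model and Unsloth require a CUDA-capable GPU, it can help to verify the environment before loading the model:

```python
import torch

# The 4-bit quantized base model and Unsloth both require a CUDA GPU.
assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit inference with Unsloth."
print("Using GPU:", torch.cuda.get_device_name(0))
```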
### Test the model
```python
from llmtuner import ChatModel
from llmtuner.extras.misc import torch_gc

chat_model = ChatModel(dict(
    model_name_or_path="unsloth/llama-3-8b-Instruct-bnb-4bit",  # 4-bit-quantized Llama-3-8B-Instruct base model
    adapter_name_or_path="model",  # load the saved LoRA adapters
    finetuning_type="lora",        # same as the one used in training
    template="llama3",             # same as the one used in training
    quantization_bit=4,            # load the 4-bit quantized model
    use_unsloth=True,              # use UnslothAI's LoRA optimization for 2x faster generation
))

messages = []
while True:
    query = input("\nUser: ")
    if query.strip() == "exit":
        break
    if query.strip() == "clear":
        messages = []
        torch_gc()
        print("History has been removed.")
        continue

    messages.append({"role": "user", "content": query})  # add the query to the history
    print("Assistant: ", end="", flush=True)

    response = ""
    for new_text in chat_model.stream_chat(messages):  # stream the generation
        print(new_text, end="", flush=True)
        response += new_text
    print()
    messages.append({"role": "assistant", "content": response})  # add the response to the history

torch_gc()
```
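For non-interactive, single-turn use, the same `ChatModel` instance can be queried directly. A minimal sketch, assuming this llmtuner version's `chat` method, which returns a list of response objects exposing `response_text`:

```python
# Single-turn generation without the interactive loop (assumes the
# ChatModel.chat API of this llmtuner version).
messages = [{"role": "user", "content": "Cal é a capital de Galicia?"}]
responses = chat_model.chat(messages)
print(responses[0].response_text)
```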