# Yi-Ko-6B-Instruct-v1.0

## Model Details

### Base Model

[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)

### Training Dataset
- kyujinpy/KOR-OpenOrca-Platypus-v3
- beomi/KoAlpaca-v1.1a
- maywell/ko_wikidata_QA
- AIHub MRC data, filtered and converted to the instruction format before use
## Benchmark Results

### AI-Harness Evaluation

https://github.com/Beomi/ko-lm-evaluation-harness
| Model | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
|---|---|---|---|---|---|---|
| **Zero-shot** | | | | | | |
| Yi-Ko-6B-Instruct-v1.0 | 0.6619 | 0.7794 | 0.4858 | 0.4589 | 0.3520 | 0.5545 |
| Yi-Ko-6B | 0.7070 | 0.7696 | 0.5009 | 0.4044 | 0.3828 | 0.5145 |
## Instruction Format

```
### User:
{instruction}

### Assistant:
{response}
```
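A prompt can be assembled from this template as in the minimal sketch below; the `build_prompt` helper is illustrative, and the exact whitespace around the markers is an assumption rather than a documented requirement:

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the template above; the trailing
    # "### Assistant:\n" cues the model to generate the response.
    return f"### User:\n{instruction}\n\n### Assistant:\n"

prompt = build_prompt("대한민국의 수도는 어디인가요?")  # "What is the capital of South Korea?"
```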
## Loading the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "wkshin89/Yi-Ko-6B-Instruct-v1.0",
    device_map="auto",           # place layers across available devices
    torch_dtype=torch.bfloat16,  # load weights in bfloat16
)
```
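Once loaded, the model can be prompted with the instruction format above. The sketch below shows one way to generate a response; the sampling settings are illustrative assumptions, not values recommended by the model authors:

```python
prompt = "### User:\n대한민국의 수도는 어디인가요?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,  # illustrative cap on response length
        do_sample=True,
        temperature=0.7,     # assumed sampling settings
        top_p=0.9,
    )

# Strip the prompt tokens and decode only the generated response.
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```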