---
license: mit
language:
  - en
base_model:
  - microsoft/deberta-v3-large
pipeline_tag: text-classification
---

# FactCG for Large Language Model Ungrounded Hallucination Detection

This is a fact-checking model from our work:

📃 FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data (NAACL 2025, GitHub Repo)

You can load our model with the following example code:

```python
from transformers import AutoTokenizer, AutoConfig, AutoModelForSequenceClassification

# Binary sequence-classification head on top of DeBERTa-v3-Large
config = AutoConfig.from_pretrained("yaxili96/FactCG-DeBERTa-v3-Large", num_labels=2,
                                    finetuning_task="text-classification", revision="main", cache_dir="./cache")
config.problem_type = "single_label_classification"
tokenizer = AutoTokenizer.from_pretrained("yaxili96/FactCG-DeBERTa-v3-Large", use_fast=True,
                                          revision="main", cache_dir="./cache")
model = AutoModelForSequenceClassification.from_pretrained("yaxili96/FactCG-DeBERTa-v3-Large", config=config,
                                                           revision="main", ignore_mismatched_sizes=False,
                                                           cache_dir="./cache")
```

If you find the repository or FactCG helpful, please cite the following paper:

```bibtex
@inproceedings{lei2025factcg,
  title={FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data},
  author={Lei, Deren and Li, Yaxi and Li, Siyao and Hu, Mengya and Xu, Rui and Archer, Ken and Wang, Mingyu and Ching, Emily and Deng, Alex},
  booktitle={NAACL},
  year={2025}
}
```