---
language:
  - en
inference: false
fine-tuning: false
tags:
  - llama-cpp
  - Llama-3.1-Nemotron-70B-Instruct-HF
  - gguf
  - Q6_K
  - 70b
  - 6-bit
  - Nemotron
  - nvidia
  - code
  - math
  - chat
  - roleplay
  - text-generation
  - safetensors
  - nlp
datasets:
  - nvidia/HelpSteer2
base_model: meta-llama/Llama-3.1-70B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

# roleplaiapp/Llama-3.1-Nemotron-70B-Instruct-HF-Q6_K-GGUF

**Repo:** `roleplaiapp/Llama-3.1-Nemotron-70B-Instruct-HF-Q6_K-GGUF`
**Original Model:** `Llama-3.1-Nemotron-70B-Instruct-HF`
**Organization:** `nvidia`
**Quantized File:** `llama-3.1-nemotron-70b-instruct-hf-q6_k.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
**Use Imatrix:** `False`
**Split Model:** `True`

## Overview

This is a GGUF Q6_K quantized version of Llama-3.1-Nemotron-70B-Instruct-HF.
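
Below is a minimal usage sketch with `llama-cpp-python`. The shard filename, context size, and generation settings are illustrative assumptions, not values from this card; since this is a split model, point `model_path` at the first shard and keep the remaining shards in the same directory so llama.cpp can pick them up automatically.

```python
# Minimal sketch (pip install llama-cpp-python); filenames and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    # First shard of the split GGUF; llama.cpp loads the remaining shards
    # automatically when they sit in the same directory with the standard naming.
    model_path="./llama-3.1-nemotron-70b-instruct-hf-q6_k-00001-of-00002.gguf",
    n_ctx=8192,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about quantization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

At Q6_K, 70B-parameter weights are roughly 55 to 60 GB on disk, so if they do not fit in VRAM, set `n_gpu_layers` to a smaller value to offload only part of the model.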

## Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to work quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai