---
license: mit
base_model: perplexity-ai/r1-1776
tags:
- TensorBlock
- GGUF
---
## perplexity-ai/r1-1776 - GGUF
This repo contains GGUF format model files for [perplexity-ai/r1-1776](https://huggingface.co/perplexity-ai/r1-1776).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4658](https://github.com/ggerganov/llama.cpp/commit/855cd0734aca26c86cc23d94aefd34f934464ac9).
## Prompt template
```
<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|>
```
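For example, a fully assembled prompt looks like the following (an illustrative sketch; the system prompt and user question are placeholders):
```
<|begin▁of▁sentence|>You are a helpful assistant.<|User|>What is the capital of France?<|Assistant|>
```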
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [r1-1776-Q8_0](https://huggingface.co/tensorblock/r1-1776-GGUF/blob/main/r1-1776-Q8_0) | Q8_0 | 713.287 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/r1-1776-GGUF --include "r1-1776-Q8_0" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/r1-1776-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
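Once a file is downloaded, it can be run directly with llama.cpp. The command below is a minimal sketch, assuming a llama.cpp build at or after the commit referenced above and that the downloaded file sits at `MY_LOCAL_DIR/r1-1776-Q8_0.gguf` (adjust the path and filename to match what you actually downloaded):
```shell
# Run a single prompt through the quantized model with llama.cpp's CLI.
./llama-cli -m MY_LOCAL_DIR/r1-1776-Q8_0.gguf \
  -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>What is the capital of France?<|Assistant|>" \
  -n 256
```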