Commit 59b10eb by drkameleon, ybelkada

Duplicate from tiiuae/Falcon3-Mamba-7B-Instruct-GGUF

Co-authored-by: Younes Belkada <[email protected]>
.gitattributes ADDED
@@ -0,0 +1,44 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-f16.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Falcon3-Mamba-7B-Instruct-f16.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb399fd1c821213870df23aefbf4faa922c132ff8910454cef3795a90b625eba
size 14572304096
Falcon3-Mamba-7B-Instruct-q2_k.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94c35bd1f8393bb53f9511658c30f9f410e36dd69eb2b910b35ccb4b382ec637
size 2565011168
Falcon3-Mamba-7B-Instruct-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4b895ac9f5f5212b9c3e28d721542b1d53becc698a6df273f678580f7bd2d8b
size 3275339488
Falcon3-Mamba-7B-Instruct-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8ed05ee419b8abc74ce095183174bba858b4f5da3d53bc327e09cfc59859520
size 4204230368
Falcon3-Mamba-7B-Instruct-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6abfc651ba7e565aa3b3c74759546e268a98918dfc2fb91e3d9dccf1130dff45
size 4204230368
Falcon3-Mamba-7B-Instruct-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72c9022a3da11c2f11cfc8e0fc911557c1357c4d53a2fed8e1d736efb9aba84c
size 5078480608
Falcon3-Mamba-7B-Instruct-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62149c2d4acc134634c9686af75282a043bf00246cf8a31f41b70cf500b4b3a0
size 5078480608
Falcon3-Mamba-7B-Instruct-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:109e89bfecac554a096a19cb18b2bd2cc5f36fadf7581eda86bc3ffe49936e5e
size 6007371488
Falcon3-Mamba-7B-Instruct-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6c786933081d39b9711eebe8e1c745a807084002bfcef61fe6aaba5acac88ad
size 7765735136
README.md ADDED
@@ -0,0 +1,92 @@
---
language:
- en
tags:
- falcon3
- falcon_mamba
base_model: tiiuae/Falcon3-Mamba-7B-Instruct
---

<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/falcon-mamba-logo.png" alt="drawing" width="500"/>
</div>

# Falcon3-Mamba-7B-Instruct

Tired of needing massive GPUs just to experiment with the latest Large Language Models? Wish you could run powerful LLMs locally on your laptop or even your phone? This GGUF model makes it possible!

Falcon3-Mamba-7B-Instruct is designed for efficient inference on consumer-grade hardware. It leverages the GGUF format for optimal performance, allowing you to experience the power of LLMs without expensive hardware.

Whether you're a student, hobbyist, or developer, this model opens up a world of possibilities for exploring natural language processing, text generation, and AI-powered applications right at your fingertips.

## Getting started

### 1. Download GGUF models from Hugging Face

First, download the model from Hugging Face. You can use the `huggingface_hub` library or download it manually:

```bash
pip install huggingface_hub
huggingface-cli download {model_name}
```

This will download the model to your current directory. Make sure to replace `{model_name}` with the actual username and model name of the Hugging Face repository.
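This repository ships several quantizations with very different footprints, from roughly 2.6 GB for q2_k up to 14.6 GB for f16. As a rough guide to choosing which file to download, here is a minimal sketch that picks the largest (generally highest-quality) quant fitting a memory budget; the `pick_quant` helper and the budget heuristic are illustrative, not part of the model card, while the byte sizes are taken from this repository's file list:

```python
# Quantization -> file size in bytes, taken from this repository's file list.
QUANT_SIZES = {
    "q2_k": 2_565_011_168,
    "q3_k_m": 3_275_339_488,
    "q4_0": 4_204_230_368,
    "q4_k_m": 4_204_230_368,
    "q5_0": 5_078_480_608,
    "q5_k_m": 5_078_480_608,
    "q6_k": 6_007_371_488,
    "q8_0": 7_765_735_136,
    "f16": 14_572_304_096,
}


def pick_quant(budget_bytes: int):
    """Return the largest quant whose file fits the budget, or None.

    Rough heuristic only: actual RAM use also includes the state cache
    and runtime overhead, so leave headroom beyond the file size.
    """
    fitting = {q: s for q, s in QUANT_SIZES.items() if s <= budget_bytes}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)


if __name__ == "__main__":
    # e.g. a machine with ~8 GiB free for the model file
    print(pick_quant(8 * 1024**3))  # -> q8_0
```

Note that the q4_k_m and q5_k_m files happen to share sizes with q4_0 and q5_0 respectively, so memory footprint alone does not distinguish them; the k-quants are usually the better pick at equal size.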
35
+
36
+ ## 2. Install llama.cpp
37
+
38
+ You have several options for installing llama.cpp:
39
+
40
+ **1. Build from source:**
41
+
42
+ This gives you the most flexibility and control. Follow the instructions in the llama.cpp repository to build from source:
43
+
44
+ ```bash
45
+ git clone https://github.com/ggerganov/llama.cpp
46
+ cd llama.cpp
47
+ cmake -B build
48
+ cmake --build build --config Release
49
+ ```
50
+ For more information about how to build llama.cpp from source please refere to llama.cpp documentation on how to build from source: **[llama.cpp build from source](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)**.
51
+
52
+ **2. Download pre-built binaries:**
53
+
54
+ If you prefer a quicker setup, you can download pre-built binaries for your operating system. Check the llama.cpp repository for available binaries.
55
+
56
+ **3. Use Docker:**
57
+
58
+ For a more contained environment, you can use the official llama.cpp Docker image. Refer to the llama.cpp documentation for instructions on how to use the Docker image.
59
+
60
+ For detailed instructions and more information, please check the llama.cpp documentation on docker: **[llama.cpp docker](https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.mdg)**.
61
+
62
+
63
### 3. Start playing with your model

- <details open>
<summary>Run simple text completion</summary>

```bash
llama-cli -m {path-to-gguf-model} -p "I believe the meaning of life is" -n 128
```

</details>

- <details>
<summary>Run in conversation mode</summary>

```bash
llama-cli -m {path-to-gguf-model} -p "You are a helpful assistant" -cnv -co
```

</details>

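If you would rather drive `llama-cli` from a script than from an interactive terminal, the completion invocation above can be assembled and run programmatically. A minimal Python sketch using only the flags shown above; the helper names are illustrative and not part of llama.cpp, and running it requires `llama-cli` on your `PATH`:

```python
import subprocess


def build_completion_cmd(model_path, prompt, n_predict=128):
    """Assemble the llama-cli text-completion invocation shown above."""
    return ["llama-cli", "-m", model_path, "-p", prompt, "-n", str(n_predict)]


def run_completion(model_path, prompt, n_predict=128):
    """Run llama-cli and capture its stdout (llama-cli must be installed)."""
    result = subprocess.run(
        build_completion_cmd(model_path, prompt, n_predict),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # Print the command without executing it, as a dry run.
    cmd = build_completion_cmd(
        "Falcon3-Mamba-7B-Instruct-q4_k_m.gguf",
        "I believe the meaning of life is",
    )
    print(" ".join(cmd))
```
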
## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite us:

```
@misc{Falcon3,
  title = {The Falcon 3 family of Open Models},
  author = {TII Team},
  month = {December},
  year = {2024}
}
```