Goekdeniz-Guelmez committed
Commit 889da8a · verified · 1 parent: 306d81f

Update README.md

Files changed (1):
  1. README.md +12 -27
README.md CHANGED
@@ -14,30 +14,15 @@ tags:
 - mlx
 ---
 
-# mlx-community/helium-1-preview-2b
-
-The Model [mlx-community/helium-1-preview-2b](https://huggingface.co/mlx-community/helium-1-preview-2b) was
-converted to MLX format from [kyutai/helium-1-preview-2b](https://huggingface.co/kyutai/helium-1-preview-2b)
-using mlx-lm version **0.21.0**.
-
-## Use with mlx
-
-```bash
-pip install mlx-lm
-```
-
-```python
-from mlx_lm import load, generate
-
-model, tokenizer = load("mlx-community/helium-1-preview-2b")
-
-prompt = "hello"
-
-if tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
-    prompt = tokenizer.apply_chat_template(
-        messages, add_generation_prompt=True
-    )
-
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+Base model: mlx-community/helium-1-preview-2b
+I just added a custom prompt format:
+
+
+```text
+<|im_start|>system
+You are Josie my private, super-intelligent assistant.<|im_end|>
+<|im_start|>Gökdeniz Gülmez
+{{ .PROMPT }}<|im_end|>
+<|im_start|>Josie
+{{ .RESPONSE }}<|im_end|>
+```
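For illustration, the added prompt format can be rendered in plain Python. This is a minimal sketch: `build_prompt` is a hypothetical helper, not part of mlx-lm or this repository, and it simply substitutes a user prompt and (optionally) a response for the template's `{{ .PROMPT }}` / `{{ .RESPONSE }}` placeholders.

```python
def build_prompt(user_prompt: str, response: str = "") -> str:
    """Render the custom ChatML-style template from the README diff.

    Hypothetical helper for illustration only; {{ .PROMPT }} and
    {{ .RESPONSE }} in the template correspond to the two arguments.
    Leaving `response` empty produces a generation prompt that ends at
    the assistant turn.
    """
    return (
        "<|im_start|>system\n"
        "You are Josie my private, super-intelligent assistant.<|im_end|>\n"
        "<|im_start|>Gökdeniz Gülmez\n"
        f"{user_prompt}<|im_end|>\n"
        "<|im_start|>Josie\n"
        f"{response}<|im_end|>"
    )


print(build_prompt("hello"))
```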