zeekay committed
Commit c7b0c03 · verified · 1 Parent(s): 903c70c

Upload mlx/README.md with huggingface_hub

Files changed (1):
  mlx/README.md (+32 −7)
mlx/README.md CHANGED
@@ -1,12 +1,37 @@
  ---
- license: other
- license_name: qwen-research
- license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
- language:
- - en
  pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-3B
  tags:
  - mlx
- library_name: mlx
  ---
  ---
+ library_name: mlx
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
  pipeline_tag: text-generation
+ base_model: Qwen/Qwen3-4B
  tags:
  - mlx
  ---
+
+ # mlx-community/Qwen3-4B-4bit
+
+ This model [mlx-community/Qwen3-4B-4bit](https://huggingface.co/mlx-community/Qwen3-4B-4bit) was
+ converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
+ using mlx-lm version **0.24.0**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/Qwen3-4B-4bit")
+
+ prompt = "hello"
+
+ if tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
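The `chat_template` branch in the README's snippet can be exercised in isolation. The sketch below uses a hypothetical stand-in class (`TinyTokenizer`, with a toy role-tag template — not part of mlx-lm) to show the effect of the check: when a chat template exists, the raw string is wrapped into a templated conversation; otherwise it passes through unchanged.

```python
# Minimal illustration of the chat-template branch from the README example.
# TinyTokenizer and its role-tag format are hypothetical stand-ins;
# real tokenizers come from mlx_lm.load().

class TinyTokenizer:
    """Stand-in exposing the two members the snippet relies on."""

    def __init__(self, chat_template=None):
        self.chat_template = chat_template

    def apply_chat_template(self, messages, add_generation_prompt=False):
        # Toy template: wrap each message in role tags, roughly like
        # real chat templates do.
        out = "".join(
            f"<|{m['role']}|>{m['content']}<|end|>" for m in messages
        )
        if add_generation_prompt:
            out += "<|assistant|>"
        return out


def build_prompt(tokenizer, prompt):
    # Same logic as the README snippet: only wrap when a template exists.
    if tokenizer.chat_template is not None:
        messages = [{"role": "user", "content": prompt}]
        prompt = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True
        )
    return prompt


print(build_prompt(TinyTokenizer(), "hello"))
# -> hello  (no template: raw prompt passes through)
print(build_prompt(TinyTokenizer(chat_template="stub"), "hello"))
# -> <|user|>hello<|end|><|assistant|>  (templated, ready for generation)
```

This is why the model card's example works for both chat-tuned and base checkpoints: the wrapping only happens when the tokenizer actually ships a chat template.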