cpatonn committed
Commit 4b296c2 · verified · Parent(s): 06e076b

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ demo.gif filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,149 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model: janhq/Jan-v2-VL-high
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ tags:
+ - agent
+ ---
+
+ # Jan-v2-VL-high AWQ - INT4
+
+ ## Model Details
+
+ ### Quantization Details
+
+ - **Quantization Method:** AWQ
+ - **Bits:** 4
+ - **Group Size:** 32
+ - **Calibration Dataset:** [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset)
+ - **Quantization Tool:** [llm-compressor](https://github.com/vllm-project/llm-compressor)
+
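+ The exact quantization settings ship as `recipe.yaml` in this repository. For orientation, here is a minimal llm-compressor sketch of a comparable 4-bit AWQ pass; the argument values below are illustrative assumptions (the authoritative settings, including group size 32 and the full vision-tower ignore list, are in `recipe.yaml`), not the exact command used for this release:
+
+ ```python
+ from llmcompressor import oneshot
+ from llmcompressor.modifiers.awq import AWQModifier
+
+ # Weight-only INT4 AWQ on the language-model Linear layers, keeping the
+ # vision tower, embeddings, and lm_head in full precision (as recipe.yaml does).
+ recipe = AWQModifier(
+     targets="Linear",
+     scheme="W4A16",
+     ignore=["re:.*embed_tokens", "re:model.visual.*", "lm_head"],
+ )
+
+ oneshot(
+     model="janhq/Jan-v2-VL-high",
+     recipe=recipe,
+     dataset="open_platypus",  # placeholder; this release calibrated on Llama-Nemotron data
+     max_seq_length=2048,
+     num_calibration_samples=256,
+ )
+ ```
+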
+ ### Memory Usage
+
+ | **Type** | **Jan-v2-VL-high** | **Jan-v2-VL-high-AWQ-4bit** |
+ |:---------------:|:----------------:|:----------------:|
+ | **Memory Size** | 16.3 GB | 7.0 GB |
+ | **KV Cache per Token** | 144.0 kB | 36.0 kB |
+ | **KV Cache at Max Context (262,144 tokens)** | 36.0 GB | 9.0 GB |
+
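+ The per-token figure follows directly from the text config in this repo (36 layers, 8 KV heads, head dim 128, FP16 cache); the 4-bit column assumes the KV cache is quantized to 4 bits as well. A quick sanity check:
+
+ ```python
+ # KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes per element
+ layers, kv_heads, head_dim = 36, 8, 128  # from config.json "text_config"
+ per_token = 2 * layers * kv_heads * head_dim * 2  # FP16 = 2 bytes
+ print(per_token / 1024)              # 144.0 (kB per token)
+ print(per_token * 262144 / 1024**3)  # 36.0 (GB at the 262,144-token max context)
+ ```
+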
+ ### Evaluations
+
+ | **Benchmarks** | **Jan-v2-VL-high** | **Jan-v2-VL-high-AWQ-4bit** |
+ |:---------------:|:----------------:|:----------------:|
+ | **Perplexity** | 1.61565 | 1.62198 |
+
+ - **Evaluation Context Length:** 16384
+
+ ## Inference
+
+ ### Prerequisite
+
+ ```bash
+ pip install -U vllm
+ ```
+
+ ### Basic Usage
+
+ ```bash
+ vllm serve cyankiwi/Jan-v2-VL-high-AWQ-4bit
+ ```
+
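+ The server exposes vLLM's OpenAI-compatible API (port 8000 by default). A minimal query sketch using the `openai` client; the image URL is a placeholder:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="cyankiwi/Jan-v2-VL-high-AWQ-4bit",
+     messages=[{
+         "role": "user",
+         "content": [
+             {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
+             {"type": "text", "text": "Describe what is on this screen."},
+         ],
+     }],
+ )
+ print(response.choices[0].message.content)
+ ```
+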
+ ## Additional Information
+
+ ### Changelog
+
+ - **v1.0.0** - Initial quantized release
+
+ ### Authors
+
+ - **Name:** Ton Cao
+ - **Contacts:** [email protected]
+
+ # Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks
+
+ [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan)
+ [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
+ [![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)
+
+ ![image/gif](demo.gif)
+
+ ## Overview
+
+ **Jan-v2-VL** is an 8B-parameter vision–language model for long-horizon, multi-step tasks in real software environments (e.g., browsers and desktop apps). It combines language reasoning with visual perception to follow complex instructions, maintain intermediate state, and recover from minor execution errors.
+
+ Long-horizon execution matters for real-world tasks: small per-step gains compound into much longer successful chains, so **Jan-v2-VL** is built for stable, many-step execution. For evaluation we use **[The Illusion of Diminishing Returns: Measuring Long-Horizon Execution in LLMs](https://arxiv.org/pdf/2509.09677)**, which measures how many steps a model can execute before drifting. This benchmark aligns with the consensus view of what makes a strong coding model (steady, low-drift step execution), suggesting that robust long-horizon ability closely tracks a better user experience.
+
+ **Variants**
+
+ * **Jan-v2-VL-low** — efficiency-oriented, lower latency
+ * **Jan-v2-VL-med** — balanced latency/quality
+ * **Jan-v2-VL-high** — deeper reasoning; higher think time
+
+ ### Intended Use
+
+ Tasks where the plan and/or knowledge can be provided up front, and success hinges on stable, many-step execution with minimal drift:
+
+ * **Agentic automation & UI control:** Stepwise operation in browsers/desktop apps with screenshot grounding and tool calls (e.g., BrowserMCP).
+
+ ## Model Performance
+
+ ![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/bruqlcVK87KMQE99JsS0c.png)
+
+ Compared with its base (**[Qwen-3-VL-8B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-8B-Thinking)**), **Jan-v2-VL** shows **no degradation** on standard text-only and vision tasks, is **slightly better on several**, and delivers stronger long-horizon execution on the *Illusion of Diminishing Returns* benchmark.
+
+ ![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/q4DzuOjmcZOik2c8ZQSCN.png)
+
+ ![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/JdA1kFh2IEJesQsOAOTrh.png)
+
+ ![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/fuuZ5pMOGsbbEpKCM5xy8.png)
+
+ ## Local Deployment
+
+ ### Integration with Jan App
+
+ Jan-v2-VL is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.
+
+ ### Serving Locally
+
+ **Using vLLM:**
+ ```bash
+ vllm serve Menlo/Jan-v2-VL-high \
+   --host 0.0.0.0 \
+   --port 1234 \
+   --enable-auto-tool-choice \
+   --tool-call-parser hermes \
+   --reasoning-parser qwen3
+ ```
+
+ **Using llama.cpp:**
+ ```bash
+ llama-server --model Jan-v2-VL-high-Q8_0.gguf \
+   --mmproj mmproj-Jan-v2-VL-high.gguf \
+   --host 0.0.0.0 \
+   --port 1234 \
+   --jinja \
+   --no-context-shift
+ ```
+
+ ### Recommended Parameters
+
+ For optimal performance in agentic and general tasks, we recommend the following inference parameters:
+
+ ```yaml
+ temperature: 1.0
+ top_p: 0.95
+ top_k: 20
+ repetition_penalty: 1.0
+ presence_penalty: 1.5
+ ```
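+
+ With the vLLM command above, `temperature`, `top_p`, and `presence_penalty` map directly onto the OpenAI-compatible API, while `top_k` and `repetition_penalty` can be passed through vLLM's `extra_body`. A minimal sketch, assuming the vLLM server from the example above on port 1234:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="Menlo/Jan-v2-VL-high",
+     messages=[{"role": "user", "content": "Plan the next UI step."}],
+     temperature=1.0,
+     top_p=0.95,
+     presence_penalty=1.5,
+     extra_body={"top_k": 20, "repetition_penalty": 1.0},  # vLLM-specific samplers
+ )
+ print(response.choices[0].message.content)
+ ```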
+
+ ## 🤝 Community & Support
+
+ - **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-8B/discussions)
+ - **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)
+
+ ## 📄 Citation
+ ```bibtex
+ Updated Soon
+ ```
added_tokens.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "</think>": 151668,
+   "</tool_call>": 151658,
+   "</tool_response>": 151666,
+   "<think>": 151667,
+   "<tool_call>": 151657,
+   "<tool_response>": 151665,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,110 @@
+ {%- set image_count = namespace(value=0) %}
+ {%- set video_count = namespace(value=0) %}
+ {%- macro render_content(content, do_vision_count) %}
+ {%- if content is string %}
+ {{- content }}
+ {%- else %}
+ {%- for item in content %}
+ {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
+ {%- if do_vision_count %}
+ {%- set image_count.value = image_count.value + 1 %}
+ {%- endif %}
+ {%- if add_vision_id %}Picture {{ image_count.value }}: {% endif -%}
+ <|vision_start|><|image_pad|><|vision_end|>
+ {%- elif 'video' in item or item.type == 'video' %}
+ {%- if do_vision_count %}
+ {%- set video_count.value = video_count.value + 1 %}
+ {%- endif %}
+ {%- if add_vision_id %}Video {{ video_count.value }}: {% endif -%}
+ <|vision_start|><|video_pad|><|vision_end|>
+ {%- elif 'text' in item %}
+ {{- item.text }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {%- endmacro %}
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0].role == 'system' %}
+ {{- render_content(messages[0].content, false) + '\n\n' }}
+ {%- endif %}
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0].role == 'system' %}
+ {{- '<|im_start|>system\n' + render_content(messages[0].content, false) + '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+ {%- set index = (messages|length - 1) - loop.index0 %}
+ {%- if ns.multi_step_tool and message.role == "user" %}
+ {%- set content = render_content(message.content, false) %}
+ {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
+ {%- set ns.multi_step_tool = false %}
+ {%- set ns.last_query_index = index %}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+ {%- set content = render_content(message.content, True) %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {%- set reasoning_content = '' %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if loop.index0 > ns.last_query_index %}
+ {%- if loop.last or (not loop.last and reasoning_content) %}
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- if message.tool_calls %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if (loop.first and content) or (not loop.first) %}
+ {{- '\n' }}
+ {%- endif %}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {%- if tool_call.arguments is string %}
+ {{- tool_call.arguments }}
+ {%- else %}
+ {{- tool_call.arguments | tojson }}
+ {%- endif %}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,217 @@
+ {
+   "architectures": [
+     "Qwen3VLForConditionalGeneration"
+   ],
+   "dtype": "float16",
+   "eos_token_id": 151645,
+   "image_token_id": 151655,
+   "model_type": "qwen3_vl",
+   "pad_token_id": 151643,
+   "quantization_config": {
+     "config_groups": {
+       "group_0": {
+         "format": "pack-quantized",
+         "input_activations": null,
+         "output_activations": null,
+         "targets": [
+           "Linear"
+         ],
+         "weights": {
+           "actorder": null,
+           "block_structure": null,
+           "dynamic": false,
+           "group_size": 32,
+           "num_bits": 4,
+           "observer": "mse",
+           "observer_kwargs": {},
+           "strategy": "group",
+           "symmetric": true,
+           "type": "int"
+         }
+       }
+     },
+     "format": "pack-quantized",
+     "global_compression_ratio": null,
+     "ignore": [
+       "model.visual.blocks.0.attn.qkv",
+       "model.visual.blocks.0.attn.proj",
+       "model.visual.blocks.0.mlp.linear_fc1",
+       "model.visual.blocks.0.mlp.linear_fc2",
+       "model.visual.blocks.1.attn.qkv",
+       "model.visual.blocks.1.attn.proj",
+       "model.visual.blocks.1.mlp.linear_fc1",
+       "model.visual.blocks.1.mlp.linear_fc2",
+       "model.visual.blocks.2.attn.qkv",
+       "model.visual.blocks.2.attn.proj",
+       "model.visual.blocks.2.mlp.linear_fc1",
+       "model.visual.blocks.2.mlp.linear_fc2",
+       "model.visual.blocks.3.attn.qkv",
+       "model.visual.blocks.3.attn.proj",
+       "model.visual.blocks.3.mlp.linear_fc1",
+       "model.visual.blocks.3.mlp.linear_fc2",
+       "model.visual.blocks.4.attn.qkv",
+       "model.visual.blocks.4.attn.proj",
+       "model.visual.blocks.4.mlp.linear_fc1",
+       "model.visual.blocks.4.mlp.linear_fc2",
+       "model.visual.blocks.5.attn.qkv",
+       "model.visual.blocks.5.attn.proj",
+       "model.visual.blocks.5.mlp.linear_fc1",
+       "model.visual.blocks.5.mlp.linear_fc2",
+       "model.visual.blocks.6.attn.qkv",
+       "model.visual.blocks.6.attn.proj",
+       "model.visual.blocks.6.mlp.linear_fc1",
+       "model.visual.blocks.6.mlp.linear_fc2",
+       "model.visual.blocks.7.attn.qkv",
+       "model.visual.blocks.7.attn.proj",
+       "model.visual.blocks.7.mlp.linear_fc1",
+       "model.visual.blocks.7.mlp.linear_fc2",
+       "model.visual.blocks.8.attn.qkv",
+       "model.visual.blocks.8.attn.proj",
+       "model.visual.blocks.8.mlp.linear_fc1",
+       "model.visual.blocks.8.mlp.linear_fc2",
+       "model.visual.blocks.9.attn.qkv",
+       "model.visual.blocks.9.attn.proj",
+       "model.visual.blocks.9.mlp.linear_fc1",
+       "model.visual.blocks.9.mlp.linear_fc2",
+       "model.visual.blocks.10.attn.qkv",
+       "model.visual.blocks.10.attn.proj",
+       "model.visual.blocks.10.mlp.linear_fc1",
+       "model.visual.blocks.10.mlp.linear_fc2",
+       "model.visual.blocks.11.attn.qkv",
+       "model.visual.blocks.11.attn.proj",
+       "model.visual.blocks.11.mlp.linear_fc1",
+       "model.visual.blocks.11.mlp.linear_fc2",
+       "model.visual.blocks.12.attn.qkv",
+       "model.visual.blocks.12.attn.proj",
+       "model.visual.blocks.12.mlp.linear_fc1",
+       "model.visual.blocks.12.mlp.linear_fc2",
+       "model.visual.blocks.13.attn.qkv",
+       "model.visual.blocks.13.attn.proj",
+       "model.visual.blocks.13.mlp.linear_fc1",
+       "model.visual.blocks.13.mlp.linear_fc2",
+       "model.visual.blocks.14.attn.qkv",
+       "model.visual.blocks.14.attn.proj",
+       "model.visual.blocks.14.mlp.linear_fc1",
+       "model.visual.blocks.14.mlp.linear_fc2",
+       "model.visual.blocks.15.attn.qkv",
+       "model.visual.blocks.15.attn.proj",
+       "model.visual.blocks.15.mlp.linear_fc1",
+       "model.visual.blocks.15.mlp.linear_fc2",
+       "model.visual.blocks.16.attn.qkv",
+       "model.visual.blocks.16.attn.proj",
+       "model.visual.blocks.16.mlp.linear_fc1",
+       "model.visual.blocks.16.mlp.linear_fc2",
+       "model.visual.blocks.17.attn.qkv",
+       "model.visual.blocks.17.attn.proj",
+       "model.visual.blocks.17.mlp.linear_fc1",
+       "model.visual.blocks.17.mlp.linear_fc2",
+       "model.visual.blocks.18.attn.qkv",
+       "model.visual.blocks.18.attn.proj",
+       "model.visual.blocks.18.mlp.linear_fc1",
+       "model.visual.blocks.18.mlp.linear_fc2",
+       "model.visual.blocks.19.attn.qkv",
+       "model.visual.blocks.19.attn.proj",
+       "model.visual.blocks.19.mlp.linear_fc1",
+       "model.visual.blocks.19.mlp.linear_fc2",
+       "model.visual.blocks.20.attn.qkv",
+       "model.visual.blocks.20.attn.proj",
+       "model.visual.blocks.20.mlp.linear_fc1",
+       "model.visual.blocks.20.mlp.linear_fc2",
+       "model.visual.blocks.21.attn.qkv",
+       "model.visual.blocks.21.attn.proj",
+       "model.visual.blocks.21.mlp.linear_fc1",
+       "model.visual.blocks.21.mlp.linear_fc2",
+       "model.visual.blocks.22.attn.qkv",
+       "model.visual.blocks.22.attn.proj",
+       "model.visual.blocks.22.mlp.linear_fc1",
+       "model.visual.blocks.22.mlp.linear_fc2",
+       "model.visual.blocks.23.attn.qkv",
+       "model.visual.blocks.23.attn.proj",
+       "model.visual.blocks.23.mlp.linear_fc1",
+       "model.visual.blocks.23.mlp.linear_fc2",
+       "model.visual.blocks.24.attn.qkv",
+       "model.visual.blocks.24.attn.proj",
+       "model.visual.blocks.24.mlp.linear_fc1",
+       "model.visual.blocks.24.mlp.linear_fc2",
+       "model.visual.blocks.25.attn.qkv",
+       "model.visual.blocks.25.attn.proj",
+       "model.visual.blocks.25.mlp.linear_fc1",
+       "model.visual.blocks.25.mlp.linear_fc2",
+       "model.visual.blocks.26.attn.qkv",
+       "model.visual.blocks.26.attn.proj",
+       "model.visual.blocks.26.mlp.linear_fc1",
+       "model.visual.blocks.26.mlp.linear_fc2",
+       "model.visual.merger.linear_fc1",
+       "model.visual.merger.linear_fc2",
+       "model.visual.deepstack_merger_list.0.linear_fc1",
+       "model.visual.deepstack_merger_list.0.linear_fc2",
+       "model.visual.deepstack_merger_list.1.linear_fc1",
+       "model.visual.deepstack_merger_list.1.linear_fc2",
+       "model.visual.deepstack_merger_list.2.linear_fc1",
+       "model.visual.deepstack_merger_list.2.linear_fc2",
+       "lm_head"
+     ],
+     "kv_cache_scheme": null,
+     "quant_method": "compressed-tensors",
+     "quantization_status": "compressed",
+     "sparsity_config": {},
+     "transform_config": {},
+     "version": "0.12.3.a20251114"
+   },
+   "text_config": {
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "bos_token_id": 151643,
+     "dtype": "float16",
+     "eos_token_id": 151645,
+     "head_dim": 128,
+     "hidden_act": "silu",
+     "hidden_size": 4096,
+     "initializer_range": 0.02,
+     "intermediate_size": 12288,
+     "max_position_embeddings": 262144,
+     "model_type": "qwen3_vl_text",
+     "num_attention_heads": 32,
+     "num_hidden_layers": 36,
+     "num_key_value_heads": 8,
+     "rms_norm_eps": 1e-06,
+     "rope_scaling": {
+       "mrope_interleaved": true,
+       "mrope_section": [
+         24,
+         20,
+         20
+       ],
+       "rope_type": "default"
+     },
+     "rope_theta": 5000000,
+     "use_cache": true,
+     "vocab_size": 151936
+   },
+   "tie_word_embeddings": false,
+   "transformers_version": "5.0.0.dev0",
+   "video_token_id": 151656,
+   "vision_config": {
+     "deepstack_visual_indexes": [
+       8,
+       16,
+       24
+     ],
+     "depth": 27,
+     "dtype": "float16",
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1152,
+     "in_channels": 3,
+     "initializer_range": 0.02,
+     "intermediate_size": 4304,
+     "model_type": "qwen3_vl",
+     "num_heads": 16,
+     "num_position_embeddings": 2304,
+     "out_hidden_size": 4096,
+     "patch_size": 16,
+     "spatial_merge_size": 2,
+     "temporal_patch_size": 2
+   },
+   "vision_end_token_id": 151653,
+   "vision_start_token_id": 151652
+ }
demo.gif ADDED

Git LFS Details

  • SHA256: 1e2bcebd1ff93fa33ad4b4440c0cbf2509d2eb56e93577e5a88f0eb025fcbe09
  • Pointer size: 133 Bytes
  • Size of remote file: 48 MB
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "presence_penalty": 1.5,
+   "top_k": 20,
+   "top_p": 0.95,
+   "transformers_version": "5.0.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:253d292d329f6fc7f14ce8cb4a5e46561751cf25600d3bd71de75e767e2983ff
+ size 4999461816
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b95175c5d9d72585f1a505d5c26ffe9943f1ccde011470b5e62f49e74af3a9f2
+ size 2550404672
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
processor_config.json ADDED
@@ -0,0 +1,85 @@
+ {
+   "image_processor": {
+     "crop_size": null,
+     "data_format": "channels_first",
+     "default_to_square": true,
+     "device": null,
+     "disable_grouping": null,
+     "do_center_crop": null,
+     "do_convert_rgb": true,
+     "do_normalize": true,
+     "do_pad": null,
+     "do_rescale": true,
+     "do_resize": true,
+     "image_mean": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "image_processor_type": "Qwen2VLImageProcessorFast",
+     "image_std": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "input_data_format": null,
+     "max_pixels": null,
+     "merge_size": 2,
+     "min_pixels": null,
+     "pad_size": null,
+     "patch_size": 16,
+     "processor_class": "Qwen3VLProcessor",
+     "resample": 3,
+     "rescale_factor": 0.00392156862745098,
+     "return_tensors": null,
+     "size": {
+       "longest_edge": 16777216,
+       "shortest_edge": 65536
+     },
+     "temporal_patch_size": 2
+   },
+   "processor_class": "Qwen3VLProcessor",
+   "video_processor": {
+     "crop_size": null,
+     "data_format": "channels_first",
+     "default_to_square": true,
+     "device": null,
+     "do_center_crop": null,
+     "do_convert_rgb": true,
+     "do_normalize": true,
+     "do_pad": null,
+     "do_rescale": true,
+     "do_resize": true,
+     "do_sample_frames": true,
+     "fps": 2,
+     "image_mean": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "image_std": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "input_data_format": null,
+     "max_frames": 768,
+     "merge_size": 2,
+     "min_frames": 4,
+     "num_frames": null,
+     "pad_size": null,
+     "patch_size": 16,
+     "processor_class": "Qwen3VLProcessor",
+     "resample": 3,
+     "rescale_factor": 0.00392156862745098,
+     "return_metadata": false,
+     "return_tensors": null,
+     "size": {
+       "longest_edge": 25165824,
+       "shortest_edge": 4096
+     },
+     "temporal_patch_size": 2,
+     "video_metadata": null,
+     "video_processor_type": "Qwen3VLVideoProcessor"
+   }
+ }
recipe.yaml ADDED
@@ -0,0 +1,37 @@
+ default_stage:
+   default_modifiers:
+     AWQModifier:
+       config_groups:
+         group_0:
+           targets: [Linear]
+           weights:
+             num_bits: 4
+             type: int
+             symmetric: true
+             group_size: 32
+             strategy: group
+             block_structure: null
+             dynamic: false
+             actorder: null
+             scale_dtype: null
+             zp_dtype: null
+             observer: mse
+             observer_kwargs: {}
+           input_activations: null
+           output_activations: null
+           format: null
+       targets: [Linear]
+       ignore: ['re:.*embed_tokens', 're:.*input_layernorm$', 're:.*mlp[.]gate$', 're:.*post_attention_layernorm$',
+         're:.*norm$', 're:model[.]visual.*', lm_head]
+       mappings:
+       - smooth_layer: re:.*input_layernorm$
+         balance_layers: ['re:.*q_proj$', 're:.*k_proj$', 're:.*v_proj$']
+       - smooth_layer: re:.*v_proj$
+         balance_layers: ['re:.*o_proj$']
+       - smooth_layer: re:.*post_attention_layernorm$
+         balance_layers: ['re:.*gate_proj$', 're:.*up_proj$']
+       - smooth_layer: re:.*up_proj$
+         balance_layers: ['re:.*down_proj$']
+       offload_device: !!python/object/apply:torch.device [cpu]
+       duo_scaling: true
+       n_grid: 20
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,240 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151665": {
+       "content": "<tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151666": {
+       "content": "</tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151667": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151668": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 262144,
+   "pad_token": "<|endoftext|>",
+   "processor_class": "Qwen3VLProcessor",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff