Jan-v2-VL
Jan-v2-VL: a family of VLMs focused on reliable, many-step task execution.
Jan-v2-VL-max-Instruct extends the Jan-v2-VL family to a 30B-parameter vision-language model focused on research capability.
Hosted on Jan Web: use the model directly at chat.jan.ai
Using vLLM: We recommend serving this model with vLLM. All reported results were run with vLLM 0.12.0. For FP8 deployment, we used llm-compressor built from source. Pin transformers==4.57.1 for compatibility.
# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"
vllm serve Menlo/Jan-v2-VL-max-Instruct-FP8 \
--host 0.0.0.0 \
--port 1234 \
-dp 1 \
--enable-auto-tool-choice \
--tool-call-parser hermes
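Once the server is running, it exposes an OpenAI-compatible endpoint on the port set above. A minimal sketch of a vision request using the openai Python client; the localhost base URL, dummy API key, image URL, and prompt are placeholders for illustration:

# Minimal sketch: send an image + text prompt to the server started above
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")  # vLLM ignores the key

response = client.chat.completions.create(
    model="Menlo/Jan-v2-VL-max-Instruct-FP8",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
                {"type": "text", "text": "Describe what is shown in this screenshot."},
            ],
        }
    ],
)
print(response.choices[0].message.content)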
For optimal performance in agentic and general tasks, we recommend the following inference parameters:
temperature: 0.7
top_p: 0.8
top_k: 20
repetition_penalty: 1.0
presence_penalty: 0.0
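When calling the server through the OpenAI-compatible API, temperature, top_p, and presence_penalty are standard request fields, while top_k and repetition_penalty are vLLM-specific sampling options passed via extra_body. A sketch continuing the client above; the prompt is a placeholder:

# Sketch: apply the recommended sampling settings in a chat request
response = client.chat.completions.create(
    model="Menlo/Jan-v2-VL-max-Instruct-FP8",
    messages=[{"role": "user", "content": "Outline the steps to rename a file in a desktop UI."}],
    temperature=0.7,
    top_p=0.8,
    presence_penalty=0.0,
    extra_body={"top_k": 20, "repetition_penalty": 1.0},  # vLLM extensions beyond the OpenAI schema
)
print(response.choices[0].message.content)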
Updates coming soon.