Inference Providers
Active filters: vLLM
RohitUltimate/Qwen3.5_VL_2B_12k • Image-Text-to-Text • 2B • 190 downloads • 7 likes
QuantTrio/Qwen3.6-35B-A3B-AWQ • Image-Text-to-Text • 36B • 7 downloads
mistralai/Mistral-Small-4-119B-2603 • 119B • 81.7k downloads • 354 likes
QuantTrio/Qwen3.5-27B-AWQ • Image-Text-to-Text • 28B • 376k downloads • 41 likes
mistralai/Mistral-Small-4-119B-2603-eagle • 303 downloads • 46 likes
QuantTrio/Qwopus3.5-27B-v3-AWQ • Image-Text-to-Text • 27B • 22.6k downloads • 9 likes
QuantTrio/gemma-4-31B-it-AWQ • Image-Text-to-Text • 31B • 81.3k downloads • 6 likes
QuantTrio/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-AWQ • Image-Text-to-Text • 28B • 40.3k downloads • 12 likes
JunHowie/Qwen3-4B-Instruct-2507-GPTQ-Int4 • Text Generation • 4B • 119k downloads • 3 likes
QuantTrio/Qwen3-VL-30B-A3B-Instruct-AWQ • Text Generation • 31B • 763k downloads • 42 likes
QuantTrio/MiniMax-M2.5-AWQ • Text Generation • 229B • 85.3k downloads • 14 likes
QuantTrio/Qwen3.5-35B-A3B-AWQ • Image-Text-to-Text • 36B • 155k downloads • 17 likes
(model name missing in source) • Image-Text-to-Text • 5B • 40.5k downloads • 8 likes
unsloth/Mistral-Small-4-119B-2603-GGUF • 119B • 30.1k downloads • 58 likes
QuantTrio/gemma-4-31B-it-AWQ-6Bit • Image-Text-to-Text • 31B • 9.98k downloads • 7 likes
Xingyu-Zheng/Qwen3.5-9B-GLM5.1-Distill-v1-INT4-FOEM • Image-Text-to-Text • 9B • 14 downloads • 1 like
QuantTrio/Qwopus3.5-27B-v3-AWQ-6Bit • Image-Text-to-Text • 27B • 1.79k downloads • 2 likes
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 9B • 143 downloads • 6 likes
model-scope/glm-4-9b-chat-GPTQ-Int8 • Text Generation • 9B • 16 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 73B • 87 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int3 • Text Generation • 69B • 70 downloads
prithivMLmods/Nu2-Lupi-Qwen-14B • Text Generation • 15B • 6 downloads • 2 likes
mradermacher/Nu2-Lupi-Qwen-14B-GGUF • 15B • 131 downloads • 1 like
mradermacher/Nu2-Lupi-Qwen-14B-i1-GGUF • 15B • 232 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B • 70 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B • 9 downloads
JunHowie/Qwen3-1.7B-GPTQ-Int4 • Text Generation • 2B • 140 downloads • 1 like
JunHowie/Qwen3-1.7B-GPTQ-Int8 • Text Generation • 2B • 12 downloads
JunHowie/Qwen3-32B-GPTQ-Int4 • Text Generation • 33B • 27.9k downloads • 4 likes
JunHowie/Qwen3-32B-GPTQ-Int8 • Text Generation • 33B • 225 downloads • 4 likes
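The models above are filtered to those that vLLM can serve. As a minimal sketch (assuming a CUDA-capable machine with the `vllm` package installed, and picking one of the AWQ checkpoints listed above as an example), such a model could be launched with vLLM's OpenAI-compatible `vllm serve` CLI:

```shell
# Sketch only: requires a GPU with enough memory and `pip install vllm`.
# `vllm serve` starts an OpenAI-compatible HTTP server for the given
# Hugging Face repo; AWQ quantization is typically auto-detected from the
# checkpoint config, but it can also be stated explicitly.
vllm serve QuantTrio/Qwen3-VL-30B-A3B-Instruct-AWQ \
    --quantization awq \
    --port 8000
```

Once running, the server answers standard OpenAI-style requests (e.g. `POST /v1/chat/completions`), so any OpenAI client can be pointed at `http://localhost:8000/v1`. GPTQ and GGUF entries in the list would use their respective quantization settings instead.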