Tags: GGUF, conversational

This model was fine-tuned on a GPT-5 nano reasoning dataset and a GPT-5 non-reasoning dataset.
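
Below is a minimal usage sketch with llama-cpp-python. The repo id is taken from this card, but the exact GGUF filename (matched here with a *Q4_K_M.gguf pattern) and the context size are assumptions; check the repository's file list before running.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub) and that the repo ships a
# 4-bit file matching *Q4_K_M.gguf -- verify the actual filename on the card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Liontix/Qwen3-1.7B-GPT5-nano-distill",
    filename="*Q4_K_M.gguf",  # assumed quant file name; adjust to the real file
    n_ctx=4096,               # context window, tune to your hardware
)

# The model is tagged "conversational", so the chat-completion API applies.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what model distillation is in two sentences."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```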

Format: GGUF
Model size: 2B params
Architecture: qwen3
Quantization: 4-bit

Model tree for Liontix/Qwen3-1.7B-GPT5-nano-distill

Finetuned from: Qwen/Qwen3-1.7B
Quantized (34): this model

Datasets used to train Liontix/Qwen3-1.7B-GPT5-nano-distill