Recommended sampling parameters?

#3
by sszymczyk - opened

I couldn't find any information about recommended sampling parameters (temperature, top-p, top-k, etc.) for this model. Could you add them to the model card?

@sszymczyk Sorry for the missing default inference parameters. For general chat, we suggest temperature=0.6, top_p=0.95; for reasoning / agent scenarios, we recommend temperature=1.0, top_p=0.95.
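For reference, the two presets above could be wrapped in a small helper and passed as generation kwargs (a minimal sketch; the helper name and structure are my own, not part of the model card):

```python
def recommended_sampling(scenario: str) -> dict:
    """Return the suggested sampling parameters for 'chat' or 'reasoning'."""
    presets = {
        "chat":      {"temperature": 0.6, "top_p": 0.95},  # general chat
        "reasoning": {"temperature": 1.0, "top_p": 0.95},  # reasoning / agent
    }
    if scenario not in presets:
        raise ValueError(f"unknown scenario: {scenario!r}")
    return presets[scenario]

# Example: splat the preset into an OpenAI-compatible chat request or into
# transformers' generate() call, e.g. model.generate(**inputs, **params).
params = recommended_sampling("reasoning")
```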

What about reasoning tokens? Do we have to pass them back in subsequent turns, like with the Kimi thinking models?