Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled Image-Text-to-Text • 28B • Updated 5 days ago • 253k • 1.52k
YanLabs/gemma-3-27b-it-abliterated-normpreserve-v1 Text Generation • 27B • Updated Dec 8, 2025 • 29 • 6
huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-v2 Image-Text-to-Text • 24B • Updated Sep 11, 2025 • 2.97k • 9
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0 Text Generation • 24B • Updated Jul 24, 2025 • 14 • 37
Doctor-Shotgun/MS3.2-24B-Magnum-Diamond Text Generation • 24B • Updated Jul 7, 2025 • 166 • 55
I have just released a new blog post about KV caching and its role in inference speedup: https://huggingface.co/blog/not-lain/kv-caching — some takeaways:
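As a rough illustration of the technique the linked post covers (this is not code from the blog post), the core idea of KV caching is that during autoregressive decoding, each new token's key and value vectors are computed once and appended to a cache, so attention at every step only needs to compute the new query, rather than recomputing keys and values for the entire sequence. A minimal NumPy sketch of a single-head decode loop:

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector
    # against all cached keys/values.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ V

rng = np.random.default_rng(0)
d = 8  # head dimension (toy size)

# The "KV cache": keys/values for all tokens decoded so far.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))

outputs = []
for step in range(4):
    # Only the NEW token's key/value are computed each step and
    # appended to the cache; past K/V are reused, which is where
    # the inference speedup comes from.
    k_new = rng.normal(size=(1, d))
    v_new = rng.normal(size=(1, d))
    K_cache = np.concatenate([K_cache, k_new])
    V_cache = np.concatenate([V_cache, v_new])

    q = rng.normal(size=(d,))  # query for the current position
    outputs.append(attention(q, K_cache, V_cache))

print(K_cache.shape)  # cache grows by one entry per decoded token
```

Without the cache, step *t* would recompute keys/values for all *t* tokens, making decoding quadratic in sequence length overall; with it, each step does a constant amount of new projection work plus one attention over the cached entries.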