GGUF version of https://huggingface.co/allenai/Olmo-3-1125-32B

After downloading the split model files, you can merge them into a single GGUF file with `llama-gguf-split`:

```
.\llama.cpp\build\bin\Release\llama-gguf-split.exe --merge .\mymodels\model_name-00001-of-00004.gguf .\mymodels\model_name-merged.gguf
```
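Once merged, the single file can be loaded like any other GGUF model. A minimal sketch, assuming `llama-cli` was built alongside `llama-gguf-split` and using the same placeholder paths as the command above:

```shell
# Run the merged model with llama-cli (paths and model name are placeholders)
.\llama.cpp\build\bin\Release\llama-cli.exe -m .\mymodels\model_name-merged.gguf -p "Hello" -n 64
```

Note that recent llama.cpp builds can also load a split model by pointing at the first shard (`model_name-00001-of-00004.gguf`), so merging is a convenience rather than a requirement.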

- **Format:** GGUF
- **Model size:** 32B params
- **Architecture:** olmo2


Model tree for FBTMAML/Olmo-3-1125-32B-GGUF