Ollama doesn't do proper naming for some reason, so `ollama pull magistral:latest` lands you the q4_K_M quantization (currently; subject to change).
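If you want to see what `latest` actually resolved to, `ollama show` prints the parameter count and quantization; the tag below is just a placeholder, since what's published changes:

```sh
# Inspect what `magistral:latest` actually points at (size, quantization, context length)
ollama show magistral:latest

# Pull an explicit tag instead of relying on `latest`
# (placeholder; check the model's library page on ollama.com for the current tag list)
ollama pull magistral:<explicit-tag>
```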
Mistral's API defaults to `magistral-medium-2506` right now, which runs at full precision with no quantization.
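On the API side you can also pin the dated model name explicitly rather than relying on whatever the default happens to be; roughly like this, using the standard chat completions endpoint (the env var name is just an example):

```sh
curl https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "magistral-medium-2506",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```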
It's not only the quantization: what's available via ollama is magistral-small (the variant intended for local inference), not magistral-medium.
Nobody should ever be using ollama, for any reason.
It literally only makes everything worse and more convoluted with zero benefits.
Could you elaborate?
Not the parent, but I would say bad defaults and naming. There are countless posts from newbies wondering why a model doesn't work as well as it should.
It's usually either because the context size is set very low by default, or because they didn't realize they weren't running the full model (ollama serves a distilled version in place of the full one but names it after the full model).
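To illustrate the context-size point: you generally have to raise num_ctx yourself, e.g. with a Modelfile (model name and value here are just an example):

```sh
# Build a variant with a larger context window than ollama's small default
cat > Modelfile <<'EOF'
FROM magistral:latest
PARAMETER num_ctx 16384
EOF

ollama create magistral-16k -f Modelfile
ollama run magistral-16k
```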
There's also been some controversy over not giving proper credit to llama.cpp, which ollama is/was a wrapper around.
> ollama uses the distilled version
I've never used ollama, but perhaps you mean quantized and not distilled? Or do they actually use distilled versions?
They actually use distilled versions. The most egregious example is how they list all the distillations of DeepSeek-R1, which are based on a variety of vastly different base models of varying sizes, as if they were alternative versions of DeepSeek-R1 itself. To this day, many users are left with the mistaken impression that DeepSeek-R1 is overhyped and doesn't perform as well as claimed by those running the actual 685B-parameter model.
ollama is just a wrapper for llama.cpp that adds insane defaults.
Just use llama.cpp directly.
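For anyone who hasn't tried it, going direct is not much extra work; something like this, where the filename is a placeholder for whatever GGUF you downloaded yourself:

```sh
# OpenAI-compatible HTTP server: you choose the exact GGUF file, quant, and context size
llama-server -m ./Magistral-Small-2506-Q8_0.gguf -c 16384 --port 8080

# Or a quick one-off generation on the CLI
llama-cli -m ./Magistral-Small-2506-Q8_0.gguf -c 16384 -p "Hello"
```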