atxtechbro 2 days ago

Hi Simon,

What explains the huge difference between the two pelicans riding bicycles? Was the rough one the small version running locally, and the pretty good one the bigger model through the API?

Thanks, Morgan

diggan 2 days ago

Ollama doesn't do proper naming for some reason, so `ollama pull magistral:latest` lands you with the q4_K_M quantized version (currently; subject to change).

Mistral's API defaults to `magistral-medium-2506` right now, which is running with full precision, no quantization.
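
If you want to verify what you actually pulled, something like this works (a rough sketch; the exact fields `ollama show` prints vary by version):

```
# Pull the default tag, then inspect the local copy;
# "ollama show" reports details such as quantization (e.g. Q4_K_M)
# and parameter count
ollama pull magistral:latest
ollama show magistral:latest
```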

samtheprogram 2 days ago

It's not only the quantization: what's available via ollama is magistral-small (for local inference), not the -medium variant.
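
For comparison, the medium variant is what Simon hit via the API; a minimal sketch of calling it (assuming a `MISTRAL_API_KEY` environment variable and Mistral's standard chat completions endpoint):

```
# Query magistral-medium-2506 through Mistral's hosted API
curl https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "magistral-medium-2506",
        "messages": [{"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}]
      }'
```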

otabdeveloper4 1 day ago

Nobody should ever be using ollama, for any reason.

It literally only makes everything worse and more convoluted with zero benefits.

jeffhuys 1 day ago

Could you elaborate?

redman25 1 day ago

Not the parent, but I would say bad defaults or naming. There are countless posts from newbies wondering why a model doesn't work as well as it should.

It's usually either because the context size is set very low by default, or because they didn't realize they weren't running the full model (ollama uses the distilled version in place of the full version but names it after the full version).
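
One workaround for the context default is to derive a variant with a bigger window; a rough sketch (the 32768 value and the `magistral-32k` name are arbitrary examples):

```
# Build a variant of the model with a larger context window
cat > Modelfile <<'EOF'
FROM magistral
PARAMETER num_ctx 32768
EOF

ollama create magistral-32k -f Modelfile
ollama run magistral-32k
```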

There's also been some controversy over not giving proper credit to llama.cpp, which ollama is/was a wrapper around.

kristianp 1 day ago

> ollama uses the distilled version

I've never used ollama, but perhaps you mean quantized and not distilled? Or do they actually use distilled versions?

cosmojg 1 day ago

They actually use distilled versions. The most egregious example is that they list all of the DeepSeek-R1 distillations, which are built on a variety of vastly different base models of widely varying sizes, as if they were alternative versions of DeepSeek-R1 itself. To this day, many users are left with the mistaken impression that DeepSeek-R1 is overhyped and doesn't perform as well as claimed by those who have been using the actual 685B-parameter model.
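
You can see this locally if you're curious (a sketch; tag names and the exact output of `ollama show` depend on the version): the smaller `deepseek-r1` tags report the distillation's base architecture rather than DeepSeek's own.

```
# The 8b tag is one of the distillations onto a different base model,
# not the 685B R1 itself; "ollama show" exposes the underlying
# architecture and parameter count
ollama show deepseek-r1:8b
```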

otabdeveloper4 1 day ago

ollama is just a wrapper for llama.cpp that adds insane defaults.

Just use llama.cpp directly.
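
A minimal sketch of going direct (the GGUF filename is a placeholder; flags per recent llama.cpp builds):

```
# Serve a GGUF with llama.cpp's built-in OpenAI-compatible server
llama-server -m ./Magistral-Small-2506-Q8_0.gguf -c 32768 --port 8080

# Or do a one-off prompt from the CLI
llama-cli -m ./Magistral-Small-2506-Q8_0.gguf -p "Generate an SVG of a pelican riding a bicycle"
```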

simonw 2 days ago

Yes, the bad one was Mistral Small running locally, and the better one was Mistral Medium via their API.