Is the number of em-dashes in this marketing copy indicative of the kind of output that the model produces? If so, might want to tone it down a bit.
> Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or — if needed — delightfully eccentric copy.
This meme that humans don’t use em dashes needs to die.
It’s an extremely useful tool in writing and I’ve been using it for decades.
Same here. The em dash has been maybe my favorite punctuation since at least the early 2000s. All the em dash output from LLMs looks really natural to me.
That's very weird. I, on the other hand, don't remember noticing them or using them before the advent of ChatGPT. Maybe it's a cultural thing.
It makes sense that humans would have been using it, though; ChatGPT learned from us, after all.
I love a good em-dash, but this page overuses them (nearly 1:1 ratio of em-dashes to commas!) and puts them in places where they just do not belong.
But the em dashes — if appreciated — are delightfully eccentric and whimsical!
Unless you're a lawyer. We love 'em.
As a journalist, same!
Also a journalist. I use em-dashes all the time
Really, anyone who writes for a living. I have a referee report on a paper asking me to correct something to an em-dash.
That is just Mistral's marketing style. You see it on a lot of their pages. The model output doesn't share the same love for the long dash.
We don’t have em dashes as punctuation in French — commas are usually used instead — so we get overly excited about using them when we can — everybody likes novelty.
I do not know but sometimes when I type "-" and press space, LibreOffice converts it to an em-dash. I get rid of it so people won't confuse me with an LLM.
It's bizarre.
The first sentence is "Announcing Magistral — the first reasoning model by Mistral AI — excelling in domain-specific, transparent, and multilingual reasoning." and those should clearly be commas.
And this sentence is just flat-out wrong: "Lack of specialized depth needed for domain-specific problems, limited transparency, and inconsistent reasoning in the desired language — are just some of the known limitations of early thinking models."