Coders may want to look at translators for an idea of what might happen.
Translation software has been around for a couple of decades. It was pretty shitty. But about 10 years ago it got to the point where it could translate relatively accurately. However, it couldn't produce text that sounded like it was written by a human. A good translator (and there are plenty of bad ones) could easily outperform a machine. Their jobs were "safe".
I speak several languages quite well and used to do freelance translation work. I noticed that as the software got better, you'd start to see companies that, instead of paying you to translate, wanted to pay you less to "edit" or "proofread" a document pre-translated by machine. I never accepted such work, first because it sometimes took almost as much effort as translating from scratch, and second because I didn't want to do work where the focus wasn't on quality. But I saw the software steadily improving (and this was before ChatGPT), and I realized the writing was on the wall. So I decided not to become dependent on translation as an income stream, and moved away from it.
When LLMs came out, they started producing text that sounds like it was written by a native speaker (in major languages). Sure, it's not going to win any literary awards, but the vast, vast majority of translation work out there is commercial, not literary.
Several things have happened: 1) there's very little translation work available compared to before, because now you can pay just a few people to double-check machine-generated translations (which are fairly good to start with); 2) many companies aren't using humans at all, as the translations are "good enough" and a few mistakes won't matter that much; 3) the work that is available is high-volume and uninteresting, no longer a creative challenge (which is why I did it in the first place); 4) there's downward pressure on translation rates (which are typically per word); and 5) very talented translators (who are more like writers/artists) are still in demand for literary or highly creative work (e.g., a major marketing campaign), so the top 1% of translators still have their jobs. Niche language pairs that LLMs aren't trained on will also be safe for a while.
The profession will continue to exist, but it will keep shrinking until it's eventually a fraction of what it was 10 or 15 years ago.
(This is specifically translating written documents, not live interpreting which isn't affected by this trend, or at least not much.)
> When LLMs came out, they started producing text that sounds like it was written by a native speaker (in major languages).
While the general syntax of the output seems to be somewhat correct now, LLMs still don't know anything about those languages and keep mistranslating words due to their inherently English-centric design. A whole lot of concepts don't even exist in English, so these translation oracles can simply never render them successfully.
If I read a few minutes of LLM-translated text, there are always a couple of such errors.
I notice younger people don't see these errors because their language skills are weaker, and the LLMs reinforce their incorrect understanding.
I don't think this problem will go away as long as we keep pushing this inferior tech; instead, the languages will devolve to "fix" it.
Languages will morph into a one-to-one mapping of English, and all the cultural nuances will be lost to time.
> When LLMs came out, they started producing text that sounds like it was written by a native speaker (in major languages).
But they still often get things completely wrong, especially in high-context languages such as Japanese, where there often isn't a way to convey the necessary context in text. For example, when a Japanese live-streamer says "配信来てくれてありがとう", it means "Thank you for coming to (watch) the stream", but DeepL will give "Thanks for coming to the delivery." - because 配信 actually does mean "delivery" in ordinary circumstances, and it's just the word streamers idiomatically use to refer to a stream. No matter how much of the transcript you add before or after that, it doesn't communicate the essential fact that the text is a transcript of a livestream.
(And going the other way around, DeepL will insert a に which is grammatically correct but rarely actually uttered by livestreamers who are speaking informally and colloquially; and if you put "stream" in the English, it will be rendered as a loanword ストリーム which is just not what they actually say. Although I guess it should get credit for realizing that you don't mean a small river, which would be 小川.)
(Also, DeepL comes up with completely incomprehensible nonsense for 歌枠, where even basic dictionaries like Jisho get it right.)
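If you want to check this kind of thing yourself, here's a minimal sketch using DeepL's official Python package ("your-auth-key" is a placeholder, and the exact output may differ as DeepL updates its models):

    # pip install deepl
    import deepl

    # Placeholder key - substitute a real one from your DeepL API account.
    translator = deepl.Translator("your-auth-key")

    result = translator.translate_text(
        "配信来てくれてありがとう",  # "Thank you for coming to the stream"
        source_lang="JA",
        target_lang="EN-US",
    )
    # Prints something like "Thanks for coming to the delivery."
    print(result.text)

You can swap in 歌枠 or any other stream-slang phrase and compare the result against a dictionary entry.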
More obviously, they get pronouns and roles wrong all the time - they can't reliably tell whether someone is saying "I did X" or "you did X", because Japanese routinely omits the subject, so the answer depends on facts of the world outside the actual text (which contains no more information than the equivalent of "did X"). A human translator for, say, a video game cutscene may be able to fix these mistakes by observing what happens in the cutscene. The LLM cannot; no matter how good its model of how Japanese is spoken, it lacks this input channel.