mmcwilliams 1 day ago

I think it's easy to understand why people are overestimating the accuracy and performance of LLM-based output: it's currently being touted as the replacement for human labor in a large number of fields. Outside of software development there are fewer optimistic skeptics and far fewer nuanced takes on the tech.

Casually scrolling through TechCrunch I see over $1B in very recent investments into legal-focused startups alone. You can't push the message that the technology to replace humans is here and expect people to also know, intrinsically, that they need to do the work of checking the output. That runs counter to the massive public rollout of these products, which have a simple pitch: we are going to replace the work of human employees.

namaria 10 hours ago

I take the charitable view that some high-profile people painted themselves into a corner very publicly. I think they estimated that they could work out the kinks as they went, but it's becoming apparent that the large early gains were not sustainable. Now there appear to be some very fundamental limitations to what this architecture can achieve, and everyone involved has little option other than to keep doubling down. I expect this to blow up spectacularly pretty soon.