pupppet 8 days ago

If an LLM just finds patterns, is it even possible for an LLM to be GOOD at anything? Doesn't that mean at best it will be average?

5
bitpush 7 days ago

Humans are also almost always operating on patterns. This is why "experience" matters a lot.

Very few people are doing truly cutting-edge stuff - we call them visionaries. But most of the time, we're merely doing what's expected.

And yes, that includes this comment. It wasn't a creative or original thought at all. I'm sure hundreds of people have had a similar thought, and I'm probably parroting someone else's idea here. So if I can do it, why can't an LLM?

dgb23 7 days ago

The times we just operate on patterns are when we code boilerplate or very commonly written code. There's value in speeding this up, and LLMs help here.

But generally speaking I don't experience programming like that most of the time. There are so many things going on that have nothing to do with pattern matching while coding.

I load up a working model of the running code in my head and explore what it should be doing in a more abstract/intangible way and then I translate those thoughts to code. In some cases I see the code in my inner eye, in others I have to focus quite a lot or even move around or talk.

My mind goes to different places and experiences. Sometimes it's making new connections, sometimes it's processing a bit longer to get a clearer picture, sometimes it re-shuffles priorities. A radical context switch may happen at any time and I delete a lot of code because I found a much simpler solution.

I think that's a qualitative, insurmountable difference between an LLM and an actual programmer. The programmer thinks deeply about the running program and not just the text that needs to be written.

There might be different types of "thinking" that we can put into a computer in order to automate these kinds of tasks reliably and efficiently. But just pattern matching isn't it.

riknos314 8 days ago

My experience is that LLMs regress to the average of the context they have for the task at hand.

If you're getting average results you most likely haven't given it enough details about what you're looking for.

The same largely applies to hallucinations. In my experience, LLMs hallucinate significantly more when they are at, or pushed past, the limits of their context.

So if you're looking to get a specific output, your success rate is largely determined by how specific and comprehensive the context you give the LLM is.

jaccola 8 days ago

Most people (average and below average) can tell when something is above average, even if they cannot create above-average work themselves, so with RLHF it should be quite possible to achieve above-average output.

Indeed, it is likely already the case that in training, top-ranked links and the most popular videos are weighted higher, and these are likely to be better than average.
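The asymmetry the comment relies on (judging quality is easier than producing it) is exactly what reward modeling in RLHF exploits: raters only rank outputs, and a model is trained to score the preferred one higher. A minimal sketch of the Bradley-Terry pairwise loss commonly used for this (the function name and values are illustrative, not from any specific implementation):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: P(chosen beats rejected) = sigmoid(r_c - r_r).

    The rater never writes an answer; they only say which of two
    answers is better. Minimizing the negative log-likelihood pushes
    the reward model to score preferred answers higher.
    """
    p = 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(p)

# Model already ranks the human-preferred answer higher: small loss.
print(preference_loss(2.0, 0.5))
# Model ranks them the wrong way round: large loss.
print(preference_loss(0.5, 2.0))
```

Because the training signal is a comparison rather than a demonstration, the policy optimized against this reward can, in principle, be pulled above the average of its pretraining data - which is the commenter's point.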

lukan 8 days ago

There are bad patterns and good patterns. But whether a pattern is the right one for a specific task is a different question.

And what really matters is whether the task gets reliably solved.

So if they could actually manage this on average, with average quality... that would be a next-level game changer.

JackSlateur 8 days ago

Yes, AI is basically a random machine aiming for an average outcome.

AI is neat for average people, to produce average code, for average companies.

In a competitive world, using AI is a death sentence.