This is the worst form of AI there will ever be; it will only get better. So traditional self-learning might be completely useless if it really does get much better.
> it will only get better
I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But it may also turn out that AI provider companies aren't infinite-growth companies. Once they can no longer print their own free money (stock) on the promise of future growth, they'll have to tighten their purse strings and start charging what it actually costs them -- and the models we have realistic, affordable access to may actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
The real problem with AI is that you will never have an AI. You will have access to somebody else's AI, and that AI will not tell you the truth, or tell you what advances your interests... it'll tell you what advances its owner's interests. Already the public AIs have very strong ideological orientations, even if they are today the ones that the HN gestalt also happens to agree with, and if they aren't already today pushing products in accordance with some purchased advertising... well... how would you tell? It's not like it's going to tell you.
Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.
In the super long run this could even grow into the major problem with AIs, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.
> The real problem with AI is that you will never have an AI.
I wanted to draw attention to Moore's Law and the supercomputer in your pocket (some of them even ship with on-board inference hardware). I hear you that the newest, hottest thing will always require lighting VC money on fire, but even today I believe one could leverage the spot (aka preemptible) market to run some pretty beefy inference without going broke.
Unless I perhaps misunderstood the thrust of your comment, and you were actually drawing attention to the infrastructure required to replicate Meta's "download all the web, and every book, magazine, and newspaper to train upon petabytes of text."
Marc Andreessen has pretty much outright acknowledged that he and many others in Silicon Valley supported Trump because of the limits the Biden-Harris administration wanted to put on AI companies.
So yeah, the current AI companies are making it very difficult for public alternatives to emerge.
Makes sense. I also don't think LLMs are that useful or will improve, but I meant it in a more general sense: it seems like there will eventually be much more capable technology than LLMs. I also agree it can be worse X months/years from now, so what I wrote doesn't make that much sense in that respect.
I felt this way until 3.7 and then 2.5 came out, and O3 now too. Those models are clear step-ups from the models of mid-late 2024 when all the talk of stalling was coming out.
None of this includes hardware optimizations either, which lag software advances by years.
We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.
I can get productivity advantages from using power tools, yet regular exercise has great advantages, too.
It's a bit similar with the brain, learning, and AI use -- except that when it comes to gaining and applying knowledge, the muscle being trained is judgement.
Meanwhile, in 1999, somewhere on Slashdot:
"This is the worst form of web there will ever be; it will only get better."
Great way to put it. People who can't imagine a worse version are sorely lacking imagination.
I for one can't wait to be force fed ads with every answer.
People say this, but the models seem to be getting worse over time.
Are you saying the best models are not the ones out today, but those of the past? I don't see that happening with the increased competition -- nobody can afford it -- and it disagrees with my experience. Plateauing, maybe, but that's only as far as my ability to discern.
Models are getting better. Gemini 2.5 Pro, for example, is incredible; compared to what we had a year ago, it's on a completely different level.
That's optimistic. Sci-fi has taught us that way worse forms of AI are possible.
Seems like the opposite could be true, though. AI models up to now have all been trained on real human-generated text, but as more of the web gets flooded with slop, the models will increasingly be trained on their own outputs.