DanHulton 1 day ago

> it will only get better

I wanted to highlight this assumption, because that's what it is, not a statement of truth.

For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.

But it may also turn out that AI providers aren't infinite-growth companies. Once they can no longer print their own free money (stock) on the promise of future growth, they'll have to tighten their purse strings and charge what it actually costs them to serve these models, and the models we have realistic, affordable access to may actually DECREASE in capability.

I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.

jerf 1 day ago

The real problem with AI is that you will never have an AI. You will have access to somebody else's AI, and that AI will not tell you the truth, or tell you what advances your interests... it'll tell you what advances its owner's interests. Already the public AIs have very strong ideological orientations, even if they are today the ones that the HN gestalt also happens to agree with, and if they aren't already today pushing products in accordance with some purchased advertising... well... how would you tell? It's not like it's going to tell you.

Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.

In the super long run this could even grow into the biggest problem with AIs, but based on how slow humanity in general has been to pick up on this dynamic in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.

mdaniel 12 hours ago

> The real problem with AI is that you will never have an AI.

I wanted to draw attention to Moore's Law and the supercomputer in your pocket (some of them even ship with on-board inference hardware). I hear you that the newest, hottest thing will always require lighting VC money on fire, but even today I believe one could leverage the spot (aka preemptible) market to run some pretty beefy inference without going broke.

Unless I perhaps misunderstood the thrust of your comment and you were actually drawing attention to the infrastructure required to replicate Meta's "download all the web, and every book, magazine, and newspaper to train upon petabytes of text."

jimbokun 1 day ago

Marc Andreessen has pretty much outright acknowledged that he and many others in Silicon Valley supported Trump because of the limits the Biden-Harris administration wanted to put on AI companies.

So yeah, the current AI companies are making it very difficult for public alternatives to emerge.

ozgrakkurt 1 day ago

Makes sense. I also don’t think LLMs are that useful or improving much, but I meant it in a more general sense: it seems like there will eventually be much more capable technology than LLMs. I also agree it could be worse X months/years from now, so what I wrote doesn’t make that much sense in that light.

Workaccount2 1 day ago

I felt this way until 3.7 and then 2.5 came out, and O3 now too. Those models are clear step-ups from the models of mid-late 2024 when all the talk of stalling was coming out.

None of this includes hardware optimizations either, which lags software advances by years.

We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.