It's going to be a race to the bottom, they have no moat.
Especially now that they are second in the race (behind Anthropic), and a lot of free-to-download, free-to-use models are starting to be viable competitors.
Once new MacBooks and iPhones have enough memory onboard this is going to be a disaster for OpenAI and other providers.
I'm not sure they're scared of Anthropic - they're doing great work but afaict running into some scaling issues and really focused on winning over developers at the moment.
If I were OpenAI (or Anthropic, for that matter), I would remain scared of Google, which is now awake and able to dump Gemini 2.5 Pro on the market at prices I'm not sure anyone without their own hardware can compete with, and which has the infrastructure to handle everyone switching to them tomorrow.
Google is going to lap them. They haven't even started flexing their hardware muscle.
Third for coding, after Anthropic and Gemini, which was leading last I checked.
OpenAI is second in the race to Anthropic on some benchmarks (maybe?), but OpenAI still dwarfs Anthropic in distribution and popularity.
That's slowly changing. I know some relatively non-tech savvy young people using things like Claude for various reasons, so people are exploring options.
Very, very slowly.
OpenAI vs Anthropic on Google Trends
https://trends.google.com/trends/explore?date=today%203-m&q=...
ChatGPT vs Claude on Google Trends
https://trends.google.com/trends/explore?date=today%203-m&q=...
I wonder how much of this is brand name? Like Kleenex. Non-tech people might not search for LLM, generative AI, etc. ChatGPT may just be what people have heard of. I’m assuming OpenAI has a large advantage over Anthropic, and the name helps, but I bet the name is exaggerating the difference here a bit. Not everyone buys Kleenex branded Kleenex.
This is such a big difference, thank you for sharing it, I didn't expect the gap to be _that_ huge
While Mac unified-RAM inference is great for prosumers and up, I really don't foresee Apple making 128GB+ options affordable enough to be attractive for inference for the general public. The iPhone even less so, considering the latest is only at 8GB. Meanwhile the best model sizes will just keep growing.
Third behind Anthropic/Google. People are too quick to discount mindshare though. For the vast majority of the world's population AI = LLM = ChatGPT, and that itself will keep OpenAI years ahead of the competition as long as they don't blunder away that audience.
LLM inference is a race to the bottom, but the service layers on top aren't. People always pay much more for convenience; those are the things OpenAI focuses on, and they're harder to replicate.
My understanding was that OpenAI couldn't make money at their previous price point, and I don't think operating and training costs have gone down enough to make up for those shortcomings. So how are they going to make money by lowering the price by 80%?
I get that the point is to be the last man standing, poaching customers by lowering the price and perhaps attracting a few people who wouldn't have bought a subscription at the higher price. I just question how long investors can keep justifying pouring money into OpenAI. OpenAI is also the poster child for modern AI, so if they fail the market will react badly.
Mostly I don't understand Silicon Valley venture capital, but dumping prices, making wild purchases with investor money, and mostly only leading on branding: why isn't this a sign that OpenAI is failing?
OpenAI's Adam Groth credits "engineers optimizing inferencing" for the price drop: https://twitter.com/TheRealAdamG/status/1932440328293806321
That seems likely to me, all of the LLM providers have been consistently finding new optimizations for the past couple of years.
There was an article on here a week or two ago on batch inference.
Do you not think that batch inference gives at least a bit of a moat whereby unit costs fall with more prompts per unit of time, especially if models get more complicated and larger in the future?
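To make the unit-cost intuition concrete, here is a toy sketch. The numbers and the cost split are hypothetical, not any provider's real cost structure: it just assumes each forward pass has a roughly fixed cost (loading the model's weights from memory once per batch) plus a small marginal cost per prompt, so the average cost per prompt falls as more prompts are batched together.

```python
# Toy model of batch-inference economics (illustrative only).
# Assumption: one batched forward pass pays a fixed cost once
# (weight loads), plus a small marginal cost per prompt in the batch.

FIXED_COST_PER_BATCH = 1.0      # hypothetical cost of one forward pass
MARGINAL_COST_PER_PROMPT = 0.05 # hypothetical per-prompt compute cost

def cost_per_prompt(batch_size: int) -> float:
    """Average cost of serving one prompt at a given batch size."""
    return FIXED_COST_PER_BATCH / batch_size + MARGINAL_COST_PER_PROMPT

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}  cost/prompt={cost_per_prompt(b):.4f}")
```

Under these made-up numbers, a provider serving 256 prompts per batch pays roughly a twentieth of what a single-prompt deployment pays per request, which is the scale-begets-cheapness dynamic the comment above is asking about.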
For sure they're no longer the clear winner, but they try to stay just barely on top of the others.
Right now the new Gemini surpassed their o3 (barely) in benchmarks for significantly less money, so they cut pricing to stay competitive.
I bet they haven't released o4 not because it isn't competitive, but because they're playing the Nvidia game: release a new product that's just enough better to convince people to buy it. So IMO they're holding the full o4 model back to have something to release after the competition releases something better than their top horse.