Maybe it takes some world modeling to do it as well as they do, but ultimately they are just predicting the next token. These things are not mutually exclusive.
The issue is whether they are "just" predicting the next token. When people call LLMs stochastic parrots, they are denying that they have any of these further capabilities. Modelling is a facet of understanding, so discovering that LLMs model the world should strongly raise your credence that they do understand.