(If anything, it will now make more sense for scholars to write these books because LLMs will actually read them!)
I had a sensible chuckle just now thinking about the idea of humans writing books for AI to casually enjoy.
Yep, the article's entire argument about knowledge production becoming obsolete assumes that future LLMs won't develop to the point of actual personhood. It's written from a position of incomplete foundational knowledge.
Instead of framing this debate as being about having our jobs replaced by a machine, it's more useful to frame it as having our jobs and our value to society taken by a new ethnicity of vastly more capable and valuable competing jobseekers. That framing makes it easier to talk about solutions for preserving our political autonomy, for example by using the preservation of our rights against smarter LLMs as an analogy for the preservation of those LLMs' rights against even smarter LLMs beyond them.
There is absolutely no evidence that language models are "persons". When one is not executing a generation algorithm, it is not running. It's so easy to anthropomorphize these things, but that's not evidence; people anthropomorphize all kinds of things.
For these purposes it doesn't matter whether they are persons, and they don't need to be anthropomorphized: it only matters that LLMs can incorporate data from person-generated works into their output, whether to weight their responses or to be read back by an actual human.
It actually matters quite a bit that they are not persons, for the simple reason that LLM output cannot trivially be used as LLM training material without degrading the resulting models into eventual incoherence. This "model collapse" result was demonstrated somewhere in the last year or two.
There isn't, today, a good filter for such input beyond knowing whether it came from a person or from a probabilistic vector-distance algorithm. Perhaps we'll eventually have such a qualifier, which would make the distinction irrelevant in this context.
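The collapse dynamic is easy to see in a toy model. Below is a minimal sketch (my own illustration, not the experiment from any particular paper): fit a Gaussian to a finite sample, draw the next "generation" of training data from the fit, and repeat. Because each maximum-likelihood fit slightly underestimates and randomly perturbs the variance, the distribution's diversity drifts toward zero over generations. The sample size and generation count are arbitrary choices for the demo.

```python
import random
import statistics

random.seed(0)

N = 100            # samples per generation (arbitrary demo parameter)
GENERATIONS = 5000  # arbitrary demo parameter

def next_generation(samples):
    """Fit a Gaussian to the samples (MLE), then draw a fresh 'training set' from the fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # population std dev = the MLE estimate
    return [random.gauss(mu, sigma) for _ in range(len(samples))]

# Generation 0: "human" data drawn from a standard normal distribution.
samples = [random.gauss(0.0, 1.0) for _ in range(N)]
start_var = statistics.pvariance(samples)

# Every later generation trains only on the previous generation's output.
for _ in range(GENERATIONS):
    samples = next_generation(samples)

end_var = statistics.pvariance(samples)
print(f"variance of generation 0: {start_var:.4f}")
print(f"variance of generation {GENERATIONS}: {end_var:.2e}")
```

Real LLM training is vastly more complicated, but the same pressure applies: each generation can only reproduce (and then loses) the variation present in its predecessor's output.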
Rereading this it sounds like you're defining "person" as "capable of generating usable training output for LLMs."
Even if LLMs do become capable of generating usable training output for themselves, they will still not have human personhood.
Personhood as a moral or legal or consciousness definition, sure.
Personhood as the capacity to participate as an agent in a network of mutual recognition of personhood, however, is likely.
https://meltingasphalt.com/personhood-a-game-for-two-or-more...
I absolutely agree that it's reasonable to assume that zero of them are persons today.
What about more advanced ones that have yet to be invented? Will they be persons once they're built?
(For clarity I want to make sure you know I'm talking about de facto personhood as independent agents with careers and histories and not legal recognition as persons. Human history is full of illustrative examples of humans who didn't have legal personhood.)