Though the fact that LLMs fundamentally can't know whether they know something or not (without a later pass of fine-tuning on what they should know) is a pretty good argument against them being good knowledge bases.
No, it is not. In the mathematical limit this applies to literally everything. In practice you are not going to store video compressed with a lossless codec, for example.
Me forgetting, or never having "recorded", what necklace the other person wore during an important event is not at all similar to statistical text generation.
If they ask me the previous question, I can introspect/query my memory and tell with 100% certainty whether I know it or not - lossy compression aside. An LLM will just reply based on how likely a "yes" answer is, with no regard to whether it actually has that knowledge.
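As a minimal sketch of that point (assuming the Hugging Face transformers API, with gpt2 as a stand-in model and a made-up prompt, all my choices for illustration): the "answer" is just whichever continuation token comes out more probable, and nothing in the forward pass consults whether the underlying fact was ever stored.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM behaves the same way here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Hypothetical question the model has no stored fact about.
    prompt = "Question: Do you know what necklace she wore that evening? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token only

    probs = torch.softmax(logits, dim=-1)
    yes_id = tokenizer.encode(" Yes")[0]
    no_id = tokenizer.encode(" No")[0]

    # The reply is whichever continuation is statistically more likely;
    # there is no separate check of whether the fact was ever recorded.
    print(f"P(' Yes') = {probs[yes_id]:.4f}  P(' No') = {probs[no_id]:.4f}")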
You've evidently forgotten that you previously heard about false memories, and/or never considered that they happen to you as well (which would be very ironic).