> I think this too easily skips over the fact that the abstractions are based on a knowledge of how things actually work - known with certainty.
Models ≠ knowledge, and a high degree of certainty is not certainty. This is tiring.
This seems like a misreading of the comment. The models and knowledge of arrays, classes, etc. are known with "arbitrarily high" certainty because they were designed by humans, using native instruction sets that were also designed by humans. Even if this knowledge is specialized, it is readily available. OTOH, nobody has a clue how neurons actually work, nobody has a working model of even the simplest animal brains, and any supposed model of the human mind is at best unfalsifiable. There's a categorical epistemic difference.
But doesn't this argument defeat itself? We cannot, a priori, know very much at all about the world. There is very, very little we can "know" with certainty -- that's the whole reason Descartes resorted to the cogito argument in the first place. You and GP just draw the line in different places.
Yes, I agree completely. I think the a priori/a posteriori distinction is always worth making, though.
This really does matter a lot more when floating signifiers get involved; I'm not actually contesting that our models of electrical engineering describe reality quite well.