> Although the Principia is thought to be “a monumental failure”, as said by Prof. Freeman Dyson
I'd like some elaboration on that. I failed to find a source.
Principia was written during the naive Logicist era of philosophy of mathematics, which couldn't foresee serious foundational decidability issues in logic like Gödel's incompleteness theorems or the Halting Problem. Formalism/Platonism and Constructivism are two streams that came out of Logicism as ways to fix the logical issues, and they're (very roughly speaking) the philosophical basis of classical mathematics and constructive mathematics today.
The way formalists (the mainstream mathematical community) dealt with the foundational issues was to study them very closely and precisely so that they could be ignored as much as possible. The philosophical justification is that even if a statement P is undecidable, ultimately, within the universe of mathematical truth, it is either true or false and nothing else, even though we may not be able to construct a proof either way.
Constructivists, on the other hand, took the opposite approach: they equated mathematical truth with provability, so undecidable statements P are such that they're neither true nor false, constructively. This means Aristotle's law of excluded middle (for any statement P, P or (not P)) no longer holds, and therefore constructivists had to rebuild mathematics on a different logical basis.
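To make the split concrete, here's a minimal Lean sketch (Lean 4 core, no libraries; my own example, not anything from Principia): the double negation of excluded middle has a constructive proof term, while excluded middle itself is only available as a classical axiom.

```lean
-- Constructively provable: the double negation of excluded middle.
theorem em_double_neg (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))

-- Excluded middle itself has no constructive proof term in general;
-- it has to be invoked as the classical axiom `Classical.em`.
example (P : Prop) : P ∨ ¬P := Classical.em P
```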
The issue with Principia is that it has no way to deal with problems like these, so the way it lays out mathematics no longer makes total sense, and its goals (its mathematical program) turned out to be impossible.
Note: this is an extreme oversimplification. I recommend Stanford Encyclopedia of Philosophy for a more detailed overview. E.g. https://plato.stanford.edu/entries/hilbert-program/
Nobody argues about the result of an addition because the computation is mechanistically verifiable. The same goes for statements that are properly formalized in logic. The goal was to have the same for all of mathematics. So incompleteness is not a problem per se -- even if it shook people so much at the time (because proof theory always works within a given system). Incompleteness is the battering ram that is used to break the walls of common sense.
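To make "mechanistically verifiable" concrete, a trivial check in a modern proof assistant (Lean here, purely as an illustration):

```lean
-- The kernel verifies both claims by computation;
-- there is nothing left to argue about.
example : 2 + 2 = 4 := rfl
example : 123 * 456 = 56088 := by decide
```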
If incompleteness isn't the killer of the Hilbert program, what is? The axiom of choice and the continuum hypothesis. Both lack any form of naturalness that would forestall philosophical arguing. Worse, so does rejecting them. There is such a wealth of intuitionistically absurd results implied by these systems -- most famously, there is the joke that “The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?”, when these 3 statements are _logically_ equivalent. So we're back to a mathematical form of epistemological anarchism: there is no universal axiomatic basis for doing mathematics; any justification for the use of one has to be found externally to mathematics.
I would add that there is/was a certain desire for categorical theories.
"In mathematical logic, a theory is categorical if it has exactly one model (up to isomorphism)."
(categorical is stronger than complete: second-order Peano arithmetic, for example, is categorical, while by Löwenheim–Skolem no first-order theory with an infinite model can be)
https://www.youtube.com/watch?v=9RD5D4swZfk - Possibly this.
TLDW: Gödel's incompleteness theorem is at odds with the goals of Principia.
I remember my Java IDE in undergrad warned me about an infinite loop, and this was before I learned about the diagonalization proof of the non-computability of the halting problem, one of my favourite proofs ever. The fact that not all programs and inputs can be shown to halt did not stop the engineer who wrote that guardrail for the IDE.
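For reference, the diagonalization argument, as a rough Java sketch (the `halts` method is hypothetical; the whole point is that no such method can exist):

```java
// Sketch only: assume, for contradiction, that a decider `halts` existed.
final class Diagonal {

    // Hypothetical: returns true iff running `source` on `input` halts.
    static boolean halts(String source, String input) {
        throw new UnsupportedOperationException("no such decider can exist");
    }

    // Feed a program its own source and do the opposite of the prediction.
    static void diag(String source) {
        if (halts(source, source)) {
            while (true) { } // predicted to halt, so loop forever
        }
        // predicted to loop, so halt immediately
    }

    public static void main(String[] args) {
        // diag run on its own source halts iff `halts` says it doesn't,
        // so `halts` is wrong on at least one input.
        diag("...the source of Diagonal...");
    }
}
```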
Surely the Principia and similar efforts will still yield useful results even if they cannot necessarily prove every true statement from the axioms?
Yes, you can't prove important properties of the class of all programs, but you can prove properties of smaller, limited classes of programs that you are interested in.
So the Java IDE was able to recognize an infinite loop of the kind you wrote using an algorithm that can be proven correct for a limited class of programs.
On the other hand, you can write a loop that decides whether to exit based on the return value of opaque calls to some entity external to the analyzer, and your IDE can't be expected to catch that.
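A toy illustration of that difference, using nothing more than javac's own reachability analysis (JLS §14.21) as the proven-correct "limited class" algorithm:

```java
import java.util.Scanner;

class Loops {
    static int provablyStuck() {
        while (true) { } // javac proves this never completes normally:
                         // no `return` is required here, and any statement
                         // placed after the loop is rejected as unreachable
    }

    static int notProvable(Scanner in) {
        // Termination depends on values from an opaque external source,
        // so no sound analyzer can decide it in general.
        while (in.nextInt() != 0) { }
        return 0;
    }
}
```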
Which is weird, because he used the formalism of Principia to actually state the theorem, or at least part of it.
Russell builds a logical system - it just can't ground mathematics. Gödel's paper is about the system in Russell's book.
Thanks. It appears, however, that Dyson considers the whole approach a failure (referring to Gödel as its demolisher). So while he is saying it about a book, ironically, it hardly seems applicable in this context anymore: by this reasoning, any program in Lean (and the Lean programming language itself) should be seen as "a monumental failure".
This is just my opinion, but reading about Bertrand Russell, my impression is that he dedicated his life to Principia Mathematica partially because he expected to find God in the foundations of mathematics, and when that didn't happen it drove him rather insane. And then Gödel shows up and basically knifes him on stage with the Incompleteness Theorem.
I don't know what you read about Russell, but in my own readings he has always been presented as a fervent atheist, so except with a far-fetched interpretation of "neutral monism" as some form of gnoseological divinity, it's hard to imagine such a character looking for any god.
Also, Russell himself ruined Frege's cathedral with his eponymous paradox; he was clearly among the best placed to understand how a thing like Gödel's incompleteness theorem could come along the way.
As for his relation to madness, his personal life was filled with turmoil from an early age. If anything, it seems that mathematics saved him, holding off his early desire for suicide.
https://plato.stanford.edu/entries/neutral-monism/
https://en.wikipedia.org/wiki/Copleston%E2%80%93Russell_deba...
Incidentally, his co-author A. N. Whitehead was not an atheist, as a reading of Science and the Modern World (from lectures at Harvard in 1926, I think) makes clear.
I would like it if you could refer me to that reading as well. I really know nothing about, uh, any of that, so I cannot judge. But your description strikes me as rather weird: "dedicating his life" seems a bit dramatic, since Principia is a pretty early work of his. He was active for 50-60 more years after he must have been "driven insane", as you say. Most of his famous works were written after that. Also, all of Gödel's famous results came after Russell finished with Principia. Not that he ever finished, but given that the Second Edition came 15 years after the First and mostly contained relatively minor fixes… it seems only logical to conclude that he wasn't pursuing the topic after the first publication, basically ever since realizing how big of a task it would be to try to formalize all of math like that.
I believe you are thinking of Cantor, regarding God and subsequent insanity. And it was Russell who knifed Frege. :-)