Oh, finitism is perfectly fine by me. I have a lot of sympathy for it!
What is silly is not finitism (or ultrafinitism), but trying to apply concepts like 'regular language' or 'context free language' without modification in such a setting.
I suspect you should be able to suitably alter most of these concepts to make sense in the strictly finite setting. But they are going to become a lot more complicated.
For a similar flavour, you can also check out the differences and similarities between differential equations and difference equations.
Right, but the “finite setting” is the real world. Yes, if we talk about C++ as a mathematical abstraction, maybe a C++ program containing a googolplex of tokens could be, in a theoretical sense, syntactically and semantically valid. But such a program can’t exist in the real world.
Let’s say a compiler has a (very plausible) implementation restriction that the source code of a program can be no longer than 2^32 bytes. Then, even if the abstract grammar is (e.g.) an unbounded context-free language, the actually accepted language has a finite upper bound on length, and it is well known that imposing a length bound turns a context-free language into a regular (indeed finite) language. This doesn’t require us “to suitably alter most of these concepts to make sense in the strictly finite setting”, because formal language theory in an infinite setting already permits constraining languages to be finite, and we have a very good understanding of what the results of that are. It doesn’t require any new or different math, just applying the same math.
Now, it is true that we will say certain algorithms only work for regular languages, not context-free ones. But once we impose a finite bound on length, those algorithms do work in principle. In practice, of course, they are likely to be impractically slow, but the formalism we are talking about (the Chomsky hierarchy) is based on computability (can the question be answered in a finite but unbounded amount of time?), not on asymptotic computational complexity, nor on real-world performance, which isn’t the same thing, as e.g. galactic algorithms demonstrate.
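To make that concrete, here is a toy Python sketch (the names, the tiny length cap, and the choice of balanced parentheses are mine, purely for illustration): it takes balanced parentheses, a textbook context-free language, imposes a small length bound, enumerates the now-finite language, and packs it into a trie-shaped acceptor, essentially a DFA, which is exactly the kind of object regular-language machinery handles.

```python
from itertools import product

MAX_LEN = 8  # stand-in for a compiler's finite limit on source length

def is_balanced(s: str) -> bool:
    """Reference recognizer for the context-free language of balanced parentheses."""
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

# With a length bound, the language is finite, so it can be listed exhaustively...
finite_language = {
    "".join(w)
    for n in range(MAX_LEN + 1)
    for w in product("()", repeat=n)
    if is_balanced("".join(w))
}

# ...and packed into a trie-shaped acceptor (a DFA without the explicit dead state).
def build_acceptor(words):
    root = {"accept": False, "edges": {}}
    for word in words:
        node = root
        for c in word:
            node = node["edges"].setdefault(c, {"accept": False, "edges": {}})
        node["accept"] = True
    return root

def accepts(node, s: str) -> bool:
    for c in s:
        if c not in node["edges"]:
            return False  # fell off the trie: the implicit dead state
        node = node["edges"][c]
    return node["accept"]

acceptor = build_acceptor(finite_language)
assert accepts(acceptor, "(())()")       # balanced and within the bound
assert not accepts(acceptor, "(()")      # not balanced
assert not accepts(acceptor, "()" * 10)  # balanced, but over the length bound
```

A real compiler obviously wouldn’t enumerate anything; the point is just that once the length bound is in place, nothing beyond finite-automaton power is needed in principle.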
And those formalisms all become silly. Computability also becomes trivial, if you only have a finite amount of memory: either your program halts after a finite amount of time, or it enters a loop after at most 2^(number of bits in your memory) steps.
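A minimal sketch of that pigeonhole argument, assuming a deterministic machine whose configurations we can enumerate (the names and the 3-bit toy programs are made up for illustration):

```python
def halts(step, initial_state, state_space_size):
    """Decide halting for a deterministic machine with finitely many configurations.

    `step` maps a configuration to its successor, or to None when the machine halts.
    With at most `state_space_size` distinct configurations, a run that has not
    halted after that many steps must have revisited a configuration, so it loops.
    """
    seen = set()
    state = initial_state
    for _ in range(state_space_size + 1):
        if state is None:
            return True   # reached a halting configuration
        if state in seen:
            return False  # configuration repeated: it will loop forever
        seen.add(state)
        state = step(state)
    return False          # unreachable if state_space_size is correct (pigeonhole)

# Two 3-bit "programs": one counts up and halts, the other cycles forever.
count_up = lambda s: None if s == 7 else s + 1
spin     = lambda s: (s + 2) % 8

print(halts(count_up, 0, 8))  # True
print(halts(spin, 1, 8))      # False
```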
> Computability also becomes trivial, if you only have a finite amount of memory
I disagree. Computability isn't trivial if you have a finite amount of memory.
In the real world, we have machines with finite space and time. It is far from trivial to work out what problems they can and can't solve. In very many cases, all we can say is "we don't know how to solve this problem, but for all we know there is a way of solving it which nobody has discovered yet". For many problems, a century from now we will likely be no closer to knowing whether they are solvable than we are today.
Yes, if you actually had a machine with infinite storage, and you had an eternity to wait for its computations to complete, then you could always eventually get a decisive answer to the question of whether a machine could solve a problem given some finite bound on storage and time. But that's a very unusual sense of "trivial": essentially, "this problem is trivial if we assume humans are immortal beings with infinite resources".
Now you are mixing things up.
> It is far from trivial to work out what problems they can and can't solve.
Yes, and I am exactly saying that if you want to talk about explicitly finite machines, you need more sophisticated concepts. Formally defined 'computability' (with the usual definition) is a silly notion for these finite machines, because it captures nothing of the subtlety. It's trivial for finite machines. Trivial concepts are seldom useful.
The standard formal definition of computability doesn't care about your finite human life. Similar for the standard formal definition of 'regular language' etc.
It seems like you are agreeing with me, but don't really know the terms very well.
> Formally defined 'computability' (with the usual definition) is a silly notion for these finite machines, because it captures nothing of the subtlety. It's trivial for finite machines.
Computability is relative to the definition of an abstract machine: e.g. there are problems which aren’t computable by a standard Turing machine, but which are computable by, say, a Turing machine with an oracle, or a Turing machine that can execute supertasks. The halting problem for a standard Turing machine is computable by a Turing machine with an oracle for the first-order halting problem (though that machine’s own halting problem remains uncomputable for it).
And if “computability” can be meaningful for abstract machines stronger than a Turing machine, it can also be a meaningful concept for weaker machines.
e.g. a problem that cannot be solved in less than Ω(n^2) space isn’t computable by a linear bounded automaton, but problems solvable in O(n) space are computable by an LBA.
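To make the “weaker machine” direction concrete, here is a hedged Python sketch (not a formal LBA, just an analogue whose working storage stays within the input size; the function names are mine): it recognizes a^n b^n c^n, the textbook context-sensitive language, which is decidable in linear space and hence by an LBA, even though it isn’t context-free.

```python
def accepts_anbncn(s: str) -> bool:
    """Recognize { a^n b^n c^n : n >= 0 }, a textbook context-sensitive language.

    The working storage is three counters plus a phase marker, comfortably within
    the linear space an LBA is allowed, even though the language is not context-free.
    """
    order = "abc"
    counts = {"a": 0, "b": 0, "c": 0}
    phase = "a"
    for ch in s:
        if ch not in counts:
            return False
        if order.index(ch) < order.index(phase):
            return False  # letters must appear in the order a...b...c...
        phase = ch
        counts[ch] += 1
    return counts["a"] == counts["b"] == counts["c"]

assert accepts_anbncn("aaabbbccc")
assert not accepts_anbncn("aaabbbcc")  # counts don't match
assert not accepts_anbncn("abcabc")    # letters out of order
```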
> but don't really know the terms very well.
I think the same about you
> e.g. a problem that cannot be solved in less than Ω(n^2) space isn’t computable by a linear bounded automaton, but problems solvable in O(n) space are computable by an LBA.
You are right that there are lots of interesting mysteries to explore. But the tools we commonly use only work for (potentially) infinite machines.
Alas, computability for all finite machines is trivial. And big-O notation is another silly one for finite machines: it's all O(1) for them. Big-O notation is too blunt a tool to deal with finite machines, at least by itself.
You seem to be using the word “computability” to mean “computable by an abstract machine of Turing degree 0”, whereas I am using it in the broader sense of “computable by an abstract machine of specified type”. As I said, in that broader sense, computability is non-trivial for finite machines.
And that broader sense is very standard, you’ll find heaps of papers/etc containing variations of the phrase “computable by a Turing machine with an oracle”… but if that oracle is e.g. for the standard (Turing degree 0) halting problem, then that halting problem is computable by such a machine.
You were earlier claiming I wasn’t familiar with the definitions, but you are giving me the impression you aren’t familiar with “computable” as a relative term.
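To spell out what “computable relative to an oracle” means operationally, here is a toy Python sketch (the names are mine, and the oracle itself is hypothetical; no real function can implement it, only the reduction around it is concrete):

```python
from typing import Callable

# (program source, input) -> "does it halt?"  -- the oracle's hypothetical job.
HaltingOracle = Callable[[str, str], bool]

def decide_halting_relative_to(oracle: HaltingOracle,
                               program: str, program_input: str) -> bool:
    """Relative to an oracle for the ordinary (degree-0) halting problem,
    that same halting problem is computable: the whole 'algorithm' is one query."""
    return oracle(program, program_input)

# What such an oracle machine still cannot decide is its *own* halting problem;
# that needs an oracle one Turing degree up, which is the usual jump argument.
```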