eru 5 days ago

> Its syntax is, by definition, regular.

No, not at all. Regexes describe regular languages, but that doesn't mean that their own syntax needs to be a regular language.

E.g. many regexes allow parens, so you can have both 'ab|c' and 'a(b|c)'. But if you require your parens to be balanced, that's no longer a regular language.

eterm 4 days ago

> if you require your parens to be balanced, that's no longer a regular language.

TIL: https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...

As someone who has only recently learned the formal definition of a regular language (thanks to https://www.youtube.com/watch?v=9syvZr-9xwk, as mentioned on this board recently), I'm interested in the formal proof of this.

It feels intuitively true, but I haven't finished the course yet and therefore haven't come across it and can't yet reason well about it.

Is the problem then that in trying to "encode" whether brackets are balanced, we would need infinite "levels" of how many more brackets we've opened than closed (i.e. a stack), and that violates the finite nature of finite automata?

So I've googled it now, and I've found the result is due to the "Pumping lemma": https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...

gbacon 4 days ago

Pulling on the thread of your intuition about needing infinite levels: a regular language is one that can be recognized by a finite automaton, so requiring infinite memory or arbitrarily high counting suggests that the language is not regular. That rule of thumb isn’t perfect, though; we prove it properly below.

Assume for the purpose of contradiction that our language L of balanced parentheses is regular. Then L has a pumping length p ≥ 1 such that every w ∈ L where |w| ≥ p can be decomposed as w = xyz where |y| ≥ 1 and |xy| ≤ p. It is called the pumping lemma because we can “pump” the non-empty substring y either up or down, i.e., xy⁰z = xz and xyⁿz for every n ≥ 0 are also all in L. In formal languages, exponentiation notates character or string repetition.

We don’t know p and may not cherry-pick it. Pretend you are playing a game against an adversary who gets to select an arbitrary p ≥ 1, and then you use that p to choose a w ∈ L that spoils the regular language party. With the language of balanced parentheses, no matter what p the adversary selects, you can force the xy prefix to contain only left parentheses by choosing w = (ᵖ)ᵖ. The substring y cannot be empty, so pumping down gives xy⁰z = (ᵖ⁻ᵏ)ᵖ with k = |y| ≥ 1, which is not in L, a contradiction.
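
If it helps make the contradiction concrete, here is a brute-force sanity check of the argument in Python; p = 7 is just a stand-in for whatever pumping length the adversary selects:

    def balanced(s):
        # Balanced iff the depth never dips below zero and ends at zero.
        depth = 0
        for c in s:
            depth += 1 if c == "(" else -1
            if depth < 0:
                return False
        return depth == 0

    p = 7  # stand-in for the adversary's pumping length
    w = "(" * p + ")" * p
    # Every decomposition w = xyz with |xy| <= p and |y| >= 1 puts y
    # inside the leading run of '(', so pumping down must break balance.
    for i in range(p + 1):             # |x| = i
        for j in range(i + 1, p + 1):  # |xy| = j, so |y| = j - i >= 1
            x, y, z = w[:i], w[i:j], w[j:]
            assert not balanced(x + z)  # xy^0 z falls out of L
    print("no decomposition survives pumping down")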

One of my favorite test questions for automata theory classes is to have ChatGPT attempt a proof that a language is not regular and then have students correct the proof. You linked to a lecture by Michael Sipser, and the undergraduate version of the course I teach uses his excellent textbook.

https://www.amazon.com/Introduction-Theory-Computation-Micha...

His lecture on the pumping lemma for regular languages is

https://youtu.be/KAySmSEGc9U?t=1807

eterm 4 days ago

I've just watched that video, thank you, and it's interesting that he shows an example where my intuition that counting leads to non-regularity fails:

We've seen that you can't detect whether a string of 0's and 1's has a balanced number of 0's and 1's with an FA/regular expression; however, you CAN detect whether it has a balanced number of "01" and "10" substrings.

Proving the 01 and 10 balancing case was left as an exercise for the reader, which I'll try to work through here; hopefully I've not made a mistake. My intuition says it is equivalent to checking whether the string starts and ends with the same symbol, because a "01" leaves you having last read a 1, and therefore requires a "10" substring to get back to having last read a 0, and vice versa. (I sanity-check this by brute force below.)

I believe the FA can therefore be constructed with these transitions:

    Q0 -> epsilon -> Q1
    Q0 -> epsilon -> Q3
    Q1 -> 0 -> Q2
    Q2 -> 1 -> Q1
    Q1 -> epsilon -> Q5
    Q3 -> 1 -> Q4
    Q4 -> 0 -> Q3
    Q3 -> epsilon -> Q5
    ( Starting Q0, Accepting Q5 )
    
Or presented differently, (0(0U1)0) U (1(0U1)1) U epsilon
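
And the brute-force check of the starts-and-ends intuition (not a proof, just exhausting all binary strings up to length 9 in Python):

    from itertools import product

    def equal_01_10(s):
        return s.count("01") == s.count("10")

    def same_ends(s):
        return s == "" or s[0] == s[-1]

    for n in range(10):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            assert equal_01_10(s) == same_ends(s)
    print("intuition holds for all strings up to length 9")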

gbacon 2 days ago

Your intuition about starting and ending with the same symbol is correct. This is a good example of why the unbounded memory rule of thumb is not perfect.

The NFA you described recognizes

    (01)* ∪ (10)*
All nonempty strings in the above language begin and end with different symbols and have even length. (Balanced means not only equal in number but properly nested, e.g., )()()( and ())( have equal numbers of left and right parentheses but are not balanced.) The empty string ε does belong to the language, and your NFA (but not your regular expression) correctly accepts it. The language from Sipser’s lecture has strings of odd length as well, the shortest being 0 and 1, which have an equal number (namely, zero) of 01 and 10 substrings.

Your regular expression is close. It looks like HN’s markdown formatting ate your Kleene stars. Always account for the empty string, and in your case it does need to appear explicitly.

    ε ∪ 0 ∪ 1 ∪ 0(0 ∪ 1)*0 ∪ 1(0 ∪ 1)*1
Because this is a simple enough language, I suggest building a DFA rather than an NFA. You know that your automaton will have two branches: one for strings beginning with 0 and the other for those that begin with 1. That accounts for three states so far, and your initial state q0 has both of its transitions defined. Let’s say that q1 is the state after reading the initial 0. (Remember that you can interpret each state in a finite automaton’s transition graph as “I have read …”) When you’re in q1 and read a 0, where do you go? When you’re in q1 and read a 1, where do you go? You’ll know you’ve completed this branch when you have defined 0 and 1 transitions for all its states. Then move on to completing the branch for strings that begin with 1. The intuition is that it ought to have a structure similar to the first branch’s. Finally, label your final or accept states. Always remember to account for the empty string!

What’s really interesting is that the NFA you defined has, after conversion to a DFA and minimization, almost the same structure as the correct DFA: the only difference is which states are accept states.
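
For concreteness, here is a minimal sketch of that correct DFA in Python (the state names are my own labels for “began with x, last read y”):

    from itertools import product

    # Five states: start, plus one per (first symbol, last symbol) pair.
    TRANSITIONS = {
        ("start", "0"): "0..0", ("start", "1"): "1..1",
        ("0..0", "0"): "0..0", ("0..0", "1"): "0..1",
        ("0..1", "0"): "0..0", ("0..1", "1"): "0..1",
        ("1..1", "1"): "1..1", ("1..1", "0"): "1..0",
        ("1..0", "1"): "1..1", ("1..0", "0"): "1..0",
    }
    ACCEPT = {"start", "0..0", "1..1"}  # "start" accepts the empty string

    def accepts(s):
        state = "start"
        for c in s:
            state = TRANSITIONS[(state, c)]
        return state in ACCEPT

    # It agrees with "equal numbers of 01 and 10 substrings" on short strings:
    for n in range(10):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            assert accepts(s) == (s.count("01") == s.count("10"))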

anyfoo 4 days ago

You're right!

skissane 5 days ago

> But if you require your parens to be balanced, that's no longer a regular language

From a certain pedantic perspective, it still is a regular language, since there is a finite (implementation-dependent) upper bound on nesting depth, and balanced parens with a finite upper bound on nesting depth are regular.
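
As a minimal sketch of that point in Python: build an ordinary regular expression from the recurrence B₀ = {ε}, Bₖ = ( '(' Bₖ₋₁ ')' )*, which matches balanced parens of nesting depth at most k:

    import re

    def bounded_balanced(depth):
        # B_0 matches only the empty string; each level wraps the previous one.
        inner = ""
        for _ in range(depth):
            inner = r"(?:\(" + inner + r"\))*"
        return inner

    pat = re.compile(bounded_balanced(3))
    assert pat.fullmatch("(()())()")      # balanced, depth 2
    assert not pat.fullmatch("(((())))")  # balanced, but depth 4 > 3
    assert not pat.fullmatch("(()")       # not balanced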

OTOH, one can appeal to the linguistic distinction between competence and performance, and say that regexes are irregular in pure competence, even though all (practically usable) languages turn out to be regular once we constrain them by performance as well.

eru 5 days ago

> From a certain pedantic perspective, it still is a regular language, since there is a finite (implementation dependent) upper bound on nesting depth, and balanced parens with a finite upper bound of nesting is regular.

That's a silly point, because it makes all languages trivially finite, even monsters like C++. (Finiteness is an even stronger restriction than regularity.)

skissane 4 days ago

Is it silly? Well, if a person feels drawn to an ultrafinitist philosophy of mathematics, they may feel it is important to emphasise the inherent finitude of all creations of inherently finite human minds, formalisms such as programming languages included.

eru 4 days ago

Oh, finitism is perfectly fine by me. I have a lot of sympathy for it!

What is silly is not finitism (or ultrafinitism), but trying to apply concepts like 'regular language' or 'context free language' without modification in such a setting.

I suspect you should be able to suitably alter most of these concepts to make sense in the strictly finite setting. But they are going to become a lot more complicated.

For a similar flavour, you can also check out the differences and similarities between differential equations and difference equations.

skissane 4 days ago

Right, but the “finite setting” is the real world. Yes, if we talk about C++ as a mathematical abstraction, maybe a C++ program containing a googolplex of tokens could be in a theoretical sense syntactically and semantically valid. But such a program can’t exist in the real world.

Let’s say a compiler has a (very plausible) implementation restriction that the source code of a program can be no longer than 2^32 bytes. Then, even if the abstract grammar is (e.g.) an unbounded context-free language, the actually accepted language has a finite upper bound on length, and it is well known that applying a length constraint turns a context-free language into a regular language. This doesn’t require us “to suitably alter most of these concepts to make sense in the strictly finite setting”, because formal language theory in an infinite setting permits constraining languages to be finite, and we have a very good understanding of what the results of that are. This doesn’t require any new or different math, just applying the same math.

Now, it is true that we will say certain algorithms only work for regular languages, not context-free ones, but once we impose a finite bound on length, those algorithms do actually work in principle. In practice, of course, they are likely impractically slow, but the formalism we are talking about (the Chomsky hierarchy) is based on computability (can it answer the question in a finite but unbounded amount of time), not asymptotic computational complexity (nor real-world performance, which isn’t the same thing, as e.g. galactic algorithms demonstrate).

eru 4 days ago

And those formalisms all become silly. Computability also becomes trivial if you only have a finite amount of memory: either your program halts after a finite amount of time, or it enters a loop after at most 2^(number of bits in your memory) steps.
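
Here's that argument as a sketch in Python, for any deterministic machine with finitely many states (the step function and state encoding are hypothetical stand-ins):

    def halts(start, step):
        # step(state) returns the next state, or None once the machine halts.
        seen = set()
        state = start
        while state is not None:
            if state in seen:
                return False  # revisited a state, so it loops forever
            seen.add(state)
            state = step(state)
        return True

    assert halts(0, lambda s: s + 1 if s < 5 else None)  # counts to 5, then halts
    assert not halts(0, lambda s: (s + 1) % 8)           # 3-bit counter loops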

skissane 3 days ago

> Computability also becomes trivial, if you only have a finite amount of memory

I disagree. Computability isn't trivial if you have a finite amount of memory.

In the real world, we have machines with finite space and time. It is far from trivial to work out what problems they can and can't solve. In very many cases, all we can say is "we don't know how to solve this problem, but for all we know there is a way of solving it which nobody has discovered yet". There are many problems for which a century will pass from today and we'll likely be no closer to knowing whether they are solvable than we are now.

Yes, if you actually had a machine with infinite storage, and you had an eternity to wait for its computations to complete, then you could always eventually get the decisive answer to the question of whether a machine could solve a problem given some finite bound on storage and time. But that's a very unusual sense of "trivial", essentially "this problem is trivial if we assume humans are immortal beings with infinite resources".

eru 3 days ago

Now you are mixing things up.

> It is far from trivial to work out what problems they can and can't solve.

Yes, and I am exactly saying that if you want to talk about explicitly finite machines, you need more sophisticated concepts. Formally defined 'computability' (with the usual definition) is a silly notion for these finite machines, because it captures nothing of the subtlety. It's trivial for finite machines. Trivial concepts are seldom useful.

The standard formal definition of computability doesn't care about your finite human life. Similar for the standard formal definition of 'regular language' etc.

It seems like you are agreeing with me, but don't really know the terms very well.

skissane 3 days ago

> Formally defined 'computability' (with the usual definition) is a silly notion for these finite machines, because it captures nothing of the subtlety. It's trivial for finite machines.

Computability is relative to the definition of an abstract machine. E.g. there are problems which aren’t computable on a standard Turing machine but which are computable on a Turing machine with an oracle, or on a Turing machine that can execute supertasks. The halting problem for a standard Turing machine is computable by a Turing machine with an oracle for the first-order halting problem (though that machine’s own halting problem remains uncomputable for it).

And if “computability” can be meaningful for abstract machines stronger than a Turing machine, it can also be a meaningful concept for weaker machines.

e.g. a problem for which the fastest possible algorithm is O(n^2) isn’t computable by a linear bounded automaton, but problems for which the fastest possible algorithm is O(n) or O(log n) are computable by an LBA

> but don't really know the terms very well.

I think the same about you

eru 3 days ago

> e.g. a problem for which the fastest possible algorithm is O(n^2) isn’t computable by a linear bounded automaton, but problems for which the fastest possible algorithm is O(n) or O(log n) are computable by an LBA

You are right that there's lots of interesting mysteries to explore. But the tools we commonly use only work for (potentially) infinite machines.

Alas, computability for all finite machines is trivial. And big-O notation is another silly one for finite machines: it's all O(1) for them. Big-O notation is too blunt a tool to deal with finite machines, at least by itself.

skissane 3 days ago

You seem to be using the word “computability” to mean “computable by an abstract machine of Turing degree 0”, whereas I am using it in the broader sense of “computable by an abstract machine of specified type”. As I said, in that broader sense, computability is non-trivial for finite machines.

And that broader sense is very standard, you’ll find heaps of papers/etc containing variations of the phrase “computable by a Turing machine with an oracle”… but if that oracle is e.g. for the standard (Turing degree 0) halting problem, then that halting problem is computable by such a machine.

You were earlier claiming I wasn’t familiar with the definitions, but you are giving me the impression you aren’t familiar with “computable” as a relative term.