A stack of symbol tables is more machinery. It might also be hard to prove that the stack of symbol tables strategy is actually correct!
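To make "stack of symbol tables" concrete, here's a rough Python sketch of the usual idea (the class and method names are just made up for illustration): push a fresh table when you enter a scope, pop it when you leave, and look names up from the innermost table outward.

```python
# Illustrative sketch of a scope stack for name resolution.
class ScopeStack:
    def __init__(self):
        self.scopes = [{}]              # global scope at the bottom

    def push(self):
        self.scopes.append({})          # entering a new block/function

    def pop(self):
        self.scopes.pop()               # leaving it again

    def define(self, name, value):
        self.scopes[-1][name] = value

    def lookup(self, name):
        # Walk from innermost scope outward; first hit wins (shadowing).
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        raise NameError(name)

env = ScopeStack()
env.define("x", 1)
env.push()
env.define("x", 2)                      # shadows the outer x
assert env.lookup("x") == 2
env.pop()
assert env.lookup("x") == 1
```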
A big part of the value of the lambda calculus is its aggressive simplicity: you can do a lot of theoretical work with relatively few concepts.
The fewer the concepts and moving parts, the less proving is necessary. And then you can layer things on top and prove that they also fit into your calculus.
I think most LC work _isn't_ about implementing something based on LC so much as about providing theoretical foundations that show your work is sound. And in that mode you are likely to optimize for theoretical ease.
Yeah. It is coming back to me now: in lambda calc, there is no separate universe of values onto which lambda expressions get mapped. Instead, there is a process called beta reduction that rewrites lambda expressions into lambda expressions in normal form (not every expression has a normal form, but when reduction terminates, that is what it produces).
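For a standard worked example (where g and a stand for arbitrary expressions): the application (λf.λx.f(fx)) g a reduces first to (λx.g(gx)) a, then to g(ga). At that point nothing is left to reduce, so g(ga) is the normal form.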
The expression λf.λx.f(f(fx)), for example, is in normal form. The number 3 and the string "3" do not exist in the formalism, but, at least some of the time, the Church-numeral convention is used: λf.λx.x represents 0, λf.λx.fx represents 1, λf.λx.f(fx) represents 2, λf.λx.f(f(fx)) represents 3, and so on.
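If it helps to see that encoding run, here is a small Python sketch of the Church numerals (the names ZERO, SUCC, to_int are mine, purely for illustration):

```python
# Church numerals: a number n is "apply f to x, n times".
ZERO  = lambda f: lambda x: x                # λf.λx.x
ONE   = lambda f: lambda x: f(x)             # λf.λx.fx
TWO   = lambda f: lambda x: f(f(x))          # λf.λx.f(fx)
THREE = lambda f: lambda x: f(f(f(x)))       # λf.λx.f(f(fx))

# The encoding only means "3" once you choose an f and an x to plug in.
# Applying a numeral to the ordinary successor function and 0 recovers
# the familiar integer.
def to_int(n):
    return n(lambda k: k + 1)(0)

assert to_int(ZERO) == 0
assert to_int(THREE) == 3

# Successor on Church numerals: λn.λf.λx.f(nfx)
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
assert to_int(SUCC(TWO)) == 3
```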
Again, lambda calc precedes the digital computer.