gorgoiler 18 hours ago

On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?

  x = 2 + 2
  y = 2 * 2
  z = f(x, y)
  print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?

*And superficially off the topic of this thread, but possibly not.
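
For concreteness, the version of this that does exist today is still opt-in. A minimal C++ sketch using std::async (f here is just a stand-in for the function above):

    #include <cstdio>
    #include <future>

    int f(int x, int y) { return x + y; }  // stand-in for the f above

    int main() {
        // You have to ask for the parallelism explicitly; each async task
        // may then run on its own thread.
        auto x = std::async(std::launch::async, [] { return 2 + 2; });
        auto y = std::async(std::launch::async, [] { return 2 * 2; });
        std::printf("%d\n", f(x.get(), y.get()));
    }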

gdwatson 17 hours ago

Superscalar processors (which include all mainstream ones these days) do this within a single core, provided there are no data dependencies between the assignment statements. They have multiple arithmetic logic units, and they can start a second operation while the first is executing.

But yeah, I agree that we were promised a lot more automatic multithreading than we got. History has proven that we should be wary of any promises that depend on a Sufficiently Smart Compiler.

lazide 17 hours ago

Eh, in this case not splitting them up to compute them in parallel is the smartest thing to do. Locking overhead alone is going to dwarf every other cost involved in that computation.

gdwatson 16 hours ago

Yeah, I think the dream was more like, “The compiler looks at a map or filter operation and figures out whether it’s worth the overhead to parallelize it automatically.” And that turns out to be pretty hard, with potentially painful (and nondeterministic!) consequences for failure.

Maybe it would have been easier if CPU performance didn’t end up outstripping memory performance so much, or if cache coherency between cores weren’t so difficult.
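
A minimal sketch of that dream's explicit cousin, assuming C++17 parallel algorithms (the helper name and threshold are made up; a real cutoff would depend on the cost of f):

    #include <algorithm>
    #include <execution>
    #include <vector>

    // Heuristic: only pay the parallel-dispatch overhead for big inputs.
    template <typename T, typename F>
    void maybe_parallel_map(std::vector<T>& xs, F f) {
        constexpr std::size_t kThreshold = 100000;  // arbitrary cutoff
        if (xs.size() < kThreshold)
            std::transform(xs.begin(), xs.end(), xs.begin(), f);
        else
            std::transform(std::execution::par, xs.begin(), xs.end(), xs.begin(), f);
    }

(With libstdc++ the parallel policy also needs TBB linked in behind the scenes, which hints at why "automatic and free" is hard.)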

eptcyka 15 hours ago

Spawning threads or using a thread pool implicitly would be pretty bad - it would be difficult to reason about performance if the compiler were to make these choices for you.

lazide 16 hours ago

I think it has shaken out the way it has because compile-time optimizations to this extent require knowing runtime constraints/data at compile time. For non-trivial situations that's impossible, since the code will be run with too many different types of input data, on too many different cache sizes, etc.

The CPU has better visibility into the actual runtime situation, so can do runtime optimization better.

In some ways, it’s like a bytecode/JVM type situation.

PinkSheep 11 hours ago

If we can write code to dispatch different code paths (as has been done for decades to support SSE, and later AVX, within one binary), then we can write code to parallelize large array execution based on heuristics. Not much different from busy spins falling back to sleep/other mechanisms when the fast path fails after ca. 100-1000 attempts to secure a lock.

For a trivial example like the 2+2 above, this is of course a moot discussion. The commenter should've led with a better example.
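
The dispatch pattern being referenced looks roughly like this (a sketch; __builtin_cpu_supports is a real GCC/Clang builtin, the two implementations are stand-ins):

    #include <cstdio>

    // Two stand-in implementations with the same signature; imagine the
    // second being an AVX2 (or parallel) fast path.
    static long sum_scalar(const int* a, int n) {
        long s = 0;
        for (int i = 0; i < n; ++i) s += a[i];
        return s;
    }
    static long sum_fast(const int* a, int n) { return sum_scalar(a, n); }

    int main() {
        __builtin_cpu_init();  // needed on older GCC before the feature test
        // Resolve the implementation once at runtime, the same way SSE/AVX
        // dispatch has long worked; a size heuristic could pick a parallel
        // path the same way.
        auto* sum = __builtin_cpu_supports("avx2") ? sum_fast : sum_scalar;
        int data[4] = {1, 2, 3, 4};
        std::printf("%ld\n", sum(data, 4));
    }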

lazide 10 hours ago

Sure, but it’s a rare situation (by code path) where it will beat the CPU’s auto optimization, eh?

And when that happens, almost always the developer knows it is that type of situation and will want to tune things themselves anyway.

maccard 7 hours ago

I think you’re fixating on the very specific example. Imagine if instead of 2 + 2 it was multiplying arrays of large matrices. The compiler or runtime would be smart enough to figure out if it’s worth dispatching the parallelism or not for you. Basically auto vectorisation but for parallelism

lazide 6 hours ago

Notably - in most cases, there is no way the compiler can know which of these scenarios are going to happen at compile time.

At runtime, the CPU can figure it out though, eh?

maccard 4 hours ago

I mean, theoretically it's possible. A super basic example: if the data is known at compile time, it could be auto-parallelized, e.g.

    int buf_size = 10000000;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }
this could clearly be parallelised. That auto-parallelising C++ world doesn't exist, but we can see the transformation would be valid.

If I replace it with

    int buf_size = 10000000;
    cin >> buf_size;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }

the compiler could generate some code that looks like:

    if buf_size >= SOME_LARGE_THRESHOLD { DO_IN_PARALLEL } else { DO_SERIAL }

With some background logic for managing threads, etc. In a C++-style world where "control" is important it likely wouldn't fly, but if this was python...

    arr_size = 10000000
    buf = [None] * arr_size
    for x in buf:
        do_expensive_thing(x)
could be parallelised at compile time.

lazide 4 hours ago

Which no one really does (data is generally provided at runtime). Which is why ‘super smart’ compilers kinda went nowhere, eh?

chubot 17 hours ago

That looks more like a SIMD problem than a multi-core problem

You want bigger units of work for multiple cores, otherwise the coordination overhead will outweigh the work the application is doing

I think the Erlang runtime is probably the best use of functional programming and multiple cores. Since Erlang processes are shared nothing, I think they will scale to 64 or 128 cores just fine

Whereas the GC will be a bottleneck in most languages with shared memory ... you will stop scaling before using all your cores

But I don't think Erlang is as fine-grained as your example ...

Some related threads:

https://news.ycombinator.com/item?id=40130079

https://news.ycombinator.com/item?id=31176264

AFAIU Erlang is not that fast an interpreter; I thought the Pony Language was doing something similar (shared nothing?) with compiled code, but I haven't heard about it in a while

fmajid 10 hours ago

Yes, Erlang's zero-sharing model is what I think Rust should have gone for in its concurrency design. Sadly too few people have even heard of it.

steveklabnik 1 hour ago

Very early on, Rust was like this! But as the language changed over time, it became less appropriate.

chubot 3 hours ago

That would be an odd choice for a low-level language ... languages like C, C++, and Rust let you use whatever the OS has, and the OS has threads

A higher level language can be more opinionated, but a low level one shouldn't straitjacket you.

i.e. Rust can be used to IMPLEMENT an Erlang runtime

If you couldn't use threads, then you could not implement an Erlang runtime.

juped 11 hours ago

There's some sharing used to avoid heavy copies, though GC runs at the process level. The implementation is tilted towards copying between isolated heaps over sharing, but it's also had performance work done over the years. (In fact, if I really want to cause a global GC pause bottleneck in Erlang, I can abuse persistent_term to do this.)

inejge 16 hours ago

> …where x and y evaluate in parallel without me having to do anything.

I understand that yours is a very simple example, but a) such things are already parallelized even on a single thread thanks to all the internal CPU parallelism, b) one should always be mindful of Amdahl's law, c) truly parallel solutions to various problems tend to be structurally different from serial ones in unpredictable ways, so there's no single transformation, not even a single family of transformations.
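
For reference on (b), a quick worked number (assumptions mine, not from the comment): Amdahl's law caps the speedup from parallelizing a fraction p of a program across N cores at

    speedup(N) = 1 / ((1 - p) + p / N)
    p = 0.9, N = 24  ->  1 / (0.1 + 0.9 / 24) ≈ 7.3x

so even a program that is 90% parallel gets nowhere near 24x.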

snackbroken 15 hours ago

Bend[1] and Vine[2] are two experimental programming languages that take similar approaches to automatically parallelizing programs: interaction nets[3]. IIUC, they basically turn the whole program into one big dependency graph, then the runtime figures out what can run in parallel and distributes the work to however many threads you can throw at it. It's also my understanding that both are currently quite slow, which makes sense, since until recently the focus has been on making `write embarrassingly parallelizable program -> get highly parallelized execution` work at all. Time will tell whether they can manage enough optimizations for the approach to give you reasonably performing parallel functional programs 'for free'.

[1] https://github.com/HigherOrderCO/Bend [2] https://github.com/VineLang/vine [3] https://en.wikipedia.org/wiki/Interaction_nets

fweimer 15 hours ago

There have been experimental parallel graph reduction machines. Excel has a parallel evaluator these days.

Oddly enough, functional programming seems to be a poor fit for this because the fanout tends to be fairly low: individual operations have few inputs, and single-linked lists and trees are more common than arrays.

speed_spread 18 hours ago

I believe it's not the language preventing it but the nature of parallel computing. The overhead of splitting up things and then reuniting them again is high enough to make trivial cases not worth it. OTOH we now have pretty good compiler autovectorization which does a lot of parallel magic if you set things right. But it's not handled at the language level either.
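
The kind of loop autovectorization reliably handles, for what it's worth (a generic C++ example; "setting things right" mostly means independent iterations over contiguous data):

    // Most mainstream compilers will vectorize this at -O2/-O3: no
    // loop-carried dependency, simple contiguous access pattern.
    void scale(float* out, const float* in, float k, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = in[i] * k;
    }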

colechristensen 16 hours ago

There have been Fortran compilers doing auto-parallelization for decades; I think Nvidia released a compiler that will take your code and do its best to run it on a GPU.

This works best for scientific computing workloads that run through very big loops with very little interaction between iterations.

deepsun 18 hours ago

Sure, TensorFlow and PyTorch, here ya go :)