I guess I'm curious how; I breathe on it wrong and it OOMs.
One of the tradeoffs of ClickHouse versus databases like Snowflake is that you need some knowledge of the internals to use it effectively. Snowflake, for example, completely hides partitioning, but on the other hand it does not deliver the consistent, real-time response that a well-tuned ClickHouse application can.
When you use INSERT ... SELECT in ClickHouse you do need to pay attention to the table partitions it generates, since the new parts coexist in memory before being flushed to storage. The usual approach is to break the insert into chunks so you can control how many parts are generated at once, or to adjust the partitioning of the target table.
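For example, here's a minimal sketch of the chunking approach, assuming hypothetical events / events_new tables partitioned by month on an event_date column:

    -- Instead of one giant INSERT ... SELECT that builds parts for every
    -- partition at once, backfill one month (one partition) per statement.
    INSERT INTO events_new
    SELECT *
    FROM events
    WHERE toYYYYMM(event_date) = 202401;

    -- ...then repeat for 202402, 202403, and so on, so only one partition's
    -- worth of new parts is held in memory at a time.

Coarsening the PARTITION BY on the target table gets you a similar effect from the other direction: fewer partitions touched per insert means fewer parts being built in memory at once.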
The problem might be related to this behavior, but that's just conjecture. It's usually pretty easy to work around, and if it's a bug it will probably get fixed quickly.
You have to have knowledge of the internals of any database you use. Not knowing is going to cost someone a lot of money and/or performance.
One easy way to achieve this is to store really large values, e.g. 10 MB per row. Since ClickHouse operates on large blocks, you can easily cause an OOM just by reading chunks of 8192 rows (the default) at a time: 8192 rows at 10 MB each is roughly 80 GB for a single block. It gets worse during merges, where it needs to read large blocks from several parts at once.
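A rough sketch of what that looks like (the table, column, and the 128-row figure are just illustrative); lowering max_block_size is one way to shrink the per-block working set when rows are that wide:

    -- Hypothetical table whose rows carry ~10 MB payloads.
    CREATE TABLE wide_table
    (
        id UInt64,
        payload String  -- ~10 MB per row
    )
    ENGINE = MergeTree
    ORDER BY id;

    -- Default-sized read blocks over rows like this mean materializing many
    -- gigabytes at once; capping the block size keeps each block small.
    SELECT id, length(payload)
    FROM wide_table
    SETTINGS max_block_size = 128;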
You don’t need a good product to have a lot of users, just good marketing and salespeople.