amluto 5 days ago

I have a personal pet peeve about Parquet that is solved, incompatibly, by basically every "data lake / lakehouse" layer on top, and I'd love to see it become compatible: ranged partitioning.

I have an application which ought to be a near-perfect match for Parquet. I have a source of timestamped data (basically a time series, except that the intervals might not be evenly spaced -- think log files). A row is a timestamp and a bunch of other columns, and all the columns have data types that Parquet handles just fine [0]. The data accumulates, and it's written out in batches, and the batches all have civilized sizes. The data is naturally partitioned on some partition column, and there is only one writer for each value of the partition column. So far, so good -- the operation of writing a batch is a single file creation or create call to any object store. The partition column maps to the de-facto sort-of-standard Hive partitioning scheme.
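For concreteness, what I write today looks something like this (bucket and partition names invented for illustration; the source=... directories are the usual Hive key=value scheme):

    s3://logs/source=server-a/batch-000123.parquet
    s3://logs/source=server-a/batch-000124.parquet
    s3://logs/source=server-b/batch-000087.parquet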

Except that the data is (obviously) also partitioned on the timestamp -- each batch covers a non-overlapping range of timestamps. And Hive partitioning can't represent this. So none of the otherwise excellent query tools can naturally import the data unless I engage in one of a few gross hacks:

I could also partition on a silly column like "date". This involves aligning batches to date boundaries and also makes queries uglier.

I could just write the files and have the tool import every .parquet file. This kills performance and costs lots of money.

I could use Iceberg or Delta Lake or whatever for the sole benefit that their client tools can handle ranged partitions. Gee thanks. I don't actually need any of the other complexity.

It would IMO be really, really nice if everyone could agree on a common directory-name or filename scheme for ranged partitioning.

[0] My other peeve is that a Parquet row and an Arrow row and a Thrift message and a protobuf message, etc, are *almost* but not quite the same thing. It would be awesome if there were a companion binary format for a single Parquet row or a stream of rows so that tools could cooperate more easily on producing the data that eventually gets written into Parquet files.

bitbang 5 days ago

Why is the footer metadata not sufficient for this need? The metadata contains the min and max timestamp values for the column of interest, so when executing a query the query tool can read just the metadata and decide whether a given Parquet file needs to be read at all for the time range in the query.
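For example, pyarrow exposes those stats straight from the footer without scanning any data (a sketch; the file name is made up and the timestamp is assumed to be column 0):

    import pyarrow.parquet as pq

    md = pq.ParquetFile("batch-000123.parquet").metadata
    for i in range(md.num_row_groups):
        stats = md.row_group(i).column(0).statistics  # column 0 assumed to be the timestamp
        if stats is not None and stats.has_min_max:
            print(stats.min, stats.max)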

amluto 5 days ago

Because the footer metadata is in the Parquet file, which is already far too late to give an efficient query.

If I have an S3 bucket containing five years worth of Parquet files, each covering a few days worth of rows, and I tell my favorite query tool (DuckDB, etc) about that bucket, then the tool will need to do a partial read (which is multiple operations, I think, since it will need to find the footer and then read the footer) of ~500 files just to find out which ones contain the data of interest. A good query plan would be to do a single list operation on the bucket to find the file names and then to read the file or files needed to answer my query.

Iceberg and Delta Lake (I think -- I haven't actually tried it) can do this, but plain Parquet plus Hive partitioning can't, and I'm not aware of any other lightweight scheme that is well supported that can do it. My personal little query tool (which predates Parquet) can do it just fine by the simple expedient of reading directory names.
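The plan I want is basically a path filter, something like this sketch (the ts=start_end path convention is invented here, which is exactly the problem -- there's no agreed spelling for it):

    import boto3

    s3 = boto3.client("s3")
    keys = [o["Key"] for o in s3.list_objects_v2(Bucket="my-bucket")["Contents"]]

    def overlaps(key, t0, t1):
        # expects a path segment like "ts=2024-01-01T00:00:00_2024-01-03T12:00:00" (hypothetical)
        rng = next(p for p in key.split("/") if p.startswith("ts="))[3:]
        start, end = rng.split("_")
        return start < t1 and t0 < end  # lexical comparison is fine for uniformly formatted timestamps

    wanted = [k for k in keys if overlaps(k, "2024-01-01", "2024-01-07")]

One list call, no file reads, and only the files in `wanted` ever get touched.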

Jarwain 5 days ago

Maybe I'm misunderstanding something about how ducklake works, but isn't that the purpose of the 'catalog database'? To store the metadata about all the files to optimize the query?

In theory, going off of the schema diagram they have, all your files are listed in `data_file`, the timestamp range for that file would be in `file_column_stats`, and that information could be used to decide what files to _actually_ read based on your query.
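So file pruning could be a single catalog query, roughly like this (a sketch; table names as I read them off the diagram, column names guessed and almost certainly not exact):

    import duckdb

    con = duckdb.connect("catalog.db")  # wherever the catalog database lives
    files = con.execute("""
        SELECT f.path
        FROM data_file f
        JOIN file_column_stats s USING (data_file_id)
        WHERE s.column_id = 1              -- hypothetical id of the timestamp column
          AND s.max_value >= '2024-01-01'
          AND s.min_value <  '2024-01-07'
    """).fetchall()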

Whether duckdb's query engine takes advantage of this is a different story, but even if it doesn't yet, it should be possible to do so eventually.

amluto 4 days ago

Yes, and this is how basically every “lake” thing works. But all the lake solutions add a lot more complexity than just improving the parquet filename scheme, and all of them require that all the readers and all the writers agree on a particular “lake”.

Jarwain 3 days ago

That's fair! I guess I see it as trading technical complexity for the human complexity of getting everyone on board with an update to the standard, and getting that standard implemented across the board. It's a lot easier to get my coworkers to just use duckdb as a reader/writer with ducklake than to change the system.

Frankly, I'm not entirely sure what the process of proposing that change to the hive file scheme would even look like

amluto 3 days ago

> Frankly, I'm not entirely sure what the process of proposing that change to the hive file scheme would even look like

Maybe convince DuckDB and/or clickhouse-local and/or polars.scan_parquet to implement it as a pilot? If it's a success, other tools might follow suit.

Or maybe something like DuckLake could have an option to put column statistics in the filenames. I raised this as a discussion:

https://github.com/duckdb/ducklake/discussions/92

Jarwain 3 days ago

I'm not super sure about it being in the filename, if only because my understanding is that some of the lakes use it for partitioning and other metadata (metameta-data?).

IMO the range is probably the most useful statistic in a folder/file name anyway for partitioning purposes. My vote would be for `^` as the range separator to minimize the risk of collision and confusion, e.g. `timestamp=2025-03-27T00:00:00-0800^2025-03-30-0700` or `hour=0^12`, `hour=12^24`. `^` is valid across all systems, and I'd be very surprised if it were commonly used as a property/column name. The only collision I can think of is that it's start-of-line in regex.
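On the reader side it'd be trivial to parse, something like (just a sketch):

    def parse_range(segment):              # e.g. "hour=0^12" or "timestamp=<start>^<end>"
        column, _, rng = segment.partition("=")
        lo, _, hi = rng.partition("^")
        return column, lo, hi

    column, lo, hi = parse_range("hour=0^12")
    # a planner would skip the file whenever [lo, hi) doesn't overlap the query's range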

Jarwain 2 days ago

Too late to edit buuut

There's a standard! (for time intervals, and I could see it working here)[0]

> Section 3.2.6 of ISO 8601-1:2019 notes that "A solidus may be replaced by a double hyphen ["--"] by mutual agreement of the communicating partners",
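So my earlier examples, spelled with the standard's double hyphen, would look something like this (illustrative only; using Z offsets here just to keep the hyphens readable):

    timestamp=2025-03-27T00:00:00Z--2025-03-30T00:00:00Z
    hour=00--12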

So forget what I said; why exacerbate the standards problem?[1]

[0] https://en.wikipedia.org/wiki/ISO_8601#Time_intervals [1] https://xkcd.com/927/

dugmartin 5 days ago

This can also be done using row group metadata within the parquet file. The row group metadata includes min/max values per column, so you can "partition" on timestamps within a file without having to have a file per time range.
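For example with pyarrow: if the batch is sorted by timestamp, capping the row group size means each row group's footer stats cover a narrow, non-overlapping slice of time (a sketch with toy data):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # toy batch, already sorted by timestamp
    table = pa.table({
        "ts": pa.array([1000, 2000, 3000, 4000], type=pa.timestamp("ms")),
        "value": [1.0, 2.0, 3.0, 4.0],
    })
    # smaller row groups -> finer-grained min/max statistics for readers to prune on
    pq.write_table(table, "batch.parquet", row_group_size=2)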

amluto 5 days ago

But I want a file per range! I’m already writing out an entire chunk of rows, and that chunk is a good size for a Parquet file, and that chunk doesn’t overlap the previous chunk.

Sure, metadata in the Parquet file handles this, but a query planner has to read that metadata, whereas a sensible way to stick the metadata in the file path would allow avoiding reading the file at all.

mrlongroots 5 days ago

I have the same gripe. You want a canonical standard that's like "hive partitioning" but defines the range [val1, val2) as column=val1_val2. It's a trivial addition on top of Parquet.

amluto 5 days ago

That would do the trick, as would any other spelling of the same thing.

simlevesque 5 days ago

I wish we had more control of the row group metadata when writing Parquet files with DuckDB.

TheCondor 5 days ago

Hive supports 2 kinds of partitioning, injected and dynamic. You can totally use a partition key like the hour in UNIX time. It's an integer starting at some epoch and incrementing by 3600.

Now your query engine might require you to specify the partitions or range of partitions you want to query on; you absolutely can use datepartition >= a and datepartition < b in your query. Iceberg seems to fix that and just let you use the timestamp; presumably the metadata is smart enough to exclude the partitions you don't care about.
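Something like this with DuckDB as the reader (a sketch; assumes httpfs/credentials are configured and the partition value is the hour-start in epoch seconds):

    import duckdb

    duckdb.sql("""
        SELECT *
        FROM read_parquet('s3://bucket/datepartition=*/*.parquet', hive_partitioning = true)
        WHERE datepartition >= 1746057600   -- 2025-05-01 00:00 UTC
          AND datepartition <  1746144000   -- 2025-05-02 00:00 UTC
    """)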

amluto 5 days ago

This is exactly what I meant by “I could also partition on a silly column like "date". This involves aligning batches to date boundaries and also makes queries uglier.”

Dowwie 5 days ago

Time series data is naturally difficult to work with, but the difficulty is avoidable. One solution is to not query raw time series data files. Instead, segment your time series data before you store it, normalizing the timestamps as part of event processing. Sliding-window observations help you find where an event begins; then you adjust the offset until you find where the series returns to its zero-like position. That's your event.

amluto 5 days ago

Segmenting the data is exactly what writing it into non-overlapping Parquet files does. My point is that many tools can read a bucket full of these segments, and most of them can handle a scheme where each file corresponds to a single value of a column, but none of them agree on how to represent a segment that covers a range -- unless a new column is invented for the purpose, and then every query has to add complexity to map onto that type of segmentation.

There’s nothing conceptually or algorithmically difficult about what I want to do. All that’s needed is to encode a range of times into the path of a segment. But Hive didn’t do this, and everyone implemented Hive’s naming scheme, and that’s the status quo now.

hendiatris 5 days ago

In the lower level arrow/parquet libraries you can control the row groups, and even the data pages (although it’s a lot more work). I have used this heavily with the arrow-rs crate to drastically improve (like 10x) how quickly data could be queried from files. Some row groups will have just a few rows, others will have thousands, but being able to bypass searching in many row groups makes the skew irrelevant.

Just beware that one issue you can have is the limit of row groups per file (2^15).

jonstewart 5 days ago

I think maybe this is a pet peeve of Hive and not of Parquet? Yes, it does require opening the Parquet file to look at the min/max range for the column, but only that metadata, and if the data isn't in range there shouldn't be any further requests.

That is the kind of metadata that is useful to push up, into something like DuckLake.

amluto 5 days ago

I guess my peeve could be restated as: Hive’s naming scheme doesn’t handle this, and Parquet per se can’t handle it because it’s out of scope, but all the awesome tools (e.g. DuckDB), when used on Parquet files without something with “lake” or “ice” in the name, use the Hive scheme.

Someone could buck the trend and extend Hive’s scheme to support ranges.