What's the primary use case you had in mind here? In the example I see it generating a palette of 256 colors and then using them, but that doesn't seem to correspond to any modern use case. AFAIU one currently needs dithering either as part of the print/display process (but then you have a fixed palette), or for compression, and I think that only makes sense nowadays with a very low color count, like 16 max?
I'm always interested in ways to increase the quality of GIF rendering. There are absolutely tons of places that still need GIF support, either because they don't allow video uploads, or because the videos don't auto-play.
Gifski uses the pngquant library, and I wonder how this compares?
GIF rendering is a big use case.
pngquant was a big comparison subject during development (it's a brilliant piece of work, and a mature tool that does quite a bit more than just quantize). Take it with a grain of salt of course, but in terms of raw quantization performance, patolette had the edge, particularly when dealing with images with tricky color distributions. That said, pngquant's dithering algorithm is way more sophisticated (and animation aware, I think). In fact, one thing where it really shines is that it spots with pretty good precision where adding noise would actually hurt instead of helping.
Another thing is that patolette can quantize to both high and low color counts (the latter particularly with CIELuv), whereas pngquant is better suited for high color counts.
Working with GIF is valid, but it's incredibly sad that it's still the only widely supported silent autoplaying video format...
(I guess APNG is supported in many browsers, but uploading one often results in deleterious resizing/recompressing operations that ruin the animation. Discord uses APNG and WebP for stickers afaik)
APNG and animated WEBP are blocked and/or unsupported practically everywhere I try. And I try a lot of places to test it. Reddit supports neither, yet allows GIFs. It's sad.
Agreed, very sad... Most tools/websites will cause APNG to silently degrade into a static image of only the first frame.
Also, dithering and color quantization are two vastly different operations on two different data types in two different domains that don't belong in the same topic at all.
Still, color quantization is a really interesting rabbit hole to go down if you're new to graphics programming, or at least it was for me. It's a mixed blessing that almost nobody has to confront the problem anymore.
Really? Dithering is generally only useful with quantized colors, you can't dither something that's already quantized without knowledge of the original, and many/most people who want to do quantization also want to do dithering. The algorithms themselves might not be conceptually similar, but for practical purposes they seem very related.
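For what it's worth, the coupling is easy to see in code: error-diffusion dithering needs the original, unquantized pixel values so it can spread each pixel's quantization error to its not-yet-visited neighbours. Here's a minimal Floyd-Steinberg sketch against an arbitrary fixed palette (plain NumPy, nearest palette entry by Euclidean distance in RGB; the names and structure are just illustrative, not any particular library's API):

    import numpy as np

    def floyd_steinberg(image, palette):
        # image: float array (H, W, 3), the ORIGINAL pixel values in [0, 255]
        # palette: float array (K, 3) of fixed palette colors
        # Returns palette indices (H, W). The original image is required
        # because each pixel's quantization error is diffused to neighbours.
        img = image.astype(np.float64).copy()
        h, w, _ = img.shape
        out = np.zeros((h, w), dtype=np.intp)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                # nearest palette entry, plain Euclidean distance in RGB
                idx = np.argmin(((palette - old) ** 2).sum(axis=1))
                out[y, x] = idx
                err = old - palette[idx]
                # push the error onto pixels we haven't visited yet
                if x + 1 < w:
                    img[y, x + 1] += err * (7 / 16)
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * (3 / 16)
                    img[y + 1, x] += err * (5 / 16)
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * (1 / 16)
        return out

    # usage: dithered_rgb = palette[floyd_steinberg(original, palette)]

So in practice you pick the palette (quantization) and immediately dither against it with the original still in hand, which is why the two ship together in nearly every tool.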
For just about any obsolete desktop computing solution there's almost always an embedded application somewhere doing the same thing today.
Colour quantisation is still one of the best lossy image compression techniques for when you have almost no memory or CPU.
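A rough back-of-the-envelope (the numbers below are purely illustrative, not from any particular device): an 8-bit indexed image costs one byte per pixel plus a small palette, and decoding is a single table lookup per pixel, so there's essentially no working memory or CPU involved:

    import numpy as np

    # Illustrative figures for a hypothetical 320x240 display:
    w, h = 320, 240
    truecolor = w * h * 3          # 24-bit RGB: 230,400 bytes
    indexed = w * h + 256 * 3      # 8-bit indices + 256-entry RGB palette: 77,568 bytes

    # Decoding is just a lookup per pixel: no DCT, no entropy coder, no big buffers.
    palette = np.zeros((256, 3), dtype=np.uint8)   # filled offline by the quantizer
    indices = np.zeros((h, w), dtype=np.uint8)     # the stored image
    rgb = palette[indices]                         # (240, 320, 3) uint8 framebuffer

All the expensive work (choosing the palette) happens once, offline, which is exactly what you want on a tiny target.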
Oh, there are still several use cases to consider. It's still closely related to compression, and there are some niche, non-obvious applications for this kind of quantization.