Storage buffers are just blocks of 32-bit values, so why not use a 32-bit texture as a storage buffer? Are they actually implemented differently under the hood?
You can't do random-access writes to textures in WebGL, and random writes are required by the vast majority of GPU compute algorithms. Some hacks exist, but all of them come with severe limitations and performance penalties.
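For what it's worth, the read side of that workaround is easy enough in WebGL2: just texelFetch by integer index. It's the write side that has no real equivalent. A minimal sketch of the read path, with all names here being illustrative:

    // WebGL2 fragment shader source embedded in TypeScript; "u_data" is
    // an RGBA32F texture being abused as a flat array.
    const fragmentSrc = `#version 300 es
    precision highp float;
    uniform sampler2D u_data;
    out vec4 outColor;

    // Map a flat index onto the 2D texture; this bookkeeping is what a
    // real storage buffer makes unnecessary.
    vec4 loadElement(int i) {
      int w = textureSize(u_data, 0).x;
      return texelFetch(u_data, ivec2(i % w, i / w), 0);
    }

    void main() {
      outColor = loadElement(42);
    }
    `;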
Treating a texture as a 'poor man's storage buffer' often works, but it's much more awkward than populating a real storage buffer in a compute shader, and you're giving up a lot of usage scenarios where WebGPU is simply more flexible (even simple things like filling a storage buffer in a compute shader and then binding it as a vertex or index buffer; see the sketch below).
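To make that concrete, here is a rough sketch of the compute-to-vertex-buffer path in WebGPU. Everything here is illustrative, and it assumes a GPUDevice has already been obtained via requestAdapter()/requestDevice():

    declare const device: GPUDevice; // assumed to exist (see @webgpu/types)

    const vertexCount = 3;

    // One buffer usable both as a storage buffer (compute write target)
    // and as a vertex buffer (render input); WebGL has no equivalent.
    const buffer = device.createBuffer({
      size: vertexCount * 2 * 4, // one vec2<f32> per vertex
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX,
    });

    const module = device.createShaderModule({
      code: /* wgsl */ `
        @group(0) @binding(0) var<storage, read_write> positions: array<vec2f>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3u) {
          if (id.x >= arrayLength(&positions)) { return; }
          // The random-access write that WebGL shaders can't do.
          let angle = f32(id.x) * 2.094;
          positions[id.x] = vec2f(cos(angle), sin(angle)) * 0.8;
        }
      `,
    });

    const pipeline = device.createComputePipeline({
      layout: 'auto',
      compute: { module, entryPoint: 'main' },
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(vertexCount / 64));
    pass.end();
    device.queue.submit([encoder.finish()]);

    // Later, in a render pass, the very same buffer feeds vertex fetch:
    //   renderPass.setVertexBuffer(0, buffer);
    //   renderPass.draw(vertexCount);

The whole trick is the STORAGE | VERTEX usage flags: the data never round-trips through the CPU, which is exactly what the texture workaround can't give you.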
But at this point I'm thinking it would have been far better if they had just added storage buffers and compute shaders to WebGL.
There was a planned compute shader update for WebGL2, but AFAIK it has been abandoned in favour of WebGPU:
https://groups.google.com/a/chromium.org/g/blink-dev/c/bPD47...
It would have made sense five years ago, when it wasn't clear that WebGPU would be delayed for so long, but now that WebGPU support in browsers is actually close to the finish line, it's probably not worth the hassle.