The latest release of zarr (2.14.0) includes experimental support for sharding, along with a number of other features from the v3 spec.
Previous zarr specs stored one chunk per storage object. This can be problematic for zarr stores with a large number of chunks due to design constraints of the underlying storage (e.g. inode limits). Sharding allows storing multiple chunks in one storage object.
Full zarr v3 spec here
Details on zarr sharding here
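For anyone who wants to experiment, here is a minimal sketch of the new experimental API as I understand it. The `ShardingStorageTransformer` import path, the `path` argument, and the `ZARR_V3_EXPERIMENTAL_API` environment variable reflect my reading of the 2.14.0 experimental interface, which may well change in later releases:

```python
# Sketch of experimental sharding in zarr 2.14.0. The v3 API must be
# opted into via an environment variable *before* importing zarr.
import os
os.environ["ZARR_V3_EXPERIMENTAL_API"] = "1"

import zarr
from zarr._storage.v3_storage_transformers import ShardingStorageTransformer

# Each shard (one storage object) packs a 5x5 block of 100x100 chunks,
# so the 1000x1000 array below needs 4 shard objects instead of 100 chunks.
sharding = ShardingStorageTransformer("indexed", chunks_per_shard=(5, 5))
z = zarr.create(
    shape=(1000, 1000),
    chunks=(100, 100),
    dtype="f8",
    zarr_version=3,
    store="example_v3.zarr",
    path="arr",
    storage_transformers=[sharding],
)
z[:] = 42.0  # reads/writes still address chunks; sharding is transparent
```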
Aidan (Aidan Heerdegen, ACCESS-NRI Release Team Lead):
In practice, are there recommended minimum sizes for storage objects in S3 buckets? Or, put another way, is there a maximum number of objects per bucket?
I can see the utility of reducing the inode count on lustre file systems, but I thought the current work-around for this was to compress the whole zarr directory tree? I suppose that gets problematic for extremely large datasets? Are there any other reasons to favour sharding over compressing the whole zarr directory structure?
It is great that this functionality now exists, but I can see a similar issue arising for zarr that we’ve had for a long time with netCDF: aligning read/write chunk size to object size on disk. Not a deal breaker, but a subtlety that often needs to be taken into account.
My understanding is that there is no limit to the number of objects you can store in a bucket, but latency overheads can still be problematic for large numbers of small objects.
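To put rough numbers on that (all assumed, not measured): with something like 100 ms time-to-first-byte per S3 GET and 64 concurrent requests, a million-object store spends a long time on request overhead alone:

```python
# Back-of-envelope only: the latency and concurrency figures below are
# illustrative assumptions, not measurements.
n_chunks = 1_000_000   # e.g. a 1 TB array stored as 1 MB chunks
ttfb = 0.1             # assumed seconds of per-request latency
concurrency = 64       # assumed concurrent GETs
print(f"request overhead alone: {n_chunks * ttfb / concurrency / 60:.0f} min")
# ~26 min of pure latency before any bytes are decoded; packing 100 chunks
# per shard cuts the object count (and this overhead) by 100x.
```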
Yes, I’ve zipped some pretty large zarr stores (order 10s of TB), but it doesn’t feel like an ideal work-around:

- zarr.ZipStores are not safe to write from multiple processes, and there’s no way to update an existing zip file without unzipping and rezipping. Zipping/unzipping can take a really long time for large stores (see the sketch after this list).
- I’ve hit hard limits on file size on some systems.
- Dividing a dataset across multiple ZipStores helps with the above, but it can be a pain to manage and I’ve experienced unexplained performance issues doing this (newer versions of dask may be better).
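For reference, the single-zip work-around looks something like this (standard zarr 2.x API; the array name and sizes are just for illustration, and the caveats above apply):

```python
import zarr

# Pack a whole zarr hierarchy into one zip file. Per the zarr docs, a
# ZipStore is not safe for concurrent writes, and zip entries cannot be
# replaced in place, so updating an existing store means rewriting it.
store = zarr.ZipStore("data.zip", mode="w")
root = zarr.group(store=store)
z = root.zeros("temperature", shape=(10_000, 10_000), chunks=(1_000, 1_000))
z[:] = 1.5
store.close()  # must close to flush the zip central directory

# Reading back (mode="r") is safe, including from multiple processes.
store = zarr.ZipStore("data.zip", mode="r")
root = zarr.open_group(store, mode="r")
print(root["temperature"][0, :5])
```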
Yes, though from my skim through the specs I’m not sure how this issue has changed for zarr with the introduction of sharding. My (ignorant) understanding is that read/write will still occur at the chunk level.
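As far as I can tell from the sharding spec, that is right: chunks remain the unit of read/write, and the "indexed" shard format just appends a small index of (offset, nbytes) pairs so a reader can pull a single chunk out of a shard object with byte-range requests. Here is a conceptual sketch (not the actual zarr implementation; the 16-byte little-endian index entries follow my reading of the spec):

```python
import struct

def read_chunk_from_shard(fetch_range, shard_nbytes, chunks_per_shard, chunk_id):
    """Fetch one chunk from an 'indexed' shard using two byte-range reads.

    fetch_range(offset, nbytes) stands in for an HTTP Range request against
    the shard object; the index layout (uint64 offset/nbytes pairs at the
    end of the shard) follows my reading of the sharding spec.
    """
    index_nbytes = chunks_per_shard * 16  # one (offset, nbytes) pair per chunk
    index = fetch_range(shard_nbytes - index_nbytes, index_nbytes)
    offset, nbytes = struct.unpack_from("<QQ", index, chunk_id * 16)
    return fetch_range(offset, nbytes)  # the encoded chunk itself
```

So a well-configured shard should cost two range requests per chunk read rather than one GET per object, which is where the latency win over many small objects comes from.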