Block level deduplication #58
Files in squashfs are represented basically as a pointer to a start block plus a list of the sizes of the blocks that follow. Because the format assumes all blocks in a file are located one after the other, block-level deduplication would require some level of change to the format.
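For illustration, here is a rough sketch of that representation (simplified field names, not the exact squashfs_fs.h layout): the position of any block is derived from the start block plus the sizes of the blocks before it, which is exactly the contiguity assumption described above.

```c
/* Simplified sketch of a squashfs regular-file inode - field names are
 * approximate, not the exact on-disk layout. */
struct sketch_reg_inode {
    unsigned int start_block;   /* where the file's first data block begins */
    unsigned int fragment;      /* fragment index for the file's tail, if any */
    unsigned int offset;        /* offset of the tail inside that fragment */
    unsigned int file_size;     /* uncompressed length of the file */
    unsigned int block_list[];  /* compressed size of each data block, in order */
};

/* The position of block n is implicit: start_block plus the compressed
 * sizes of blocks 0..n-1.  That is why all of a file's blocks must be
 * stored one after the other. */
unsigned long long block_position(const struct sketch_reg_inode *ino, unsigned int n)
{
    unsigned long long pos = ino->start_block;
    for (unsigned int i = 0; i < n; i++)
        pos += ino->block_list[i];   /* size bits only, ignoring compression flags */
    return pos;
}
```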
To avoid impacting all existing files, a new file type could be introduced (assuming anyone would like to implement it) - say a 'fragmented file', described by a list of extents that make up the file.
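A purely hypothetical sketch of what such a 'fragmented file' inode could look like - none of these names or types exist in the current format. Each extent records where a run of blocks starts, so the runs no longer have to follow one another and could in principle be shared between files:

```c
/* Hypothetical layout for a 'fragmented file' inode type.  Nothing here
 * exists in the current squashfs format; it only illustrates the extent
 * idea from the comment above. */
struct sketch_extent {
    long long    start_block;   /* absolute position of the first block in this run */
    unsigned int block_count;   /* number of consecutive blocks in the run */
};

struct sketch_fragmented_inode {
    unsigned int         file_size;     /* uncompressed length of the file */
    unsigned int         extent_count;  /* number of extents that follow */
    struct sketch_extent extents[];     /* runs of blocks, possibly shared with other files */
};
```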
I am really hoping to see this feature implemented. I've started using squashfs on my Mac mini to archive things like sample CDs and emulator games, and it's great to be able to simply mount these highly compressed images at will and share them via Samba. Currently I'm clearing up a crapton of old PSP game ISOs, and the duplication level there is off the charts. It's not just the data shared between most, if not all, of the images, such as firmware update files and boot loaders; there are also often many different versions of each game, either because of regional and localization differences, or because of "rips", where someone removed non-vital files from an image to make it smaller. These latter examples are 100% duplicates in terms of blocks in the image, but they won't compress as such due to the nature and often the size of the ISOs.

I was considering creating a dedicated ZFS pool with dedup enabled, setting the block size to 2048 bytes to match that of a CD ISO, but that's not really realistic. It would create an insanely large DDT due to the very small block size (not to mention the compression would suffer as well), which would require a ridiculous amount of memory. I believe the DDT is also stored on disk (ZFS is a read/write filesystem, so it needs the dedup table readily available for writes), and the metadata for each block is not negligible, so it might counteract the benefits of deduplication to the point where the actual space saving becomes negligible.

Having a read-only alternative, where you don't need massive amounts of memory to hold a dedup table but instead rely on data pointers to serve the right block just in time when reading, would be magical. I'd then either create that squashfs image without compression and put it on a compressed filesystem, or squash it once more with a 1 MB block size using squashfs zstd:22 compression. Man, that'd be sweet 💌
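To put a rough number on the DDT concern above (assuming the commonly quoted ballpark of about 320 bytes of dedup-table overhead per unique block, which is an approximation rather than an exact ZFS constant): 1 TiB of unique data at a 2 KiB record size already means a table in the neighbourhood of 160 GiB.

$$
\frac{1\ \mathrm{TiB}}{2\ \mathrm{KiB}} = 2^{29} \approx 5.4\times 10^{8}\ \text{blocks}, \qquad 2^{29} \times 320\ \mathrm{B} = 160\ \mathrm{GiB}
$$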
This will get a bit off topic for the issue, but I'm sharing my experience in case it helps. I had roughly the same idea earlier, looking for a modern (read-only) archival format, because tar really doesn't do well without at least a file index, and 7z apparently has design weaknesses that make it unreliable for archiving. The deal-breaker I ran into was the maximum archive size, which is likely limited by the block size, but I never got around to rebuilding the tools with an increased size to verify that.

I remain interested in the feasibility of using squashfs for (optionally read-only) archival, focusing more on size than performance, given that I haven't found a suitable format.
@Dr-Emann So block-level deduplication isn't possible, but a file that's a subset of another should be, right? Imagine a file with blocks [A, B, C, D]: a file composed of [A, B, C] or [C, D] should be able to be fully deduplicated. Of course, I'm not really sure what kind of space savings this would bring or how much extra work it would take, but it should be possible.
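A rough sketch of the prefix case under the current layout (hypothetical fields, not the on-disk format): since block positions are computed from the start block plus the preceding sizes, a file whose blocks are a leading subset of another file's could, in principle, reuse the same start block with a shorter size list.

```c
/* Illustrative only.  File 1 has blocks [A, B, C, D]; file 2 is the
 * prefix [A, B, C]. */
struct sketch_file {
    unsigned long long start_block;  /* where the first data block lives */
    unsigned int       nblocks;      /* entries in sizes[] */
    unsigned int       sizes[4];     /* compressed size of each block */
};

/* Both files point at the same start; the shorter one just lists fewer
 * sizes, so no data blocks are written twice. */
struct sketch_file full   = { 4096, 4, { 100, 200, 150, 175 } };
struct sketch_file prefix = { 4096, 3, { 100, 200, 150 } };

/* A suffix such as [C, D] could work too, with start_block set to
 * 4096 + 100 + 200, i.e. where C begins inside the first file's run. */
```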
I designed the layout and wrote the code, so I think you should be asking me that question. Things like that are possible, and that and other things are still on my "to do" list. It is a question of priorities and time. Every year I get requests for new features, and I can only do some of them. I don't get any money for Squashfs, and all work is done in my "spare time".
@plougher It was just out of curiosity, and I didn't want to drag you into a basically dead thread. But I do have a semi-off-topic question while you're here: I've been working on a SquashFS implementation, and this code in the Linux kernel appears to allow an uncompressed metadata block size of 32K, even though metadata blocks should never be larger than 8K.
Hmm. That is odd; squashfs-tools has the same code. It would seem to say "we have an uncompressed metadata block that's 32K coming up". Presumably that would always be immediately rejected, because 32K will be larger than the output buffer, which should always be 8K, unless it's the last block, in which case it will be less than 8K.
@mgord9518 That relates to Squashfs V1.0 filesystems. V1.0 used the same two-byte block length for data blocks and metadata blocks, and the same code and macros were used for both. Data blocks in V1.0 could be a maximum of 32K, and an uncompressed 32K block would have "SQUASHFS_COMPRESSED_BIT" set and nothing else. In unsquash-1.c you can still see the macro being used to get the size of a V1.0 data block. https://github.com/plougher/squashfs-tools/blob/master/squashfs-tools/unsquash-1.c#L65 You do not need to deal with this, as you probably won't need to handle V1.0 filesystems.
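For reference, the macros in question look roughly like this (paraphrased from squashfs_fs.h, so check the header for the exact text): the top bit of the two-byte length marks an uncompressed block, and a size field of zero stands in for a full 32K block, which only ever applied to V1.0 data blocks.

```c
/* Paraphrased from squashfs_fs.h - see the header for the exact definitions. */
#define SQUASHFS_COMPRESSED_BIT		(1 << 15)

/* A block is compressed when the top bit is NOT set. */
#define SQUASHFS_COMPRESSED(B)		(!((B) & SQUASHFS_COMPRESSED_BIT))

/* The size is the low 15 bits; a zero size means the maximum, 32K (0x8000),
 * which is how an uncompressed 32K V1.0 data block is encoded. */
#define SQUASHFS_COMPRESSED_SIZE(B)	(((B) & ~SQUASHFS_COMPRESSED_BIT) ? \
		(B) & ~SQUASHFS_COMPRESSED_BIT : SQUASHFS_COMPRESSED_BIT)
```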
The file-level deduplication currently present in squashfs is great and surely saves a lot of space in various scenarios. Could block-level deduplication be added without breaking compatibility with the current format? mksquashfs would take significantly longer to complete (especially with small block sizes and big data sets), but it could also save much more space with e.g. snapshots of system disk images and other big, mostly similar files. If a backward-compatible change is not possible, could a format upgrade/change be investigated for that?
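Purely as a sketch of where the extra mksquashfs time and memory would go (hypothetical code, nothing like this exists in mksquashfs today): block-level deduplication essentially means keeping a table keyed by block contents and reusing an already written block when an identical one shows up, and that table is exactly what grows with small block sizes and big data sets.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of block-level deduplication - not mksquashfs code.
 * Each unique block is "written" once; identical blocks reuse the stored
 * location.  A real implementation would need a scalable table and a
 * proper content hash, which is where the memory and time cost comes from. */

#define BLOCK_SIZE 8      /* tiny block size, just for the demo */
#define MAX_UNIQUE 64     /* fixed-size table, enough for the demo */

struct entry {
    uint64_t      hash;
    unsigned char data[BLOCK_SIZE];
    uint64_t      start;                    /* where the block was written */
};

static struct entry table[MAX_UNIQUE];
static int          nentries;
static uint64_t     next_start;

/* Toy FNV-1a hash standing in for a real content hash. */
static uint64_t hash_block(const unsigned char *b)
{
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < BLOCK_SIZE; i++)
        h = (h ^ b[i]) * 1099511628211ULL;
    return h;
}

/* Return the on-image position of this block, writing it only if unseen. */
static uint64_t write_or_reuse(const unsigned char *block)
{
    uint64_t h = hash_block(block);

    /* Compare contents on a hash hit to guard against collisions. */
    for (int i = 0; i < nentries; i++)
        if (table[i].hash == h && memcmp(table[i].data, block, BLOCK_SIZE) == 0)
            return table[i].start;          /* deduplicated: nothing new written */

    if (nentries == MAX_UNIQUE) {
        fprintf(stderr, "sketch table full\n");   /* a real table would grow */
        exit(1);
    }

    table[nentries].hash  = h;
    memcpy(table[nentries].data, block, BLOCK_SIZE);
    table[nentries].start = next_start;
    next_start += BLOCK_SIZE;               /* pretend the block was written here */
    return table[nentries++].start;
}

int main(void)
{
    unsigned char a[BLOCK_SIZE] = "AAAAAAA";
    unsigned char b[BLOCK_SIZE] = "BBBBBBB";

    printf("a -> %llu\n", (unsigned long long)write_or_reuse(a));
    printf("b -> %llu\n", (unsigned long long)write_or_reuse(b));
    printf("a -> %llu (reused)\n", (unsigned long long)write_or_reuse(a));
    return 0;
}
```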
Block level deduplication has several drawbacks, too. Internal fragmentation is one of them of course.