I meant TRANSPARENT filesystem-level dedupe. They are doing it at the application level. Filesystem-level dedupe makes it impossible for the same file's data to be stored more than once, and it doesn't consume hardlinks for the references. It is really awesome.
ZFS is great! However, it's too complicated for most Linux server use cases (especially with just one block device attached); it's not the default root filesystem; and it's not supported on at least one major enterprise Linux distro family.
Filesystem dedupe is expensive because it requires another hash calculation that cannot be shared with application-level hashing, is a relatively rare OS/filesystem feature, doesn't play nice with backups (deduplicated files get expanded back into duplicates when copied out), and doesn't scale across boxes.
That costs even more time and effort, none of it reusable. It's simpler to dedupe at the application level than to shift the burden onto N other things. I guess you don't understand or appreciate simplicity.
As is always the case, it's short term vs. long term... but I think I'd put the effort into migrating to a filesystem that is aware of duplication instead of trying to recreate one with links [while still retaining duplicates, just fewer of them].
Effectiveness is debatable; this approach still has duplication. An insignificant amount, I'll admit. Having the filesystem handle this at the block level is probably less problematic, less prone to rework, and more efficient.
edit: Eh, ignore me. I see this is preparing data for [whatever filesystem hosts chose], thanks to 'ameliaquining' below. I originally thought this was all Discourse-proper, processing data they already had.
This makes them look rather incompetent. Storing the exact same file 246,173 times is just stupid. Dedupe at the filesystem level and make your life easier.
> [W]e shipped an optimization. Detect duplicate files by their content hash, use hardlinks instead of downloading each copy.
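Not Discourse's actual code, just a minimal Python sketch of what that optimization amounts to (sha256_of, store_or_link, and the flat store_dir layout are my assumptions): hash the file's content, and if that hash has already been stored, hardlink to the existing copy instead of keeping another one.

    import hashlib
    import os

    # Sketch only: not the Discourse implementation, just the idea from the quote.
    def sha256_of(path: str) -> str:
        # Stream the file so large uploads don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def store_or_link(src: str, store_dir: str) -> str:
        # If a file with this content hash already exists in store_dir, hardlink
        # to it instead of keeping a second full copy; otherwise this file
        # becomes the canonical copy. Hardlinks only work within one filesystem.
        digest = sha256_of(src)
        canonical = os.path.join(store_dir, digest)
        if os.path.exists(canonical):
            os.remove(src)            # drop the duplicate bytes
            os.link(canonical, src)   # re-point the old name at the canonical inode
        else:
            os.link(src, canonical)   # first occurrence: register it as canonical
        return canonical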
If the greatest filesystem in the world were a living being, it would be our God. That filesystem, of course, is ZFS.
Handles this correctly:
https://www.truenas.com/docs/references/zfsdeduplication/
I just wanted to mention ZFS.
Have I mentioned how great ZFS is yet?
A simpler solution is application-level dedupe that doesn't require fs-specific features. Simple scales and wins. And plays nice with backups.
Hash = sha256 of the file, and the absolute filename = {{aa}}/{{bb}}/{{cc}}/{{d}}, where
aa = the 2 most significant hex digits of the hash
bb = the next 2 hex digits
cc = the 2 hex digits after that
d = the remaining hex digits
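A rough Python sketch of that layout (the function names and the write-once behavior are my assumptions, not from the comment above): identical content always maps to the same path, so storing the same file a second time is a no-op, and the three levels of fan-out keep any single directory from holding millions of entries.

    import hashlib
    import os

    # Sketch of the {{aa}}/{{bb}}/{{cc}}/{{d}} scheme described above.
    def content_path(data: bytes, root: str) -> str:
        hexdigest = hashlib.sha256(data).hexdigest()    # 64 hex digits
        aa, bb = hexdigest[:2], hexdigest[2:4]          # two most significant pairs
        cc, d = hexdigest[4:6], hexdigest[6:]           # third pair, then the rest
        return os.path.join(root, aa, bb, cc, d)

    def store(data: bytes, root: str) -> str:
        # Content-addressed and write-once: if the path already exists, the
        # identical bytes are already there and nothing needs to be written.
        path = content_path(data, root)
        if not os.path.exists(path):
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(data)
        return path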
zfs send is the backup solution. And it performs incremental backups with the -i argument.
Is it just me, or is everybody else just as fed up with always the same AI tropes?
I've reached a point where I just close the tab the moment I read the headline "The problem". At least use tropes.fyi, please.
(Some say ZFS as well, but it's not nearly as easy to use, and its license is still not GPL-friendly.)