  • This is probably the way I'll go eventually; however, does BSD's ZFS implementation do dedup? I thought it did not. Commented Dec 8, 2010 at 20:14
  • 22
    ZFS dedup is the friend of nobody. Where ZFS recommends 1Gb ram per 1Tb usable disk space, you're friggin' nuts if you try to use dedup with less than 32Gb ram per 1Tb usable disk space. That means that for a 1Tb mirror, if you don't have 32 Gb ram, you are likely to encounter memory bomb conditions sooner or later that will halt the machine due to lack of ram. Been there, done that, still recovering from the PTSD. Commented Sep 22, 2014 at 18:51
  • 7
    To avoid the excessive RAM requirements with online deduplication (i.e., check on every write), btrfs uses batch or offline deduplication (run it whenever you consider it useful/necessary) btrfs.wiki.kernel.org/index.php/Deduplication Commented Feb 5, 2017 at 19:18
  • 8
    Update seven years later: I eventually did move to ZFS and tried deduplication -- I found that it's RAM requirements were indeed just far to high. Crafty use of ZFS snapshots provided the solution I ended up using. (Copy one user's music, snapshot and clone, copy the second user's music into the clone using rsync --inplace so only changed blocks are stored) Commented Sep 13, 2017 at 13:54
  • 1
    @endolith Nope! Which makes ZFS dedup completely useless for 99% of users. If you have enough RAM for online dedup, either your disks are tiny or your wallet is big enough that you should instead spend some engineer-hours implementing dedup in your application. Commented Jan 20, 2022 at 13:10
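To see why the RAM numbers quoted above get scary, here is a back-of-the-envelope sketch. It assumes the commonly cited figure of roughly 320 bytes of in-core dedup table (DDT) per unique block and the default 128 KiB recordsize; actual overhead varies with block size, so pools full of small files fare far worse:

```shell
# Rough ZFS dedup-table (DDT) sizing for 1 TiB of unique data.
# Assumptions: ~320 bytes of RAM per DDT entry, 128 KiB average block size.
pool_bytes=$(( 1 << 40 ))          # 1 TiB of unique data
record_bytes=$(( 128 * 1024 ))     # default ZFS recordsize
entries=$(( pool_bytes / record_bytes ))
ddt_bytes=$(( entries * 320 ))
echo "$entries DDT entries -> $(( ddt_bytes >> 20 )) MiB of RAM for the table alone"
```

That is about 2.5 GiB per TiB in the best case; shrink the average block to 8 KiB (lots of small files) and the same pool needs sixteen times as much, which is how the "32 GB per TB" horror stories happen.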
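The snapshot-and-clone workaround described in the seven-years-later update can be sketched as follows. Pool and dataset names (`tank/music-alice`, `tank/music-bob`) and user paths are hypothetical; this is an illustration of the workflow against an existing pool, not a tested recipe:

```shell
# Hypothetical pool/dataset names; adjust to your layout.
# 1. Copy the first user's music into a dataset.
rsync -a /home/alice/music/ /tank/music-alice/

# 2. Snapshot it, then clone the snapshot for the second user.
zfs snapshot tank/music-alice@base
zfs clone tank/music-alice@base tank/music-bob

# 3. Sync the second user's collection into the clone in place,
#    so only blocks that actually differ get new storage.
rsync -a --inplace --no-whole-file /home/bob/music/ /tank/music-bob/
```

`--inplace` makes rsync overwrite files block-by-block instead of writing a new copy and renaming it, and `--no-whole-file` forces the delta algorithm even for local transfers (rsync skips it locally by default), so blocks shared between the two collections stay shared with the snapshot.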