  • Any comment on the topic of fault tolerance, or is that something that's always implemented completely outside of compression algorithms? Commented Jan 8, 2014 at 1:00
  • @illuminÉ Resiliency can't be provided without sacrificing compression ratio. It's an orthogonal problem, and while tools like Parchive exist, TCP's error handling does the job just as well for distributing the kernel. Commented Jan 8, 2014 at 8:58
  • @illuminÉ Fault tolerance (assuming you mean something similar to par2) isn't normally a concern when distributing archives over the Internet. Downloads are assumed to be reliable enough (and you can just re-download if one is corrupted). Cryptographic hashes and signatures are often used, and they detect corruption as well as tampering. There are compressors that offer greater fault tolerance, though at the cost of compression ratio. No one seems to find the trade-off worth it for HTTP or FTP downloads. Commented Jan 8, 2014 at 17:02
  • @derobert I don't know, but it seems quite misleading to me, because, for example (from the blog post), an xz -2 compressed file is nearly the same size as a bzip2 -3 one, yet xz -2 uses 1300 KB of memory while bzip2 -3 uses 1700 KB when decompressing (and xz -2 also compresses faster). It does use more memory when compressing, however. Commented Aug 11, 2015 at 14:15
  • @Mike Does that make it clearer? I checked and confirmed that the kernel sources are compressed with xz -9 (xz -vvl will tell you; see the sketch below). Commented Aug 11, 2015 at 20:59
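
A minimal sketch of the two checks mentioned in these comments, assuming a downloaded kernel tarball named linux-3.12.tar.xz and a published checksum file named sha256sums.asc (both filenames are illustrative, not taken from the discussion):

    # Show how the archive was compressed: with double verbosity, xz reports
    # the filter chain and the memory needed to decompress, which indicates
    # the preset used (e.g. the 64 MiB LZMA2 dictionary of xz -9).
    xz --list --verbose --verbose linux-3.12.tar.xz

    # Detect download corruption (and, if the checksum file is signed,
    # tampering) instead of relying on fault tolerance inside the
    # compression format itself.
    sha256sum --check --ignore-missing sha256sums.asc

This relies on GNU coreutils' sha256sum; --ignore-missing simply skips entries in the checksum list for files you didn't download.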