
  • How would this work within the infrastructure I outlined? Commented Mar 12, 2014 at 13:06
  • You would have to get your process to write its filenames into mksquashfs's invocation script, and have it continue to append them as it runs. Or even into a tmpfs that mksquashfs will read and compress as it runs. Or, as another answer mentioned, pipe through something else: invoke cpio much like the dd example above, perhaps using its copy mode. In any case, it definitely reads, creates, and compresses on the fly. Commented Mar 12, 2014 at 13:09
  • Will it compress across files? Commented Mar 12, 2014 at 13:14
  • It compresses its input as a stream - all inodes, all of it. I've used it with dd and it was pretty cool - I always use a 1MB block size and xz compression. Commented Mar 12, 2014 at 13:15
  • This looks like an option, but from your answer I fail to see how to create, say, a squashfs archive with a directory test and a file file in this directory. Could you please provide a brief example? Commented Mar 14, 2014 at 7:49
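
To address the last comment's request, here is a minimal sketch of creating a squashfs archive containing a directory `test` with a file `file` inside it. The staging path `src` and the image name `archive.squashfs` are illustrative, and the final `-p` line shows mksquashfs's pseudo-file syntax for generating an entry on the fly, which relates to the append-as-it-runs idea above:

```shell
# Stage the tree to be archived: a directory "test" holding one file "file".
mkdir -p src/test
printf 'hello\n' > src/test/file

# Pack the staging tree into a squashfs image.
# -b 1M       : 1 MiB block size
# -comp xz    : xz compression
# -noappend   : overwrite any existing image instead of appending
# -p '...'    : additionally create test/generated inside the image from a
#               command's output (pseudo-file; no file on disk needed)
mksquashfs src archive.squashfs -b 1M -comp xz -noappend \
    -p 'test/generated f 644 0 0 echo on-the-fly'

# List the archive contents to verify.
unsquashfs -l archive.squashfs
```

The listing should show `squashfs-root/test/file` and `squashfs-root/test/generated`. Note that mksquashfs compresses across the whole input stream within each block, so many small files in one block share compression context.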