Timeline for Disk usage inside archives, like ncdu
Current License: CC BY-SA 3.0
11 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Sep 9, 2017 at 8:05 | history | suggested | SDsolar | CC BY-SA 3.0 | Spelling fix |
| Sep 9, 2017 at 7:38 | comment | added | SDsolar | | This is really good. Thank you from September 2017 (Ubuntu 16.04 LTS). |
| Sep 9, 2017 at 7:37 | review | Suggested edits | | | Sep 9, 2017 at 8:05 |
| Aug 17, 2014 at 17:53 | comment | added | Volker Siegel | | Sure, that could be done in memory, but I do not see much reason for that - using the filesystem is essentially the same as using standard tools. You can use a tmpfs to write to if you insist on using memory, of course. Regarding using the stream offsets for sizes, I cannot easily picture it, but expect that it will not work in general, based on the compressed-as-a-whole issue. Would be worth some experiments... |
| Aug 17, 2014 at 17:20 | comment | added | a3nm | | For gzip, I think that decompression can be done in a streaming fashion, and tar archives are sequential, so one should be able to treat the compressed size of a file as the difference between the offset of this file and the offset of the next file, no? (I don't know gzip in detail, so I may be missing some subtleties.) Of course this would favor files that come later in the archive (e.g., if I have two copies of the same file, only the first would take space), but I guess that's an OK approximation. What do you think? |
| Aug 17, 2014 at 15:12 | history | edited | Volker Siegel | CC BY-SA 3.0 | added 825 characters in body |
| Aug 17, 2014 at 15:04 | comment | added | Volker Siegel | | So it can only be approximated. One approximation would be the size of individually compressed files. Another would be a fraction of the compressed size assuming all files compress by the same ratio. There are certainly other ways. The first seems to be OK. To implement it, there is no way around actually unpacking and recompressing the individual files, so I see no reason to not just do that, and use ncdu on it. |
| Aug 17, 2014 at 13:53 | comment | added | Volker Siegel | | Strictly speaking, that's not possible because the archive is compressed as a whole. An individual file has no "compressed size". |
| Aug 17, 2014 at 13:38 | comment | added | a3nm | | Thanks for your answer! This is a good idea, but what I care about is the compressed size of the files, not uncompressed: I want to see which files take up the most space in the actual archive. Is there any way to do this? |
| Aug 17, 2014 at 13:07 | history | edited | Volker Siegel | CC BY-SA 3.0 | added 188 characters in body |
| Aug 17, 2014 at 12:57 | history | answered | Volker Siegel | CC BY-SA 3.0 | |
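
The Aug 17, 2014 at 15:04 comment suggests approximating per-file compressed size by unpacking the archive and recompressing each file individually. A minimal sketch of that idea, assuming Python's standard `tarfile` and `zlib` modules are acceptable (the script name, output format, and helper function are illustrative and not part of the original discussion):

```python
#!/usr/bin/env python3
"""Approximate per-file compressed sizes inside a .tar.gz archive.

Since the archive is compressed as a whole, an individual member has no
exact "compressed size"; this sketch approximates it by re-compressing
each member on its own and reporting that size.
"""
import sys
import tarfile
import zlib


def approx_compressed_sizes(archive_path):
    """Yield (member_name, individually_compressed_size) pairs."""
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar:
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            # Re-compress the member on its own; this is only an
            # approximation of its share of the real archive size.
            yield member.name, len(zlib.compress(data))


if __name__ == "__main__":
    # Usage: python3 archive_sizes.py some-archive.tar.gz
    sizes = sorted(approx_compressed_sizes(sys.argv[1]),
                   key=lambda pair: pair[1], reverse=True)
    for name, size in sizes:
        print(f"{size:12d}  {name}")
```

As the Aug 17, 2014 at 13:53 comment points out, this is only an approximation: members are compressed together in the real archive, so cross-file redundancy (such as duplicate files) is not reflected in these per-member numbers.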