It is possible, at least for FAT32, but I don't recall the name of the command-line tool.

If I remember right, it works by first asking for a hole of the full file length and assigning it to the new stream; the file is then read and written through that stream, as sketched below.
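
A minimal sketch of that pre-allocate-then-copy idea in Python (the paths are placeholders; truncate() reserves the full length up front, although whether the allocation ends up contiguous still depends on the filesystem):

    import os, shutil

    src = r"D:\big_file.bin"   # hypothetical source file
    dst = r"E:\big_file.bin"   # hypothetical destination on the target volume

    size = os.path.getsize(src)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.truncate(size)                                     # reserve the full length before copying any data
        shutil.copyfileobj(fin, fout, length=8 * 1024 * 1024)   # then stream the file in 8 MiB chunks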

It is also possible with NTFS, via a pre-allocating copy, if I remember right.

I do not know of any tool or command able to avoid fragmentation when NTFS compression is enabled. Sorry, I used a tool a long time ago (I remember it had a nice GUI) that copied without causing fragmentation in Windows, but I can't recall its name.

So the best I can tell you is to use, on Windows, the command-line utility XCOPY with the /J flag:

  • XCOPY /J

It tries not to fragment and is ideal for big files, as long as NTFS compression is disabled or you are using FAT32.
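
For example (the source file and destination folder are only placeholders, and E:\target is assumed to already exist):

    XCOPY "D:\big_file.bin" "E:\target\" /J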

Explanation of the NTFS compression pipeline fragmentation:

  1. Block N is written to cluster N.
  2. The clusters around N are then compressed and stored somewhere else, so the clusters between N and N+# become free; in other words, the file ends up fragmented, very fragmented. A 10 GiB file can turn into more than 100,000 fragments (always assuming a compression ratio of about 50%), which is why NTFS compression behaves so badly here. If the file were compressed first in RAM and then sent to disk, no fragmentation would occur, or at least it could be avoided. A rough estimate of the fragment count is sketched below.
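
As a back-of-the-envelope check of that fragment count (the 4 KiB cluster and the 16-cluster compression unit are common NTFS defaults, assumed here rather than taken from the answer):

    # Rough estimate: every compression unit that shrinks can leave a gap behind it.
    file_size = 10 * 1024**3            # 10 GiB
    cluster = 4 * 1024                  # 4 KiB cluster (a common NTFS default)
    compression_unit = 16 * cluster     # NTFS compresses in 16-cluster units (64 KiB here)
    units = file_size // compression_unit
    print(units)                        # 163840 -> same ballpark as ">100000 fragments"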

Another side effect of this way of doing things: assume free space is 5 GiB and I want to write a file of 6 GiB that will take only 3 GiB after compression. This cannot be done directly. But if you first move 2 GiB of (non-compressible) data somewhere else, the free space becomes 7 GiB; you write the 6 GiB file, compression shrinks it to 3 GiB, so free space is 4 GiB; then you move the 2 GiB of original data back, and voila, everything is there and 2 GiB remain free. The point is that if there is not enough space for the uncompressed file, it cannot be copied onto NTFS, and it does not matter that there would be enough space once it is NTFS-compressed. It needs the full size because it first writes the file without compression and only then applies the compression. Recent NTFS drivers do this in a pipeline, so in theory the check could be relaxed, but the free-space check was never changed. A small sketch of that check is below.
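
A minimal sketch of the check Windows effectively applies (paths are placeholders): the copy only succeeds when the uncompressed size fits, even when the compressed result would.

    import os, shutil

    src = r"D:\big_file.bin"            # hypothetical 6 GiB source file
    dst_dir = r"C:\compressed_folder"   # hypothetical NTFS-compressed destination folder

    needed = os.path.getsize(src)               # full uncompressed size is required
    free = shutil.disk_usage(dst_dir).free      # free bytes on the destination volume

    if free < needed:
        print("Copy will fail: NTFS checks the uncompressed size, not the compressed one.")
    else:
        shutil.copyfile(src, os.path.join(dst_dir, os.path.basename(src)))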

Windows does not write the file "after" compressing it; Windows "compresses" the file "after" saving it uncompressed, so for each cluster the hard disk sees two write attempts: first with the non-compressed data, then with the compressed data. Modern pipelined NTFS drivers keep the HDD from seeing both writes, but in the first NTFS versions the HDD wrote the whole file uncompressed and only then compressed it. This was very noticeable with very large, highly compressible files: writing a 10 GiB file that NTFS compresses down to a few MiB (a file filled with zeros) took longer than writing the non-compressed version of the same file. Nowadays the pipeline has removed most of that cost and it takes very little time, but the up-front free-space requirement is still there.
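
If you want to reproduce that, a quick way is to generate a well-compressible test file (all zeros) and copy it into a compressed and a non-compressed folder. A minimal sketch, with an assumed path and a smaller 1 GiB size:

    block = b"\0" * (1024 * 1024)            # 1 MiB of zeros, compresses extremely well
    with open(r"D:\zeros.bin", "wb") as f:   # hypothetical path for the test file
        for _ in range(1024):                # 1024 x 1 MiB = 1 GiB
            f.write(block)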

Hopefully one day the NTFS compression method will do it in one pass, so we can get non-fragmented NTFS-compressed files without having to de-fragment them after writing. Until then the best options are XCOPY with the /J flag and CONTIG, or that GUI tool whose name I can't remember.
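
For example, a copy followed by a per-file defragmentation pass with Sysinternals' contig.exe might look like this (paths are placeholders; check the contig documentation for the exact switches in your version):

    XCOPY "D:\big_file.bin" "E:\target\" /J
    CONTIG -v "E:\target\big_file.bin"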

If someone remembers it: it was around when Windows only went up to XP, and it had options to pause, to copy in parallel from HDD1 to HDD2 and from HDD3 to HDD4, and of course the one I wanted: to pre-allocate.

Hopefully this helps!
