It is possible, at least for FAT32, however I don't recall the name of the command line tool.
If I remember right, it is based on a first call that reserves a full-length hole and assigns it to the new stream; then the file is read and written onto that stream.
It is also possible with NTFS (it is called pre-allocate copy, if I remember right).
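The pre-allocate trick itself is easy to try from code. This is only a rough sketch with the Windows API (the destination name and the 1GiB size are placeholders, not from any real tool): set the end of file to the final length first, so the filesystem knows the full length up front and can try to find one contiguous run, then rewind and write the real data.

```c
#include <windows.h>

int main(void)
{
    /* Placeholder destination for this sketch. */
    HANDLE h = CreateFileA("dst.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    size.QuadPart = 1LL << 30;      /* full length of the copy: 1 GiB */

    /* Reserve the whole run up front... */
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    /* ...then rewind and stream the real data into the reserved space. */
    LARGE_INTEGER zero;
    zero.QuadPart = 0;
    SetFilePointerEx(h, zero, NULL, FILE_BEGIN);
    /* read the source + WriteFile() loop goes here */

    CloseHandle(h);
    return 0;
}
```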
I do not know any tool / command able to avoid that (when NTFS compression is enabled)... sorry. I used a tool a long time ago (I remember it had a nice GUI) to copy without causing fragmentation in Windows, but I can't seem to recall the name.
So the best I can tell you is to use (on Windows) the command line utility XCOPY with the /J flag:
- XCOPY /J
It tries not to fragment; ideal for big files (if NTFS compression is disabled or you are using FAT32).
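By the way, if you want the same from your own code: as far as I know, the unbuffered copy that /J asks for is also exposed by the CopyFileEx API through the COPY_FILE_NO_BUFFERING flag (Vista and later). A minimal sketch with placeholder paths:

```c
#include <windows.h>

int main(void)
{
    /* Unbuffered copy, bypassing the system cache, like XCOPY /J.
       Both paths are placeholders. */
    BOOL ok = CopyFileExA("D:\\big\\image.iso", "E:\\backup\\image.iso",
                          NULL, NULL, NULL, COPY_FILE_NO_BUFFERING);
    return ok ? 0 : 1;
}
```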
Explanation for NTFS compression pipeline fragmentation:
- Block N will be written to Cluster N.
- Clusters around N+# will be compressed and stored somewhere else, so the clusters between N and N+# become free; in other words, the file ends up fragmented, very fragmented... 10GiB = >100000 fragments, always assuming a compression of 50% (NTFS compresses in units of 16 clusters, 64KiB with the default 4KiB cluster size, so a 10GiB file has over 160000 compression units, each one able to leave a hole behind it; you can count the pieces yourself with the sketch after this list). That is why NTFS compression is very BAD. If the file were compressed first in RAM and then sent to disk, no fragmentation would occur (or at least it could be avoided).
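If you want to see this on your own files, you can ask NTFS for the extent list and count the pieces (more or less what CONTIG reports when it analyzes a file). A rough sketch only; note that compressed/sparse files also report hole extents (Lcn == -1), which this simple count does not separate out:

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    STARTING_VCN_INPUT_BUFFER in;
    in.StartingVcn.QuadPart = 0;
    BYTE out[64 * 1024];
    DWORD extents = 0;
    BOOL more;

    do {
        DWORD ret;
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof in, out, sizeof out,
                                  &ret, NULL);
        more = !ok && GetLastError() == ERROR_MORE_DATA;
        if (!ok && !more) break;    /* e.g. tiny resident files: no extents */

        RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)out;
        if (rp->ExtentCount == 0) break;
        extents += rp->ExtentCount;
        /* resume where the last call stopped */
        in.StartingVcn = rp->Extents[rp->ExtentCount - 1].NextVcn;
    } while (more);

    printf("%lu extents\n", (unsigned long)extents);
    CloseHandle(h);
    return 0;
}
```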
Another side effect of this way of doing things: assume free space is 5GiB, and I want to write a file of 6GiB that after compression will take only 3GiB... This cannot be done directly. But if you first move 2GiB of (non-compressible) data to another place, free space becomes 7GiB; write the 6GiB file, compression shrinks it to 3GiB, and free space is 4GiB; then put back the 2GiB of original data that we moved, and voila... everything is there and 2GiB is still free. The point being: if there is not enough space for the uncompressed file, it cannot be copied onto NTFS (it does not matter that there would be enough space after NTFS compression). It needs all of the space because it first writes the file without compression, then applies the compression. Lastly, NTFS drivers now do this in a pipeline, so in theory the direct copy would be possible, but they never changed the free-space check.
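You can check numbers like these yourself: GetFileSizeEx returns the logical length, and GetCompressedFileSize returns what the file really occupies once compressed. A small sketch (the path is just a placeholder):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *path = "D:\\data\\big.vhd";   /* placeholder */

    HANDLE h = CreateFileA(path, FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER logical;
    GetFileSizeEx(h, &logical);               /* logical length */
    CloseHandle(h);

    DWORD high = 0;
    DWORD low = GetCompressedFileSizeA(path, &high);  /* on-disk size */
    ULONGLONG ondisk = ((ULONGLONG)high << 32) | low;

    printf("logical: %lld bytes, on disk: %llu bytes\n",
           logical.QuadPart, ondisk);
    return 0;
}
```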
Windows does not write the file "after" compressing it; Windows "compresses" the file "after" saving it uncompressed, so for each cluster the hard disk sees two write attempts: first with the non-compressed data, second with the compressed data. Modern pipelined NTFS drivers avoid letting the HDD see both writes, but with the first NTFS versions the HDD wrote the whole file uncompressed and only then did the compression start. It was very noticeable with very large, well compressible files: writing a 10GiB file that NTFS compresses down to a few MiB (a file filled with zeros) took longer than writing the non-compressed version of the file. Nowadays the pipeline has fixed that and it takes very little time... but the up-front free-space requirement is still there.
Hopefully one day the NTFS compression method will do it all in one pass, so we can get non-fragmented NTFS compressed files without the need to defragment them after writing. Until then the best option is XCOPY with the /J flag and CONTIG... or that GUI tool whose name I can't remember.
If someone remembers it... It was around when Windows was only up to XP, and it had options to pause, to copy in parallel from HDD1 to HDD2 and from HDD3 to HDD4, and of course the one we want: to pre-allocate.
Hopefully this helps!