
I am trying to chop a 4TB (4,000,225,165,312 bytes) drive up into even partitions of 1TB.

I want these partitions to be further divisible into partitions of at least 1000^3 bytes (1,000,000,000 bytes, ~1GB).

Okay, so after hours of distilling, I've found a couple of conflicting conclusions:

  • with Gparted I cannot make a 1,000,000,000 byte (953.67431640625 MiB) partition
  • with KDEparted I can request a 1,000,000,000 byte partition, but it ends up as 1,000,341,504 bytes
    • turns out 954 MiB is 1,000,341,504 bytes
    • this doesn't scale, as 1000341504 * 1000 * 4 (~4TB) is 4,001,366,016,000, larger than the drive
    • when I make one of 1,000,000,000,000 bytes, it ends up as 1,000,000,716,800
    • so there is extra overhead that decreases with increasing total size
    • KDEparted uses the sfdisk backend, which doesn't use sectors
    • Gparted uses alignment to MiBs
  • with Gdisk I can make a 1,000,000,000,000 B (1000^4, ~1TB) partition using 1,953,125,000 sectors (512 bytes each)

That would be acceptable: use Gdisk to create partitions with sectors and then move them around with Gparted. However, when I delete a 1000^4 B partition and create a new one with Gparted just filling the available space, it adds extra bytes, ending up at 1,000,000,716,800 (1,400 sectors extra).

This may be related to the Gdisk warning "Partitions will be aligned on 2048-sector boundaries", but I thought I was maximizing space with Gdisk. Now it looks like I would have to use Gdisk, then Gparted, then Gdisk again..? Is there a better way of going about this?

A big part was understanding which alignment (bytes, cylinder, MiB) was best, and this post helped: "For this reason a lot of modern partitioning tools simply align the entire drive along a 1M[i]B boundary, which neatly does away with the need to detect whether you have any of the many types of drive, be they 512-byte sectors, 4KB sectors, or SSD with some arbitrary block size." https://superuser.com/questions/393914/what-is-partition-alignment-and-why-whould-i-need-it

Apparently 1 MiB was chosen because of recent drives using 4096-byte sectors, SSD 512 KiB erase-block requirements, and the original 512-byte sector size. What mystified me is how much larger a MiB (1,048,576 bytes) is than 4096 bytes. I still don't understand why, but MiB seems to be the dominant alignment, and it is working so far. "2048-sector boundaries" does actually mean 2048*512 = 1 MiB, not just starting at byte 2048 ("MiB alignment" would have been clearer). This link is also helpful: https://www.rodsbooks.com/gdisk/advice.html

I need to think in binary. I can't just multiply powers of 10; it won't add up to ~1TB. So again, why is MiB alignment used when it is so much larger than the 4096 B sector size? Is this an attempt at future-proofing?
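The mismatch can be checked with a few lines of arithmetic (a Python sketch, no disk access needed; the numbers reproduce the sizes the tools reported):

```python
# Pure-arithmetic sanity checks on the sizes above.
SECTOR = 512
MIB = 1024 * 1024

# "2048-sector boundaries" is exactly MiB alignment:
assert 2048 * SECTOR == MIB

tb = 1000**4  # the "readable" 1,000,000,000,000-byte target

# 1 TB is a whole number of 512-byte sectors (the Gdisk result)...
assert tb % SECTOR == 0
print(tb // SECTOR)         # 1953125000 sectors

# ...but not a whole number of MiB, so MiB-aligned tools must round:
print(tb % MIB)             # 331776 bytes past the last MiB boundary
print(-(-tb // MIB) * MIB)  # 1000000716800 -> the size KDEparted produced
```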

  • I don't really remember the alignment restrictions, but is it possible you're trying to do this with (early-1980s!) legacy MBR-based partitioning instead of on a GPT system, like modern operating systems would expect? Commented Feb 12, 2023 at 20:12
  • @MarcusMüller Thanks, but no this is with GPT tables Commented Feb 12, 2023 at 20:14
  • 1⁴ is 1. Did you mean 1000⁴, that is, 1000000000000? Or maybe even 1024⁴? Commented Feb 12, 2023 at 22:25
  • You said optimal partition size. What are you trying to optimise for? If you tell us why you are doing it, then we may be better able to help. Commented Feb 12, 2023 at 22:27
  • @ctrl-alt-delor true, I'll change that. My goal is to have large partitions that I can evenly divide into smaller ones that fit neatly inside them. I had to use MiB alignment, with one remainder smaller than 1TB, and then divide it by 1024 instead of 1000. (There is a limit of 128 partitions, but this gives me a ~1GB unit to create multiples of.) Commented Feb 12, 2023 at 22:30

3 Answers


It's not possible to make partitions at byte resolution. Even if you could, it would leave you with no end of alignment issues.

The sector size is either 512 or 4096 bytes, and all partition sizes must be multiples of that. By convention, you should stick to MiB alignment (multiples of 1,048,576 bytes) unless you have strong reasons not to.

Another complication is that the partition table itself needs some room, so no partition can start at sector 0. Likewise, you can't use the very last sectors of the drive (they are in use by the GPT backup header).

So if you want all partitions to be the same size without exceeding the byte boundaries, you can't help but approximate some things.

Here's an example for 1TB partitions on a 4TB disk:

(parted) unit b
(parted) print
Model: Loopback device (loopback)
Disk /dev/loop0: 4000225165312B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start           End             Size            File system  Name  Flags
 1      1048576B        999999668223B   999998619648B
 2      1000000716800B  1999999336447B  999998619648B
 3      2000000385024B  2999999004671B  999998619648B
 4      3000000053248B  3999998672895B  999998619648B

(parted) unit mib
(parted) print
Model: Loopback device (loopback)
Disk /dev/loop0: 3814912MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start       End         Size       File system  Name  Flags
 1      1.00MiB     953674MiB   953673MiB
 2      953675MiB   1907348MiB  953673MiB
 3      1907349MiB  2861022MiB  953673MiB
 4      2861023MiB  3814696MiB  953673MiB

This is just an example — you can choose other boundaries.
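The boundaries in the listing above follow mechanically from two rules (a Python sketch, assuming the listing's MiB alignment): each partition starts at the first MiB boundary at or after its TB mark (partition 1 at 1 MiB, leaving room for the partition table), and the common size is the largest whole-MiB count that keeps partition 1 inside the first TB.

```python
MIB = 1024 * 1024
TB = 1000**4

# Partition 1 must start at 1 MiB and end at or before the 1 TB mark;
# that fixes the common size for all four partitions:
size = (TB // MIB) * MIB - MIB  # 999998619648 B = 953673 MiB

for n in range(4):
    # first MiB boundary at/after the n-TB mark, but at least 1 MiB
    start = max(MIB, -(-(n * TB) // MIB) * MIB)
    print(f"partition {n + 1}: {start}B .. {start + size - 1}B  size {size}B")
```

Running this reproduces the byte-unit table exactly, including the 1,000,000,716,800 B start of partition 2.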

If you need to create many more partitions (you mentioned 1GB ones) you should write yourself a script that determines those boundaries for you. Note that GPT has a 128 partition limit by default.
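Such a script could look like this minimal sketch (the function name, device path, and parted invocation style are my own choices, not from any tool): it splits a byte range into equal whole-MiB chunks and prints one parted command per partition.

```python
# Hypothetical boundary-planning helper (sketch only).
MIB = 1024 * 1024

def plan(start_b, end_b, count, dev="/dev/sdX"):
    """Split [start_b, end_b) into `count` equal whole-MiB chunks and
    emit one `parted mkpart` command per chunk."""
    chunk = ((end_b - start_b) // MIB) // count  # whole MiB per chunk
    pos = start_b // MIB
    cmds = []
    for _ in range(count):
        cmds.append(f"parted -s {dev} unit MiB mkpart p{pos} {pos} {pos + chunk}")
        pos += chunk
    return cmds

# e.g. carve the first ~1 TB into 100 equal ~9.3 GiB partitions
# (staying under the default GPT limit of 128)
for cmd in plan(MIB, 1000**4, 100)[:2]:
    print(cmd)
```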

  • thank you.. I'm not sure the 2nd sentence is clear.. also, I do want to be precise, so I can have precise divisions and have multiple smaller partitions fit in the space of larger ones.. I'm aware of the table space requirement.. isn't that in the first 2048 sectors? Yes, I saw the 128 limit, thanks.. I'm looking at your example.. also, it seems sectors are primary to MiB as a unit. I can't go lower than 512 bytes for sure. It seems like Gdisk can divide in sectors.. but maybe the extra is coming from MiBs.. however, the extra space, 143 S, is much larger than either. Commented Feb 12, 2023 at 20:36
  • @alchemy the partitions in this example are MiB aligned and do not cross TB boundaries. (There's no reason why they can't cross those boundaries, you just don't want them to, or that's what I gathered from your question. Sorry if I misunderstood). One semi-practical use case for such partitioning is if you expect to be replacing drives which might be smaller in the future, but guaranteed to have at least full multiple of TB. Some SSDs actually came out with 960GB and similar odd numbers, so a 1TB partition is still too large for those... Commented Feb 12, 2023 at 20:40
  • yes, I've seen that problem with SSD multiples, but I want to at least get the HDD situation correct.. I do like your smaller-than-1TB partitions in MiBs, and I did try that with 999292928 bytes, but I am trying to understand why I can't use more readable (and mathable) units of 1000^4 bytes when it is an even multiple of 512.. yet the remainder, 143 S, is much larger than either a sector or a MiB.. I wonder if it is a cylinder boundary.. I have mostly excluded that there is any metadata needed adjacent to the partition, which was my first guess. ..essentially your smaller parts are 2696 S less than 1000^4 B. Commented Feb 12, 2023 at 20:45
  • I'm still not seeing how you arrived at the size of 999998619648B originally. Commented Feb 12, 2023 at 21:28
  • It's MiB aligned. +1 MiB it'd be 1000000716800, which is 716800 bytes past the 1TB boundary. The next partition ends at 1999999336448; +1 MiB it'd be 2000000385024, which is 385024 bytes past the 2TB boundary. Basically I tried to keep each partition within a TB boundary, and so it ends up with that size. You asked for "even partitions of 1TB", ideally 1000000000000 bytes each, but that ideal can't be reached (in MiB alignment), so the approximation has to do. You also wanted it to scale and not go past the disk size, which I took to mean those boundaries have to be respected. Commented Feb 12, 2023 at 21:36

It's not advisable for partition sizes to be multiples of 1'000'000'000: this number is not divisible by 4096, which is crucial for proper performance of many Linux subsystems. If I were you, I'd use 1024 * 1024 * 1024 instead (1073741824 bytes, exactly 1 GiB) or something close to it (but again divisible by at least 1024*1024, as many Linux disk utilities prefer a 1 MiB boundary).
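The divisibility claims are easy to verify with a quick sketch:

```python
# Divisibility checks for the sizes discussed above.
assert 1_000_000_000 % 4096 != 0     # 10^9 doesn't divide by 4096...
assert 1_000_000_000 % 512 == 0      # ...though it does divide by 512
assert 1024**3 == 1073741824         # exactly 1 GiB
assert (1024**3) % 4096 == 0         # GiB sizes divide cleanly by 4096
assert (1024**3) % (1024 * 1024) == 0  # and are 1 MiB aligned by construction
```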

  • agree.. using binary multiples would be best.. however the drive manufacturers do not.. so I am stuck trying to evenly divide ~1000^4 bytes. (Also, gdisk says "Sector size (logical/physical): 512/512 bytes", whereas I've seen some newer drives say 512/4096, as you are saying. Luckily 1000^4 is an even multiple of 512.) Also, I think this is likely the next most common use case besides using the whole drive, and maybe using lvm. Many people would approach this with simple logic as I do here. Commented Feb 12, 2023 at 20:25
  • Drive OEMs use 512/4096 byte sectors internally. I've never seen anything different. E.g. a 1TB HDD is not exactly 1 000 000 000 000 bytes; it's something close that is divisible by either 512 or 4096. Commented Feb 12, 2023 at 20:27
  • Thank you. How would I verify that? ..it is divisible; see the very first number of bytes I wrote for the 4TB drive. Commented Feb 12, 2023 at 20:27
  • unix.stackexchange.com/questions/52215/… e.g. blockdev --getsize64 /dev/sda My 1TB HDD is actually 1 000 204 886 016 bytes. Commented Feb 12, 2023 at 20:28
  • 4000225165312, same as I reported above. It shows that in gnome-disks-utility and in fdisk -l Commented Feb 12, 2023 at 20:30

I'm adding some partitioning schemes.

Rounding down by MiB:

exact bytes   target  halved MiB  -> whole MiB  (rounded-down bytes)

999999668224  ~1TB    /2  953674 MiB
499999834112  ~500GB  /2  476837
249999917056  ~250GB  /2  238418.5    -> 238418  (249999392768)
124999958528  ~125GB  /2  119209.25   -> 119209  (124999696384)
62499979264   ~63GB   /2  59604.625   -> 59604   (62499323904)
31249989632   ~32GB   /2  29802                  (31249661952)
15624994816   ~16GB   /2  14901.15625 -> 14901   (15624830976)
7812497408    ~8GB    /2  7450.578125 -> 7450    (7811891200)
3906248704    ~4GB    /2  3725.28..   -> 3725    (3905945600)
1953124352    ~2GB    /2  1862.64..   -> 1862    (1952448512)
976562176     ~1GB        931.32..    -> 931     (976224256)

1024 * 976224256 = 999653638144

This loses at most 0.64 MiB on each adjustment, for a total of 346030080 B (330 MiB) unused (999999668224 - 999653638144); measured against 1000^4, it's 1000^4 - 999653638144 = 346361856 B, i.e. 346361856/1048576 = 330.31.. MiB. This may be the best approach, and an acceptable loss.
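The rounding-down scheme above can be sketched in a few lines: halve the MiB count at each step, discarding fractions, from ~1TB down to the ~1GB unit.

```python
# Sketch of the "round down by MiB at each halving" scheme.
MIB = 1024 * 1024

size = (1000**4) // MIB   # 953674 MiB, ~1TB rounded down to whole MiB
chain = [size]
while size > 1000:        # stop once we reach the ~1GB unit
    size //= 2            # floor-halve, losing < 1 MiB per step
    chain.append(size)

print(chain)                        # 953674, 476837, ..., 1862, 931
print(chain[0] - chain[-1] * 1024)  # 330 -> total rounding loss in MiB
```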

Building up from 931 MiB:

953344, 476672, 238336, 119168, 59584, 29792, 14896, 7448, 3724, 1862, 931 MiB

999653638144  ~1TB        953344 MiB
499826819072  ~500GB  *2  476672
249913409536  ~250GB  *2  238336
124956704768  ~125GB  *2  119168
62478352384   ~63GB   *2  59584
31239176192   ~32GB   *2  29792
15619588096   ~16GB   *2  14896
7809794048    ~8GB    *2  7448
3904897024    ~4GB    *2  3724
1952448512    ~2GB    *2  1862
976224256     ~1GB    *2  931

953674 - 953344 = 330 MiB, also an acceptable loss, with no intermediate size adjustments.

893760 MiB  ~960GB
446880      ~480GB
223440      ~240GB
111720      ~120GB
55860       ~60GB
27930       ~30GB
9310        ~10GB
4655        ~5GB
931         ~1GB

These sizes work nicely with SSD and OS sizes. 953344 - 893760 = 59584 MiB goes unused, but that's enough to fit a ~60GB partition (55860 MiB) with 3724 MiB left over, which is exactly a ~4GB partition (depending on end-alignment issues).
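Each of these fixed-ratio schemes can be generated from a single base unit (a sketch; the factor list and function name are my own, mirroring the 960/480/240/... GB drive sizes discussed here):

```python
# Generate a drive-size-friendly scheme table from one base unit (in MiB).
FACTORS = [960, 480, 240, 120, 60, 30, 20, 10, 5, 1]

def scheme(base_mib):
    """Return (size in MiB, approximate label) pairs for one base unit."""
    return [(f * base_mib, f"~{f}GB") for f in FACTORS]

for size_mib, label in scheme(930):
    print(f"{size_mib} MiB  {label}")
```

With `scheme(930)` the top entry is 892800 MiB (~960GB) and the bottom is 930 MiB (~1GB); swapping in 931, 928, or 900 reproduces the other tables.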

Just an example: my 4TB drives have 225,165,312 B extra, which is ~214.7 MiB. On my 2TB drive I was able to fit 2x893760, 2x55860, 2x3724, and 1x931 (total 1907619 MiB), and it left 256 MiB (which changed to 109 with 88 used, presumably for the GPT table). I did this filling from the end toward the beginning. There was .7 MiB (90113 B) not used at the end, and 1 MiB at the beginning. But my partition numbers are backwards (I can probably live with that, although they are listed upside-down in fdisk). The alternative is to pre-calculate them, possibly by making the largest possible partition first in MiB to read the end-alignment count. This should work, as 1907728 - 1907619 = 109 MiB, the same as above.

However, the minimum size allowed by Gparted is 256 MiB. ..Aha, this is because I left the default type as btrfs, which requires 256 MiB, while ext4 only requires 8 MiB. So this does work, yay!

EDIT: I realized that by adjusting the 931 MiB to 930, one more even subdivision becomes possible.

892800 MiB  ~960GB
446400      ~480GB
223200      ~240GB
111600      ~120GB
55800       ~60GB
27900       ~30GB
18600       ~20GB
9300        ~10GB
4650        ~5GB
930         ~1GB
465

Then I realized that by making 465 into 464, it can be divided even further, down to 29 MiB.

890880 MiB  ~960GB
445440      ~480GB
222720      ~240GB
111360      ~120GB
55680       ~60GB
27840       ~30GB
9280        ~10GB
4640        ~5GB
928         ~1GB
464
232
116
58
29

I'm not sure which of the above two I prefer. Rounding further, to 900, also makes some more readable chunks.

864000 MiB  ~960GB
432000      ~480GB
216000      ~240GB
108000      ~120GB
54000       ~60GB
27000       ~30GB
18000       ~20GB
9000        ~10GB
4500        ~5GB
900         ~1GB
450
225

This loses only 3.2% of space (30/931) and gives much more memorable and readable sizes.

Or I could round all the way down to 800,000 MiB, but this would be an 11% loss (1 - 833.3/931), and the ~1GB chunks would be uneven at 833.33 MiB. It's possible to make a decimal-like division by 5 at 25000 so the ~5GB sizes become 5000 MiB, but this might be confusing, as 5000 MiB wouldn't fit on a 5GB usb drive. I could use the "930" or "900" scheme for small drives like 1GB, though. Actually, 950 MiB should fit, because 1GB is 953.67 MiB. This leaves 3.67 MiB for end alignment and the beginning GPT partition table of 2048 sectors (which is 1 MiB).

800000 MiB  ~960GB
400000      ~480GB
200000      ~240GB
100000      ~120GB
50000       ~60GB or 64GB drive
25000       ~30GB or 32GB drive
15000       ~15GB part or 16GB drive
5000        ~5GB part or 8GB drive

smaller drives (and backups as partitions):
950 MiB     ~1GB (4750 ~5GB)
475

Or, if I keep the medium sizes at even cutoffs, they will fit on most drives up to the 120GB SSDs. Even the smaller 1GB partitions can usually be shrunk by 50 MiB to fit on a usb stick, so I can leave them at 1000 MiB on the HDDs.

800000 MiB  ~960GB
400000      ~480GB
200000      ~240GB
100000      ~120GB
60000       ~60GB or 64GB drive
30000       ~30GB or 32GB drive
15000       ~15GB part or 16GB drive
5000        ~5GB part or 8GB drive
1000        ~1GB part or 1GB drive

smaller drives (and backups as partitions):
950 MiB     ~1GB (4750 ~5GB)
475
