An extra 80 GB of EC2 EBS costs something under $12 per month. On-line manipulations are likely to take more than an hour of your work, and carry a risk of downtime if something goes wrong. How much is that worth to you?
Pay for some extra capacity, add it to your instance as a third disk, xvdc, and initialize it as an LVM PV (you don't even have to put a partition table on it: just pvcreate /dev/xvdc is sufficient). Then add the new PV to your rootrhel VG (vgextend rootrhel /dev/xvdc), and now you can extend your /storetmp with the added capacity:
```shell
lvextend -L +80G /dev/mapper/rootrhel-storetmp
xfs_growfs /storetmp    # or the appropriate tool for your filesystem type
```
With your immediate problem solved, you can now schedule some downtime at a suitable time.
If you are using the XFS filesystem (as RHEL/CentOS 7 does by default), then during the next scheduled downtime: create tarballs of the current contents of /store and /transient, unmount and remove the entire storerhel VG, add its PV xvdb3 to the rootrhel VG, recreate the LVs for the /store and /transient filesystems using more realistic estimates of their capacity needs, and restore the contents from the tarballs. End of downtime.
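A sketch of that downtime procedure, assuming the VG/LV names and mount points above; the sizes are placeholder examples, and your /etc/fstab will need to be updated with the new device names:

```shell
# All of this happens during scheduled downtime, as root.
# 1. Back up the current contents.
tar -C /store -czf /root/store.tar.gz .
tar -C /transient -czf /root/transient.tar.gz .

# 2. Tear down the old VG and hand its PV to rootrhel.
umount /store /transient
vgchange -an storerhel
vgremove storerhel                  # destroys the LVs in it; data is in the tarballs
vgextend rootrhel /dev/xvdb3

# 3. Recreate the LVs with more realistic sizes (example figures).
lvcreate -L 40G -n store rootrhel
lvcreate -L 20G -n transient rootrhel
mkfs.xfs /dev/rootrhel/store
mkfs.xfs /dev/rootrhel/transient

# 4. Restore (and remember to update /etc/fstab accordingly).
mount /dev/rootrhel/store /store
mount /dev/rootrhel/transient /transient
tar -C /store -xzf /root/store.tar.gz
tar -C /transient -xzf /root/transient.tar.gz
```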
Now your rootrhel VG has three PVs: xvdb2, xvdb3 and xvdc, and plenty of space for your needs.
If you want to stop paying for xvdc, you can use pvmove /dev/xvdc to automatically migrate the data within the VG off xvdc and onto unallocated space on xvdb2 and/or xvdb3. You can do this on-line; just don't do it during your peak I/O workload, to avoid taking a performance hit. Then run vgreduce rootrhel /dev/xvdc, use echo 1 > /sys/block/xvdc/device/delete to tell the kernel that the xvdc device is going away, and finally tell Amazon that you don't need your xvdc disk any more.
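Collected into one sequence (with the same caveat about running pvmove outside peak I/O hours):

```shell
pvmove /dev/xvdc                        # migrate all extents off xvdc, on-line
vgreduce rootrhel /dev/xvdc             # drop the now-empty PV from the VG
pvremove /dev/xvdc                      # wipe the LVM label from the disk
echo 1 > /sys/block/xvdc/device/delete  # tell the kernel the device is going away
# ...then detach and delete the EBS volume in the AWS console or CLI.
```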
I have nearly 20 years of experience working with LVM disk storage (first with HP-UX LVM, and later with Linux LVM once it matured enough to be usable in an enterprise environment). These are the rules of thumb I've come to use with LVM:
- You should never create two VGs when one is enough.
In particular, having two VGs on a single disk is most likely a mistake that will cause you headaches. Reallocating disk capacity within a VG is as flexible as your filesystem type allows; moving capacity between VGs in chunks smaller than one already-existing PV is usually not worth the hassle.
- If there is uncertainty in your disk space requirements (and there always is), keep your LVs on the small side and some unallocated space in reserve.
As long as your VG has unallocated capacity available, you can extend LVs and the filesystems in them on-line as needed with one or two quick commands. It's a one-banana job for a junior sysadmin.
If there is no unallocated capacity in the VG, get a new disk, initialize it as a new PV, add it to the VG that needs capacity, and then proceed with the extension as usual. Shrinking filesystems is more error-prone: it may require downtime, or may even be impossible without backing up and recreating the filesystem at a smaller size, depending on the filesystem type. So you'll want to avoid situations that require on-line shrinking of filesystems as much as possible.
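The routine capacity-add flow looks like this, assuming the new disk shows up as /dev/xvdd (a hypothetical name) and the growing filesystem is XFS:

```shell
pvcreate /dev/xvdd                      # initialize the new disk as a PV
vgextend rootrhel /dev/xvdd             # add it to the VG that is short on space
lvextend -L +20G /dev/rootrhel/somelv   # grow the LV (somelv is a placeholder)
xfs_growfs /somemountpoint              # grow the filesystem on-line
```

On reasonably recent LVM versions, lvextend -r (--resizefs) resizes the filesystem in the same step, making the separate xfs_growfs call unnecessary.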
- Micro-management of disk space can be risky, and is a lot of work. Work is expensive.
Okay. Technically you could create an 80 GB file on /store, losetup it into a loop device, then make that into a PV you could add to your rootrhel VG... but doing that would result in a system that will most likely drop into single-user recovery mode at boot, unless you set up a customized start-up script for these filesystems and VGs and get it right the first time.
Get it wrong, and the next time your system is rebooted for any reason you'll have to take some unplanned downtime for troubleshooting and fixing, or, more realistically, for recreating the filesystems from scratch and restoring the contents from backups, because that's simpler than trying to troubleshoot this jury-rigged mess.
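For the record, the jury-rigged version would look roughly like this; the point of the paragraphs above is that you should not do it:

```shell
# DON'T: a PV backed by a file on a filesystem that itself lives in LVM.
dd if=/dev/zero of=/store/pv.img bs=1M count=81920   # 80 GB backing file
losetup /dev/loop0 /store/pv.img
pvcreate /dev/loop0
vgextend rootrhel /dev/loop0
# On reboot, /dev/loop0 does not exist until someone re-runs losetup,
# so rootrhel cannot be fully activated and boot drops into recovery mode.
```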
Or if you were using an ext4 filesystem (which, unlike XFS, can be shrunk, although resize2fs will only do so off-line, with the filesystem unmounted), you could shrink the /store filesystem, shrink the LV, use pvmove --alloc anywhere to consolidate the free space at the tail end of the xvdb3 PV, shrink the PV, shrink the partition, run partprobe to make the changes effective without a reboot, then create a new partition xvdb4, initialize it as a new PV and add it to the rootrhel VG...
BUT make one mistake in this sequence so that your filesystem/PV extends beyond its LV/partition container, and your filesystem gets switched into read-only mode with an error flag that can only be reset by running a filesystem check, resulting in mandatory unplanned downtime.
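Spelled out, the risky sequence would look something like the following. All sizes and extent ranges are examples that would have to be computed carefully for the real volumes; each step can destroy data if a number is wrong:

```shell
umount /store                                # resize2fs refuses to shrink a mounted fs
e2fsck -f /dev/rootrhel/store                # mandatory check before shrinking
resize2fs /dev/rootrhel/store 100G           # 1. shrink the filesystem first
lvreduce -L 100G /dev/rootrhel/store         # 2. then the LV, never below the fs size
pvmove --alloc anywhere /dev/xvdb3:38400-51199   # 3. move extents off the PV's tail
                                             #    (example extent range)
pvresize --setphysicalvolumesize 150G /dev/xvdb3 # 4. shrink the PV
# 5. shrink partition 3 and create partition 4 with parted/fdisk, then:
partprobe /dev/xvdb                          # re-read the partition table, no reboot
pvcreate /dev/xvdb4
vgextend rootrhel /dev/xvdb4
```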
Are rootrhel-storetmp, storerhel-store and storerhel-transient empty? (I.e. can I give you the commands to delete them before creating them anew?)