Extending an AWS EBS volume with LVM on top
Yes, I know there is documentation for this, but I found it not clear enough on the volume-extend topic. What I wanted to do was change the size of my LVM volume from 128GB to 378GB (don’t ask).
I started by creating a snapshot (just in case something went wrong) and extending the EBS volume size in the AWS console, which is clearly explained in the documentation.
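If you prefer to script these two steps instead of clicking through the console, they can also be done with the AWS CLI. A minimal sketch, assuming a hypothetical volume ID vol-0123456789abcdef0 (substitute your own):

# Hypothetical volume ID - replace with the ID of your EBS volume.
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-resize snapshot"
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 378
# Optionally watch the modification progress:
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0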
Once the volume was resized in the AWS console, the lsblk output looked like this:

$ sudo lsblk
[...]
nvme2n1           259:3    0  378G  0 disk
└─vg--srv-lv--srv 253:0    0  128G  0 lvm  /srv
[...]
In the AWS documentation on extending LVM volumes, you may read that you should use growpart. This is the part that took me some time: growpart grows a partition, and here the physical volume sits directly on the whole disk (/dev/nvme2n1, no partition table), so there is nothing for it to grow. You can go straight to pvresize, which in my case was:

$ sudo pvresize /dev/nvme2n1
  WARNING: PV /dev/nvme2n1 in VG vg-srv is using an old PV header, modify the VG to update.
  WARNING: updating PV header on /dev/nvme2n1 for VG vg-srv.
  Physical volume "/dev/nvme2n1" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
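For contrast: if the physical volume had been created on a partition (say, a hypothetical /dev/nvme2n1p1) instead of the whole disk, the growpart step from the documentation would indeed have been needed first, roughly like this:

# Only applies when the PV sits on partition 1 of the disk, not on the whole disk.
$ sudo growpart /dev/nvme2n1 1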
I also ran pvs, vgs, and lvs to check that everything was fine:

$ sudo pvs
  PV           VG     Fmt  Attr PSize    PFree
  /dev/nvme2n1 vg-srv lvm2 a--  <378.00g 250.00g
$ sudo vgs
  VG     #PV #LV #SN Attr   VSize    VFree
  vg-srv   1   1   0 wz--n- <378.00g 250.00g
$ sudo lvs
  LV     VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-srv vg-srv -wi-ao---- <128.00g
Looks like we now have 250G of free space, so let’s extend the volume using lvextend. All I need for this are the VG and LV names from the lvs output above. My command looked like this:

$ sudo lvextend -L 378G /dev/vg-srv/lv-srv
  Insufficient free space: 32768 extents needed, but only 32767 available
Whoops, that was a little bit too much, so (because I had no time to calculate the exact maximum) I simply took one GB less:

$ sudo lvextend -L 377G /dev/vg-srv/lv-srv
  Size of logical volume vg-srv/lv-srv changed from 128.00 GiB (32768 extents) to 377.00 GiB (96512 extents).
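In hindsight, there is a tidier way that skips the size guessing entirely: lvextend also accepts relative sizes in extents, so you can claim all remaining free space, and the -r (--resizefs) flag would even grow the filesystem in the same step. A sketch of that alternative:

# Take all remaining free extents in the VG; -r also resizes the filesystem.
$ sudo lvextend -l +100%FREE -r /dev/vg-srv/lv-srv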
Now my lsblk output looks like this:

$ sudo lsblk
[...]
nvme2n1           259:3    0  378G  0 disk
└─vg--srv-lv--srv 253:0    0  377G  0 lvm  /srv
[...]
Nice, but this is not the end. When you take a look at the df output, the space available to the system is still not resized:

$ df -h
[...]
/dev/mapper/vg--srv-lv--srv  126G  104G   17G  87% /srv
[...]
Because my volume is using ext4, the last command was:

$ sudo resize2fs /dev/vg-srv/lv-srv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/vg-srv/lv-srv is mounted on /srv; on-line resizing required
old_desc_blocks = 8, new_desc_blocks = 24
The filesystem on /dev/vg-srv/lv-srv is now 98828288 (4k) blocks long.
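If the volume were XFS instead of ext4, the equivalent step would be xfs_growfs, which takes the mount point rather than the device:

# XFS only: grow the filesystem mounted at /srv to fill the logical volume.
$ sudo xfs_growfs /srv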
Note that for resize2fs I again used the VG and LV names from the lvs output. Now the work is finished:

$ df -h
[...]
/dev/mapper/vg--srv-lv--srv  371G  104G  252G  30% /srv
[...]
Instead of resizing the existing volume, with LVM you can also add a new one and join the two “physical” volumes under one “logical” volume: the new physical volume simply joins the volume group alongside the old one. Since I had never done this before, I did not try it this time, but the rough shape of it is sketched below.
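For completeness, a sketch of that approach, assuming the new EBS volume shows up as /dev/nvme3n1 (a hypothetical device name; check lsblk on your instance):

# Initialize the new disk as an LVM physical volume.
$ sudo pvcreate /dev/nvme3n1
# Add it to the existing volume group.
$ sudo vgextend vg-srv /dev/nvme3n1
# Extend the logical volume across the new free space and resize the filesystem.
$ sudo lvextend -l +100%FREE -r /dev/vg-srv/lv-srv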