Disk Management in Azure
LVM on Red Hat or Oracle Linux running in Azure
Increase the disk space for /var
The following samples were taken on a default Red Hat 8.2 image on Azure.
- Check the current disk layout; in this sample /var is 8GB and is mounted from /dev/mapper/rootvg-varlv
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/sda1                  496M  148M  348M  30% /boot
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv   8.0G  297M  7.8G   4% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the logical volume next to see which volume group it belongs to (rootvg)
[root@vm-rhe82 ~]# lvdisplay /dev/mapper/rootvg-varlv
  --- Logical volume ---
  LV Path                /dev/rootvg/varlv
  LV Name                varlv
  VG Name                rootvg
  LV UUID                L29d2z-WFNF-4vE7-t08B-KPC0-KEUP-utyBYy
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-03-04 12:14:55 +0000
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
- Check whether the volume group has free space left; in this sample rootvg has <40.02GiB free, which we can use.
[root@vm-rhe92 ~]# vgdisplay rootvg
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <63.02 GiB
  PE Size               4.00 MiB
  Total PE              16133
  Alloc PE / Size       5888 / 23.00 GiB
  Free  PE / Size       10245 / <40.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
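The free size vgdisplay reports can be cross-checked from the extent numbers: Free PE times PE Size. A quick sketch with the values above:

```shell
# Free PE (10245) times PE size (4 MiB), converted to GiB
free_pe=10245
pe_size_mib=4
echo "$(( free_pe * pe_size_mib / 1024 )) GiB free"   # prints: 40 GiB free
```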
- Increase /var by 10GB from 8GB to 18GB
[root@vm-rhe82 ~]# lvextend -L +10GB /dev/mapper/rootvg-varlv
  Size of logical volume rootvg/varlv changed from 8.00 GiB (2048 extents) to 18.00 GiB (4608 extents).
  Logical volume rootvg/varlv successfully resized.
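The extent counts in the lvextend output line up with the requested size: 4608 - 2048 = 2560 new extents of 4 MiB each, i.e. the requested 10GB:

```shell
# 2560 new extents * 4 MiB per extent = 10 GiB
echo "$(( (4608 - 2048) * 4 / 1024 )) GiB added"   # prints: 10 GiB added
```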
- Extend the XFS filesystem
- Use xfs_growfs -n first for a dry run that only prints the current geometry without making changes
[root@vm-rhe82 ~]# xfs_growfs /dev/mapper/rootvg-varlv
meta-data=/dev/mapper/rootvg-varlv isize=512    agcount=4, agsize=524288 blks
         =                         sectsz=4096  attr=2, projid32bit=1
         =                         crc=1        finobt=1, sparse=1, rmapbt=0
         =                         reflink=1
data     =                         bsize=4096   blocks=2097152, imaxpct=25
         =                         sunit=0      swidth=0 blks
naming   =version 2                bsize=4096   ascii-ci=0, ftype=1
log      =internal log             bsize=4096   blocks=2560, version=2
         =                         sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                     extsz=4096   blocks=0, rtextents=0
data blocks changed from 2097152 to 4718592
- Check the result again and compare with the output above; /var now has 18GB
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/sda1                  496M  148M  348M  30% /boot
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv    18G  462M   18G   3% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the volume group again; the free space has decreased by 10GB
[root@vm-rhe82 ~]# vgdisplay rootvg
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <63.02 GiB
  PE Size               4.00 MiB
  Total PE              16133
  Alloc PE / Size       8448 / 33.00 GiB
  Free  PE / Size       7685 / <30.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
Increase the OS Disk
- Test lab on Azure using Red Hat 8.2; in the Azure portal we increased the OS disk size from 64GB to 128GB. This is how to add the new disk space to LVM
- List the disk with parted; on the first run we need to acknowledge the new size by typing Fix
[root@vm-rhe92 ~]# parted -l /dev/sda
Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 134217728 blocks) or continue
with the current setting?
Fix/Ignore? Fix
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 137GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
14      1049kB  5243kB  4194kB                                     bios_grub
15      5243kB  524MB   519MB   fat16        EFI System Partition  boot, esp
 1      525MB   1050MB  524MB   xfs
 2      1050MB  68.7GB  67.7GB                                     lvm
- Next we need to know which physical volume to expand; this sample shows /dev/sda2
[root@vm-rhe82 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rootvg
  PV Size               63.02 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              16133
  Free PE               7685
  Allocated PE          8448
  PV UUID               uevLg0-eCem-PQIV-0lUJ-ViuH-BQPi-C4PSIo
- Optionally confirm with lsblk
[root@vm-rhe82 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  128G  0 disk
├─sda1                8:1    0  500M  0 part /boot
├─sda2                8:2    0   63G  0 part
│ ├─rootvg-tmplv    253:0    0    2G  0 lvm  /tmp
│ ├─rootvg-usrlv    253:1    0   10G  0 lvm  /usr
│ ├─rootvg-homelv   253:2    0    1G  0 lvm  /home
│ ├─rootvg-varlv    253:3    0   18G  0 lvm  /var
│ └─rootvg-rootlv   253:4    0    2G  0 lvm  /
├─sda14               8:14   0    4M  0 part
└─sda15               8:15   0  495M  0 part /boot/efi
sdb                   8:16   0    8G  0 disk
└─sdb1                8:17   0    8G  0 part /mnt
- The output above shows partition number 2, which we want to grow to use the full 128GB
[root@vm-rhe82 ~]# growpart /dev/sda 2
CHANGED: partition=2 start=2050048 old: size=132165632 end=134215680 new: size=266385375 end=268435423
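growpart reports sizes in 512-byte sectors; converting the new size confirms the partition now covers roughly the whole disk. A quick sketch with the numbers above:

```shell
# new partition size: 266385375 sectors * 512 bytes, converted to GiB
sectors=266385375
echo "$(( sectors * 512 / 1024 / 1024 / 1024 )) GiB"   # prints: 127 GiB
```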
- parted now shows the new size for /dev/sda2
[root@vm-rhe92 ~]# parted -l /dev/sda
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 137GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
14      1049kB  5243kB  4194kB                                     bios_grub
15      5243kB  524MB   519MB   fat16        EFI System Partition  boot, esp
 1      525MB   1050MB  524MB   xfs
 2      1050MB  137GB   136GB                                      lvm
- Now resize the physical volume within LVM
[root@vm-rhe92 ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
- Check the volume group again and compare with the result above to see how rootvg increased
[root@vm-rhe82 ~]# vgdisplay
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <127.02 GiB
  PE Size               4.00 MiB
  Total PE              32517
  Alloc PE / Size       8448 / 33.00 GiB
  Free  PE / Size       24069 / <94.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
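The jump in Total PE matches the added capacity: 32517 - 16133 = 16384 new extents of 4 MiB each:

```shell
# 16384 new extents * 4 MiB per extent = 64 GiB of new capacity
echo "$(( (32517 - 16133) * 4 / 1024 )) GiB added to rootvg"   # prints: 64 GiB added to rootvg
```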
Add a new Disk and move the mount point /var to the new disk
Test lab on Azure using Red Hat 8.2; in Azure we added a new disk to which we want to move /var
- Check which disk was newly attached by looking at the kernel messages; this sample shows sdc as the latest disk added to the system
[root@vm-rhe92 ~]# dmesg | grep Attached
[    4.960356] scsi 0:0:0:0: Attached scsi generic sg0 type 0
[    4.992214] scsi 0:0:0:1: Attached scsi generic sg1 type 0
[    5.228196] sd 0:0:0:1: [sdb] Attached SCSI disk
[    5.354066] sd 0:0:0:0: [sda] Attached SCSI disk
[ 2675.092988] sd 1:0:0:0: Attached scsi generic sg2 type 0
[ 2675.145990] sd 1:0:0:0: [sdc] Attached SCSI disk
- Next, initialize the new disk as an LVM physical volume
[root@vm-rhe82 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
- Next, create the new volume group; we call it vg01
[root@vm-rhe92 ~]# vgcreate vg01 /dev/sdc
  Volume group "vg01" successfully created
- Next, create the logical volume; we call it lv01 and use all available space
[root@vm-rhe82 ~]# lvcreate -n lv01 -l 100%FREE vg01
  Logical volume "lv01" created.
- As the disk is new, we need to create a filesystem before proceeding
[root@vm-rhe92 ~]# mkfs.xfs /dev/vg01/lv01
meta-data=/dev/vg01/lv01           isize=512    agcount=4, agsize=4194048 blks
         =                         sectsz=4096  attr=2, projid32bit=1
         =                         crc=1        finobt=1, sparse=1, rmapbt=0
         =                         reflink=1
data     =                         bsize=4096   blocks=16776192, imaxpct=25
         =                         sunit=0      swidth=0 blks
naming   =version 2                bsize=4096   ascii-ci=0, ftype=1
log      =internal log             bsize=4096   blocks=8191, version=2
         =                         sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                     extsz=4096   blocks=0, rtextents=0
- Now create a temporary mount point for the new /var, here /var-temp
[root@vm-rhe82 ~]# mkdir /var-temp
- Next, mount the new logical volume there:
[root@vm-rhe82 ~]# mount /dev/vg01/lv01 /var-temp
- Check the result; /var-temp should show up alongside the /var we want to replace
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda1                  496M  148M  348M  30% /boot
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv    18G  440M   18G   3% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
/dev/mapper/vg01-lv01       64G  489M   64G   1% /var-temp
- Copy the current data to /var-temp, preserving the exact file attributes
[root@vm-rhe82 ~]# rsync -avAXEWSlHh /var/* /var-temp/ --no-compress
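One caveat with the /var/* source: shell globs do not match hidden entries, so any dot-files directly under /var would be skipped; a trailing-slash source (rsync ... /var/ /var-temp/) copies everything including hidden entries. A small self-contained demonstration of the glob behaviour:

```shell
# Globs (*) skip hidden entries, so /var/* would miss e.g. /var/.foo
d=$(mktemp -d)
touch "$d/file" "$d/.hidden"
for f in "$d"/*; do echo "matched: $f"; done    # only $d/file is matched
```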
- Add the new volume to /etc/fstab
- Make sure there are no other fstab entries for /var
echo "/dev/mapper/vg01-lv01 /var xfs defaults 0 0" >> /etc/fstab
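The device-mapper path is stable across reboots, but the entry can also reference the filesystem UUID, which blkid /dev/vg01/lv01 prints. A sketch with a placeholder UUID:

```shell
# alternative /etc/fstab line; replace <uuid> with the value from blkid
# UUID=<uuid>  /var  xfs  defaults  0 0
```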
- Verify the fstab entries with a mount dry run:
[root@vm-rhe82 ~]# mount -fav
/         : ignored
/boot     : already mounted
/boot/efi : already mounted
/home     : already mounted
/tmp      : already mounted
/usr      : already mounted
/mnt      : already mounted
/var      : successfully mounted
- Reboot and cross fingers
[root@vm-rhe82 ~]# init 6
- After reboot check the result:
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G   17M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda1                  496M  148M  348M  30% /boot
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/vg01-lv01       64G  764M   64G   2% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the VG size again
[root@vm-rhe82 ~]# vgdisplay rootvg | grep -i free
  Free  PE / Size       24069 / <94.02 GiB
- Cleanup: remove the old logical volume from rootvg
[root@vm-rhe82 ~]# lvremove /dev/rootvg/varlv
Do you really want to remove active logical volume rootvg/varlv? [y/n]: y
  Logical volume "varlv" successfully removed
- Check the VG size again
[root@vm-rhe82 ~]# vgdisplay rootvg | grep -i free
  Free  PE / Size       28677 / <112.02 GiB
List Commands
- List disks
lshw -class disk
lvmdiskscan
- List sizes and print a total
df -h --output=size --total
- Mount everything from fstab
mount -a
- Simulate
mount -fav