Disk Management in Azure
EXT4 on Ubuntu or Debian running in Azure
Add a new disk for /var
- Check the new Disk
root@vm-ubuntu20:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 70.3M  1 loop /snap/lxd/21029
loop1     7:1    0 55.4M  1 loop /snap/core18/2128
loop2     7:2    0 32.3M  1 loop /snap/snapd/12883
sda       8:0    0   30G  0 disk
├─sda1    8:1    0 29.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0    8G  0 disk
└─sdb1    8:17   0    8G  0 part /mnt
sdc       8:32   0   32G  0 disk
- Label the disk as GPT
root@vm-ubuntu20:~# parted /dev/sdc mklabel gpt
Information: You may need to update /etc/fstab.
- Label the disk as MBR
root@vm-ubuntu20:~# parted /dev/sdc mklabel msdos
Information: You may need to update /etc/fstab.
- Create partition for GPT label
root@vm-ubuntu20:~# parted /dev/sdc mkpart 1 ext4 0% 100%
Information: You may need to update /etc/fstab.
- Create partition for MSDOS label
root@vm-ubuntu20:~# parted /dev/sdc mkpart primary ext4 0% 100%
Information: You may need to update /etc/fstab.
- Create Filesystem
root@vm-ubuntu20:~# mkfs.ext4 /dev/sdc1
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 8388352 4k blocks and 2097152 inodes
Filesystem UUID: 8f1da040-7ece-42ea-98a0-1bfe2c200fc7
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
- Make temp dir
root@vm-ubuntu20:~# mkdir /var-temp
- Mount new disk
root@vm-ubuntu20:~# mount /dev/sdc1 /var-temp
- Copy/Sync data
root@vm-ubuntu20:~# rsync -avAXEWSlHh /var/* /var-temp/ --no-compress
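Two caveats on the copy: the `/var/*` glob does not match top-level dotfiles, and it is worth confirming the copy is complete before switching fstab over. A minimal verification sketch (my addition, not part of the original lab; the function name is made up):

```shell
# Sketch: verify that source and destination trees match before
# pointing /etc/fstab at the new disk. diff -r recurses into both
# trees and compares file contents; non-zero exit means they differ.
verify_copy() {
  local src="$1" dst="$2"
  if diff -r "$src" "$dst" >/dev/null; then
    echo "trees match"
  else
    echo "trees differ" >&2
    return 1
  fi
}
# Usage: verify_copy /var /var-temp
```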
- Get BlockID
root@vm-ubuntu20:~# blkid /dev/sdc1
/dev/sdc1: UUID="4a3968f7-16c1-41f1-8579-0457829de256" TYPE="ext4" PARTUUID="f7a8e367-01"
- Add the disk to /etc/fstab; use the UUID, because device names may change after a reboot
echo "UUID=4a3968f7-16c1-41f1-8579-0457829de256 /var ext4 defaults 0 1" >>/etc/fstab
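Instead of pasting the UUID by hand, the entry can be generated straight from blkid so a typo cannot slip in. A small sketch (the helper name is my own; `blkid -s UUID -o value` prints only the UUID):

```shell
# Pure string formatting: build an fstab entry from a UUID,
# mount point and filesystem type.
format_fstab_line() {
  local uuid="$1" mountpoint="$2" fstype="$3"
  printf 'UUID=%s %s %s defaults 0 1\n' "$uuid" "$mountpoint" "$fstype"
}
# Usage, feeding the UUID directly from blkid:
#   format_fstab_line "$(blkid -s UUID -o value /dev/sdc1)" /var ext4 >> /etc/fstab
```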
- Test mount
root@vm-ubuntu20:~# mount -fav
/                   : ignored
/boot/efi           : already mounted
/mnt                : already mounted
/var                : successfully mounted
- Reboot
root@vm-ubuntu20:~# init 6
- Test to see the new device for /var
root@vm-ubuntu20:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G  1.4G   28G   5% /
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           394M  940K  393M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sdb15      105M  5.2M  100M   5% /boot/efi
/dev/sda1        32G  314M   30G   2% /var
/dev/loop1       33M   33M     0 100% /snap/snapd/12883
/dev/loop0       56M   56M     0 100% /snap/core18/2128
/dev/loop2       71M   71M     0 100% /snap/lxd/21029
/dev/sdc1       7.9G   36M  7.4G   1% /mnt
tmpfs           394M     0  394M   0% /run/user/1000
Extend the disk space for /var
This lab test is a follow-up to the above; the second disk has been increased from 32GB to 64GB in Azure
- Check with parted; depending on the label you might need to run Fix
root@vm-ubuntu20:~# parted -l /dev/sda
Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 67108864 blocks) or continue with
the current setting?
Fix/Ignore? Fix
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  34.4GB  34.4GB  ext4         1
- Grow the partition
root@vm-ubuntu20:~# growpart /dev/sda 1
CHANGED: partition=1 start=2048 old: size=67104768 end=67106816 new: size=134215647 end=134217695
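growpart reports sizes in 512-byte sectors, which makes the CHANGED line hard to eyeball. A quick sanity check (my own helper, just arithmetic) converts the sector counts to GiB:

```shell
# Convert a 512-byte sector count to GiB (2^30 bytes).
sectors_to_gib() {
  awk -v s="$1" 'BEGIN { printf "%.1f\n", s * 512 / (2 ^ 30) }'
}
sectors_to_gib 134215647   # new size from the growpart output -> 64.0
sectors_to_gib 67104768    # old size -> 32.0
```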
- Check parted again to see if the size matches
root@vm-ubuntu20:~# parted -l /dev/sda
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  68.7GB  68.7GB  ext4         1
- Resize the Filesystem
root@vm-ubuntu20:~# resize2fs /dev/sda1
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/sda1 is mounted on /var-temp; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 8
The filesystem on /dev/sda1 is now 16776955 (4k) blocks long.
- Check Block Device again and compare with /etc/fstab
LVM on Red Hat or Oracle Linux running in Azure
Extend the disk space for /var
The following samples were taken on a default RedHat 8.2 image on Azure
- Check the default disk configuration; in this sample /var has 8GB and is mounted from /dev/mapper/rootvg-varlv
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/sda1                  496M  148M  348M  30% /boot
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv   8.0G  297M  7.8G   4% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the logical volume next to see which volume group it belongs to (rootvg)
[root@vm-rhe82 ~]# lvdisplay /dev/mapper/rootvg-varlv
  --- Logical volume ---
  LV Path                /dev/rootvg/varlv
  LV Name                varlv
  VG Name                rootvg
  LV UUID                L29d2z-WFNF-4vE7-t08B-KPC0-KEUP-utyBYy
  LV Write Access        read/write
  LV Creation host, time localhost, 2021-03-04 12:14:55 +0000
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
- Check whether the volume group has space left; in this sample rootvg has just under 40.02GB of free space which we can use.
[root@vm-rhe82 ~]# vgdisplay rootvg
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <63.02 GiB
  PE Size               4.00 MiB
  Total PE              16133
  Alloc PE / Size       5888 / 23.00 GiB
  Free  PE / Size       10245 / <40.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
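vgdisplay reports free space as physical extents: Free PE multiplied by the PE Size (4 MiB here). The "<40.02 GiB" figure can be reproduced with a one-liner (helper name is my own):

```shell
# Free PE x PE size (MiB) / 1024 = free space in GiB.
free_pe_to_gib() {
  awk -v pe="$1" -v pe_mib="$2" 'BEGIN { printf "%.2f\n", pe * pe_mib / 1024 }'
}
free_pe_to_gib 10245 4   # -> 40.02, matching the vgdisplay output
```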
- Increase /var by 10GB from 8GB to 18GB
[root@vm-rhe82 ~]# lvextend -L +10GB /dev/mapper/rootvg-varlv
  Size of logical volume rootvg/varlv changed from 8.00 GiB (2048 extents) to 18.00 GiB (4608 extents).
  Logical volume rootvg/varlv successfully resized.
- Extend the XFS filesystem
- Use -n first to print the filesystem geometry without making any changes
[root@vm-rhe82 ~]# xfs_growfs /dev/mapper/rootvg-varlv
meta-data=/dev/mapper/rootvg-varlv isize=512   agcount=4, agsize=524288 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2097152 to 4718592
- Check the result again and compare with the output above; /var now has 18GB
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/sda1                  496M  148M  348M  30% /boot
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv    18G  462M   18G   3% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the volume group again; the free space has decreased by 10GB
[root@vm-rhe82 ~]# vgdisplay rootvg
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <63.02 GiB
  PE Size               4.00 MiB
  Total PE              16133
  Alloc PE / Size       8448 / 33.00 GiB
  Free  PE / Size       7685 / <30.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
Extend the OS Disk
- Test lab on Azure using RedHat 8.2; in the Azure portal we increased the disk size from 65GB to 128GB. This is how to add the new disk space to LVM
- List the disks with parted; on the first run you will need to fix the new size by typing Fix
[root@vm-rhe82 ~]# parted -l /dev/sda
Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 134217728 blocks) or continue with
the current setting?
Fix/Ignore? Fix
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 137GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
14      1049kB  5243kB  4194kB                                     bios_grub
15      5243kB  524MB   519MB   fat16        EFI System Partition  boot, esp
 1      525MB   1050MB  524MB   xfs
 2      1050MB  68.7GB  67.7GB                                     lvm
- Next we need to know which partition to expand; this sample shows /dev/sda2
[root@vm-rhe82 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rootvg
  PV Size               63.02 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              16133
  Free PE               7685
  Allocated PE          8448
  PV UUID               uevLg0-eCem-PQIV-0lUJ-ViuH-BQPi-C4PSIo
- Optionally, confirm with lsblk
[root@vm-rhe82 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  128G  0 disk
├─sda1                8:1    0  500M  0 part /boot
├─sda2                8:2    0   63G  0 part
│ ├─rootvg-tmplv    253:0    0    2G  0 lvm  /tmp
│ ├─rootvg-usrlv    253:1    0   10G  0 lvm  /usr
│ ├─rootvg-homelv   253:2    0    1G  0 lvm  /home
│ ├─rootvg-varlv    253:3    0   18G  0 lvm  /var
│ └─rootvg-rootlv   253:4    0    2G  0 lvm  /
├─sda14               8:14   0    4M  0 part
└─sda15               8:15   0  495M  0 part /boot/efi
sdb                   8:16   0    8G  0 disk
└─sdb1                8:17   0    8G  0 part /mnt
- The above information shows partition number 2, which we want to grow to 128GB
[root@vm-rhe82 ~]# growpart /dev/sda 2
CHANGED: partition=2 start=2050048 old: size=132165632 end=134215680 new: size=266385375 end=268435423
- Parted now shows the new size for /dev/sda2
[root@vm-rhe82 ~]# parted -l /dev/sda
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 137GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
14      1049kB  5243kB  4194kB                                     bios_grub
15      5243kB  524MB   519MB   fat16        EFI System Partition  boot, esp
 1      525MB   1050MB  524MB   xfs
 2      1050MB  137GB   136GB                                      lvm
- Now resize the physical volume within LVM so the volume group sees the new space
[root@vm-rhe82 ~]# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
- Check the volume group again and compare it with the result above to see how rootvg has grown
[root@vm-rhe82 ~]# vgdisplay
  --- Volume group ---
  VG Name               rootvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <127.02 GiB
  PE Size               4.00 MiB
  Total PE              32517
  Alloc PE / Size       8448 / 33.00 GiB
  Free  PE / Size       24069 / <94.02 GiB
  VG UUID               3EaPl9-Q0Py-z12t-x1YC-R0MT-D1VN-UNch8o
Add a new disk and move the mount point /var to the new disk
Test lab on Azure using RedHat 8.2; in Azure we added a new disk which we want to use for /var
- Check the kernel messages to see which disk is new; this sample shows sdc as the most recently attached disk
[root@vm-rhe82 ~]# dmesg | grep Attached
[    4.960356] scsi 0:0:0:0: Attached scsi generic sg0 type 0
[    4.992214] scsi 0:0:0:1: Attached scsi generic sg1 type 0
[    5.228196] sd 0:0:0:1: [sdb] Attached SCSI disk
[    5.354066] sd 0:0:0:0: [sda] Attached SCSI disk
[ 2675.092988] sd 1:0:0:0: Attached scsi generic sg2 type 0
[ 2675.145990] sd 1:0:0:0: [sdc] Attached SCSI disk
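An alternative to reading dmesg is to look for disks that have no partitions yet. This sketch (my addition) parses `lsblk -nro NAME,TYPE` output and keeps disks without partition children; the simple sdX-style name matching is an assumption and would need extra handling for nvme devices:

```shell
# Read "NAME TYPE" pairs on stdin; remember every disk, then drop
# any disk that has a partition (partition name minus its trailing
# digits = parent disk name, for sdX-style devices).
unpartitioned_disks() {
  awk '$2 == "disk" { d[$1] = 1 }
       $2 == "part" { p = $1; sub(/[0-9]+$/, "", p); delete d[p] }
       END { for (n in d) print n }'
}
# Usage: lsblk -nro NAME,TYPE | unpartitioned_disks
```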
- Next, create the physical volume in LVM
[root@vm-rhe82 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
- Next, create the new volume group in LVM; we call it vg01
[root@vm-rhe82 ~]# vgcreate vg01 /dev/sdc
  Volume group "vg01" successfully created
- Next, create the logical volume in LVM; we call it lv01 and use all the free space
[root@vm-rhe82 ~]# lvcreate -n lv01 -l 100%FREE vg01
  Logical volume "lv01" created.
- As the disk is new, we need to create a filesystem before proceeding
[root@vm-rhe82 ~]# mkfs.xfs /dev/vg01/lv01
meta-data=/dev/vg01/lv01         isize=512    agcount=4, agsize=4194048 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=16776192, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=8191, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
- Now create a temporary mount point for the copy of /var, say /var-temp
[root@vm-rhe82 ~]# mkdir /var-temp
- Next, mount the new logical volume there:
[root@vm-rhe82 ~]# mount /dev/vg01/lv01 /var-temp
- Check the result; /var-temp should show up, alongside the existing /var mount we are going to replace
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G  8.5M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda1                  496M  148M  348M  30% /boot
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/rootvg-varlv    18G  440M   18G   3% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
/dev/mapper/vg01-lv01       64G  489M   64G   1% /var-temp
- Copy the actual data to /var-temp, keeping the exact file attributes
[root@vm-rhe82 ~]# rsync -avAXEWSlHh /var/* /var-temp/ --no-compress
- Add the new volume to /etc/fstab
- Make sure there are no other mount entries for /var
echo "/dev/mapper/vg01-lv01 /var xfs defaults 0 0" >> /etc/fstab
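A plain `echo >>` appends a duplicate line every time it is re-run. A slightly safer sketch (my addition; the function name and the FSTAB parameter are hypothetical, included so the step can be rehearsed on a copy before touching the real /etc/fstab):

```shell
# Append the /var entry only if no uncommented fstab line already
# mounts /var, so re-running the step cannot create duplicates.
add_var_entry() {
  local fstab="$1"
  if ! grep -q '^[^#].*[[:space:]]/var[[:space:]]' "$fstab"; then
    echo "/dev/mapper/vg01-lv01 /var xfs defaults 0 0" >> "$fstab"
  fi
}
# Usage: add_var_entry /etc/fstab
```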
- Do a dry run to check that everything is OK:
[root@vm-rhe82 ~]# mount -fav
/                   : ignored
/boot               : already mounted
/boot/efi           : already mounted
/home               : already mounted
/tmp                : already mounted
/usr                : already mounted
/mnt                : already mounted
/var                : successfully mounted
- Reboot and cross fingers
[root@vm-rhe82 ~]# init 6
- After reboot check the result:
[root@vm-rhe82 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   1.9G     0  1.9G   0% /dev
tmpfs                      1.9G     0  1.9G   0% /dev/shm
tmpfs                      1.9G   17M  1.9G   1% /run
tmpfs                      1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rootvg-rootlv  2.0G   71M  2.0G   4% /
/dev/mapper/rootvg-usrlv    10G  1.5G  8.6G  15% /usr
/dev/mapper/rootvg-tmplv   2.0G   47M  2.0G   3% /tmp
/dev/mapper/rootvg-homelv 1014M   40M  975M   4% /home
/dev/sda1                  496M  148M  348M  30% /boot
/dev/sda15                 495M  6.9M  488M   2% /boot/efi
/dev/mapper/vg01-lv01       64G  764M   64G   2% /var
/dev/sdb1                  7.9G   36M  7.4G   1% /mnt
tmpfs                      378M     0  378M   0% /run/user/1000
- Check the VG size again
[root@vm-rhe82 ~]# vgdisplay rootvg | grep -i free
  Free  PE / Size       24069 / <94.02 GiB
- Cleanup: delete the old logical volume
[root@vm-rhe82 ~]# lvremove /dev/rootvg/varlv
Do you really want to remove active logical volume rootvg/varlv? [y/n]: y
  Logical volume "varlv" successfully removed
- Check the VG size again
[root@vm-rhe82 ~]# vgdisplay rootvg | grep -i free
  Free  PE / Size       28677 / <112.02 GiB
List Commands
- List Disks
lshw -class disk
lvmdiskscan
- List sizes and show a total
df -h --output=size --total
- Mount everything from fstab
mount -a
- Simulate
mount -fav