I came across a Linux database VM at work that desperately needed more disk space. The DBA outlined how they wanted the storage allocated, so I set about the work!
First, I added new disks in ESXi to match the disk space requested. I added space in 500 or 1000 GB blocks to give flexibility with storage and vMotion, and balanced the disks across SCSI controllers as well.
Then, I logged into the Linux VM with admin credentials.
At a high-level, I’ll:
- Capture the existing disk statistics & configuration for reference:
fdisk -l | grep Disk
lvs
df -TH
lsblk
- Add the new disks to the volume group, then expand the logical volume to make use of that space:
sudo vgextend oralogvg /dev/sdaa
sudo pvresize /dev/sdaa
sudo lvextend -l+100%FREE /dev/oralogvg/oraloglv
sudo xfs_growfs /dev/oralogvg/oraloglv
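The whole sequence can be wrapped into one helper, which is handy when more than one disk is being added. This is only a sketch; the volume group, logical volume, and device names are the ones from this VM:

```shell
# A sketch consolidating the steps above; names are specific to this VM.
extend_oralog() {
  for disk in "$@"; do
    sudo vgextend oralogvg "$disk"    # add the new disk to the volume group
    sudo pvresize "$disk"             # make sure the PV spans the whole disk
  done
  sudo lvextend -l+100%FREE /dev/oralogvg/oraloglv   # grow the LV into the free space
  sudo xfs_growfs /dev/oralogvg/oraloglv             # grow the XFS file system
}
# usage: extend_oralog /dev/sdaa /dev/sdab
```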
VMware has an article outlining this process (Extending a logical volume in a virtual machine running Red Hat or CentOS), but it assumes a disk partition is created before the disk is added to the volume group. The existing disks I dealt with aren't configured like that, so I didn't want to deviate from what's already in place.
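One wrinkle worth noting: if the guest doesn't see the disks right away after adding them in ESXi, a SCSI bus rescan usually surfaces them without a reboot. A sketch (host numbers vary per VM; `--dry-run` is my own flag for previewing the targets):

```shell
# Rescan every SCSI host so disks newly added in ESXi show up without a
# reboot. Pass --dry-run to only print which hosts would be rescanned.
rescan_scsi_hosts() {
  local dry_run="${1:-}"
  for host in /sys/class/scsi_host/host*; do
    [ -e "$host" ] || continue          # no SCSI hosts present (e.g. a container)
    if [ "$dry_run" = "--dry-run" ]; then
      echo "would rescan: $host"
    else
      echo "- - -" | sudo tee "$host/scan" > /dev/null
    fi
  done
  return 0
}
# usage: rescan_scsi_hosts        (or: rescan_scsi_hosts --dry-run)
```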
I listed the current disks to determine how Linux labelled the new disks I added in ESXi. They should appear at the end of the list and not be mapped to any logical volumes or volume groups:
[admin@DB030035 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 68G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 62G 0 part
├─vg00-LogVol00_root 253:0 0 5G 0 lvm /
├─vg00-LogVol00_swap 253:1 0 22G 0 lvm [SWAP]
├─vg00-LogVol00_usr 253:2 0 10G 0 lvm /usr
├─vg00-LogVol00_vlogaudit 253:10 0 2G 0 lvm /var/log/audit
├─vg00-LogVol00_vlog 253:11 0 2G 0 lvm /var/log
├─vg00-LogVol00_vtmp 253:12 0 2G 0 lvm /var/tmp
├─vg00-LogVol00_tmp 253:13 0 5G 0 lvm /tmp
├─vg00-LogVol00_var 253:14 0 5G 0 lvm /var
├─vg00-LogVol00_opt 253:15 0 6G 0 lvm /opt
└─vg00-LogVol00_home 253:16 0 2G 0 lvm /home
sdb 8:16 0 220G 0 disk
└─oraredo1vg-oraredo1lv 253:20 0 370G 0 lvm /oraredo1
sdc 8:32 0 12G 0 disk
└─oratempvg-oratemplv 253:5 0 12G 0 lvm /oratemp
sdd 8:48 0 10G 0 disk
└─flash_recovery_area_vg-flash_recovery_area_lv 253:6 0 10G 0 lvm /flash_recovery_area
sde 8:64 0 256G 0 disk
└─oraredo3vg-oraredo3lv 253:4 0 406G 0 lvm /oraredo3
sdf 8:80 0 257G 0 disk
└─oraredo4vg-oraredo4lv 253:18 0 407G 0 lvm /oraredo4
sdg 8:96 0 70G 0 disk
└─oraclevg-oraclelv 253:21 0 70G 0 lvm /oracle
sdh 8:112 0 30G 0 disk
└─oradatavg-oradatalv 253:17 0 30G 0 lvm /oradata
sdi 8:128 0 220G 0 disk
└─oraredo2vg-oraredo2lv 253:19 0 370G 0 lvm /oraredo2
sdj 8:144 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdk 8:160 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdl 8:176 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdm 8:192 0 20G 0 disk
└─oraidxvg-oraidxlv 253:9 0 20G 0 lvm /oraidx
sdn 8:208 0 22G 0 disk
└─oralogvg-oraloglv 253:7 0 22G 0 lvm /oralog
sdo 8:224 0 25G 0 disk
└─oraexportvg-oraexportlv 253:3 0 2.7T 0 lvm /oraexport
sdp 8:240 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdq 65:0 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdr 65:16 0 1.4T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sds 65:32 0 1000G 0 disk
└─sds1 65:33 0 1000G 0 part
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdt 65:48 0 1.5T 0 disk
└─oradata1vg-oradata1lv 253:8 0 11T 0 lvm /oradata1
sdu 65:64 0 1.5T 0 disk
└─oraexportvg-oraexportlv 253:3 0 2.7T 0 lvm /oraexport
sdv 65:80 0 150G 0 disk
└─oraredo1vg-oraredo1lv 253:20 0 370G 0 lvm /oraredo1
sdw 65:96 0 150G 0 disk
└─oraredo2vg-oraredo2lv 253:19 0 370G 0 lvm /oraredo2
sdx 65:112 0 150G 0 disk
└─oraredo3vg-oraredo3lv 253:4 0 406G 0 lvm /oraredo3
sdy 65:128 0 150G 0 disk
└─oraredo4vg-oraredo4lv 253:18 0 407G 0 lvm /oraredo4
sdz 65:144 0 1.2T 0 disk
└─oraexportvg-oraexportlv 253:3 0 2.7T 0 lvm /oraexport
sr0 11:0 1 1024M 0 rom
sdaa 65:160 0 1.5T 0 disk
sdab 65:176 0 1.5T 0 disk
[admin@DB030035 ~]$
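Rather than eyeballing the `lsblk` output, the unused disks can also be spotted from sysfs: a whole disk with no partitions and an empty `holders` directory hasn't been claimed by LVM or anything else. A sketch, which on this VM should flag sdaa and sdab:

```shell
# List whole sd* disks that have no partitions and no holders (nothing
# like LVM/device-mapper sits on top of them) -- candidate new disks.
list_unused_disks() {
  for d in /sys/block/sd*; do
    [ -e "$d" ] || continue                                 # no sd* disks at all
    name=$(basename "$d")
    ls "$d/$name"[0-9]* > /dev/null 2>&1 && continue        # has partitions
    [ -n "$(ls -A "$d/holders" 2>/dev/null)" ] && continue  # claimed (e.g. an LVM PV)
    echo "unused disk: /dev/$name"
  done
  return 0
}
# usage: list_unused_disks
```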
Next, extend the existing volume group to include the new disks:
[admin@DB030035 ~]$ sudo vgextend oralogvg /dev/sdaa
WARNING: Device for PV ovZDIW-axwB-1YVM-rZVF-V90Y-5cMl-tdyU3m not found or rejected by a filter.
Physical volume "/dev/sdaa" successfully created.
Volume group "oralogvg" successfully extended
[admin@DB030035 ~]$ sudo vgextend oralogvg /dev/sdab
WARNING: Device for PV ovZDIW-axwB-1YVM-rZVF-V90Y-5cMl-tdyU3m not found or rejected by a filter.
Physical volume "/dev/sdab" successfully created.
Volume group "oralogvg" successfully extended
Then, resize each physical volume so it spans its full disk:
[admin@DB030035 ~]$ sudo pvresize /dev/sdaa
WARNING: Device for PV ovZDIW-axwB-1YVM-rZVF-V90Y-5cMl-tdyU3m not found or rejected by a filter.
Physical volume "/dev/sdaa" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
[admin@DB030035 ~]$ sudo pvresize /dev/sdab
WARNING: Device for PV ovZDIW-axwB-1YVM-rZVF-V90Y-5cMl-tdyU3m not found or rejected by a filter.
Physical volume "/dev/sdab" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Extend the logical volume to consume 100% of the free space in the volume group:
[admin@DB030035 ~]$ sudo lvextend -l+100%FREE /dev/oralogvg/oraloglv
Size of logical volume oralogvg/oraloglv changed from <22.00 GiB (5631 extents) to 2.95 TiB (773629 extents).
Logical volume oralogvg/oraloglv successfully resized.
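The reported numbers check out: the extents in this volume group are 4 MiB each (consistent with the 5631-extent / 22 GiB figure above), so 773629 extents should land right around 2.95 TiB:

```shell
# Sanity-check lvextend's figures: 773629 extents * 4 MiB per extent,
# converted from MiB to TiB (1 TiB = 1048576 MiB).
extents=773629
mib=$((extents * 4))                                       # 3094516 MiB
awk -v m="$mib" 'BEGIN { printf "%.2f TiB\n", m / 1048576 }'   # prints "2.95 TiB"
```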
Finally, resize the file system to consume all the new space in the logical volume:
[admin@DB030035 ~]$ sudo xfs_growfs /dev/oralogvg/oraloglv
meta-data=/dev/mapper/oralogvg-oraloglv isize=512 agcount=4, agsize=1441536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=5766144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2815, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 5766144 to 792196096
At this point, the new space should be available on the volume. Run fdisk to confirm the new size of the device behind the mount point:
[admin@DB030035 ~]$ sudo fdisk -l | grep "Disk /dev/mapper/oralogvg-oraloglv"
Disk /dev/mapper/oralogvg-oraloglv: 3244.8 GB, 3244835209216 bytes, 6337568768 sectors
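As a final cross-check, fdisk's byte count lines up exactly with the 4 KiB block count xfs_growfs reported:

```shell
# 792196096 data blocks (from xfs_growfs) * 4096 bytes per block should
# equal the byte count fdisk reports for the device.
blocks=792196096
echo $((blocks * 4096))   # prints 3244835209216, matching fdisk's output
```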