Scope/Description
- This article goes over the process of creating an LVM logical volume backed by multiple Ceph RBD images on a Linux client to present as storage, and then expanding that storage.
Prerequisites
- A Ceph cluster with an RBD pool
- The ceph-common package installed on the client machine(s)
- RBD image(s) created
- RBD image(s) mapped to the client
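- If the RBD images have not been created and mapped yet, a minimal sketch looks like the following (the pool name rbd, the image names rbd-image-0/rbd-image-1, the 1T size, and the admin keyring are placeholders for illustration):
root@ubuntu-45d:~# rbd create rbd/rbd-image-0 --size 1T
root@ubuntu-45d:~# rbd create rbd/rbd-image-1 --size 1T
root@ubuntu-45d:~# rbd map rbd/rbd-image-0
root@ubuntu-45d:~# rbd map rbd/rbd-image-1
- To have the images mapped automatically on boot, matching entries can be added to /etc/ceph/rbdmap (used by rbdmap.service, which our systemd unit later depends on):
rbd/rbd-image-0 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
rbd/rbd-image-1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring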
Steps
Editing lvm.conf file to allow RBD images
- Before we can create our Volume Group, we first have to allow RBD images to be used as devices with LVM.
- We do this by editing the /etc/lvm/lvm.conf file and adding types = [ "rbd", 1024 ] to the devices section:
types = [ "rbd", 1024 ]
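- In context, the devices section of /etc/lvm/lvm.conf will look similar to this (other settings omitted):
devices {
        types = [ "rbd", 1024 ]
}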
Creating RBD Volume Group
- Once we've got our RBD images mapped to our client machine, we first have to create the Volume Group and add these RBD images to it. We can do this with the vgcreate command, which creates the Volume Group we will then use to create our LVM.
root@ubuntu-45d:~# vgcreate RBD-VOLGROUP /dev/rbd0 /dev/rbd1
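- We can confirm the Volume Group and its member RBD devices with pvs and vgs:
root@ubuntu-45d:~# pvs
root@ubuntu-45d:~# vgs RBD-VOLGROUP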
Creating RBD LVM and XFS
- Next, we'll create the Logical Volume using lvcreate. The -l 100%FREE option allocates all of the free space in the Volume Group (the combined capacity of the RBD images) to the new LVM.
root@ubuntu-45d:~# lvcreate -l 100%FREE -n RBD-LVM RBD-VOLGROUP
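- We can confirm the Logical Volume and its size with lvs:
root@ubuntu-45d:~# lvs RBD-VOLGROUP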
- Then, we can create an XFS filesystem on top of this LVM. The -m reflink=1,crc=1 options enable reflinks on the XFS, which is useful when the filesystem backs a Veeam repository that uses the Fast Clone feature, and -K skips discarding blocks at mkfs time so filesystem creation on large RBD images completes faster.
root@ubuntu-45d:~# mkfs.xfs -K -m reflink=1,crc=1 /dev/RBD-VOLGROUP/RBD-LVM
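- We can then create a mount point and mount the new filesystem (we use /mnt/rbd throughout this article):
root@ubuntu-45d:~# mkdir -p /mnt/rbd
root@ubuntu-45d:~# mount /dev/RBD-VOLGROUP/RBD-LVM /mnt/rbd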
- Next, we'll add our XFS mount to /etc/fstab to ensure it is remounted on boot. Add the following line, where /dev/RBD-VOLGROUP/RBD-LVM is our LVM and /mnt/rbd is the mount point; the _netdev option marks the filesystem as network-dependent so mounting is deferred until networking (and therefore the RBD mapping) is available:
/dev/RBD-VOLGROUP/RBD-LVM /mnt/rbd xfs defaults,_netdev 0 0
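- To confirm the fstab entry works, we can unmount and then remount by mount point only, which forces mount to look the entry up in /etc/fstab:
root@ubuntu-45d:~# umount /mnt/rbd
root@ubuntu-45d:~# mount /mnt/rbd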
Creating systemd service to assemble LVM on boot
- Next, we'll create a systemd service for our Volume Group to ensure that it is activated correctly on boot, after rbdmap.service has mapped the RBD images.
root@ubuntu-45d:~# vim /usr/lib/systemd/system/lv-activate-rbd.service
[Unit]
Description=lv activate rbd devices
After=rbdmap.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/lvm vgchange -ay

[Install]
WantedBy=multi-user.target
root@ubuntu-45d:~# systemctl enable --now lv-activate-rbd.service
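- We can verify that the service ran and the Logical Volume is active:
root@ubuntu-45d:~# systemctl status lv-activate-rbd.service
root@ubuntu-45d:~# lvscan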
Extending RBD Volume Group
- To extend our Volume Group/LVM, first we’ll have to map our new RBD devices.
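- For example, assuming two new images named rbd-image-2 and rbd-image-3 in the same pool (placeholder names), map them and add matching entries to /etc/ceph/rbdmap so they are mapped again on boot:
root@ubuntu-45d:~# rbd map rbd/rbd-image-2
root@ubuntu-45d:~# rbd map rbd/rbd-image-3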
- Once we’ve mapped our new RBD devices, we can add them to the Volume Group using vgextend.
root@ubuntu-45d:~# vgextend RBD-VOLGROUP /dev/rbd2 /dev/rbd3
- Once they’re added to the Volume Group, we can then extend the available size of our LVM with lvextend.
root@ubuntu-45d:~# lvextend -l +100%FREE /dev/RBD-VOLGROUP/RBD-LVM
- Next, we can then extend the size of our XFS with xfs_growfs.
root@ubuntu-45d:~# xfs_growfs /mnt/rbd/
Verification
- Running df on the client machine should show our XFS mounted at /mnt/rbd with the expected size, for example:
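root@ubuntu-45d:~# df -h /mnt/rbd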
Troubleshooting
- Ensure you have added types = [ "rbd", 1024 ] to the devices section of /etc/lvm/lvm.conf.
- If the LVM has not been assembled correctly after a reboot, you can restore the Volume Group metadata manually with vgcfgrestore (which reads the backup in /etc/lvm/backup) and then reactivate it:
root@ubuntu-45d:~# cd /etc/lvm/backup
root@ubuntu-45d:~# vgcfgrestore RBD-VOLGROUP
root@ubuntu-45d:~# vgchange -ay
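- Once the Volume Group is active again, the filesystem can be remounted via the fstab entry added earlier:
root@ubuntu-45d:~# mount /mnt/rbd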