KB045239 – Expanding a Ceph Cluster with Drives

Last modified: January 14, 2021
Estimated reading time: 2 min


Scope/Description:

This article describes how to expand a Ceph cluster by adding hard drives to provide additional usable capacity.

Prerequisites:

  1. A Ceph cluster running CentOS 7 and Ceph Nautilus (see the version check after this list)
  2. Unformatted hard drives that can be added to the cluster
  3. Access to the CLI of each OSD node in the cluster
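
To confirm prerequisite 1 before starting, the installed Ceph release can be checked from any node (a quick check, assuming the ceph CLI is available on that node; ceph versions reports the versions of the running daemons):
    [root@osd~]# ceph --version
    [root@osd~]# ceph versions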

Steps:

  1. Insert unformatted hard drives into each OSD node as desired and take note of the slot IDs that you use.
    • Drive order and slot placement do not matter
    • Ensure that the drives are fully inserted
    • It is best practice to add an equal amount of capacity to each server if possible
  2. Access each OSD node.
    • Ensure the disks are inserted and show up unpartitioned; you can use the 45Drives tool /opt/ctools/lsdev or lsblk (see the example after this list)
    • Ensure the disks you are trying to add match the physical slots where you inserted them.
  3. Take note of the Linux device name of each new disk you wish to add to the cluster on each node (e.g. /dev/sdh, /dev/sdi).
  4. Run a report for each Linux device to ensure it is free of any partitions. This will also test for any errors:
    [root@osd~]# ceph-volume lvm batch --report /dev/sdh
  5. Run the report again with all of the drives you intend to add to ensure it works properly:
    [root@osd~]# ceph-volume lvm batch --report /dev/sdh /dev/sdi /dev/sdj
  6. If no errors appear, run the command without the --report flag (see the notes after this list for confirming the result):
    [root@osd~]# ceph-volume lvm batch /dev/sdh /dev/sdi /dev/sdj
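
As a quick sketch of the disk check in step 2 (the device names here are examples only), new unpartitioned drives should show up in lsblk with no partitions or LVM children beneath them:
    [root@osd~]# lsblk
    [root@osd~]# lsblk /dev/sdh /dev/sdi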
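
Once the batch command in step 6 completes, the cluster should show one additional OSD per drive added; ceph osd tree is a quick way to confirm the new OSDs are up and placed under the correct host:
    [root@osd~]# ceph osd tree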

Checklist for Completing Cluster Expansion: 

  1. Run the ceph -s command to display the status of the cluster:
    [root@osd~]# ceph -s
  2. If no issues appear and everything is running smoothly, run ceph health detail and confirm that all data has been redistributed and that backfills have completed:
    [root@osd~]# ceph health detail
  3. In the Ceph Dashboard check the OSD usage to ensure data is evenly distributed between drives.
    • If data isn’t evenly distributed, look at the ceph balancer status and confirm that the mode is “crush-compat” and active is “true” (see the example after this checklist):
      [root@osd~]# ceph balancer status
      
  4. In the Ceph Dashboard check the PGs on each OSD; they should all be between 100 and 150.
  5. In the Ceph Dashboard check the Normal Distribution in the OSD overall performance tab
  6. Lastly, if the Dashboard is unavailable, run ceph osd df and check the PGs:
    [root@osd~]# ceph osd df
  7. Return OSD backfill to the default value by navigating in the Dashboard to Cluster -> Configuration -> search for “backfill” -> osd_max_backfill and set it to 1 (a CLI alternative is shown after this checklist).
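
If the balancer is off or in the wrong mode (item 3), it can be adjusted from the CLI; this is a sketch using the built-in balancer module, and the appropriate mode may differ depending on your cluster’s configuration:
    [root@osd~]# ceph balancer mode crush-compat
    [root@osd~]# ceph balancer on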
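
If the Dashboard is unavailable, the backfill setting in item 7 can also be returned to its default from the CLI (a sketch using the centralized config interface available in Nautilus):
    [root@osd~]# ceph config set osd osd_max_backfill 1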

Verification:

  • Use ceph -s to verify the storage capacity and number of OSDs before adding the new drives.
  • Use ceph -s afterwards to verify that the OSD count and storage capacity have increased.
  • Use lsblk to verify that the drives have a Ceph volume on them (see the example below).
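
As a sketch of that lsblk check (the device name is an example only), each newly added drive should now show an LVM child created by ceph-volume; ceph-volume lvm list will also map each OSD back to its backing device:
    [root@osd~]# lsblk /dev/sdh
    [root@osd~]# ceph-volume lvm list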

Troubleshooting:

  • If there are already volumes present on the drives, be sure to use wipedev on them. Before running wipedev, ensure you are targeting the correct drives and that no critical data remains on them:
    [root@osd~]# /opt/tools/wipedev -a