45Drives Knowledge Base
KB450419 - Offlining a Ceph Storage Node for Maintenance

Posted on April 9, 2021 by Matthew Hutchinson

Scope/Description

This article walks through safely taking a Ceph node offline for maintenance, then bringing it back online and returning the cluster to a healthy state.

Prerequisites

A running Ceph cluster
SSH access to a Ceph node

Steps




Setting Maintenance Options

SSH into the node you want to take down, then run these three commands to set flags on the cluster. The noout flag stops the cluster from marking the node's OSDs "out" while it is down, and norebalance and norecover prevent data from rebalancing or recovering onto the remaining OSDs during the maintenance window.
root@osd1:~# ceph osd set noout 
root@osd1:~# ceph osd set norebalance 
root@osd1:~# ceph osd set norecover
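
The three flag commands (and their later unset counterparts) can be scripted. A minimal Python sketch, assuming the ceph CLI is on PATH and readable admin credentials; the runner parameter is injectable so the loop logic can be exercised without a live cluster:

```python
import subprocess

# Maintenance flags to set before taking the node down.
FLAGS = ["noout", "norebalance", "norecover"]

def set_maintenance_flags(runner=subprocess.run):
    """Set each maintenance flag; `runner` is injectable for testing."""
    for flag in FLAGS:
        runner(["ceph", "osd", "set", flag], check=True)

def unset_maintenance_flags(runner=subprocess.run):
    """Mirror image, used after maintenance is complete."""
    for flag in FLAGS:
        runner(["ceph", "osd", "unset", flag], check=True)
```

With the default runner this shells out to the same commands shown above; check=True makes the script abort if any flag fails to set.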


Verify the flags are set; ceph -s should list them under the health section:

root@osd1:~# ceph -s
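
If you script the maintenance window, the same check can be automated by parsing ceph -s --format json. A sketch, with the assumption that the health-check JSON layout (health.checks.OSDMAP_FLAGS) matches your Ceph release; verify it against your cluster's actual output before relying on it:

```python
import json

# Flags that must all be present before powering the node off.
REQUIRED = {"noout", "norebalance", "norecover"}

def maintenance_flags_set(status_json: str) -> bool:
    """Return True if every required flag appears in the OSDMAP_FLAGS health check."""
    status = json.loads(status_json)
    check = status.get("health", {}).get("checks", {}).get("OSDMAP_FLAGS")
    if check is None:
        return False  # no cluster flags set at all
    # Message looks like: "noout,norebalance,norecover flag(s) set"
    message = check["summary"]["message"]
    flags = set(message.split(" ")[0].split(","))
    return REQUIRED <= flags

# Example with a trimmed-down status document:
sample = json.dumps({
    "health": {"checks": {"OSDMAP_FLAGS": {
        "summary": {"message": "noout,norebalance,norecover flag(s) set"}}}}
})
print(maintenance_flags_set(sample))  # True
```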

With the flags in place, shut down the node:

root@osd1:~# shutdown now

Disabling Maintenance Options

Once maintenance is complete and the node has booted back up and rejoined the cluster, unset the three flags so normal recovery and rebalancing can resume:

root@osd1:~# ceph osd unset noout
root@osd1:~# ceph osd unset norebalance
root@osd1:~# ceph osd unset norecover


Check the cluster status again. Once any backfill or recovery finishes, ceph -s should report HEALTH_OK:

root@osd1:~# ceph -s
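
Rather than re-running ceph -s by hand, a short poll loop can wait for the cluster to settle. A sketch assuming ceph health --format json returns a JSON object with a top-level status field (verify on your release); health_fn is injectable so the loop can be tested without a cluster:

```python
import json
import subprocess
import time

def get_health() -> str:
    """Query live cluster health via the ceph CLI (assumed on PATH)."""
    out = subprocess.run(["ceph", "health", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)["status"]  # e.g. "HEALTH_OK" or "HEALTH_WARN"

def wait_for_health_ok(health_fn=get_health, attempts=60, delay=10.0) -> bool:
    """Poll until the cluster reports HEALTH_OK, or give up after `attempts`."""
    for _ in range(attempts):
        if health_fn() == "HEALTH_OK":
            return True
        time.sleep(delay)
    return False
```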



If the node hosts a monitor daemon and it does not rejoin the quorum after booting, restart it manually, replacing hostname with the node's hostname:

root@osd1:~# systemctl restart ceph-mon@hostname