Topics: Ceph
KB450107 – Creating & Using Erasure Code Pools
Scope/Description: This article will outline how to create an erasure-coded pool in the Ceph Dashboard, along with configuring the erasure code profile the pool will use.
Prerequisites: An existing Ceph cluster.
Steps: Go to your Ceph Dashboard and open the Pools section. Click Create to start creating a new pool. For […]
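For clusters managed from the command line, the same result can be reached with the Ceph CLI. This is a minimal sketch, not the article's exact procedure; the profile name ec-profile-4-2, pool name ec-pool, and the k/m/PG values are illustrative and should match your cluster layout:

# Define an erasure code profile (k data chunks, m coding chunks)
ceph osd erasure-code-profile set ec-profile-4-2 k=4 m=2 crush-failure-domain=host
# Create a pool that uses the profile (128 is an illustrative PG count)
ceph osd pool create ec-pool 128 128 erasure ec-profile-4-2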
KB450403 – Adding SSD Journals to OSDs
Scope/Description: This article will go through the steps required to add journals on SSDs to your HDD-backed OSDs.
Prerequisites: A stable Ceph cluster. OSDs on hard drives. SSDs added to each OSD node within the cluster.
Steps: Create LVM devices on the SSD:
pvcreate /dev/1-6
vgcreate ceph-db-0 /dev/1-6
lvcreate -l <extents> -n osd-db-0 ceph-db-0
extents = […]
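As a rough illustration of the full sequence, the sketch below assumes a hypothetical SSD at /dev/nvme0n1 and an HDD at /dev/sdb; device names and LV sizing must be adapted to your nodes:

# Turn the SSD into an LVM volume group for BlueStore DBs
pvcreate /dev/nvme0n1
vgcreate ceph-db-0 /dev/nvme0n1
# Carve out one logical volume per OSD that will use this SSD
lvcreate -l 25%VG -n osd-db-0 ceph-db-0
# Attach the LV as the DB device when (re)creating the OSD
ceph-volume lvm prepare --data /dev/sdb --block.db ceph-db-0/osd-db-0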
KB450430 – Adding OSD Nodes to a Ceph Cluster
Scope/Description: This guide will detail the process of adding OSD nodes to an existing cluster running Octopus 15.2.13. The process can be completed without taking the cluster out of production.
Prerequisites: An existing Ceph cluster. Additional OSD node(s) to add. The OSD node(s) have the same version of Ceph installed. Network configured on the new OSD node(s) […]
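As a sketch only: on a cephadm-managed Octopus cluster, a new node could be registered as shown below (the hostname and IP are hypothetical, and ceph-ansible deployments run a playbook instead):

# Confirm all daemons run the same release before adding hardware
ceph versions
# Register the new OSD node with the orchestrator
ceph orch host add osd-node-4 192.168.16.14
# Watch placement groups rebalance onto the new OSDs
ceph -s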
KB450101 – Ceph Monitor Slow Blocked Ops
Scope/Description: This article details the process of troubleshooting a monitor service experiencing slow/blocked ops. If your Ceph cluster encounters a slow/blocked operation, it will log it and put the cluster health into a warning state. Generally speaking, an OSD with slow requests is any OSD that is not able to service the I/O operations per second […]
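When chasing slow/blocked ops, the admin-socket commands below are a common starting point (shown for a hypothetical osd.0; run them on the host carrying that daemon):

# Identify which daemons are reporting slow ops
ceph health detail
# Inspect operations currently in flight on a suspect OSD
ceph daemon osd.0 dump_ops_in_flight
# Review recent operations that exceeded the slow-op threshold
ceph daemon osd.0 dump_historic_slow_ops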
KB450110 – Updating Ceph
See here for a detailed explanation of the Ceph release cycle. To summarize, there are two types of Ceph cluster update: minor updates and major updates. Both can be completed without cluster downtime, but release notes should be reviewed in both cases. Major updates are released every 9 months; they are the most invasive of the […]
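As an illustration of the checks involved, a cephadm-managed cluster can drive and verify a rolling minor update as follows (the target version is illustrative; package-based deployments update each node instead):

# Record the versions running before the update
ceph versions
# Start a rolling minor update (cephadm deployments only)
ceph orch upgrade start --ceph-version 15.2.14
ceph orch upgrade status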
KB450308 – Adding Additional Public Addresses to CTDB Clusters
Scope/Description: This article will cover adding additional VIP (virtual IP) public addresses to a CTDB cluster. This process should only be used when new gateways are added to a pre-existing cluster, or when CTDB was configured with a single public address.
Prerequisites: A standing Ceph cluster. CTDB installed and configured. Additional IP address(es). Additional gateway nodes.
Steps: […]
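For reference, CTDB reads its public addresses from /etc/ctdb/public_addresses, one VIP per line; a sketch with hypothetical addresses and interface names:

# /etc/ctdb/public_addresses — address/prefix followed by the interface
192.168.1.101/24 eth0
192.168.1.102/24 eth0
# After editing the file on every node, apply without a full restart
ctdb reloadips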
KB450424 – Ceph Backfill & Recovery
Scope/Description: This article describes what Ceph defines as recovery and backfill, and how to adjust the thresholds to limit or increase recovery throughput.
Prerequisites: A configured Ceph cluster.
Steps: Ceph defines recovery as moving PGs when OSDs crash and come back online. A more detailed explanation can be found here. Ceph defines backfill as moving PGs […]
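The thresholds in question are OSD config options; a minimal sketch of throttling them cluster-wide (values are illustrative, and lower numbers mean slower but less disruptive recovery):

# Limit concurrent backfills and active recovery ops per OSD
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# Confirm the running value
ceph config get osd osd_max_backfills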
KB450406 – Exporting NFS-Ganesha to multiple subnets
Scope/Description: This guide will run through the steps of adding new VIPs (virtual IP addresses) with Pacemaker to service secondary/tertiary subnets from the same gateway servers, allowing NFS shares to be exported via additional subnets.
Prerequisites: This guide assumes the cluster is up and functioning, with NFS deployed with 45Drives Ansible playbooks using a […]
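As a sketch of the Pacemaker side, a VIP on a second subnet is just another IPaddr2 resource (the resource name, address, and netmask below are hypothetical):

# Create a VIP resource on the secondary subnet
pcs resource create nfs-vip-subnet2 ocf:heartbeat:IPaddr2 \
    ip=10.10.2.50 cidr_netmask=24 op monitor interval=30s
# Verify the resource started on one of the gateways
pcs status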
KB450407 – Increase Grafana Logging Period
Scope/Description: This article covers increasing the Prometheus/Grafana logging period.
Prerequisites: A working Ceph cluster running Ubuntu 20.04 with Ceph Octopus. Access to the CLI of the admin/metrics node.
Steps: To extend the logging period, open /etc/systemd/system/prometheus.service in your text editor of choice and add the following line: --storage.tsdb.retention.time=x. Your config should now look like the below screenshot, […]
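For context, the flag belongs on the ExecStart line of the unit file; a sketch with an illustrative one-year retention (the binary and config paths are assumptions and may differ on your node):

# Excerpt of /etc/systemd/system/prometheus.service
ExecStart=/usr/bin/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.retention.time=1y
# Reload systemd and restart Prometheus to apply the change
systemctl daemon-reload
systemctl restart prometheus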
KB450409 – Creating Custom Grafana Graphs
Scope/Description: This guide covers the creation of Grafana graphs at a basic level, which can then be used to pull and manipulate data for any purpose.
Prerequisites: A working Ceph cluster. A defined metrics server. Access to the Grafana dashboard.
Steps: Navigate to your Grafana dashboard; typically this will be on port 3000 of your […]
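As a starting point for a first panel, the PromQL query below combines two standard metrics exported by the Ceph mgr Prometheus module to plot cluster fullness:

# Percentage of raw cluster capacity in use
(ceph_cluster_total_used_bytes / ceph_cluster_total_bytes) * 100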