
KB450422 – Configuring Ceph Object Storage to use Multiple Data Pools


Scope/Description

This guide walks through updating the default data pool that Ceph RGW uses to store object data for buckets. If your cluster will only ever use a single EC pool for object data, you can simply update the default data placement policy; however, if end users may want to use different pools for different buckets, you will also need to create secondary placement targets first.

Prerequisites

  • A Ceph Octopus or later cluster with object storage configured, RGWs deployed, and all RGW service pools up and running. If you don’t create some of the service pools manually, such as the non-ec pool used for multipart uploads, they will be created automatically once you begin adding data to buckets. This may not be ideal if you plan to place your service pools on faster media such as SSD.
  • Create any RGW data pools you want to use ahead of time through the dashboard or the command line, whether they are SSD-backed pools or specific EC pools that differ from the default replicated data pool; a command-line sketch follows this list.
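As a reference point, here is a minimal command-line sketch of creating an EC-backed and an SSD-backed data pool ahead of time. The rgw-ec-profile erasure code profile (k=4, m=2) and the ssd-rule CRUSH rule are example values, not requirements; substitute profiles and rules that match your cluster and hardware layout. The pool names match the ones used later in this guide. On Octopus and later the pg autoscaler will pick pg_num for you; specify it explicitly if your release requires it.

[root@octosd1 ~]# ceph osd erasure-code-profile set rgw-ec-profile k=4 m=2 crush-failure-domain=host
[root@octosd1 ~]# ceph osd pool create default.rgw.buckets.ec.data erasure rgw-ec-profile
[root@octosd1 ~]# ceph osd crush rule create-replicated ssd-rule default host ssd
[root@octosd1 ~]# ceph osd pool create default.rgw.buckets.ssd.data replicated ssd-rule
[root@octosd1 ~]# ceph osd pool application enable default.rgw.buckets.ec.data rgw
[root@octosd1 ~]# ceph osd pool application enable default.rgw.buckets.ssd.data rgw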

Steps

  • If you are planning on creating multiple placement targets (that is, more than one pool that you want to use for object data, such as SSD and HDD), the first step is to export the zonegroup information for your cluster to a JSON file. This should be done from a monitor node. First, list the zonegroups for your cluster; in a cluster that has not been configured previously, there should only be one, named default. Then export it, as shown below.
  • NOTE: If you only plan to use a single erasure coded pool other than the default data pool, you do not need to change the placement targets of your zonegroup; you can skip ahead to the step of updating the zone itself, which starts with listing your cluster's zones.
[root@octosd1 ~]# radosgw-admin zonegroup list
{
    "default_info": "ef9b53v64-afg5-445c-9aa2-364b98agcb43",
    "zonegroups": [
        "default"
    ]
}
[root@octosd1 ~]# radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  • The zonegroup information has now been exported to a file called zonegroup.json. Next, open this file and create a new placement target for each additional pool you want to use. JSON syntax can be extremely picky, so making edits directly in vim might not be the best idea; an online JSON formatter can help catch mistakes. By default, under “placement_targets” you will see only default-placement. We need to add a new target for each tier. This guide uses ssd-placement and ec-placement as two extra placement targets. As you can see below, we have added these two new placement targets alongside the default-placement target.
 "placement_targets": [
{
"name": "ssd-placement",
"tags": [],
"storage_classes": [
"STANDARD"
]
},
{
"name": "ec-placement",
"tags": [],
"storage_classes": [
"STANDARD"
]
},
{
"name": "default-placement",
"tags": [],
"storage_classes": [
"STANDARD"
]
}
],
"default_placement": "default-placement",
"realm_id": "82f0a82f-9e77-4312-a50b-1082684866fc",
"sync_policy": {
"groups": []
}
}
  • Once you have made these changes to zonegroup.json (use a JSON formatter to weed out any syntax errors), you can import the edited placement policy back into the zonegroup.
[root@octosd1 ~]# radosgw-admin zonegroup set --rgw-zonegroup=default --infile zonegroup.json
  • Now that the new placement targets have been created, we can assign different pools to each one. If you are only using the default placement target, this is also where you will set its data pool. First, pull the name of your Ceph cluster's zone; unless you have configured something special ahead of time, it should just be “default”.
[root@octosd1 ~]# radosgw-admin zone list
{
    "default_info": "7285bba5-0926-495f-867b-0a6c042b396b",
    "zones": [
        "default"
    ]
}
  • Next, we will export the zone info into a JSON file exactly like we did in the previous steps.
[root@octosd1 ~]# radosgw-admin zone get --rgw-zone=default > zone.json
  • Open your zone.json file with a JSON formatter. This is where you will select the placement pools for each of the placement targets you created previously, or, if you are only using the default placement target, where you will set its new data pool (a sketch of that case follows the example below). This is what the configuration looks like after adding the two new placement targets.
 "placement_pools": [
{
"key": "ssd-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "default.rgw.buckets.ssd.data"
}
},
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
},
{
"key": "ec-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "default.rgw.buckets.ec.data"
}
},
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
}
]
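If you are only changing the default data pool (the single EC pool case mentioned earlier), you would instead edit the existing default-placement entry in zone.json rather than adding new keys. A minimal sketch, assuming an EC pool named default.rgw.buckets.ec.data that was created ahead of time; adjust the pool name to match your own:

        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "default.rgw.buckets.ec.data"
                    }
                },
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_type": 0
            }
        }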
  • Once all the changes have been made, import this JSON file back into the zone, the same way we imported the zonegroup in the previous steps.
[root@octosd1 ~]# radosgw-admin zone set --rgw-zone=default --infile zone.json
  • Finally, to have the changes take effect in the dashboard, you must restart your RGW services. This must be done from the terminal of each node that is hosting the RGW service (see the note after the command below for cephadm-managed clusters).


[root@octosd1 ~]# systemctl restart ceph-radosgw.target
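If your RGWs were deployed with cephadm/ceph orch rather than as traditional packages, the ceph-radosgw.target unit may not exist on the hosts; in that case the daemons can be restarted centrally through the orchestrator instead. A sketch, where rgw.default is an assumed service name (list yours first to confirm):

[root@octosd1 ~]# ceph orch ls rgw
[root@octosd1 ~]# ceph orch restart rgw.default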

Verification

  • You are now ready to use your different placement targets for the buckets you create. From the dashboard, when creating a bucket, select the “Placement target” drop-down and you will find your newly created targets. Buckets can also be assigned a placement target from an S3 client, as sketched below.
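For reference, S3 clients can request a placement target at bucket creation time by passing it in the LocationConstraint as <zonegroup>:<placement-target>. A minimal sketch using the AWS CLI, assuming the default zonegroup, the ssd-placement target created above, and a hypothetical RGW endpoint and bucket name; your endpoint, credentials, and names will differ:

[root@octosd1 ~]# aws --endpoint-url http://rgw.example.com:8080 s3api create-bucket \
    --bucket my-ssd-bucket \
    --create-bucket-configuration LocationConstraint=default:ssd-placement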

Troubleshooting

  • JSON formatting will be the culprit in most cases if this configuration is not working. Avoid making these edits in vim unless you are only swapping one placement target name for another; if you are adding lines to the JSON file, use a proper JSON formatting tool (a quick local syntax check is sketched after this list).
  • If your cluster is part of a multi-site configuration, you will have to commit these updates to the latest period; otherwise, you can begin creating your buckets right away.
[root@octosd1 ~]# radosgw-admin period update --commit
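If you prefer to validate the edited files locally rather than with an online formatter, python3 -m json.tool (available anywhere Python 3 is installed) will report syntax errors without modifying the files:

[root@octosd1 ~]# python3 -m json.tool zonegroup.json > /dev/null && echo "zonegroup.json OK"
[root@octosd1 ~]# python3 -m json.tool zone.json > /dev/null && echo "zone.json OK"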