CephFS + NFS-Ganesha
Prerequisites
- An Ansible-deployed Ceph cluster. See: Ceph Installation
- A Ceph file system named "cephfs" created on the cluster. See: CephFS Creation
- Node(s) to act as NFS gateway.
  - The NFS gateway can be physical hardware or a virtual machine.
  - The NFS gateway can NOT be co-located on OSD nodes.
- Password-less SSH access from the Ansible node (one way to set this up is sketched after this list).
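One common way to satisfy the password-less SSH prerequisite, run from the Ansible node; root and the hostname nfs1 are example values:

  ssh-keygen -t ed25519        # skip if a key pair already exists
  ssh-copy-id root@nfs1        # repeat for each gateway host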
Single Gateway Configuration
Installation
- Edit group_vars/nfss.yml
- Add an [nfss] group and the gateway hostname(s) to the inventory file (a minimal sketch of both files follows this list)
- Verify the connection:
  - ansible -m ping nfss
- Run the playbook, limited to the nodes in the nfss group:
  - ansible-playbook site.yml --limit=nfss
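A minimal sketch of the two files above. The hostname nfs1 is an example, and the variable names are from ceph-ansible's nfss.yml.sample; verify them against your ceph-ansible release:

  # inventory file: add the gateway under an [nfss] group
  [nfss]
  nfs1

  # group_vars/nfss.yml: enable the CephFS (file) gateway
  nfs_file_gw: true    # export CephFS via FSAL_CEPH
  nfs_obj_gw: false    # no RGW (object) export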
Mount Via NFSv4
- Mount on a Linux client (a persistent /etc/fstab variant is sketched after this list):
  - mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>
- The default ganesha-pseudo-path is /cephfile.
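To make the mount persistent across reboots, the same mount can go in /etc/fstab; nfs1 and /mnt/cephfs are example values:

  # /etc/fstab: _netdev defers mounting until the network is up
  nfs1:/cephfile  /mnt/cephfs  nfs  nfsvers=4.1,proto=tcp,_netdev  0 0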
Active-Active Configuration
- Requires the Ceph Nautilus (v14) release.
- Requires NFS-Ganesha packages 2.7 or greater.
Installation
- For now, the process for active-active is to install the same way as the single-gateway configuration above, then alter the configuration for active-active afterwards.
- The proper long-term solution is to edit the NFS role so that Ansible configures active-active for us.
- Edit group_vars/nfss.yml
- Add an [nfss] group and all gateway hostnames to the inventory file (see the sketch after this list)
- Verify the connection:
  - ansible -m ping nfss
- Run the playbook, limited to the nodes in the nfss group:
  - ansible-playbook site.yml --limit=nfss
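The same inventory as in the single-gateway case, but listing every gateway. nfs1 through nfs3 are example hostnames, chosen to match the grace-database examples below:

  [nfss]
  nfs1
  nfs2
  nfs3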
Gateway Configuration
/etc/ganesha/ganesha.conf
# Please do not change this file directly since it is managed by Ansible and will be overwritten
NFS_Core_Param {
}

EXPORT_DEFAULTS {
    Attr_Expiration_Time = 0;
}

CACHEINODE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
}

RADOS_URLS {
    ceph_conf = '/etc/ceph/ceph.conf';
    userid = "admin";
}

NFSv4 {
    RecoveryBackend = 'rados_cluster';
}

RADOS_KV {
    pool = "metadata";
    namespace = "ganesha-grace";
    nodeid = "nfs1";
}

# EXPORT BLOCKS ARE STORED IN A RADOS POOL. THEY ARE DYNAMICALLY ADDED THROUGH THE CEPH UI -> NFS
# Read the current config with:
# "rados -p data -N ganesha-export-index get conf-nfs1 conf-nfs1 ; cat conf-nfs1"
%url rados://data/ganesha-export-index/conf-nfs1

LOG {
    Facility {
        name = FILE;
        destination = "/var/log/ganesha/ganesha.log";
        enable = active;
    }
}
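Note that nodeid in RADOS_KV and the conf-nfs1 object in the %url line are per-gateway values: each gateway's config carries its own hostname there. After changing ganesha.conf on a gateway, it must re-read its configuration; a minimal sketch, assuming the service runs under the standard nfs-ganesha systemd unit:

  systemctl restart nfs-ganesha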
- Configure the ganesha grace database:
  - [root@nfs1 ~]# ganesha-rados-grace -p metadata -n ganesha-grace add nfs1 nfs2 nfs3
- View the database status at any time by running the same command with no subcommand:
  - [root@nfs1 ~]# ganesha-rados-grace -p metadata -n ganesha-grace
    cur=1 rec=0
    ======================================================
    nfs1	E
    nfs2	E
    nfs3	E
- cur is the current epoch value of the cluster.
- rec is the recovery epoch. This represents the epoch from which clients are allowed to recover. A non-zero value here means that a cluster-wide grace period is in effect. Setting this to 0 ends that grace period.
- E indicates the server is enforcing the grace period by refusing non-reclaim locks.
- N indicates the server has clients from the previous epoch that need recovery.
- No flag indicates the server is neither enforcing the grace period nor has old clients in need of recovery. This is the state during normal operation.
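If a gateway is later decommissioned, it should also be dropped from the grace database. A sketch using the remove subcommand (documented in the ganesha-rados-grace man page) against the same pool and namespace:

  [root@nfs1 ~]# ganesha-rados-grace -p metadata -n ganesha-grace remove nfs3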
Export Configuration
- Place an empty object in the RADOS pool defined in ganesha.conf for each NFS gateway. Objects must be named conf-<hostname> so the Ceph dashboard can find them (a quick way to verify is sketched after this list):
  - touch conf-nfs1 ; rados -p data -N ganesha-export-index put conf-nfs1 conf-nfs1
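To confirm the per-gateway objects landed in the expected namespace:

  rados -p data -N ganesha-export-index ls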
- On any Ceph node, point the dashboard at the export pool/namespace, then restart the dashboard module so it picks up the setting:
  - ceph dashboard set-ganesha-clusters-rados-pool-namespace data/ganesha-export-index
  - ceph mgr module disable dashboard ; ceph mgr module enable dashboard
- From the Ceph Dashboard UI → NFS you can create and edit NFS exports (a sketch of the resulting export block follows).
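Each export created through the UI is stored in the corresponding conf-<hostname> object as a standard Ganesha EXPORT block. A minimal sketch of what one might look like for the "cephfs" file system, using the /cephfile pseudo path and admin cephx user from above; all field values are illustrative:

  EXPORT {
      Export_Id = 1;
      Path = "/";                  # path within CephFS to export
      Pseudo = "/cephfile";        # NFSv4 pseudo path clients mount
      Access_Type = RW;
      Protocols = 4;
      Transports = TCP;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;             # CephFS FSAL
          User_Id = "admin";       # cephx user Ganesha connects as
      }
  }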