KB450015 – Step by Step Gluster Setup

Last modified: October 23, 2019
Estimated reading time: 5 min

NOTE: All bold italicized words are commands to be entered in the command line.

Preflight

Configuring your Network (Do on all Nodes)

Make use of the Network Management tool

  • nmtui
  • Edit a connection
  • eno1 and eno2 are the onboard ports; the other two are the 10GbE NIC ports. Delete all existing interfaces to start fresh.
  • Add → Bond
  • Profile name & Device = bond0
    Add the Bond Slaves: your two 10GbE NIC names (ens15f0 and ens15f1), if applicable
    Mode = Adaptive Load Balancing (alb)
    IPv4 Config = Automatic if using DHCP, or Manual if you want a static address
    See the example below.
  • Scroll down to Back, and then go to “Activate a connection”
  • With “bond0” highlighted, move over to <Deactivate> and hit Enter. It will change to <Activate>; hit Enter again to bring the bond up.
  • Then go down to Back, and then click OK to return to the command line.
  • ip addr show → bond0 will show the IP address, which you can ping from the other servers.
  • ping 192.168.16.4
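If you prefer the command line to nmtui, the same bond can be built with nmcli. This is a minimal sketch only: the interface names (ens15f0, ens15f1) and the static address are examples, so substitute your own.

    # create the bond in adaptive load balancing mode
    nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb"
    # enslave the two 10GbE ports
    nmcli con add type bond-slave con-name bond0-port1 ifname ens15f0 master bond0
    nmcli con add type bond-slave con-name bond0-port2 ifname ens15f1 master bond0
    # static addressing (skip these two lines if using DHCP)
    nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.168.16.4/16
    nmcli con up bond0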

Install Required Packages (On all nodes)

  • cd /root
  • ls

    If preconfig isn’t there, then wget images.45drives.com/gtools/preconfig
  • ./preconfig -af
  • You’ll need to reboot the system, log back in as root, and then run ./preconfig -af again to finish the install.
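Putting the whole preflight together, a rough sketch of the sequence on a fresh node looks like this (the chmod step is only needed if the downloaded file isn’t already executable):

    cd /root
    wget images.45drives.com/gtools/preconfig
    chmod +x ./preconfig
    ./preconfig -af
    reboot
    # log back in as root and run it once more to finish
    ./preconfig -af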

Configure Services

NTP

  • Unless you have your own NTP server or Active Directory, you can use the CentOS defaults.
  • To edit, vim /etc/ntp.conf → press i to enter insert mode, press ESC when done, then type :wq to save and quit. (The default server lines are sketched after this list.)
  • systemctl enable ntpd
  • systemctl start ntpd
  • Test that all is working with ntpq -p → the output should list each configured server and its reachability.
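For reference, the stock CentOS /etc/ntp.conf already points at the CentOS pool; a minimal sketch of the relevant server lines looks like the following (swap these for your own NTP server or domain controller if you have one):

    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst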

Passwordless SSH

  • vim /etc/hosts → enter the IP and hostname of every node being set up
  • ssh-keygen -t rsa → (leave the input blank; just hit Enter three times for simplicity)
  • ssh-copy-id root@hostname → (do this for every host in /etc/hosts, including the node itself)
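A minimal sketch with four hypothetical nodes (gluster1 through gluster4); adjust the IPs and hostnames to your environment:

    # /etc/hosts entries (identical on every node)
    192.168.16.1  gluster1
    192.168.16.2  gluster2
    192.168.16.3  gluster3
    192.168.16.4  gluster4

    # generate a key on each node, then copy it to every host (including itself)
    ssh-keygen -t rsa
    for h in gluster1 gluster2 gluster3 gluster4; do ssh-copy-id root@$h; done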

Creating Storage

ZFS Storage Pool Setup (Do on every node)

Configure Drive Mapping

  • dmap → options are as follows:
    Controller:
  • R750, r750, r (HighPoint R750)
  • LSI, lsi, l (LSI 9201-24i)
  • Adaptec, adaptec, a (Adaptec HBA-1000i, ASR-81605Z)
  • Rr3740, rr (HighPoint RR3740)

Chassis

  • 30, 45, or 60
  • lsdev → (Grey = empty slot, Orange = clean drive, Green = Drive in a storage volume)

Build ZFS Storage Pool

Below is a table of our suggested VDEV configurations:

    Chassis Size   Maximum Storage Efficiency   Maximum IO per Second
    Q30            3 VDEVs of 10 Drives         5 VDEVs of 6 Drives
    S45            3 VDEVs of 15 Drives         5 VDEVs of 9 Drives
    XL60           4 VDEVs of 15 Drives         6 VDEVs of 10 Drives

  • zcreate -n (insert pool name) -l (insert RAID level (raidz2 suggested)) -v (# of VDEVs) -b (build flag)
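For example, a hypothetical Q30 pool named tank, built for maximum storage efficiency (3 VDEVs of 10 drives), would look something like:

    zcreate -n tank -l raidz2 -v 3 -b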

  • Now lsdev will show those slots as green.
  • systemctl enable zfs.target; systemctl start zfs.target
  • vim /usr/lib/systemd/system/zfs-import-cache.service
  • change line “ExecStart=” to be “ExecStart=/usr/local/libexec/zfs/startzfscache.sh”
  • mkdir /usr/local/libexec/zfs
  • vim /usr/local/libexec/zfs/startzfscache.sh and add the following in the file:
    #!/bin/sh
    sleep 30
    /sbin/zpool import -c /etc/zfs/zpool.cache -aN
    zfs mount -a
  • chmod +x /usr/local/libexec/zfs/startzfscache.sh
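Since a unit file was edited above, systemd needs to reread it; a quick sanity check (assuming the paths used above) is:

    systemctl daemon-reload
    systemctl status zfs-import-cache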

Gluster Volume Setup

Create Bricks (Do on all nodes)

To set up a cluster with GlusterFS, you must break your large ZFS storage pool up into several bricks to allow for the replication and/or distribution of data.
-A is for an Arbiter brick. An Arbiter brick stores filenames and metadata but no file data. It helps avoid split-brain by keeping a record of which files belong to which bricks.
-C is for a CTDB brick. A CTDB brick controls the sharing of the clustered volume; if one server goes down, the volume remains accessible through the other servers.

There are a few things to consider when deciding how many bricks you want to create:

  1. We recommend that a single brick shouldn’t be more than 100TB in size.
  2. Your brick needs to be larger in size than any single file that you plan to store on it.
  3. More bricks mean more processes, so the volume can serve more clients at once.
  • mkbrick -n (ZFS pool name) -C -A -b (# of bricks wanted)
  • df -H → this will show everything that is mounted; you should see your ZpoolName/volX bricks
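As an example, splitting a hypothetical pool named tank into four bricks plus the CTDB and Arbiter bricks, then confirming the mounts:

    mkbrick -n tank -C -A -b 4
    df -H | grep tank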

Firewall Ports

  • firewall-cmd --permanent --add-port=24007-24008/tcp
  • firewall-cmd --permanent --add-port=4379/tcp
  • firewall-cmd --reload
  • systemctl enable glusterd; systemctl start glusterd
  • gluster peer probe HostName → do this from one node and probe all other nodes.

Creating your Gluster Volume (Only do on ONE node)

  • vim /root/vol.conf → define the volume layout; example configurations include:
    – Linked list (4 nodes, 4 bricks)
    – Distributed Replica (4 nodes, 4 bricks)
    – Distributed (4 nodes, 4 bricks)

  • gcreate -c  /root/vol.conf  -b X  -n Y -n Z …
    X = # of bricks per node.  Y,Z,…= hostname of all other nodes.
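For example, run from gluster1 of a hypothetical four-node cluster with 4 bricks per node (the other three nodes are passed with -n):

    gcreate -c /root/vol.conf -b 4 -n gluster2 -n gluster3 -n gluster4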

Creating your CTDB Volume (Only do on ONE node)

  • vim /root/ctdb.conf

NOTE: if using 3 or more servers, make it a replica 3 volume.

  • gcreate -c /root/ctdb.conf -b 1 -n Y -n Z …
    -b 1 (only one CTDB brick per node), Y,Z,…= hostname of all other nodes.
  • mkdir /mnt/ctdb → /mnt/ctdb is just our example.
  • echo "localhost:/ctdb /mnt/ctdb glusterfs defaults,_netdev 0 0" >> /etc/fstab
  • mount /mnt/ctdb
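Continuing the hypothetical four-node example, run from gluster1:

    gcreate -c /root/ctdb.conf -b 1 -n gluster2 -n gluster3 -n gluster4
    mkdir /mnt/ctdb
    echo "localhost:/ctdb /mnt/ctdb glusterfs defaults,_netdev 0 0" >> /etc/fstab
    mount /mnt/ctdb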

Firewall ports

  • gluster volume status → the output lists the TCP port each brick is listening on; open that range below.
  • firewall-cmd --permanent --add-port=49152-49156/tcp
  • firewall-cmd --permanent --add-port=2049/tcp
  • firewall-cmd --reload

 

Sharing

Check that your CTDB volume is mounted with the df command.
It should show “localhost:/ctdb” near the bottom of the output.

SMB

  • mkdir /mnt/ctdb/files
  • vim /mnt/ctdb/files/ctdb → enter the CTDB configuration (see the example files after this list)
  • vim /mnt/ctdb/files/nodes → enter the IP addresses of all nodes being set up, one per line
  • vim /mnt/ctdb/files/public_addresses → enter an IP which will be used to access the share
    Ex: 192.168.16.160/16 bond0 (/16 is the subnet mask & bond0 is the interface)
  • vim /mnt/ctdb/files/smb.conf → a basic config is sketched after this list; you’ll need to adjust permissions
    [gluster-tank] is the share name.
  • These files need to be on every node at the following locations:
    – ctdb = /etc/sysconfig/ctdb
    – nodes = /etc/ctdb/nodes
    – public_addresses = /etc/ctdb/public_addresses
    – smb.conf = /etc/samba/smb.conf
    – This can all be done from one node using passwordless SSH:
    Ex: ssh root@gluster2 "cp /mnt/ctdb/files/nodes /etc/ctdb/nodes"
  • touch /mnt/ctdb/files/.CTDB-lockfile
  • firewall-cmd --permanent --add-service=samba;  firewall-cmd --reload
  • systemctl enable ctdb; systemctl start ctdb
  • systemctl disable smb; systemctl disable nfs
  • testparm → This will check the smb.conf file for any issues.
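Below is a minimal sketch of what each of the four files typically contains, assuming a hypothetical two-node cluster and a Gluster volume named tank; treat the variable names and share options as a starting point and adjust them to your environment and CTDB/Samba versions.

    # /mnt/ctdb/files/ctdb  (copied to /etc/sysconfig/ctdb)
    CTDB_RECOVERY_LOCK=/mnt/ctdb/files/.CTDB-lockfile
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes

    # /mnt/ctdb/files/nodes  (copied to /etc/ctdb/nodes) - one node IP per line
    192.168.16.1
    192.168.16.2

    # /mnt/ctdb/files/public_addresses  (copied to /etc/ctdb/public_addresses)
    192.168.16.160/16 bond0

    # /mnt/ctdb/files/smb.conf  (copied to /etc/samba/smb.conf)
    [global]
        clustering = yes

    [gluster-tank]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = tank
        kernel share modes = no
        read only = no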

Creating Groups/Users to Access your SMB Share

  • Create a group which will be given access to the share → groupadd groupName
  • Create a user within that group → useradd username -G groupName
  • Add the user to the Samba database → smbpasswd -a username
  • Edit smb.conf, adding to the share section → valid users = @groupName
  • If you only want one user to access the volume, do not include the -G option when creating the user, and make valid users = username
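For example, with a hypothetical group smbusers and user jsmith:

    groupadd smbusers
    useradd jsmith -G smbusers
    smbpasswd -a jsmith
    # then, in the share section of smb.conf:
    #   valid users = @smbusers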

NFS

  • mkdir /mnt/ctdb/files
  • vim /mnt/ctdb/files/ctdb → enter the CTDB configuration (same format as in the SMB section above)
  • vim /mnt/ctdb/files/nodes → enter the IP addresses of all nodes being set up, one per line
  • vim /mnt/ctdb/files/public_addresses → enter an IP which will be used to access the share
    Ex: 192.168.16.160/16 bond0 (/16 is the subnet mask & bond0 is the interface)
  • These files need to be on every node at the following locations:
    – ctdb = /etc/sysconfig/ctdb
    – nodes = /etc/ctdb/nodes
    – public_addresses = /etc/ctdb/public_addresses
    – This can all be done from one node using passwordless SSH:
    Ex: ssh root@gluster2 "cp /mnt/ctdb/files/nodes /etc/ctdb/nodes"
  • touch /mnt/ctdb/files/.CTDB-lockfile
  • firewall-cmd --permanent --add-service=nfs
  • firewall-cmd --permanent --add-port=111/tcp
  • firewall-cmd --permanent --add-port=38465-38467/tcp
  • firewall-cmd --reload
  • gluster volume set (volume name) nfs.disable off
  • gluster volume set (volume name) nfs.rpc-auth-allow <ip range>
    Ex: on a 255.255.0.0 subnet, we put 192.168.*.* so anyone on the network can access it.
  • gluster volume set (volume name) nfs.export-volumes on
  • systemctl enable ctdb; systemctl start ctdb
  • systemctl disable smb; systemctl disable nfs
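For example, with a hypothetical Gluster volume named tank on a 192.168.0.0/16 network:

    gluster volume set tank nfs.disable off
    gluster volume set tank nfs.rpc-auth-allow 192.168.*.*
    gluster volume set tank nfs.export-volumes on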

Creating Groups/Users to access your NFS share

  • Create a group which will be given access to the share → groupadd groupName
  • Create a user within that group → useradd username -G groupName
  • Set a user and group to be the owner of the share → chown username:groupName /mnt/tank
  • Mount the volume on the client:
    mount -t nfs <externalIP>:/VolumeName /directoryOfChoice
    (NFS does not authenticate with a username/password here; client access is governed by the nfs.rpc-auth-allow range set above.)
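A minimal client-side sketch, assuming the public address 192.168.16.160 and a volume named tank (Gluster’s built-in NFS server speaks NFSv3):

    mkdir /mnt/tank-nfs
    mount -t nfs -o vers=3 192.168.16.160:/tank /mnt/tank-nfs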

Firewall Cheat Sheet

 

    Application                        Add-port
    NFS (RPC Bind)                     111/tcp
    Communication for Gluster nodes    24007-24008/tcp
    GlusterFS NFS Service              38465-38467/tcp & 2049/tcp
    Communication for CTDB             4379/tcp
    Gluster Bricks                     49152-4915X/tcp

 

    Application    Add-service
    Samba          samba
    NFS            nfs
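If it is easier, the whole cheat sheet can be applied in one pass on each node; the sketch below assumes five bricks per node for the 49152-49156 range, so adjust that to your own brick count:

    firewall-cmd --permanent --add-port=111/tcp
    firewall-cmd --permanent --add-port=24007-24008/tcp
    firewall-cmd --permanent --add-port=2049/tcp
    firewall-cmd --permanent --add-port=38465-38467/tcp
    firewall-cmd --permanent --add-port=4379/tcp
    firewall-cmd --permanent --add-port=49152-49156/tcp
    firewall-cmd --permanent --add-service=samba
    firewall-cmd --permanent --add-service=nfs
    firewall-cmd --reload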

 
