Running Gdeploy
The gdeploy RPM is located here.
gdeploy requires Ansible 2.4.2.0.
Use the gdeploy sample file linked here: lvm-deploy-example.cfg
Read through it; the instructions for what needs to be changed are detailed in the comments.
Further detail on gdeploy syntax and capabilities can be found here and here.
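For orientation, below is a minimal sketch of the kind of sections such a configuration file contains, assuming a simple two-node layout. The host IPs, device, and volume/LV names are placeholders, not the contents of the actual sample file, and sizing is covered by the comments in the sample itself.

[hosts]
192.168.16.2
192.168.16.3

[backend-setup]
devices=/dev/sda
vgs=vg_bricks
pools=pool_bricks
lvs=lv_bricks
mountpoints=/gluster/brick1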
After making your edits you can run gdeploy against the configuration file:
gdeploy -c lvm-deploy-example.cfg
Troubleshooting
Should the gdeploy process fail, read the errors reported and address them. gdeploy can be run with the -vv option for more verbosity.
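For example:
gdeploy -c lvm-deploy-example.cfg -vv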
Most errors you encounter will be due to incorrect sizing.
If gdeploy fails after creating a PV, VG, or LV, be sure to delete them before running gdeploy again. Remove the VGs first; vgremove will prompt to delete any LVs on top of them. If any of the brick LVs are mounted, unmount them first or vgremove will fail.
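For example, assuming a brick LV mounted at /gluster/brick1 (the mountpoint is a placeholder for your environment):
umount /gluster/brick1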
vgremove $VG_NAME
pvremove /dev/sda
CTDB Configuration
Mount the CTDB volume on each node and create an entry for it in fstab.
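A typical GlusterFS fstab entry might look like the following, assuming the CTDB volume is named ctdb (the volume name is an assumption; the mountpoint matches the lockfile path used below):
localhost:/ctdb /mnt/ctdb glusterfs defaults,_netdev 0 0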
Create CTDB configuration files
Download each linked file, save it in the location specified in the table below, and change the information to match your environment.
Create the lockfile with:
touch /mnt/ctdb/.ctdb-lockfile
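This path should also match the recovery lock setting in the core CTDB configuration; with the sysconfig-style file listed in the table below, that is typically a line like:
CTDB_RECOVERY_LOCK=/mnt/ctdb/.ctdb-lockfile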
File | Location | Description
---|---|---
ctdb | /etc/sysconfig/ctdb | Core CTDB configuration
public_addresses | /etc/ctdb/public_addresses | A list of VIPs clients use to access the SMB share, one per node in the cluster. Syntax: $IP/$SUBNET $INTERFACE (e.g. 192.168.16.4/16 bond0)
nodes | /etc/ctdb/nodes | A list of the host nodes CTDB runs on. Syntax: $IP (e.g. 192.168.16.2)
smb.conf | /etc/samba/smb.conf | Samba configuration
.ctdb-lockfile | /mnt/ctdb/.ctdb-lockfile | CTDB lockfile on the shared volume
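For reference, a minimal two-node example of the nodes and public_addresses files, using the syntax from the table above (the second IP in each file is a placeholder):

/etc/ctdb/nodes:
192.168.16.2
192.168.16.3

/etc/ctdb/public_addresses:
192.168.16.4/16 bond0
192.168.16.5/16 bond0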
Start CTDB:
systemctl enable ctdb
systemctl start ctdb
watch -n1 "ctdb status"
Wait for both nodes to go to OK. If they don't, check /var/log/log.ctdb for errors.
Double-check the config files above for any typos; both servers should have identical configs.