KB450090 – Disaster Recovery on Linux using ZFS

Last modified: January 18, 2019
Estimated reading time: 3 min

To set up disaster recovery similar to the FreeNAS offering, you just need to follow a few steps:

1. After you’ve installed ZFS and created your zpool, create a dataset where the customer will save all their data.

   zfs create zpool/datasetName
   Note: This needs to be done on both servers.
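
To confirm the dataset was created before moving on, you can list it on each server (a quick check; zpool/datasetName is just the placeholder name from the command above):

   zfs list zpool/datasetName

If both servers show the dataset rather than a "dataset does not exist" error, you're ready for the next step.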

2. Next you’ll need to set up passwordless SSH between the two servers. To do this, edit the /etc/hosts file on each server so it contains the IP addresses and hostnames of both servers.

For example, it should look like this:

      [root@hostname1 ~]# cat /etc/hosts
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      192.168.16.94   hostname1
      192.168.16.95   hostname2

Test that you can ping from one server to the other using the hostnames:

     [root@hostname1 ~]# ping hostname2
     PING hostname2 (192.168.16.95) 56(84) bytes of data.
     64 bytes from hostname2 (192.168.16.95): icmp_seq=1 ttl=64 time=0.116 ms
     64 bytes from hostname2 (192.168.16.95): icmp_seq=2 ttl=64 time=0.104 ms
     64 bytes from hostname2 (192.168.16.95): icmp_seq=3 ttl=64 time=0.105 ms

If you can ping each server as above, you can set up the passwordless SSH. (Run these commands on both nodes so passwordless SSH works in both directions.)

     ssh-keygen                           (You'll be prompted with 3 questions; just press Enter for all 3 until you're returned to the command prompt.)
     ssh-copy-id root@hostname1           (You'll be asked if you want to continue connecting; answer "yes" and then enter the root password.)
     ssh-copy-id root@hostname2           (Same as above.)
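
To verify the keys are working, try running a command on the other server; if passwordless SSH is set up correctly, no password prompt will appear:

     ssh root@hostname2 hostname

If this prints hostname2 without asking for a password, repeat the check in the other direction from hostname2.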

3. Next you’ll need to download a script from images.45drives.com/setup/ onto the master server only. There is a separate script for each of the cron scheduling intervals in Linux (hourly, daily, weekly, and monthly). Each script keeps snapshots over a certain range of time; you may need to adjust that range depending on the customer’s needs.

Because of the way cron works in Linux, you’ll need to download the script into the directory that matches its interval: /etc/cron.hourly/, /etc/cron.daily/, /etc/cron.weekly/, or /etc/cron.monthly/.

    wget images.45drives.com/setup/zfs-auto-send-X.sh      (Where X is either hourly, daily, weekly, or monthly)
    chmod +x zfs-auto-send-X.sh
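
As a concrete example for the daily interval, downloading the script straight into its cron directory would look like this (assuming the same URL pattern as above):

    cd /etc/cron.daily/
    wget images.45drives.com/setup/zfs-auto-send-daily.sh
    chmod +x zfs-auto-send-daily.sh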

Next you’ll need to go in and make a few edits to the script, specifically the hostname of the backup server and, if necessary, the range of dates for which snapshots are kept. As an example, I’ll use zfs-auto-send-daily.sh.

Two edits need to be made: in Step 2, enter the backup server’s hostname in place of $remoteIP in the line containing ssh root@$remoteIP; in Step 3, do the same in the third-last line, the one that reads ssh $remoteIP "zfs destroy ...". In the listing below, both lines have already been edited to use hostname2.

    vim /etc/cron.daily/zfs-auto-send-daily.sh

    #!/bin/sh
    # ZFS Snapshot & Send script
    # Christien AuCoin - 45Drives
    ### DATE variables
    # D  = today's date
    # D1 = yesterday's date
    # D# = the date # days ago
    D=$(date +%m-%d-%Y)
    D1=$(date --date='yesterday' '+%m-%d-%Y')
    D7=$(date --date='7 days ago' '+%m-%d-%Y')
    D14=$(date --date='14 days ago' '+%m-%d-%Y')
    # Passwordless SSH must be set up between the two servers.
    # Step 1 - Take today's snapshot of every dataset (the pool itself is skipped)
    for i in $(zfs list -H -o name); do
        if [ "$i" = "zpool" ]; then
            echo "$i found, skipping"
        else
            zfs snapshot "$i@$D"
        fi
    done
    # Step 2 - Send an incremental stream (yesterday -> today) to the backup server
    for i in $(zfs list -H -o name); do
        if [ "$i" = "zpool" ]; then
            echo "$i found, skipping"
        else
            zfs send -i "$i@$D1" "$i@$D" | ssh root@hostname2 zfs recv "$i"
        fi
    done
    # Step 3 - Destroy old snapshots (14 days old locally, 7 days old on the backup)
    for i in $(zfs list -H -o name); do
        if [ "$i" = "zpool" ]; then
            echo "$i found, skipping"
        else
            zfs destroy "$i@$D14" > /dev/null 2>&1
            ssh hostname2 "zfs destroy $i@$D7 > /dev/null 2>&1"
        fi
    done
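
To preview the snapshot names this script will generate, you can evaluate the same date expressions at the shell:

    date +%m-%d-%Y                           (today's date, used for the new snapshot)
    date --date='yesterday' '+%m-%d-%Y'      (yesterday's date, the incremental source)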

4. Now that the script is in place, you’ll need to manually create the first snapshot and send it over to the backup server.

Depending on which snapshot interval you’re setting up, you’ll need to know the snapshot naming convention:

    hourly  - zpool/dataset@mm-dd-yyyy-hh-MM       (month-day-year-hour-minute)    NOTE: Cron takes the snapshots every hour at minute 01, so when you manually create the first snapshot make sure your minute is set to 01.
    daily   - zpool/dataset@mm-dd-yyyy             (month-day-year)
    weekly  - zpool/dataset@mm-dd-yyyy             (month-day-year)
    monthly - zpool/dataset@mm_dd_yyyy             (month_day_year)
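
For the hourly case, one way to satisfy the minute-01 rule is to let date fill in everything but the minute (a sketch; zpool/dataset is a placeholder, and this assumes the hourly script uses the 24-hour %H format for the hour):

    zfs snapshot zpool/dataset@$(date +%m-%d-%Y-%H)-01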

To create the snapshot, run a command with the following structure. For example, I’ll do one for a daily snapshot:

    zfs snapshot zpool/dataset@mm-dd-yyyy
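
Since the daily cron script names its snapshots with date +%m-%d-%Y, you can also let date generate the name so it matches the script’s convention exactly:

    zfs snapshot zpool/dataset@$(date +%m-%d-%Y)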

Once that snapshot is created, you can double-check with:

    zfs list -t snapshot

Now we need to send the snapshot over to the backup server.

    zfs send zpool/dataset@mm-dd-yyyy | ssh hostname2 zfs recv -F zpool/dataset

You can check with the zfs list command on the backup server to ensure that the snapshot was sent over.
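
For example, from the master you can list the backup server’s snapshots over SSH using the hostnames set up earlier:

    ssh hostname2 zfs list -t snapshot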
