45Drives Knowledge Base
KB450302 - Recovering ZFS Pool From Backup
https://knowledgebase.45drives.com/kb/kb450302-recovering-zfs-pool-from-backup/


Posted on April 19, 2021 by Brett Kelly


Scope/Description

This article will walk through the process of recovering a ZFS storage pool from a backup and re-enabling the replication task afterwards

Scenario:

  • Initial setup of two servers, one “primary” and one “backup”
    • primary -> 192.168.123.121
    • backup -> 192.168.185.10
  • ZFS auto-replication is set up between the two servers such that “backup” is a copy of “primary”
  • “primary” experiences a massive failure where the pool is unrecoverable
  • The user rebuilds an empty pool on “primary” and wants to restore from the latest snapshot stored on “backup”

Prerequisites

  • A backup server that has the latest snapshots sent from “primary”
  • A new zpool created on “primary”
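If the replacement pool has not been created yet, it can be built with zpool create. The pool name below matches this article, but the RAID level and device names are placeholders only; use the vdev layout appropriate for your hardware.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde (Example only - substitute your own RAID level and device names)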

Steps

Disable Znapzend on Primary Server

  • Temporarily disable the znapzend service on “primary” until the pool has been restored from backup
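Assuming znapzend is running as a systemd service (unit name znapzend), it can be stopped on “primary” with:
systemctl stop znapzend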

 

Recovering Snapshot on Backup Server

  • Find the latest backup snapshot on the “backup” server. Snapshots are organized per-dataset and the time stamp will indicate which is the latest
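For example, the snapshots of the backup dataset used in this article can be listed sorted by creation time (newest at the bottom) with:
zfs list -t snapshot -o name,creation -s creation -r esxitank/backup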

  • Make sure “backup” can ssh into the “primary” without prompting for a password
ssh-keygen (You'll be prompted with 3 questions, just press enter on all 3 until you're returned to the command prompt)
ssh-copy-id root@primary (You will be asked if you want to continue connecting, say "yes" and then enter the root password.)
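To confirm key-based login is working before starting the transfer, run a quick test from “backup” (the IP of “primary” used in this article):
ssh root@192.168.123.121 hostname (Should print the hostname of “primary” without prompting for a password)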
  • In a terminal use zfs send/receive to send snapshot back to the “primary” from the “backup”
zfs send esxitank/backup@2021-04-19-120000 | mbuffer -s 128k -m 2G | ssh 192.168.123.121 zfs recv tank/storage
  • This will output progress information so you can see how fast your data is transferring back to “primary”
[root@45drives ~]# zfs send esxitank/backup@2021-04-19-120000 | mbuffer -s 128k -m 2G | ssh 192.168.123.121 zfs recv tank/storage
in @ 112 MiB/s, out @ 112 MiB/s, 8994 MiB total, buffer 100% full
  • Go back to “primary” and re-enable the replication job

Restarting Snapshot Replication on Primary Server

  • If you have the znapzend config saved, you can re-import it with:
znapzendzetup import --write tank/storage znapzend-tank-storage.backup
  • If not, recreate the znapzend replication task:
znapzendzetup create --recursive --mbuffer=/usr/bin/mbuffer --mbuffersize=5G SRC '7d=>1h' tank/storage DST:a '7d=>1h' root@192.168.185.10:esxitank/backup
  • Re-enable the znapzend service
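Assuming the same systemd unit name as above, for example:
systemctl enable --now znapzend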

Verification

  • Once the transfer has begun, go back to “primary” and verify the dataset was created and its used space is increasing
  • Once the transfer is complete, verify the used capacity and browse the dataset to confirm all files are as expected
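For example, the used space of the restored dataset can be checked on “primary” with (dataset name from this article):
zfs list -o name,used,available,mountpoint tank/storage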

Troubleshooting

  • If the zfs send | zfs recv command fails, ensure that the dataset you are receiving into does NOT already exist on the primary end
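For example, check on “primary” before re-running the transfer (pool name from this article):
zfs list -r tank (If tank/storage already appears here, the receive will fail; the dataset must not exist before zfs recv creates it)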