Understanding the Basics
As mentioned on the previous page, ZFS uses several smaller RAIDs as opposed to one large RAID. These smaller RAIDs are called VDevs, short for “Virtual Devices”. Once a VDev is created, you cannot add any hard drives to that VDev.
A zpool is one or more VDevs joined together. Once a VDev is added to a zpool, it cannot be removed for any reason. After the zpool is created, you can still add more VDevs to it. It is highly recommended that any VDev added to the zpool have the same RAID format and number of disks as the existing VDevs, though this is not strictly required. Avoid adding a single hard drive as a VDev: it has no redundancy, and if that one drive fails, all data in the pool is lost.
An easy way to remember the terminology is as follows:
- Hard drives go into VDevs.
- VDevs go inside zpools.
- Zpools store your data.
- The failure of an entire VDev means loss of the pool’s data; the failure of a single disk inside a redundant VDev does not.
As mentioned on the last page, the ZFS RAID types, ranked from highest speed/least protection to lowest speed/most protection, are as follows:
- Striped (no redundancy)
- Mirror
- RAIDZ1 (single parity)
- RAIDZ2 (double parity)
- RAIDZ3 (triple parity)
ZFS stripes data across VDevs when there is more than one. If you want a RAID10, simply create your VDevs as mirrors.
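A minimal sketch of this: the pool name (“tank”) and device names (sdb through sde) below are placeholders, and the leading check simply skips the commands on a machine without ZFS installed.

```shell
# Skip on systems without ZFS installed; these commands are illustrative only.
command -v zpool >/dev/null 2>&1 || exit 0

# A RAID10-style pool: two 2-disk mirror VDevs, striped together by ZFS.
zpool create -f tank mirror sdb sdc mirror sdd sde
zpool status tank
```

Adding another `mirror` group later with `zpool add` widens the stripe in the same way.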
ZFS handles file system volume management differently than traditional tools such as “mdadm”. With a traditional file system, after you make your RAID devices and allocate them to a volume group, you must set a fixed size for each logical data set; /usr, /var, /, and /home would each be given their own specific sizes.
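As a hedged sketch of that traditional approach, here is how fixed sizes might be carved out with LVM (the volume group name “vg0” and the sizes are hypothetical):

```shell
# Skip on systems without LVM installed; commands are illustrative only.
command -v lvcreate >/dev/null 2>&1 || exit 0

# Each logical volume must be given a fixed size up front.
lvcreate -L 20G  -n usr  vg0
lvcreate -L 30G  -n var  vg0
lvcreate -L 15G  -n root vg0
lvcreate -L 200G -n home vg0
```

Resizing any of these later means growing or shrinking both the volume and its file system by hand, which is exactly the step ZFS data sets avoid.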
The way ZFS handles file system volume management is that each data set has full access to the entire pool. As files are placed into a data set, the pool marks that storage as unavailable to all data sets. Each data set is aware of how much space remains in the pool, so there is no need to allocate a fixed amount of space to each one. You can see this in the `zfs list` output later in this guide.
Installing ZFS packages on Linux
Below are the steps needed to install ZFS on CentOS 7. Each bullet point is a command to run on the command line.
Note: in the commands below, we are using vim as our text editor. You can use any editor you’d like (vim, vi, nano, etc.).
- yum install http://download.zfsonlinux.org/epel/zfs-release$(rpm -E %dist).noarch.rpm
- yum install zfs
- touch /etc/rc.modules
- echo "modprobe zfs" >> /etc/rc.modules
- chmod +x /etc/rc.modules
- systemctl enable zfs-import-cache.service
- systemctl enable zfs-mount.service
- systemctl enable zfs.target
- vim /usr/lib/systemd/system/zfs-import-cache.service -> change the line beginning "ExecStart=" to "ExecStart=/usr/local/libexec/zfs/startzfscache.sh"
- mkdir /usr/local/libexec/zfs
- vim /usr/local/libexec/zfs/startzfscache.sh -> add the following to the file:
#!/bin/sh
sleep 10
/sbin/zpool import -c /etc/zfs/zpool.cache -aN
zfs mount -a
- chmod +x /usr/local/libexec/zfs/startzfscache.sh
You must reboot the system before proceeding to create your zpool.
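After the reboot, an optional sanity check confirms the ZFS kernel module actually loaded; the exact output will vary by system.

```shell
# Skip if ZFS is not present on this machine; the check is illustrative.
[ -d /sys/module/zfs ] || exit 0

# The zfs module should appear in the loaded-module list.
lsmod | grep zfs
```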
Note: This example creates a zpool consisting of 4 VDevs of 5 hard drives each, in a RAIDZ2 format. You can adapt these commands to create your zpool however you’d like.
You will need to give your zpool a name; in this example I will call it “demo”.
1) Create your zpool
root@Proto:~# zpool create -f demo raidz2 sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk raidz2 sdl sdm sdn sdo sdp raidz2 sdq sdr sds sdt sdu
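One caveat: bare sdX names can change between reboots, so many administrators build pools from the persistent names under /dev/disk/by-id instead. Below is a sketch of one VDev built that way; ata-DISK1 through ata-DISK5 are placeholder identifiers, not real device names.

```shell
# Skip on systems without ZFS installed; identifiers below are placeholders.
command -v zpool >/dev/null 2>&1 || exit 0

# List the persistent identifiers for your disks.
ls -l /dev/disk/by-id/

# One VDev of the example pool, built from persistent names instead of sdX.
zpool create -f demo raidz2 ata-DISK1 ata-DISK2 ata-DISK3 ata-DISK4 ata-DISK5
```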
2) Check zpool status
root@Proto:~# zpool status demo
The output should look like the following:
  pool: demo
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	demo        ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	  raidz2-1  ONLINE       0     0     0
	    sdg     ONLINE       0     0     0
	    sdh     ONLINE       0     0     0
	    sdi     ONLINE       0     0     0
	    sdj     ONLINE       0     0     0
	    sdk     ONLINE       0     0     0
	  raidz2-2  ONLINE       0     0     0
	    sdl     ONLINE       0     0     0
	    sdm     ONLINE       0     0     0
	    sdn     ONLINE       0     0     0
	    sdo     ONLINE       0     0     0
	    sdp     ONLINE       0     0     0
	  raidz2-3  ONLINE       0     0     0
	    sdq     ONLINE       0     0     0
	    sdr     ONLINE       0     0     0
	    sds     ONLINE       0     0     0
	    sdt     ONLINE       0     0     0
	    sdu     ONLINE       0     0     0

errors: No known data errors
You can see there are 4 VDevs numbered from 0 to 3, each containing 5 drives.
3) Create data sets in the file system (Choose names to fit your needs)
root@Proto:~# zfs create demo/user
root@Proto:~# zfs create demo/home
root@Proto:~# zfs create demo/var
4) Check to see they all have full access to pool
root@Proto:~# zfs list
The output should be similar to the following:
NAME        USED  AVAIL  REFER  MOUNTPOINT
demo        191K  10.5T  30.4K  /demo
demo/home  30.4K  10.5T  30.4K  /demo/home
demo/user  30.4K  10.5T  30.4K  /demo/user
demo/var   30.4K  10.5T  30.4K  /demo/var
5) Mount your data sets
In this example I’m going to make a directory called “demo” inside of the mnt directory, and mount demo/home there. I will leave demo/user and demo/var mounted where they are.
root@Proto:~# mkdir /mnt/demo
root@Proto:~# zfs set mountpoint=/mnt/demo demo/home
root@Proto:~# mount | grep demo
The output should look like the following:
demo on /demo type zfs (rw,relatime,xattr,noacl)
demo/user on /demo/user type zfs (rw,relatime,xattr,noacl)
demo/var on /demo/var type zfs (rw,relatime,xattr,noacl)
demo/home on /mnt/demo type zfs (rw,relatime,xattr,noacl)
You can set your mount points to wherever works best for you using this same method.
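For example, you can later move a data set back to its default location, or mark it for legacy (/etc/fstab-style) mounting; both use the same `zfs set mountpoint` mechanism shown above. The commands below reuse this guide’s pool and data set names.

```shell
# Skip on systems without ZFS installed; commands are illustrative only.
command -v zfs >/dev/null 2>&1 || exit 0

# Return demo/home to its default location under the pool root.
zfs set mountpoint=/demo/home demo/home

# Or hand mounting of demo/var over to /etc/fstab ("legacy" mode).
zfs set mountpoint=legacy demo/var
```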
This same setup method can be used for any of the RAID types mentioned earlier.
NOTE: If you followed this step-by-step guide for practice with empty hard drives, the following command can be used to delete the zpool.
ONLY DO THIS IF THERE IS NO DATA ON YOUR DRIVES, DATA WILL BE LOST
root@Proto:~# zpool destroy demo