This guide will use Linux software RAID via the mdadm module; however, other options such as ZFS or even hardware RAID will work just as well as the underlying physical volumes. Simply replace the mdadm array-creation step with the tool of your choosing. The initial setup will only change slightly in how you create your arrays.
Install the mdadm module and the iSCSI target module (targetcli), and preemptively add the necessary firewall rules so you don't have to later.
[root@localhost ~]# yum clean all
[root@localhost ~]# yum update
[root@localhost ~]# yum install mdadm -y
[root@localhost ~]# yum install targetcli -y
[root@localhost ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@localhost ~]# firewall-cmd --reload
Build RAID Arrays
This guide will use two RAID6 arrays for demonstration purposes. You can either build your arrays right on top of the raw disks or create partitions on the disks first; this guide will use raw disks. Begin by building the arrays that will later serve as the physical volumes.
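As a quick sanity check on sizing: RAID6 dedicates two disks' worth of capacity to parity, so a four-disk array yields the usable capacity of two disks. A minimal sketch of the arithmetic (the disk sizes here are hypothetical):

```shell
# RAID6 usable capacity = (number of disks - 2) * size of smallest disk
N=4        # disks per array, as used in this guide
DISK_TB=4  # hypothetical 4 TB disks
echo "$(( (N - 2) * DISK_TB )) TB usable per array"
# prints "8 TB usable per array"
```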
First, run an examine to make sure the disks you are looking to use don't contain existing superblocks.
Note: Replace the ? with the actual disks you are looking to use for your RAID.
[root@localhost ~]# mdadm --examine /dev/sd? /dev/sd? /dev/sd? /dev/sd?
mdadm: No md superblock detected on /dev/sd? /dev/sd? /dev/sd? /dev/sd?
[root@localhost ~]# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd? /dev/sd? /dev/sd? /dev/sd?
[root@localhost ~]# mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd? /dev/sd? /dev/sd? /dev/sd?
[root@localhost ~]# cat /proc/mdstat    <------ **This will show a snapshot of the kernel's RAID/md state**
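Once the arrays are built, it's worth persisting their definitions so they reassemble automatically at boot. A minimal sketch, assuming the CentOS default config location (run as root):

```shell
# Capture the running array definitions into mdadm's config file
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial resync progress (RAID6 builds on large disks can take hours)
watch cat /proc/mdstat
```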
Build physical volumes
Now that the arrays have been created and are in order, it's time to create the physical volumes. This is simple: take however many arrays you have created and make a physical volume for each.
[root@localhost ~]# pvcreate /dev/md0
[root@localhost ~]# pvcreate /dev/md1
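You can confirm LVM sees the new physical volumes before moving on. A quick sketch of the verification step:

```shell
# List physical volumes; both arrays should appear with their full size free
pvs /dev/md0 /dev/md1

# For more detail on a single PV (size, allocatable extents, UUID):
pvdisplay /dev/md0
```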
Build volume group
Next, it’s time to build a volume group out of all of the previously created physical volumes.
[root@localhost ~]# vgcreate volumegroup_name /dev/md0 /dev/md1
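A quick check that the volume group spans both arrays (substitute your own volume group name):

```shell
# Summary view: PV count, total size, and free space in the group
vgs volumegroup_name

# Detailed view, including free physical extents available for logical volumes
vgdisplay volumegroup_name
```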
Build logical volumes
Now it is time to chop up your volume group into logical volumes, which will eventually become the LUNs for your iSCSI gateway. There are two ways you can do this: you can create your LVs with a specific size in mind for each of them, or you can give each of them a percentage of the overall storage available. The syntax for both methods is shown below, but this guide will use the size method.
Size method: lvcreate -L 100G -n logicalvolume_name volumegroup_name
Percentage method: lvcreate -l 60%VG -n logicalvolume_name volumegroup_name
[root@localhost ~]# lvcreate -L 400G -n logicalvolume_name1 volumegroup_name
[root@localhost ~]# lvcreate -L 450G -n logicalvolume_name2 volumegroup_name
[root@localhost ~]# lvcreate -L 100G -n logicalvolume_name3 volumegroup_name
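Before moving on to targetcli, verify the logical volumes exist with the sizes you expect (names match the placeholders used above):

```shell
# List the logical volumes in the group and their sizes
lvs volumegroup_name

# The device paths targetcli will use as backing stores follow this pattern:
ls -l /dev/volumegroup_name/
```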
Begin iSCSI target configuration
Now it's time to begin configuration of the targetcli module. targetcli drops you into a modified CLI made just for iSCSI; the transcript below walks through everything step by step, and combined with this explanation should be all you need to get it up and running.

The first step once inside targetcli is to create iSCSI disks out of the previously created logical volumes; create these in the backstores/block directory. Once all of the disks you want are created, back out to the root directory and then change into the iscsi directory. From here it is time to create your target, using whatever IQN is appropriate for you. Once your target is created, change into its acls directory and create an ACL using syntax similar to the example below. With the ACL in place, it is time to create your LUNs; these are built from the iSCSI disks created in the earlier step. Once this is complete, review all of the settings, and if everything looks good, save your configuration and exit the targetcli module. Back at the regular terminal, use systemd to enable and restart the target service.
Your system is now configured with multiple LUNs per iSCSI target.
[root@localhost ~]# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd backstores/block
/backstores/block> create iscsi_disk1 /dev/volumegroup_name/logicalvolume_name1
/backstores/block> create iscsi_disk2 /dev/volumegroup_name/logicalvolume_name2
/backstores/block> create iscsi_disk3 /dev/volumegroup_name/logicalvolume_name3
/backstores/block> cd ..
/backstores> cd ..
/> cd iscsi
/iscsi> create iqn.2019-07.45drives.centos:test
/iscsi> cd iqn.2019-07.45drives.centos:test/tpg1/acls/
/iscsi/iqn.2019-07.45drives.centos:test/tpg1/acls> create iqn.2019-07.45drives.centos:client
/iscsi/iqn.2019-07.45drives.centos:test/tpg1/acls> cd ..
/iscsi/iqn.2019-07.45drives.centos:test/tpg1> cd luns/
/iscsi/iqn.2019-07.45drives.centos:test/tpg1/luns> create /backstores/block/iscsi_disk1
/iscsi/iqn.2019-07.45drives.centos:test/tpg1/luns> create /backstores/block/iscsi_disk2
/iscsi/iqn.2019-07.45drives.centos:test/tpg1/luns> create /backstores/block/iscsi_disk3
** next back out to the root directory and run ls to verify all settings look good **
/> ls
/> saveconfig
/> exit
[root@localhost ~]# systemctl enable target.service
[root@localhost ~]# systemctl restart target.service
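To actually use the LUNs, the client named in the ACL must log in with its iSCSI initiator. A minimal sketch from a Linux client, assuming the iscsi-initiator-utils package and a hypothetical target address of 192.168.1.100 (the initiator name must match the ACL created above):

```shell
# Install the initiator tools (run on the client, not the target)
yum install iscsi-initiator-utils -y

# Set the initiator name to match the ACL created on the target
echo "InitiatorName=iqn.2019-07.45drives.centos:client" > /etc/iscsi/initiatorname.iscsi

# Discover targets on the portal (hypothetical IP address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.100

# Log in to the discovered target; the LUNs then appear as new /dev/sd* devices
iscsiadm -m node -T iqn.2019-07.45drives.centos:test -p 192.168.1.100 --login
```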