45 Drives Knowledge Base
KB450148 - Configuration: FreeNAS
You must attach a monitor and keyboard to the Storinator® for this first section.
Plug your hard drives into the storage drive bays and plug an Ethernet cable into the server.
Turn on your Storinator® and wait until it is sitting at the FreeNAS console screen.
Under the list of options, you will see the IP address, which you can use to access the FreeNAS management GUI to administer your server. Enter this IP address into a browser on another computer on the same network to begin administering your storage.
The default login information is:
You are now ready to build and share your storage, and by the end of this setup guide you will have:
The following examples outline the ideal configurations for the Q30, S45 and XL60 Storinator models.
Please note that there are two recommended configurations for each server model: one for maximum storage space, and one for maximum I/Os per second.
We recommend the maximum-I/Os configuration. However, if you are unsatisfied with the amount of usable storage it provides, use the maximum-storage-space configuration instead. It is important to note that both configurations will fully saturate a 10GbE interface.
| Maximum storage efficiency | Maximum I/Os per second |
| 3 VDevs of 10 drives each | 5 VDevs of 6 drives each |
Go to the “Storage” tab on the top toolbar. From there, to set up the VDevs and Zpool you must click on the “Volume Manager” button.
FreeNAS makes use of a sliding bar (horizontal and vertical) to choose the size of each VDev. Each row of the “Volume Layout” represents a VDev, and each column is the number of drives in that VDev. A drop-down list directly under “Volume Layout” allows the user to choose the type of RAID desired. Once the layout is as desired, click “Add Volume”. The screenshot below shows the options chosen to create a Zpool with 3 VDevs, each having ten 6TB hard drives in a RAIDZ.
This configuration is recommended for anybody looking to achieve better storage efficiency, as more drives per VDev increases the total available storage.
Following the same procedure as the previous example, creating 5 VDevs of 6 drives each in a RAIDZ is as simple as dragging the sliding bar diagonally to make 5 rows of 6 hard drives each, as shown in the screenshot below.
This configuration is recommended for anybody looking to maximize I/Os per second: since each VDev is striped in the Zpool, more VDevs give higher performance.
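The two layouts above can also be expressed as `zpool` commands. This is an illustrative sketch only (on FreeNAS you should normally use the Volume Manager so the GUI database stays in sync with the pool), and the device names da0–da29 are assumptions that will differ on your system:

```shell
# Maximum storage efficiency on a Q30: 3 RAIDZ1 VDevs of 10 drives each.
# Device names (da0..da29) are assumed; check yours with `camcontrol devlist`.
zpool create NewPool \
    raidz da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
    raidz da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
    raidz da20 da21 da22 da23 da24 da25 da26 da27 da28 da29

# Maximum I/Os per second: 5 RAIDZ1 VDevs of 6 drives each.
zpool create NewPool \
    raidz da0  da1  da2  da3  da4  da5  \
    raidz da6  da7  da8  da9  da10 da11 \
    raidz da12 da13 da14 da15 da16 da17 \
    raidz da18 da19 da20 da21 da22 da23 \
    raidz da24 da25 da26 da27 da28 da29
```

Each `raidz` keyword starts a new VDev; ZFS stripes writes across all VDevs in the pool, which is why more VDevs yields more I/Os per second.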
Following the same procedure as above, we recommend the following drive configurations for the 45 bay unit. The level of parity protection depends on user preference; remember, RAIDZ2 will always result in less usable space than a RAIDZ1.
| Maximum storage efficiency | Maximum I/Os per second |
| 3 VDevs of 15 drives each | 5 VDevs of 9 drives each |
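The RAIDZ1-versus-RAIDZ2 trade-off can be estimated with a quick back-of-the-envelope calculation: each RAIDZ1 VDev loses one drive to parity, and each RAIDZ2 VDev loses two. The shell arithmetic below shows the difference for the 3 × 15-drive layout with 6TB drives (it ignores ZFS metadata and padding overhead, so real usable numbers will be somewhat lower):

```shell
#!/bin/sh
# Rough usable capacity: vdevs * (drives_per_vdev - parity) * drive_size_tb.
vdevs=3; drives=15; size_tb=6

parity=1
echo "RAIDZ1 usable: $(( vdevs * (drives - parity) * size_tb )) TB"

parity=2
echo "RAIDZ2 usable: $(( vdevs * (drives - parity) * size_tb )) TB"
```

This prints 252 TB for RAIDZ1 versus 234 TB for RAIDZ2: the price of tolerating a second drive failure per VDev is one extra drive's worth of capacity per VDev.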
Following the same procedure as above, we recommend the following drive configurations for the 60 bay unit. The level of parity protection depends on user preference; remember, RAIDZ2 will always result in less usable space than a RAIDZ1.
| Maximum storage efficiency | Maximum I/Os per second |
| 4 VDevs of 15 drives each | 6 VDevs of 10 drives each |
An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data. Like a folder or directory, permissions can be set on a dataset. Datasets are also similar to filesystems in that properties such as quotas and compression can be set, and snapshots created.
Select an existing ZFS volume in the tree and click Create Dataset:
Some settings are only available in Advanced Mode. To see these settings, either click the Advanced Mode button, or configure the system to always display advanced settings by enabling the Show advanced fields by default option. Most attributes, except for the Dataset Name, Case Sensitivity, and Record Size, can be changed after dataset creation by highlighting the dataset name and clicking the Edit Options button.
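For reference, the same dataset and per-dataset properties can be expressed with the `zfs` command line. This is a sketch assuming the example names used later in this guide (pool “NewPool”, dataset “RawFootage”) and an example quota value:

```shell
# Create a dataset and set per-dataset properties
# (the GUI "Create Dataset" dialog does the equivalent of this).
zfs create NewPool/RawFootage
zfs set compression=lz4 NewPool/RawFootage     # per-dataset compression
zfs set quota=10T NewPool/RawFootage           # example quota; adjust to taste
zfs get compression,quota NewPool/RawFootage   # verify what was set
```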
To add more space to your Zpool, you can't just add a new disk; you have to add a whole new VDev.
Depending on your initial pool configuration this can be painless; the trick to adding space to your Zpool is to plan ahead. If you know you will want to add more disks down the road, arrange your pool in such a way that you can add the needed amount of space with ease.
For example, consider a Q30 that starts with only 10 disks and needs to expand in the future.
Following the recommendations above, you would start with a Zpool of one 10-disk RAIDZ VDev.
To add more capacity to this pool, the new VDev should be the same size as the existing one; therefore you would add 10 more disks, resulting in 2 VDevs of 10 disks each for a 20-disk pool.
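In `zpool` terms, that expansion step looks like the sketch below. The device names da10–da19 for the new disks are assumptions, and on FreeNAS the Volume Manager should normally perform this so the GUI stays in sync:

```shell
# Add a second 10-drive RAIDZ1 VDev to the existing pool.
zpool add NewPool raidz da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

# The pool should now list two raidz1 VDevs.
zpool status NewPool
```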
Once the Zpool has been created, creating a share is the next step to being able to transfer and receive data to and from the NAS. The client OS you will be working with will decide which type of share is best for your application.
The first task is to create a “Group”. When creating shares, there must be a group that has ownership of the share. The owner of the share will have read/write access to it. Go to “Account” -> “Group” -> “Add Group”. You will be prompted to enter a name; in this example, we chose “Editors”. Next, we need to create a user and add this user to our “Editors” group. Go to “Account” -> “User” -> “Add User”. You will see the following window; fill it out as shown below, then press “OK”.
NOTE: Make sure you uncheck the “Create a new primary group for the user” box, and select your new group as the “Primary Group”.
Next, we need to create a dataset on the storage volume. To do so, go to the “Storage” tab and click on the lower of the two “NewPool” entries. Once it is highlighted, go to the bottom toolbar and click the “Create dataset” button shown below.
Add your dataset name (ours will be “Raw Footage”), change the “Share type” to the type of share that you plan on making, then click “Add dataset”.
NOTE: Do not use spaces in the dataset name if using UNIX.
From here, click on “Raw Footage” and click the far left button on the bottom toolbar which is to change dataset permissions.
You must change “Owner (user):” to your new user, in our case that’s “Editor1”. You must also change “Owner (group):” to your new group, ours is “Editors”. Then click on “Change”. Now we’re ready to create our shares. In doing so, any user added to the group “Editors” will be able to access the share with read/write privileges using their username and password.
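What the permissions dialog does can be pictured as the following shell commands, assuming the example names used in this guide (user “Editor1”, group “Editors”, and a dataset mounted at /mnt/NewPool/RawFootage):

```shell
# Give the new user and group ownership of the dataset.
chown -R Editor1:Editors /mnt/NewPool/RawFootage

# Owner and group get read/write/execute; others get read/execute.
chmod -R 775 /mnt/NewPool/RawFootage

# Verify the new owner and group on the mount point.
ls -ld /mnt/NewPool/RawFootage
```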
Samba allows file sharing with computers running Microsoft Windows. On FreeNAS, this is referred to as “Windows (CIFS) Shares”. The following steps will show how to set up one of these shares.
Click on the “Sharing” tab on the top toolbar to start the process. From there, click the “Windows (CIFS)” tab. You should now see a blue “Add Windows (CIFS) Share” button. Click the button and it should bring you to the following page.
The path will be the dataset that you just created in the previous section. Click “Browse” -> the “+” next to “mnt” -> the “+” next to “NewPool” and then select the dataset you have just created. The default permissions offered are the ones that were set up in the last step so keep that box checked. The screenshot that follows shows our particular example.
From there, click “OK” and a window will pop up asking if you’d like to enable this service; select “yes” or your share will not be accessible.
NOTE: If you have not yet set up network interfaces on your NAS or clients, you must do this before mounting your network share. Please see the section titled “Network Setup”.
Once this process has finished, head to your Windows computer, open up the “File Explorer” and then go to “This PC”. On the top toolbar you should see “Map network drive”, click that. Your “server” is the IP address you’ve entered into your browser to access the FreeNAS GUI, and the “share” is the name chosen for your share.
You will be prompted to login. Click “Use another account” and enter your new user (Editor1) and the password of that user. Now your share will always be accessible in this location on your local Windows machine.
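The same mapping can also be done from a Windows Command Prompt with `net use`. This sketch uses the example IP address from this guide and assumes the share was named “RawFootage”; your values will differ:

```shell
# Run in a Windows Command Prompt (cmd.exe), not a UNIX shell.
# Maps the share to drive Z: as the new user and reconnects at logon.
net use Z: \\192.168.2.183\RawFootage /user:Editor1 /persistent:yes
```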
Click on the “Sharing” tab on the top toolbar to start the process. From there, click the “UNIX (NFS)” tab. You should now see a blue “Add UNIX (NFS) Share” button. Click the button and it should bring you to the following page.
Select the path, as shown above, to the new dataset created in the previous section. Once the path is selected, simply click “OK”. A window will pop up asking if you’d like to start the services; click “yes” to allow NFS shares to be accessible.
To connect to the created share on a BSD or Linux machine, run the following command as the super user (or with sudo):
mount -t nfs 192.168.2.183:/mnt/NewPool/RawFootage /mnt
-t nfs: specifies the type of share
192.168.2.183: replace with the IP address of your FreeNAS system.
/mnt/NewPool/RawFootage: replace with the path of your NFS share.
/mnt: a mount point on the client system, must be an existing empty directory. This is where the data in the NFS share will be made available on the client machine.
If this command fails on a Linux system, make sure the nfs-utils package is installed.
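To make the mount survive reboots on a Linux client, an `/etc/fstab` entry can be used instead of a manual `mount`. This is a sketch using the example values above; the mount point /mnt/nfs is an assumption and must exist as an empty directory:

```shell
# /etc/fstab line for the example NFS share:
# 192.168.2.183:/mnt/NewPool/RawFootage  /mnt/nfs  nfs  defaults,_netdev  0  0

# After adding the line, create the mount point and mount everything in fstab:
mkdir -p /mnt/nfs
mount -a
```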
Connecting to the NFS share from a Windows machine is possible using an NFSClient and NFSLibrary from the Nekodrive download page. Once everything is downloaded, run the NFSClient and enter the IP address of your FreeNAS machine.
To connect to the NFS share from a Mac OS X client, click on “Go” and then “Connect to Server”. In the “Server Address” bar, input “nfs://” followed by the IP address of your FreeNAS system and the name of the volume/dataset being shared by NFS. In our example, this would look as follows:
To achieve better data transfer speeds, the NFS mount has to be configured properly for use with certain applications, such as Final Cut Pro X. To do so, create an “nfs.conf” file within the “/etc” directory of the Mac OS X system. In doing so, the Finder’s mount parameters are tuned for performance for Final Cut Pro Mac clients. Once the nfs.conf file is created, it should contain the following line:
Before you configure your iSCSI target, you will need to set the IPs for the interfaces you will be accessing the target through (ix0, ix1, ...). Document those IP addresses for later use.
Click on the “Sharing” tab on the top toolbar to start the process. From there, click the “Apple (AFP)” tab. You should now see a blue “Add Apple (AFP) Share” button. Click the button and it should bring you to the following page:
The path should be set to the new dataset created earlier, called “RawFootage”. The Name is what will be seen after connecting to the AFP share on the Mac client. When finished, click “OK” and you’ll be asked if you’d like to enable services; select “yes” to allow AFP shares to be accessible.
It is not recommended to change the default settings of an AFP share, as doing so may cause the share to not work as expected. Most settings are only available when you click “Advanced Mode”, shown below, but do not change any of these unless you fully understand the function of those options.
To connect to the share on the Mac OSX machine, click “Go” followed by “Connect to Server”. Under “Server Address” input “afp://IP address of the FreeNAS system”. For the example above, it would look as follows:
To configure your 10GbE NIC, click on the “Network” tab, followed by the “Interfaces” tab, then click on the “Add Interface” button. In the NIC drop-down list, “ix0” and “ix1” are typically the interfaces on the 10GbE NICs, with “ix0” being the top port and “ix1” being the bottom port on the card. After naming the interface, the IPv4 address and IPv4 Netmask need to be set. In this example, I’ve created an SMB share called “Projects”, and will access it across the 10GbE NIC located in both the Storinator and my Windows computer. Seen below is the interface created on the FreeNAS side of the connection.
Now, on the Windows side, you need to ensure that the NIC port being used has the same IPv4 Netmask. In this case “255.255.0.0” was chosen, meaning we need an IP address that has the same first two numbers. Go to “Control Panel” -> “Network and Sharing Center” -> “Change adapter settings”. You will see something similar to the following screenshot. Ethernet 2 is the interface of our NIC; right click and go to “Properties”.
Under “This connection uses the following items:” click on “Internet Protocol Version 4 (TCP/IPv4)” -> “Properties”, where you will be able to enter the proper IP address and subnet. Shown below are the corresponding Windows-side IP address and subnet.
To ensure the share is accessed across this 10GbE connection, go to “This PC” on the top toolbar and click “Map network drive”. Input it in the form “\\(IP address of the 10GbE NIC on the FreeNAS side)\Name of share”. In the example above this would be “\\192.168.2.214\Projects”. This share will then always be accessible in this location.
It’s similar when using a Mac: you must set the IP address of the Sonnet adapter (which provides 10GbE connectivity for Macs without a NIC slot) to the same form as the IP address of the FreeNAS NIC. Go to “System Preferences” then “Network”; an example can be seen below.
This is the case for connecting to an NFS share or an AFP share. Click on “Go” then “Connect to Server”. Input the correct form mentioned in the “Sharing” section for your given share, using “192.168.2.214” as the IP address in this case to ensure the share is accessed across the 10GbE network.
Click on “System,” which is in the top toolbar, followed by the “Tunable” tab. Click “Add Variable” and enter the information seen below. These network tunes will give optimal performance on a 10GbE network.
Zpool – A Zpool is made up of one or more VDevs (see below), which themselves can be a single disk or a group of disks in a RAID configuration. When multiple VDevs are used, ZFS spreads data across the VDevs to increase performance and maximize usable space.
VDev (Virtual Device) – One or more hard drives allocated in an array to work together to add redundancy, improve performance and store data. A VDev can typically be thought of as a RAID group.
RAIDZ-1 – A single parity bit distributed across all of the disks in the array, allowing the array to withstand up to one hard drive failure without data loss.
RAIDZ-2 – Dual parity distributed across all of the disks in the array, allowing the array to withstand up to two hard drive failures without data loss.