Applied FreeBSD: Basic iSCSI

iSCSI is often touted as a low-cost replacement for Fibre Channel (FC) Storage Area Networks (SANs). Instead of having to set up a separate Fibre Channel network for the SAN, or invest in the infrastructure to run Fibre Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network can be utilized for storage as well. In practice, to get high levels of performance, it is advised that system designers consider iSCSI Host Bus Adapters (HBAs) for each participating host, and that the network at a minimum have a separate VLAN for iSCSI traffic, or, more ideally, a separate physical network.

My disclaimer: this article does not cover any of the above performance enhancements! The systems in this article are set up and configured in a VMware Workstation virtualized environment so that I don't have to physically procure all of the hardware just to learn about iSCSI.

This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc., are not covered. Please refer to the FreeBSD Handbook and the relevant man pages for more information on iSCSI.

Now to get started…

iSCSI Target Test Setup

The disk drive which will be shared on the network is /dev/ada0, a 5G SATA disk created in VMware that I attached to the system before starting it up. With FreeBSD, iSCSI is controlled by the ctld daemon, so this needs to be enabled on the system. While we're at it, why not go ahead and enable it at boot time too?

root@bsdtarget:~ # echo 'ctld_enable="YES"' >> /etc/rc.conf
root@bsdtarget:~ # service ctld start
Starting ctld.

The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it.

root@bsdtarget:/dev # less /etc/ctl.conf
auth-group test {
        chap "iscsitest" "bsdforthewin"
}

portal-group pg0 {
        discovery-auth-group no-authentication
}

target iqn.2017-02.lab.testing:basictarget {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/ada0
                size 5G
        }
        lun 1 {
                path /dev/ada1
                size 5G
        }
}
For this setup, LUN 0 will be used by a FreeBSD iSCSI Initiator, and I have LUN 1 configured for experimenting with Windows Server at a later time. Before starting ctld, it is a good idea to make sure that the /etc/ctl.conf file is not readable by all users (ctld will complain if it is). At some point it may be necessary to add iSCSI authentication for the sessions, and it would not be wise to have all users able to read the authentication secret.
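When that time comes, the target can be pointed at the auth-group already defined at the top of the configuration file instead of no-authentication. This is only a sketch; swapping it in means initiators must present the matching CHAP credentials:

```conf
target iqn.2017-02.lab.testing:basictarget {
        auth-group test        # use the CHAP user/secret from the auth-group above
        portal-group pg0
        # lun 0 and lun 1 entries unchanged
}
```

After any change to /etc/ctl.conf, reload ctld with `service ctld reload`.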

root@bsdtarget:~ # chmod 640 /etc/ctl.conf
root@bsdtarget:~ # service ctld start

If there are any syntax errors or warnings, ctld will complain about it on the console. The ctladm tool can be used to query the system for more information, for example:

root@bsdtarget:/dev # ctladm lunlist
(7:0:0/0): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
(7:0:1/1): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
root@bsdtarget:/dev # ctladm devlist
LUN Backend       Size (Blocks)   BS Serial Number    Device ID
  0 block              10485760  512 MYSERIAL   0     MYDEVID   0
  1 block              10485760  512 MYSERIAL   1     MYDEVID   1


That's really it for the iSCSI Target configuration. The real effort is in setting up the /etc/ctl.conf file. For a real production system, there would be more configuration for the exported disks, such as using ZFS-backed volumes, RAID-1 mirroring, et cetera.
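As a small taste of the ZFS route, ctld can export a ZFS volume (zvol) instead of a raw disk. This is a sketch under assumptions: a pool named tank exists, and a 5G zvol was created first with `zfs create -V 5G tank/iscsi0`. The lun stanza in /etc/ctl.conf would then be:

```conf
lun 0 {
        # back the LUN with a ZFS volume instead of a physical disk
        path /dev/zvol/tank/iscsi0
        size 5G
}
```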

iSCSI Initiator Test Setup

In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. It doesn't hurt to go ahead and add the instruction to /etc/rc.conf so that iscsid is started when the system comes up.

root@bsdinitiator:~ # echo 'iscsid_enable="YES"' >> /etc/rc.conf
root@bsdinitiator:~ # service iscsid start
Starting iscsid.

Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session:

root@bsdinitiator:~ # iscsictl -A -p -t iqn.2017-02.lab.testing:basictarget

The command returns silently, but a look at /var/log/messages shows that the remote disk was detected and is now available to the Initiator as /dev/da1.

da1 at iscsi3 bus 0 scbus34 target 0 lun 0
da1: <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number MYSERIAL 0
da1: 150.000MB/s transfers
da1: Command Queueing enabled
da1: 5120MB (10485760 512 byte sectors)

The iSCSI session connection status can also be verified with iscsictl:

root@bsdinitiator:~ # iscsictl -L
Target name                                                 Target portal          State
iqn.2017-02.lab.testing:basictarget       Connected: da1
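Once the session looks correct, the same connection can be described in /etc/iscsi.conf so it does not have to be typed out each time. A sketch, where the nickname basictarget and the portal address 192.168.20.10 are assumptions for this lab:

```conf
basictarget {
        TargetAddress = 192.168.20.10
        TargetName    = iqn.2017-02.lab.testing:basictarget
}
```

With this in place, `iscsictl -A -n basictarget` attaches the session by its nickname.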

Once the disk is recognized by the iSCSI Initiator system, it can be configured for use on the Initiator like a regular SCSI/SATA disk attached to the system physically. The commands below create a partition and UFS filesystem on /dev/da1.

root@bsdinitiator:~ # gpart create -s gpt /dev/da1
root@bsdinitiator:~ # gpart add -t freebsd-ufs -a 1m /dev/da1
root@bsdinitiator:~ # newfs -U /dev/da1p1
/dev/da1p1: 5120.0MB (10485688 sectors) block size 32768, fragment size 4096
using 9 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
with soft updates
super-block backups (for fsck_ffs -b #) at:
192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112
root@bsdinitiator:~ # mkdir /iscsi_share
root@bsdinitiator:~ # mount -t ufs -o rw /dev/da1p1 /iscsi_share
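To remount the share automatically at boot, an /etc/fstab entry can be used; the late keyword tells the system to mount it in the late stage of boot, after the iSCSI session is up. A sketch; adding failok (so boot continues if the target is unreachable) is an assumption worth testing:

```conf
# /etc/fstab: mount the iSCSI-backed UFS filesystem late in boot
/dev/da1p1    /iscsi_share    ufs    rw,late,failok    2    2
```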

If there is already a filesystem resident on the device, it only needs to be mounted after the iSCSI session is connected. Back on the iSCSI Target machine, it is possible to see all of the iSCSI Initiators that are connected:

root@bsdtarget:/dev # ctladm islist
ID   Portal                   Initiator name                                                 Target name
6      iqn.2017-02.lab.testing:basictarget

Finally, if for some reason it is necessary to disconnect the system, unmount the filesystem and use iscsictl to disconnect the iSCSI session.

root@bsdinitiator:~ # umount /iscsi_share
root@bsdinitiator:~ # iscsictl -R -t iqn.2017-02.lab.testing:basictarget

There is much more to explore with iSCSI; this is just the very beginning, but it serves as a model and a starting point for this work. More to come in the future!

Update: Windows iSCSI Initiator

I am not a very savvy Windows user, and I am very new to Windows Server. I have just started to learn some of the basics. As such, I thought I’d try setting up a Windows Server 2016 host as an iSCSI Initiator. I won’t go into much detail other than what is required for setting up the iSCSI parts. Go ahead and fire up Server Manager.


From the "Tools" menu, select iSCSI Initiator. It is also possible to start this application from the Windows search tool by searching for "iSCSI Initiator". As shown below, when running it for the first time, Microsoft's iSCSI service may not be running. If not, start it up!

There are many options for configuring the iSCSI Initiator, but for demonstration purposes we’ll cover the basic case. In the Target box, enter the IP address of the iSCSI Target machine and click on the Quick Connect button.


A pop-up window should appear showing the IQN for the iSCSI Target service, and it should also state that the Login was successful.


After closing out the pop-up window, the target should now be in the Discovered targets area of the Targets tab.


Next go to the Volumes and Devices tab. Unless you know the exact mount point for the iSCSI volume, the best bet is to click the Auto Configure button which will get the data from the iSCSI Target, as shown below.


I bet you wouldn’t have memorized that!  Both LUN 0 and LUN 1 are recognized by Windows. Press OK to exit out of the iSCSI Initiator application. Next, open up the Disk Management application on the Windows Server.


Notice that two new 5 GB disks are present. The tricky part here is that I am not sure which is LUN 0 and which is LUN 1. My best guess is that the disk recognized as a healthy primary partition is LUN 0, which contains a GPT label and is UFS-formatted; the unrecognized disk must then be LUN 1, which was not modified by the FreeBSD iSCSI Initiator. In reality I would deploy two iSCSI portal groups, one for the FreeBSD iSCSI Initiators and one for Windows iSCSI Initiators, and perhaps a third portal group for shared volumes.

For the unrecognized volume, create an MBR partition with the Disk Management tool, and then create a FAT32 filesystem on this disk as well. I decided to name the partition ISCSI_BSD. As shown below, on this Windows Server the E: drive is now LUN 1, or /dev/ada1 back on my FreeBSD iSCSI Target machine.


The iSCSI drive shows up as a regular drive in Windows Explorer, as shown below.


Inside I created a special message for viewing from the FreeBSD side:


Finally, it is possible to verify the Windows access using the FreeBSD iSCSI Initiator. Reload the iSCSI Target session data, and now /dev/da2 is available on the FreeBSD Initiator. Even nicer, the FAT32 partitions are recognized by FreeBSD–less work to do! On Windows Server an MBR partition was created, which shows up as /dev/da2p1 in FreeBSD, and the actual FAT32 data partition is /dev/da2p2.

root@bsdinitiator:/iscsi_win_edrive # iscsictl -R -t iqn.2017-02.lab.testing:basictarget
root@bsdinitiator:/iscsi_win_edrive # iscsictl -A -p -t iqn.2017-02.lab.testing:basictarget

root@bsdinitiator:~ # ls -l /dev/da2*
crw-r----- 1 root operator 0x72 Mar 4 22:32 /dev/da2
crw-r----- 1 root operator 0x77 Mar 4 22:32 /dev/da2p1
crw-r----- 1 root operator 0x78 Mar 4 22:32 /dev/da2p2
root@bsdinitiator:~ # mount_msdosfs /dev/da2p2 /iscsi_win_edrive
root@bsdinitiator:~ # cd /iscsi_win_edrive/
root@bsdinitiator:/iscsi_win_edrive # ls
$RECYCLE.BIN hello.txt
System Volume Information
root@bsdinitiator:/iscsi_win_edrive # cat hello.txt
Hello FreeBSD! This is Windows Server!
I made your /dev/ada1 into a FAT32 partition.
I call it E: Drive. Thank you!

It works! I can see the message from Windows land.


NAS Firmware Update – M.616

Synology got back to me and said that a fix for my issue has been identified and will be released in the DSM v6.1 bundle. The current baseline is DSM v6.0.2, and v6.1 is available as a beta. Since I am trying to stabilize this system, I did not want to run a beta version of the DSM operating system. Instead, Synology provided me with updated Intel firmware, version M.616.

Installing the firmware seemed to help: the system installed the firmware and then rebooted without my needing to manually power-cycle it. Manual reboot and shutdown also worked. This time the system firmware also appears to have been updated:


Fingers crossed now! Hopefully this will be the end of the power management issues. The next test will come when there is an update to the DSM software: will the system reboot successfully after updating?

More Synology hang-ups

I had some downtime this morning, so I logged into my DS216+ web UI to check on things. The device immediately woke up upon entering its IP address into my web browser, which caused me to let out a sigh of relief, thinking back to the last time I tried. My spouse has been heavily using the device lately, moving all sorts of photos onto the unit or to her PC. As a file store I really am happy with this NAS; it works seamlessly with her Windows computer and her iPhone. I mapped the photo directory as a shared drive on her Windows computer and all has been well. Yet somehow I knew it was still too early to declare victory.

Sure enough, there was a DSM operating system software update. DSM 6.0.2-8451 Update 6 was available, so I went ahead and chose to update. Unfortunately, the NAS hung when trying to reboot! Again, power management issues! The status and network LEDs were out, the HDD indicator lamps were solid, and the dreaded dual blue LEDs by the power button were flashing, indicating an error state. Once again I had to go over to the device and physically remove power.

I verified that if I manually request a reboot or shutdown, the NAS does come back up. At this point the only issue seems to be with rebooting after updating the DSM operating system. It is a minor nuisance at this time, but I am going to put in a trouble ticket to see if there is a solution. Perhaps the fix is Intel firmware version M.615, which Synology seems so hesitant to offer to the user community? What do they know about M.615 that makes them so hesitant?

More Synology Woes

My issues with power management on the Synology DS216+ appear to continue. After my last trouble ticket was opened with Synology, they remotely installed a BIOS patch. I am not positive about this, because from what I can tell the BIOS version and date have not changed. Regardless, this patch did fix the issue with the device hanging on shutdown and reboot. The NAS now shuts down completely, and it comes back online after a reboot rather than hanging.

To my surprise, this morning the NAS appeared to be stuck in a sleep state. All LEDs were off except for the Status light, which was slowly blinking green, the low-power sleep mode indicator. I was not able to access the device via the Web GUI or the Synology Assistant application, which usually means trouble. I even tried ssh and ping, but there was no response. I pressed the power button to wake it up, with no change. Eventually I had to hold the power button down for a few seconds to initiate a hard reboot. This is an improvement, though! I didn't have to physically remove power from the back of the device.

After the reboot, I was still not able to access the Web GUI. Synology Assistant did find the device at a new IP address, however. It appears my FiOS modem/router decided to move it off of the IP address the NAS had been using for months. Thanks, Verizon. This just underscores my need to work on the home network when I get some time off from grad school and work later this month.

The IP address change explains why ssh, ping, and the Web GUI were unresponsive, but it does not explain why Synology Assistant could not find my NAS. Unfortunately, there were no clues in the notification log. I had disabled automatic updates to the system, so I just had notifications letting me know there were some DSM software updates available.

I installed the latest OS patches and package updates. For now I suppose I just need to monitor the behavior of the device. I hope hanging on sleep will not become my next issue to contend with!

And I will not be happy if the answer is to install the M.615 patch…I am still on M.613 after requesting M.615…

More Synology troubleshooting

So I enabled ssh access to my NAS and decided to poke around a bit. Yesterday I opened a ticket with Synology, and they got back to me in a little over 24 hours, which I don't consider too bad considering there is no service contract. The Synology support agent mentioned that there has been a shutdown/reboot issue with the DS216+ and the Intel BIOS. The agent did not provide me with any commands to run, requesting remote access instead. That got me wondering how I might find out what the BIOS version is myself.

On the Synology web forum I found the commands for viewing the device's BIOS version information. After connecting with an admin-group account via SSH and running "sudo -i", the commands below gave me all of the info I was curious about.
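The screenshot of that session is not reproduced here, but a likely way to read the BIOS details is dmidecode, which dumps the system's SMBIOS tables. Whether the dmidecode binary ships on DSM is an assumption on my part, and it must be run as root:

```shell
# Print BIOS vendor, version, and release date from the SMBIOS tables
dmidecode -t bios | grep -E 'Vendor|Version|Release Date'
```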


My NAS has BIOS version M.613, and based on my previous research, BIOS version M.615 seems to have the fix that corrects the shutdown/reboot issue.

Hopefully they send me the BIOS update in the next message on the trouble ticket. I’d like to get beyond this problem and check it off of the to-do list!

Synology NAS shutdown problems

As I mentioned in my past post about Synology and UPS, I had read that some UPS devices will periodically report powering down when going through a self-test. Did I experience that issue yesterday? I woke up to find that my NAS had powered down. Everything else was powered on, including the FiOS modem, which is also connected to the UPS. I restarted the NAS and checked the event log for clues.


There are no notifications about the UPS on the 20th, except for a message that the NAS connected with the UPS after it was restarted, which is normal. Nothing else in the house indicates that the power went down either, so I do not believe the issue is with the UPS. I then decided to check the settings for automatic updates.


A clue! The device updates on Sundays (and Wednesdays) at 05:05 in the morning. This matches what I am seeing in the notification log too, with the update starting at 05:06 after downloading. This is clearly a Synology issue and not a UPS issue.

It looks like the Synology DSM operating system updated itself, and I can only assume that after updating it tried to reboot but hung during the shutdown process. It did this to me before when I manually updated the system, but I considered it a fluke event. After searching the support forums, I found this is an issue plaguing the DS216+ product line. It appears that updating the BIOS on the device to version M.616 is working for some users, though.

To confirm the issue is with shutdown on the NAS, I decided to manually shut down the NAS using the web GUI. Sure enough, the NAS hung during the shutdown process! I had to physically remove power from the NAS and re-apply it to get it back up. I recall having to do this during my initial auto-shutdown test with the UPS last week, too. Three times now…starting to smell like a bug to me.

As per guidance on the support forum, a trouble ticket has been submitted with Synology. Hopefully in a few days I will get a response. Synology support is very trying in that it takes days to get an initial response to a ticket.

I have mixed feelings about my Synology NAS at this point. It is very easy to use, there are iPhone/Android apps for accessing the NAS for my spouse, and in general I don’t have to mess with the NAS. On the negative side though, Synology support is very vexing with their response times.  At this point I think I am still happy with the device, and hoping that a BIOS update will make this problem go away. For now I have disabled auto-updates and will just manually update as needed.

It is very disappointing to read about this issue affecting so many people and Synology not being proactive and trying to get a fix out to the user base via the support website. There does not appear to be a way to download the BIOS update from the support website. I can only assume that they have still not identified the root cause of this issue, which is why they don’t make the BIOS update available to their user base.

Synology and UPS

By and large I have been happy with the Synology purchase for the home NAS. While I would have had more fun with FreeNAS, my spouse is probably happy I went with the solution that does not require my frequent tinkering or attention. It is also nice to have the Synology apps for iPhone/Android so that we do not have to deal with uploading data to the magic cloud.

I did have my first scare with the NAS though, entirely my own fault! At the end of summer there was a power outage that took down everything in the house, including the NAS. I did not have it hooked up to any sort of UPS device at the time, so it went down hard. I could not access it over the network, so I went to look at it, and the Synology NAS just sat there flashing two blue status lights. After doing some Google research I was concerned they might be the death-indicator lights. I ended up having to pull out the disk drives, remove physical power from the NAS, and then reboot it. After some time the device recovered; I gracefully shut it down, replaced the disk drives, and rebooted. It came back up with no data lost, and what a relief!

After that scare I decided that I really ought to get a basic UPS for the NAS. I did my homework and found that Synology supports UPS standards that communicate the status of the power supply from the UPS. Synology's website has a compatibility list with rather expensive devices and even some dated ones. However, more Google research said that any standards-compliant device should work. Of course, being primarily a Linux/Mac user, one could excuse me for balking at that idea when it comes to consumer-grade computer equipment!

I really did not need a fancy UPS with lots of features. I just wanted a simple UPS device that will inform the Synology NAS when it switches over to battery. Ideally the UPS would use USB, since Synology devices do not have EIA-232 serial ports. I am not looking for a device that will keep a computer running for some amount of time; I simply want to keep the NAS and perhaps a network switch up long enough to shut down safely.

After reading the Wirecutter recommendation for the CyberPower CP685AVR UPS, I searched around some more with Google. I did not find any definitive information about Synology NAS devices working with this UPS model, but I decided to take the plunge and see if it would work, since my local Microcenter store carries the UPS in stock. I really like the size of the UPS; it is about the size of a small shoe box or a Cisco Press textbook. It also doesn't make any noise, which is always nice!

Safe Shutdown Test

Before hooking up the NAS to the UPS for power, I decided to try an experiment to make sure the UPS would work with my NAS.

  1. Keep the Synology NAS powered by A/C power, but connect the UPS USB cable to the back of the NAS, and power on the UPS.
  2. Switch off the UPS outputs only and see how the NAS handles the event.
  3. Switch on the UPS outputs, and then after a minute remove A/C power from the UPS.
  4. Wait for the safe shutdown timer, and then confirm the NAS safely powers down.



Configuration was probably the easiest thing I have ever seen. In the Synology NAS control panel, in the Hardware and Power section, one simply navigates to the UPS tab. Place a check mark in the Enable UPS support box, and then set the safe mode timer; for now I have set this timer to 5 minutes. There is also a "Device Information" button at the bottom of the tab, and as shown in the above screen capture, the NAS automatically detects the UPS.

Test Results

Starting at the bottom of the screen shot and working up through the messages, all of the power events are stored in the NAS log file. After enabling UPS support, the NAS logs that the service started and identifies a UPS on the USB port. The message "Local UPS was plugged out" refers to step 2 of my simple test, where I kept the UPS plugged into A/C power but simply disabled the UPS outputs. After 5+ minutes, the NAS was still online and had not shut down. Next, I re-enabled the power outputs on the UPS and, after a minute or two, removed A/C power from the UPS. The NAS logs that the UPS has gone to "battery," and the safe shutdown timer begins.


After the safe shutdown timer expires, the NAS shuts down the Cloud Station service and logs a message stating it will be going into Safe Shutdown. Keep in mind that at this point the NAS is still plugged into A/C power. After the NAS shut down, the Status LED was off, and I was no longer able to reach the NAS via its web interface. I reconnected the UPS to A/C power, and after a minute or two the blue lights on the NAS began to flicker. A few minutes later, the NAS was back online and I was able to access the web interface. Success!


The CyberPower CP685AVR UPS and Synology NAS work just as I had hoped; I could not be happier. I have read that some UPS devices will initiate periodic self-checks that can cause some NAS devices to think power has been lost. I will have to keep an eye out for this, and worst case I may need to extend the safe shutdown timer from 5 minutes to something longer. However, this UPS claims up to 70 minutes of runtime on an iMac G4, and I am sure that this NAS with its low-power Intel CPU will not exceed the power draw of an iMac G4. As such, there appears to be plenty of margin to work with.