Applied FreeBSD: Fibre Channel Target

I wanted to learn about iSCSI and Fibre Channel (FC), but I do not have access to any enterprise hardware to learn on. So I decided to build a basic Fibre Channel system myself.

The simplest Fibre Channel network is a point-to-point system with just two nodes. The node that serves a storage resource over the network is the Target, and the consumer of the storage resource is the Initiator. I bought two QLogic 2460 4 Gb/s PCI Express Fibre Channel Host Bus Adapters (HBAs) and 15 ft. of LC-type fibre cable from eBay. I downloaded the latest firmware for the devices and updated them using the EFI loader on my workstation. I left one card in my Linux workstation and put the other in the FreeBSD box.

I have a SATA hard drive in the FreeBSD box that is not currently being used, so the goal will be to serve it over Fibre Channel for the Linux workstation to use. There are not many HOWTOs, guides, or other general information about Fibre Channel and FreeBSD, but I found Justin Holcomb’s blog to be extremely helpful in getting started.

Configuring a FreeBSD Fibre Channel Target

First of all, /dev/ada1 is the disk drive I want to serve over FC.

root@freebsd:/usr/home/bryan # camcontrol devlist
<TOSHIBA HDWD110 MS2OA8J0>         at scbus0 target 0 lun 0 (ada0,pass0)
<TOSHIBA HDWD110 MS2OA8J0>         at scbus1 target 0 lun 0 (ada1,pass1)
<hp CDDVDW SH-216ALN HA5A>         at scbus2 target 0 lun 0 (cd0,pass2)
<AHCI SGPIO Enclosure 1.00 0001>   at scbus4 target 0 lun 0 (ses0,pass3)

Also, during boot, FreeBSD recognizes my FC card, although the driver initially fails to attach because no run-time firmware has been loaded onto it yet (from dmesg output):

isp0: <Qlogic ISP 2432 PCI FC-AL Adapter> port 0xe000-0xe0ff mem 0xfe440000-0xfe443fff irq 16 at device 0.0 on pci1
isp0: Polled Mailbox Command (0x8) Timeout (100000us) (isp_reset:953)
isp0: Mailbox Command ‘ABOUT FIRMWARE’ failed (TIMEOUT)
isp0: isp_reinit: cannot reset card
device_attach: isp0 attach returned 6

FC is supported by the system, but in order for the system to act as an FC Target, the kernel needs to be compiled with the ISP_TARGET_MODE option (as well as the ispfw device, which loads run-time firmware onto the FC HBA). I followed the steps from Justin’s blog:

# mkdir /root/kernels
# touch /root/kernels/FCTARGET
# vim /root/kernels/FCTARGET

Add the following to the FCTARGET kernel configuration file:

include GENERIC
ident FCTARGET
device          ispfw                   #Firmware for QLogic HBAs
options         ISP_TARGET_MODE         #Required for Target Mode

Now build and install the kernel.

# ln -s /root/kernels/FCTARGET /usr/src/sys/amd64/conf/FCTARGET
# cd /usr/src
# make buildkernel KERNCONF=FCTARGET
# make installkernel KERNCONF=FCTARGET
# reboot
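
Once the machine is back up, it is worth confirming that the custom kernel is the one running; on FreeBSD, uname -i prints the kernel ident, which should match the configuration name:

# uname -i
FCTARGET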

After rebooting, the system is ready to be configured in FC Target mode. Modify /etc/rc.conf so that ctld_enable="YES" is added.

# sysrc ctld_enable=YES
ctld_enable: NO -> YES

Next, the CAM target layer daemon (ctld) needs a configuration file that will describe what resources to serve. The config below is the most basic configuration, which has no authentication and simply serves the block device over FC port isp0.

# cat /etc/ctl.conf
target ???????????????? {
	alias toshiba-drive
	auth-group no-authentication
	port isp0
	lun 0 {
		backend block
		# Disk serial number is 671ABKPFS
		device-id 671ABKPFS
		path /dev/ada1
	}
}

The "????????????????" placeholder is there because I do not yet know the World Wide Name (WWN) for the FC link. The ctladm command shows the WWN when querying for CAM target layer ports.

# ctladm port -l
Port Online Frontend Name     pp vp
0    NO     camsim   camsim   0  0  naa.50000006f1d1ab01
1    YES    ioctl    ioctl    0  0
2    YES    tpc      tpc      0  0
3    NO     camtgt   isp0     0  0

From the above output, it is clear that isp0 is not online. After reading through the ctladm man page, I found the commands to bring the port online.

# ctladm port -o on isp0
Front End Ports enabled
# ctladm port -l
Port Online Frontend Name     pp vp
0    YES    camsim   camsim   0  0  naa.500000012e445301
1    YES    ioctl    ioctl    0  0
2    YES    tpc      tpc      0  0
3    YES    camtgt   isp0     0  0  naa.2100001b320f1d96

Now the port has a WWN: naa.2100001b320f1d96. The complete ctl.conf file is updated:

# cat /etc/ctl.conf
target naa.2100001b320f1d96 {
	alias toshiba-drive
	auth-group no-authentication
	port isp0
	lun 0 {
		backend block
		# Disk serial number is 671ABKPFS
		device-id 671ABKPFS
		path /dev/ada1
	}
}

Now it is time to start the CAM target layer (CTL) service on the system. Note that if there are any syntax errors in the ctl.conf file, the service will not start and will provide some clues as to where the errors are.

# service ctld start
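
If the service refuses to start, the configuration file can also be checked by hand. I believe ctld supports a test flag for exactly this, though check ctld(8) on your release, as I am not certain it exists in every version:

# ctld -t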

The service starts without any hiccups and one can now query CTL for status.

# ctladm devlist -v
LUN Backend       Size (Blocks)   BS Serial Number    Device ID
0 block            1953525168  512 MYSERIAL   0     671ABKPFS
lun_type=0
num_threads=14
file=/dev/ada1

Now it is time to configure the Linux system as an FC initiator to test FC link and CTL target service.

Configuring a Linux Fibre Channel Initiator

Good news! This just worked out of the box, with no configuration needed. First of all, the dmesg output shows that this RHEL 7 workstation has loaded the driver for the QLogic card.

[    1.303534] qla2xxx [0000:04:00.0]-00fc:1: ISP2432: PCIe (2.5GT/s x4) @ 0000:04:00.0 hdma+ host#=1 fw=8.06.00 (9496).

Further down in the output, there is more activity with the qla2xxx driver:

[281614.649436] qla2xxx [0000:04:00.0]-500a:1: LOOP UP detected (4 Gbps).
[283199.887749] qla2xxx [0000:04:00.0]-500b:1: LOOP DOWN detected (2 5 0 0).
[283202.700162] qla2xxx [0000:04:00.0]-500a:1: LOOP UP detected (4 Gbps).
[283202.711117] scsi 1:0:0:0: Direct-Access     FREEBSD  CTLDISK          0001 PQ: 0 ANSI: 7
[283202.721738] scsi 1:0:0:0: alua: supports implicit TPGS
[283202.722115] scsi 1:0:0:0: alua: port group 01 rel port 03
[283202.722342] scsi 1:0:0:0: alua: rtpg failed with 8000002
[283202.722343] scsi 1:0:0:0: alua: rtpg sense code 06/29/01
[283202.722344] scsi 1:0:0:0: alua: not attached
[283202.722507] sd 1:0:0:0: Attached scsi generic sg3 type 0
[283202.722732] sd 1:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[283202.722734] sd 1:0:0:0: [sdc] 4096-byte physical blocks
[283202.723335] sd 1:0:0:0: [sdc] Write Protect is off
[283202.723336] sd 1:0:0:0: [sdc] Mode Sense: 7f 00 10 08
[283202.723483] sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
[283202.727281] sd 1:0:0:0: [sdc] Attached SCSI disk

The FC loop has come up, matching the port that was explicitly brought online on the FreeBSD server. At this point, the two ends negotiate the FC protocol and the remote drive is handed over to the SCSI subsystem, where it becomes /dev/sdc.
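
As a quick sanity check, the new device can be listed with lsscsi (assuming the lsscsi package is installed); the output below is illustrative rather than captured:

# lsscsi
[1:0:0:0]    disk    FREEBSD  CTLDISK          0001  /dev/sdc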

That is all there is to it; fdisk or parted can now be used on the Linux workstation to partition the remote disk and put a filesystem on it.
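
For example, a minimal sketch with parted that turns the whole disk into a single ext4 filesystem (the filesystem choice and mount point here are arbitrary):

# parted /dev/sdc mklabel gpt
# parted -a optimal /dev/sdc mkpart primary ext4 0% 100%
# mkfs.ext4 /dev/sdc1
# mount /dev/sdc1 /mnt

For fun, there is a useful utility called systool that can be used for learning about FC devices on the system.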

# yum install sysfsutils -y
# systool -c fc_host -v host1
Class = "fc_host"

  Class Device = "host1"
  Class Device path = "/sys/devices/pci0000:00/0000:00:1b.4/0000:04:00.0/host1/fc_host/host1"
    dev_loss_tmo        = "30"
    fabric_name         = "0xffffffffffffffff"
    issue_lip           = <store method only>
    max_npiv_vports     = "127"
    node_name           = "0x2000001b320f368f"
    npiv_vports_inuse   = "0"
    port_id             = "0x0000e8"
    port_name           = "0x2100001b320f368f"
    port_state          = "Online"
    port_type           = "LPort (private loop)"
    speed               = "4 Gbit"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit"
    symbolic_name       = "QLE2460 FW:v8.06.00 DVR:v8.07.00.38.07.4-k1"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              =
    vport_create        = <store method only>
    vport_delete        = <store method only>

  Device = "host1"
  Device path = "/sys/devices/pci0000:00/0000:00:1b.4/0000:04:00.0/host1"
    fw_dump             =
    nvram               = "ISP "
    optrom_ctl          = <store method only>
    optrom              =
    reset               = <store method only>
    sfp                 = ""
    uevent              = "DEVTYPE=scsi_host"
    vpd                 = "0"

Conclusion

Setting up Fibre Channel is even easier than iSCSI, but it does require a kernel recompile on the FreeBSD server. Furthermore, FC requires HBAs on each machine, as well as fibre infrastructure such as switches and fibre cable.


Applied FreeBSD: Basic iSCSI

iSCSI is often touted as a low-cost replacement for Fibre Channel (FC) Storage Area Networks (SANs). Instead of having to set up a separate Fibre Channel network for the SAN, or invest in the infrastructure to run Fibre Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network can be used for storage as well. In practice, to get high levels of performance, it is advised that system designers consider iSCSI Host Bus Adapters (HBAs) for each participating iSCSI host, and that the network at a minimum have a separate VLAN for iSCSI traffic, or more ideally, a separate physical network.

My disclaimer: this article does not cover any of the above performance enhancements! The systems in this article are set up and configured in a VMware Workstation virtualized environment so that I don’t have to physically procure all of the hardware just to learn about iSCSI.

This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, and failover controllers, are not covered.

Now to get started…

iSCSI Target Test Setup

The disk drive to be shared on the network is /dev/ada0, a 5 GB SATA disk created in VMware and attached to the system before starting it up. With FreeBSD, iSCSI is controlled by the ctld daemon, so this needs to be enabled on the system. While at it, why not go ahead and enable it at boot time too?

root@bsdtarget:~ # echo 'ctld_enable="YES"' >> /etc/rc.conf
root@bsdtarget:~ # service ctld start
Starting ctld.

The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it.

root@bsdtarget:/dev # less /etc/ctl.conf
auth-group test {
	chap "iscsitest" "bsdforthewin"
}

portal-group pg0 {
	discovery-auth-group no-authentication
	listen 192.168.22.128
}

target iqn.2017-02.lab.testing:basictarget {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /dev/ada0
		size 5G
	}
	lun 1 {
		path /dev/ada1
		size 5G
	}
}

For this setup, LUN 0 will be used by a FreeBSD iSCSI Initiator. I have LUN 1 configured for experimenting with Windows Server at a later time. Before starting ctld, it is a good idea to make sure that the /etc/ctl.conf file is not readable by all users (ctld will complain). At a later point it might be necessary to add iSCSI authentication for the sessions, and it would not be wise to have all users able to read the authentication secret.

root@bsdtarget:~ # chmod 640 /etc/ctl.conf
root@bsdtarget:~ # service ctld start

If there are any syntax errors or warnings, ctld will complain about it on the console. The ctladm tool can be used to query the system for more information, for example:

root@bsdtarget:/dev # ctladm lunlist
(7:0:0/0): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
(7:0:1/1): <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
root@bsdtarget:/dev # ctladm devlist
LUN Backend       Size (Blocks)   BS Serial Number    Device ID
  0 block              10485760  512 MYSERIAL 0       MYDEVID 0
  1 block              10485760  512 MYSERIAL 1       MYDEVID 1


That’s really it for the iSCSI Target configuration. The real effort is in setting up the /etc/ctl.conf file. On a real production system, there would be more configuration for the exported disks, such as using ZFS volumes, RAID-1 mirroring, et cetera.
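
As a sketch of the ZFS case (the pool name tank here is hypothetical), a ZFS volume can back a LUN instead of a raw disk:

root@bsdtarget:~ # zfs create -V 5G tank/iscsitest

The LUN's path in /etc/ctl.conf would then be /dev/zvol/tank/iscsitest instead of /dev/ada0.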

iSCSI Initiator Test Setup

In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. It doesn’t hurt to go ahead and add the instruction to /etc/rc.conf so that iscsid is started when the system comes up.

root@bsdinitiator:~ # echo 'iscsid_enable="YES"' >> /etc/rc.conf
root@bsdinitiator:~ # service iscsid start
Starting iscsid.

Next, the iSCSI Initiator can manually connect to the iSCSI Target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add it to the /etc/iscsi.conf file (see the man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session:

root@bsdinitiator:~ # iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget

The command returns silently, but a look at /var/log/messages shows that the remote disk was discovered and is now recognized by the Initiator as /dev/da1.

da1 at iscsi3 bus 0 scbus34 target 0 lun 0
da1: <FREEBSD CTLDISK 0001> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number MYSERIAL 0
da1: 150.000MB/s transfers
da1: Command Queueing enabled
da1: 5120MB (10485760 512 byte sectors)

The iSCSI session connection status can also be verified with iscsictl:

root@bsdinitiator:~ # iscsictl -L
Target name                          Target portal     State
iqn.2017-02.lab.testing:basictarget  192.168.22.128    Connected: da1
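
Once the manual session checks out, the same parameters can be made persistent in /etc/iscsi.conf. A minimal sketch, where the nickname basictarget is arbitrary:

basictarget {
	TargetAddress = 192.168.22.128
	TargetName    = iqn.2017-02.lab.testing:basictarget
}

With that in place, iscsictl -An basictarget attaches the session by nickname, and iscsictl -Aa attaches every target listed in the file.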

Once the disk is recognized by the iSCSI Initiator system, it can be configured for use on the Initiator like a regular SCSI/SATA disk attached to the system physically. The commands below create a partition and UFS filesystem on /dev/da1.

root@bsdinitiator:~ # gpart create -s gpt /dev/da1
root@bsdinitiator:~ # gpart add -t freebsd-ufs -l 1m /dev/da1
root@bsdinitiator:~ # newfs -U /dev/da1p1
/dev/da1p1: 5120.0MB (10485688 sectors) block size 32768, fragment size 4096
using 9 cylinder groups of 626.09MB, 20035 blks, 80256 inodes.
with soft updates
super-block backups (for fsck_ffs -b #) at:
192, 1282432, 2564672, 3846912, 5129152, 6411392, 7693632, 8975872, 10258112
root@bsdinitiator:~ # mkdir /iscsi_share
root@bsdinitiator:~ # mount -t ufs -o rw /dev/da1p1 /iscsi_share

If there is already a filesystem resident on the device, it only needs to be mounted after the iSCSI session is connected. Back on the iSCSI Target machine, it is possible to see all of the connected iSCSI Initiators:

root@bsdtarget:/dev # ctladm islist
ID  Portal           Initiator name                         Target name
6   192.168.22.136   iqn.1994-09.org.freebsd:bsdinitiator   iqn.2017-02.lab.testing:basictarget

Finally, if for some reason it is necessary to disconnect the system, unmount the filesystem and use iscsictl to disconnect the iSCSI session.

root@bsdinitiator:~ # umount /iscsi_share
root@bsdinitiator:~ # iscsictl -R -t iqn.2017-02.lab.testing:basictarget

There is much more to explore with iSCSI; this is just the very beginning, but it serves as a model and a starting point for this work. More to come in the future!

Update: Windows iSCSI Initiator

I am not a very savvy Windows user, and I am very new to Windows Server. I have just started to learn some of the basics. As such, I thought I’d try setting up a Windows Server 2016 host as an iSCSI Initiator. I won’t go into much detail other than what is required for setting up the iSCSI parts. Go ahead and fire up Server Manager.

[Screenshot: initiator1]

From the “Tools” menu, select iSCSI Initiator. It is also possible to start this application from the Windows search tool by searching for “iSCSI Initiator”. As shown below, when running it for the first time, Microsoft’s iSCSI service may not be running. If not, start it up!

[Screenshot: initiator2]

There are many options for configuring the iSCSI Initiator, but for demonstration purposes we’ll cover the basic case. In the Target box, enter the IP address of the iSCSI Target machine and click on the Quick Connect button.

[Screenshot: initiator4]

A pop-up window should appear showing the IQN for the iSCSI Target service, and it should also state somewhere that the Login was successful.

[Screenshot: initiator6]

After closing out the pop-up window, the target should now be in the Discovered targets area of the Targets tab.

[Screenshot: initiator5]

Next, go to the Volumes and Devices tab. Unless you know the exact mount point for the iSCSI volume, the best bet is to click the Auto Configure button, which will get the data from the iSCSI Target, as shown below.

[Screenshot: initiator7]

I bet you wouldn’t have memorized that!  Both LUN 0 and LUN 1 are recognized by Windows. Press OK to exit out of the iSCSI Initiator application. Next, open up the Disk Management application on the Windows Server.

[Screenshot: initiator8]

Notice that two new 5 GB disks are present. The tricky part here is that I am not sure which is LUN 0 and which is LUN 1. My best guess is that the disk recognized as a healthy primary partition is LUN 0, which contains a GPT label and is UFS-formatted. Thus the unrecognized disk must be LUN 1, which was not modified by the FreeBSD iSCSI Initiator. In reality, I would deploy two iSCSI portal groups, one for the FreeBSD iSCSI Initiators and one for Windows iSCSI Initiators, and perhaps a third portal group for shared volumes, as sketched below.
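
A sketch of what that separation might look like in /etc/ctl.conf; the portal-group and target names here are hypothetical, and the two groups listen on different ports of the same address:

portal-group pg_bsd {
	discovery-auth-group no-authentication
	listen 192.168.22.128:3260
}

portal-group pg_win {
	discovery-auth-group no-authentication
	listen 192.168.22.128:3261
}

target iqn.2017-02.lab.testing:bsdtarget {
	auth-group no-authentication
	portal-group pg_bsd
	lun 0 {
		path /dev/ada0
		size 5G
	}
}

target iqn.2017-02.lab.testing:wintarget {
	auth-group no-authentication
	portal-group pg_win
	lun 0 {
		path /dev/ada1
		size 5G
	}
}

The Windows side would then point its Quick Connect at port 3261 instead of the default 3260.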

For the unrecognized volume, create an MBR partition with the Disk Management tool, and then format it as FAT32. I decided to name the partition ISCSI_BSD. As shown below, on this Windows Server, the E: drive is now LUN 1, or /dev/ada1 back on my FreeBSD iSCSI Target machine.

[Screenshot: initiator9]

The iSCSI drive shows up as a regular drive in Windows Explorer, as shown below.

[Screenshot: initiator10]

Inside I created a special message for viewing from the FreeBSD side:

[Screenshot: initiator11]

Finally, it is possible to verify the Windows access using the FreeBSD iSCSI Initiator. Reload the iSCSI Target session, and now /dev/da2 is available on the FreeBSD Initiator. Even nicer, FAT32 partitions are recognized by FreeBSD, so there is less work to do! The partition table created on Windows Server shows up as /dev/da2p1 in FreeBSD, and the actual FAT32 data partition is /dev/da2p2.

root@bsdinitiator:/iscsi_win_edrive # iscsictl -R -t iqn.2017-02.lab.testing:basictarget
root@bsdinitiator:/iscsi_win_edrive # iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget

root@bsdinitiator:~ # ls -l /dev/da2*
crw-r-----  1 root  operator  0x72 Mar  4 22:32 /dev/da2
crw-r-----  1 root  operator  0x77 Mar  4 22:32 /dev/da2p1
crw-r-----  1 root  operator  0x78 Mar  4 22:32 /dev/da2p2
root@bsdinitiator:~ # mount_msdosfs /dev/da2p2 /iscsi_win_edrive
root@bsdinitiator:~ # cd /iscsi_win_edrive/
root@bsdinitiator:/iscsi_win_edrive # ls
$RECYCLE.BIN hello.txt
System Volume Information
root@bsdinitiator:/iscsi_win_edrive # cat hello.txt
Hello FreeBSD! This is Windows Server!
I made your /dev/ada1 into a FAT32 partition.
I call it E: Drive. Thank you!

It works! I can see the message from Windows land.

Synology NAS Stable!

I logged into my NAS yesterday and was informed that DSM 6.0.2-8451 Update 7 was available to install. I proceeded with the update, and the NAS started the install. As it neared the 100% mark, I realized that this would be the moment of truth: will I still have these power management issues? After some time, I heard that NAS beep, indicating that it had rebooted. I was pleased so far; I didn’t have to manually power cycle the system! Within a couple of minutes the NAS was back up online and I was able to log in as admin.

Success! It looks like finally the power management issues are behind me. The Intel firmware appears to have fixed the problem.

NAS Firmware Update – M.616

Synology got back to me and said that a fix for my issue has been identified and will be released in the DSM v6.1 bundle. The current baseline is DSM v6.0.2, and v6.1 is available as a beta. Since I’m trying to stabilize this system, I did not want to run a beta version of the DSM operating system. Instead, Synology provided me with an updated Intel firmware, version M.616.

Installing the firmware seemed to help: the system installed the firmware and then rebooted without my needing to manually power cycle it. Manual reboot and shutdown also worked. This time, the system firmware also appears to have been updated:

[Screenshot: m616]

Fingers crossed now! Hopefully this will be the end of the power management issues. The next test will come with the next DSM software update: whether the system can reboot on its own after updating.

More Synology hang-ups

I had some downtime this morning, so I logged into my DS216+ web UI to check on things. The device immediately woke up upon entering its IP address into my web browser, which caused me to let out a sigh of relief, thinking back to the last time I tried. My spouse has been heavily using the device lately, moving all sorts of photos onto the unit or to her PC. As a file store, I really am happy with this NAS; it works seamlessly with her Windows computer and her iPhone. I mapped the photo directory as a shared drive on her Windows computer and all has been well. Yet somehow I knew it was too early still to declare victory.

Sure enough, there was a DSM operating system software update. DSM 6.0.2-8451 Update 6 was available, so I went ahead and chose to update. Unfortunately, the NAS hung when trying to reboot! Again, power management issues! The status and network LEDs were out, the HDD indicator lamps were solid, and the dreaded dual blue LEDs by the power button were flashing, indicating an error state. Once again, I had to go over to the device and physically remove power.

I verified that if I manually request a reboot or shutdown, the NAS does come back up. At this point, the only issue seems to be with rebooting after updating the DSM operating system. It is a minor nuisance at this time, but I am going to put in a trouble ticket to see if there is a solution to this issue. Perhaps Intel firmware version M.615, which Synology seems so hesitant to offer to the user community? What do they know about M.615 that makes them so hesitant?

More Synology Woes

My issues with power management on the Synology DS216+ appear to continue. After my last trouble ticket was opened with Synology, they remotely installed a BIOS patch. I am not positive about this, because from what I can tell the BIOS version and date have not changed. Regardless, this patch did fix the issue with the device hanging on shutdown and reboot. The NAS now completely shuts down, and it does come back online after a reboot rather than hanging.

To my surprise, this morning the NAS device appeared to be stuck in a sleep state. All LEDs were off, except for the Status light, which was slowly blinking green, the low-power sleep mode indicator. I was not able to access the device via the Web GUI or the Synology Assistant application. That usually means trouble. I even tried ssh and ping, but the device remained unresponsive. I tried to press the power button to wake it up, but there was no change. Eventually, I had to hold the power button down for a few seconds to initiate a hard reboot. This is an improvement, though! I didn’t have to physically remove power at the back of the device.

After the reboot, I was still not able to access the Web GUI. Synology Assistant did find the device on a new IP address, however. It appears my FiOS modem/router decided to move it off of the previous IP address that the NAS had been using for months. Thanks, Verizon. This just underscores my need to work on the home network when I get some time off from grad school and work later this month.

The IP address change explains why SSH, ping, and the Web GUI were unresponsive, but it does not explain why Synology Assistant could not find my NAS. Unfortunately, there were no clues in the notification log. I had disabled automatic updates to the system, so I just had notifications letting me know there were some DSM software updates available.

I installed the latest OS patches and package updates. For now, I suppose I just need to monitor and watch the behavior of the device. I hope hanging on sleep will not become my next issue to contend with!

And I will not be happy if the answer is to install the M.615 patch…I am still on M.613 after requesting M.615…

More Synology troubleshooting

So I enabled ssh access to my NAS and decided to poke around a bit. Yesterday I opened a ticket with Synology, and they got back to me in a little over 24 hours, which I don’t consider too bad considering there is no service contract. The Synology support agent mentioned that there has been a shutdown/reboot issue with the DS216+ and the Intel BIOS. The agent did not provide me any commands to run, requesting remote access instead. That got me wondering how I might find out what the BIOS version is myself.

On the Synology web forum I found the commands for viewing the device’s BIOS version information. After connecting with an admin-group account via SSH and running "sudo -i", the commands below give me all of the info I was curious about.
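
For reference, dmidecode can read these fields on most Linux systems, and DSM is Linux under the hood; I am assuming here that the binary is present on the DS216+ build, and this may not match the exact commands from the forum post:

# dmidecode -s bios-version
# dmidecode -s bios-release-date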

[Screenshot: nas_bios_version]

My NAS has BIOS version M.613, and based on my previous research, BIOS version M.615 seems to have the fix that corrects the shutdown/reboot issue.

Hopefully they send me the BIOS update in the next message on the trouble ticket. I’d like to get beyond this problem and check it off of the to-do list!