NAS Installation - Synology DSM 6.1 (Hermes)

Hua Hin cloud server (2017), running on a Synology DS716+.

Specification

  • Intel Celeron N3160 quad-core CPU
  • 2× HGST Deskstar NAS 6 TB HDDs
  • 2 GB RAM

Setup

  1. Find the DiskStation through http://find.synology.com.
  2. Create a volume in Storage Manager.
  3. Configure network settings in Control Panel –> Network. Select the second LAN interface and click Create Bond.
  4. Enable user home service in Control Panel –> User –> Advanced.
  5. Set disk full warning setting in Control Panel –> Notification –> Advanced –> Internal Storage –> Volume Full.
  6. Enable the widgets you want to use on your home screen.
  7. To set up SSL, import server.key, domain.crt, and domain.intermediate.crt through Control Panel –> Security –> Certificate –> Add. Right-click the new certificate: “Edit” makes it the default, “Configure” assigns it to services. A quick consistency check of the certificate files is sketched after this list. For detailed instructions see Secure your Synology NAS, install a SSL certificate and How to Move or Copy an SSL Certificate from one Server to Another.
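
Before importing, it is worth verifying that the private key actually matches the certificate. A minimal sketch with openssl, assuming an RSA key and using the file names from step 7; if the two digests are identical, key and certificate belong together:

    # openssl x509 -noout -modulus -in domain.crt | openssl md5
    # openssl rsa -noout -modulus -in server.key | openssl md5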

Replace Hard Disks

DSM 6

  1. Shut down the NAS and replace the first disk. Disks are numbered from left to right.
  2. Boot the NAS and add the new disk to the RAID. Rebuilding the RAID takes about 20 hours; progress can be watched from a shell, as sketched after this list.
  3. Repeat steps 1 and 2 for the other disk.
  4. Expand the RAID volume if the new disks have a higher capacity than the replaced ones.
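
Rebuild progress can also be followed from a root shell (see the Command Line section below); /proc/mdstat reports the resync percentage and the estimated time to finish. A minimal sketch that prints the status every ten minutes until the recovery is done:

    # while grep -q recovery /proc/mdstat; do cat /proc/mdstat; sleep 600; done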

Command Line

Since DSM 6 the Synology NAS features a Linux kernel, so RAID management can also be done on the command line. Because the DiskStation 212+ and 213+ do not support HGST Deskstar 10 TB drives, I started looking into the command line to find a way to make them work.
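
All of the commands below need a root shell. SSH can be enabled under Control Panel –> Terminal & SNMP; since DSM 6 disables direct root login over SSH, log in as an administrator and switch to root. A minimal sketch, assuming an administrator account named admin:

    $ ssh admin@hermes
    $ sudo -i

Here is what I found: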

  1. I replaced a failed HGST 6 TB with a new HGST 10 TB and rebuilt the RAID through the DSM GUI.
  2. I then replaced the other HGST 6 TB with a new HGST 10 TB and rebuilt the RAID through the DSM GUI.
  3. Extending the RAID volume through the GUI did not work (a command-line approach is sketched after this list).
  4. After rebooting the NAS, the data volume RAID was degraded. Interestingly, the other two RAIDs (boot, swap) were not:
    # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid1 sda3[2]
          5855800416 blocks super 1.2 [2/1] [U_]
    md1 : active raid1 sda2[0] sdb2[1]
          2097088 blocks [2/2] [UU]
    md0 : active raid1 sda1[0] sdb1[1]
          2490176 blocks [2/2] [UU]
    unused devices: <none>
  5. I then rebuilt the RAID from the command line and created an mdadm configuration file:
    # mdadm --add /dev/md2 /dev/sdb3
    # mdadm --detail --scan > /etc/mdadm.conf
  6. The NAS now boots without problems.
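
What remains is growing the rebuilt array to the new 10 TB capacity, which the GUI refused to do (step 3). A sketch of doing this from the shell, assuming the data volume is an ext4 filesystem sitting directly on /dev/md2; it only works if the data partitions (sda3, sdb3) actually span the larger disks, otherwise the partition table has to be extended first:

    # mdadm --grow /dev/md2 --size=max
    # resize2fs /dev/md2

On a btrfs volume the second command would instead be btrfs filesystem resize max /volume1.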

Services and Package Installation