I’ve been running my own home file server for many years now. I’ve used it for everything from storing old documents and photos to running a Minecraft server for my cousins. With online backup services like Dropbox it’s become less of a necessity but I still like having one around.
My current file server has been aging and is starting to sound a bit wheezy. It was originally built into an old desktop “Mid Tower” ATX sized case. So, time for something smaller and hopefully quieter.
Here is the build as spec’d out on PCPartPicker:
- Case: Lian-Li PC-Q26B Mini ITX Tower Case
- CPU: Intel Core i3-4170 3.7GHz Dual-Core Processor
- Motherboard: ASRock H97M-ITX/AC Mini ITX LGA1150 Motherboard
- Memory: Crucial Ballistix Sport 16GB (2 x 8GB) DDR3-1600 Memory
- Storage: 2x Western Digital Caviar Blue 250GB 3.5″ 7200RPM Internal Hard Drive
- Storage: 3x Western Digital Red 4TB 3.5″ 5900RPM Internal Hard Drive
- Power Supply: SeaSonic X Series 400W 80+ Platinum Certified Fully-Modular Fanless ATX Power Supply
It could be done cheaper but this is a fun project for me so I splurged a little.
After the parts arrived and I’d put everything together it was time to install Debian.
First thing was to build a new live USB installer. I grabbed the standard Debian 8.2 (jessie) iso from my nearest mirror (the OSU Open Source Lab). Then I popped a 4GB thumb drive into my Macbook and ran:
```
diskutil list
diskutil unmountDisk /dev/disk2
sudo dd if=~/Downloads/debian-live-8.2.0-amd64-standard.iso of=/dev/rdisk2 bs=1m
```
My old file server was running RAID 5 for the storage partition but the Operating System was running on a single disk that could fail at any time. If that had happened it would have been a huge hassle to reinstall and reconfigure everything. I wanted to avoid that possibility this time around by installing Debian to a RAID 1 (mirrored) array.
Installing Debian to a RAID 1 array can be done entirely from the built-in installer. I pretty much followed this guide from the Debian wiki to the letter, using `rootfs` for my logical volume name. You can ignore the part about `lilo` at the end.

Note: You can skim the LVM section if you’d like and simply install Debian directly to your RAID volume.
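For context, what the installer sets up amounts to roughly the following. This is a sketch only: the device names and the `vg1`/`rootfs` labels are assumptions (they match the volume names that show up later in this post), and these commands destroy data on the listed partitions.

```shell
# Sketch of the installer's layout; device and volume names are assumptions.
# Destroys data on the listed partitions!

# Mirror one partition from each drive into a RAID 1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Layer LVM on top: physical volume -> volume group -> logical volume
pvcreate /dev/md0
vgcreate vg1 /dev/md0
lvcreate -n rootfs -l 100%FREE vg1

# Format the logical volume for the root filesystem
mkfs.ext4 /dev/vg1/rootfs
```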
The first boot
After the first successful boot I made sure to run `dpkg-reconfigure grub-pc` and install GRUB on both members of the array, `/dev/sda` and `/dev/sdb`, to ensure either drive would be bootable.
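If you'd rather script it than click through the `dpkg-reconfigure` prompts, the equivalent is roughly this (device names assumed):

```shell
# Write the boot loader to the MBR of each physical drive so the
# machine can boot from whichever one survives
grub-install /dev/sda
grub-install /dev/sdb

# Regenerate the GRUB configuration
update-grub
```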
Then I went about with my usual configuration.
First I added my laptop’s public key to my user’s `~/.ssh/authorized_keys` file so I could access the server remotely without bothering with passwords.

Then I modified `/etc/ssh/sshd_config` to disable access for the `root` user and password-based authentication:

```
PermitRootLogin no
# ... snip
PasswordAuthentication no
```
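A typo in `sshd_config` can lock you out of a headless box, so it's worth validating the file before restarting (and keeping your current SSH session open until a fresh login succeeds):

```shell
# Syntax-check the config; only restart sshd if the check passes
sshd -t && service ssh restart
```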
Finally, I wanted to make sure I’d know if one of the members of the RAID array became degraded, so I needed to make sure e-mail could be sent from the server. To do this I had to configure `exim4` to send via Gmail’s SMTP gateway.

This seemed complicated at first glance but proved to be much simpler than it looked. I basically followed this guide (again from the Debian wiki). You can ignore the part about … Also, use the command `service exim4 restart` instead of invoking the `rc.d` command directly.

Where it asks for your Gmail password I suggest generating a new App Password specifically for your server.
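If memory serves, the guide has you put the credentials in `/etc/exim4/passwd.client` as colon-separated lines; the host patterns below are an assumption, so follow the wiki for the exact format:

```
# /etc/exim4/passwd.client
# target.host:login:password -- use the App Password, not your real one
*.google.com:firstname.lastname@gmail.com:your-app-password
smtp.gmail.com:firstname.lastname@gmail.com:your-app-password
```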
You can send a test message like so:

```
echo 'This is only a test' | mail -s 'Hello World!' firstname.lastname@example.org
```
Since I was in a monitoring mood I also installed `smartmontools` to monitor the health of the drives. This includes a daemon that will notify you if any SMART errors are detected.

Install it with `apt-get install smartmontools`. Then edit `/etc/default/smartmontools` and uncomment the line `#start_smartd=yes`. Finally, restart the service with `service smartmontools restart`.
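Out of the box `smartd` mails its warnings to `root`; pointing them at a real address is a one-line change to the `DEVICESCAN` directive in `/etc/smartd.conf` (the address below is a placeholder):

```
# /etc/smartd.conf
# Scan all drives, don't spin up disks in standby, mail warnings to me
DEVICESCAN -n standby -m firstname.lastname@example.org
```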
The first problem
I wanted to make sure the system would boot in a degraded state if one of the drives failed. To test this, I shut down the system and removed one of the drives. Upon restarting, I was greeted with the following error:
```
Unable to find LVM volume vg1/rootfs
Volume group "vg1" not found
```

… and then dumped into the `initramfs` recovery console (BusyBox).
After a break and some intense Googling I discovered that `mdadm` was marking the array as `inactive` on boot instead of starting it in a degraded state as expected. Because the physical volume was not available, LVM was throwing a misleading error.

To fix this problem you can run the following from the `initramfs` recovery console (BusyBox):

```
mdadm --run /dev/md0
vgchange -a y
exit
```

Your system should now boot normally from a degraded state.
After booting, I used the command `mdadm --manage /dev/md0 --re-add /dev/sdb1` to restore the removed drive to the array.
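The re-added drive has to resync before the mirror is redundant again; you can watch the rebuild progress in `/proc/mdstat`:

```shell
# Show current array state and resync progress
cat /proc/mdstat

# Or refresh it every two seconds until the rebuild finishes
watch -n 2 cat /proc/mdstat
```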
This issue appears to be a bug in Debian 8.x (jessie) that should be fixed in future versions:
> If you're running Debian Jessie note that a degraded RAID array may not autostart on boot: https://t.co/4sd2M4jOzD
>
> — Tristan Waddington (@twaddington) January 11, 2016