I was able to finish setting up the storage array once I received all three of the new 4TB disk drives.
After racking the new drives I first tried partitioning them using fdisk. I quickly realized that fdisk (which at the time only wrote MBR partition tables) does not support partitions larger than 2TB; larger disks need a GPT partition table. So, the next step was to install GNU Parted, which does support GPT, using apt-get install parted. Then, I created full 4TB partitions on each drive:
parted -a optimal /dev/sdc
parted -a optimal /dev/sdd
parted -a optimal /dev/sde

Each command launches parted in interactive mode using optimal disk alignment. The commands inside the session should be pretty self-explanatory, but I basically followed a tutorial.
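For reference, the steps inside each interactive session looked roughly like the following. This is a sketch rather than a transcript of my exact session; mklabel and mkpart are the standard parted commands for writing a GPT label and creating a single full-disk partition:

```shell
# Inside the interactive parted session for each drive (sketch, not a transcript):
# (parted) mklabel gpt                   # GPT supports partitions larger than 2TB
# (parted) mkpart primary ext4 0% 100%   # one partition spanning the whole 4TB disk
# (parted) print                         # verify the alignment and size
# (parted) quit

# The same thing can be done non-interactively using parted's script mode:
parted -s -a optimal /dev/sdc mklabel gpt mkpart primary ext4 0% 100%
```

Using 0% and 100% as the partition boundaries lets parted pick start/end sectors that satisfy the optimal alignment requested with -a optimal.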
The next step was to create a RAID 5 array using the new partitions. This is the command I used:
mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
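Building a RAID 5 array across three 4TB drives takes a while, since mdadm has to do an initial sync. The progress can be watched with standard mdadm tooling (a sketch, assuming the /dev/md1 array name from the command above):

```shell
# Watch the initial sync progress of the new array
cat /proc/mdstat

# Show detailed state: sync percentage, member disks, and array health
mdadm --detail /dev/md1
```

The array is usable while the initial sync runs, just with reduced performance.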
Note that upon restart I noticed the array was being defined as /dev/md127. To fix this issue I updated mdadm.conf to explicitly define the new array and ran update-initramfs -u. Upon restart the new array now comes up correctly as /dev/md1. See below for the contents of my mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=e3b2b47d:47b61e06:1464f958:737d6621 name=lin:0
ARRAY /dev/md/1 metadata=1.2 UUID=741d95f5:fafbc130:e1a17128:41d9ed3c name=lin:1
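The ARRAY lines above don't need to be typed by hand; mdadm can generate them. A sketch of the update I described, assuming Debian's /etc/mdadm/mdadm.conf path:

```shell
# Append the current array definitions (the ARRAY lines) to mdadm.conf...
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# ...then rebuild the initramfs so the array is assembled as /dev/md1
# at boot instead of falling back to /dev/md127
update-initramfs -u
```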
The next step was to use LVM to create a virtual drive on top of the RAID volume, then format the new volume using ext4. I chose an initial size of 2TB for this new volume since that’s enough space to accommodate my existing files. This leaves nearly 6TB for later use.
pvcreate /dev/md1
vgcreate vg2 /dev/md1
lvcreate --name storage --size 2T vg2
mkfs.ext4 /dev/vg2/storage
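Leaving roughly 6TB unallocated in vg2 means the volume can be grown later without repartitioning anything. A sketch of how that would look when the time comes (lvextend and resize2fs are the standard tools; the +1T increment is just an example, not something I've run yet):

```shell
# Later, when more space is needed: grow the logical volume by 1TB...
lvextend --size +1T /dev/vg2/storage

# ...then grow the ext4 filesystem to fill the new space
# (ext4 supports growing while mounted)
resize2fs /dev/vg2/storage
```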
The new volume needs to be mounted so it can be used.
mkdir /mnt/storage
mount /dev/vg2/storage /mnt/storage
I then updated /etc/fstab to mount the volume on reboot. See below for the contents of my /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system>           <mount point>  <type>  <options>          <dump>  <pass>
/dev/mapper/vg1-rootfs    /              ext4    errors=remount-ro  0       1
/dev/mapper/vg2-storage   /mnt/storage   ext4    errors=remount-ro  0       0
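A new fstab entry can be sanity-checked without rebooting (a quick sketch using the mount point from above):

```shell
# Unmount the manually-mounted volume, then remount everything in /etc/fstab
umount /mnt/storage
mount -a

# Confirm the volume came back via the fstab entry
df -h /mnt/storage
```

If mount -a brings the volume back cleanly, the entry will also work at boot.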
Now that the volume is mounted I could begin transferring files from my old server. I used rsync for this task.
rsync -rthvv --progress --exclude .AppleDouble --dry-run
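The command above omits the source and destination arguments. For context, a complete invocation would look something like the following; the server name and paths here are hypothetical examples, not taken from my setup:

```shell
# Hypothetical full invocation: preview the transfer first, then drop
# --dry-run to copy for real.
#   -r recurse, -t preserve modification times, -h human-readable sizes,
#   -vv extra verbose, --exclude skips macOS .AppleDouble metadata files
rsync -rthvv --progress --exclude .AppleDouble --dry-run \
    user@oldserver:/srv/files/ /mnt/storage/
```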
At this point everything is working quite well. However, I did notice one oddity.
root@lin:/home/tristan# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/dm-0                229G  1.6G  216G   1% /
udev                      10M     0   10M   0% /dev
tmpfs                    3.1G  9.3M  3.1G   1% /run
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/vg2-storage  2.0T  428G  1.5T  23% /mnt/storage
tmpfs                    1.6G     0  1.6G   0% /run/user/1000
Note the above output from the df -h command. You’ll notice that the first LVM volume, where the system root is installed, is referenced by /dev/dm-0 (short for “device mapper”). However, the second LVM volume is referenced by /dev/mapper/vg2-storage instead of /dev/dm-1 as expected.
/dev/mapper/vg2-storage is simply a link to /dev/dm-1:
root@lin:/home/tristan# ls -al /dev/mapper/
total 0
drwxr-xr-x  2 root root     100 Jan 31 22:40 .
drwxr-xr-x 21 root root    3280 Jan 31 22:40 ..
crw-------  1 root root 10, 236 Jan 31 22:40 control
lrwxrwxrwx  1 root root       7 Jan 31 22:40 vg1-rootfs -> ../dm-0
lrwxrwxrwx  1 root root       7 Jan 31 22:40 vg2-storage -> ../dm-1
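Since both names point at the same device-mapper node, either can be resolved to the other. A small self-contained demonstration of the symlink resolution, using a throwaway directory rather than the real /dev tree:

```shell
# Recreate the /dev/mapper layout in a temp directory to show that
# vg2-storage is just a relative symlink that resolves to a dm-N node.
tmp=$(mktemp -d)
mkdir "$tmp/mapper"
touch "$tmp/dm-1"                         # stand-in for the real /dev/dm-1 node
ln -s ../dm-1 "$tmp/mapper/vg2-storage"   # mirrors: vg2-storage -> ../dm-1
readlink -f "$tmp/mapper/vg2-storage"     # prints the fully resolved dm-1 path
```

This mirrors why tools differ: df reports whichever name was used at mount time, while the dm-N node is the underlying device.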
I haven’t yet figured out how to resolve this inconsistency.
Part 3 coming soon!