I needed a bit more space on my NAS’s RAID10 array, which was 6 x 2TB drives. To be honest I’m not sure why I am using RAID10. The array was initially on my main Linux workstation; then I decided I needed a separate NAS, as my existing one was way too small, so I moved it across. RAID5/6 would give me more space, but I guess I like the flexibility and the ability to survive two disk failures (as long as they’re not in the same mirror pair), even though mirroring halves the usable space!
The server is running Debian and is headless. I know there are loads of NAS OSs, but I do prefer to do things myself. The boot/root partitions are on a single SATA SSD and the (now) eight data drives are plugged into an HP Smart HBA H240.
I found two “consumer” Crucial 2TB SSDs on Black Friday for £65, which seemed reasonable. I did wonder how well two SSDs would do in a RAID array alongside spinning drives. Let’s find out…! So this is what I did (which I am blogging about so I don’t have to remember next time!). Interestingly, the last time I blogged about growing a RAID array was quite some time ago. It’s also the only post of mine that has ever had comments (127, to be exact!). I think growing RAID arrays with mdadm was quite new back then.
Procedure to grow an array with two new disks
The procedure to add the new devices and grow the RAID array is:
- physically add the new disks
- partition them to match the existing disks
- add the new disks to the array (they join as spares)
- increase the number of active devices in the array to trigger the reshape
- grow the filesystem
The new disks are /dev/sda and /dev/sdd.
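If you’re not sure which names the new disks picked up, lsblk makes it obvious: the new disks are the ones with no partitions on them yet.
lsblk -o NAME,SIZE,MODEL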
Partition
For GPT disks, use sgdisk to copy the partition table from an existing disk to a new one.
Backup first of course!
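For example, sgdisk can dump a disk’s GPT to a file and restore it later with --load-backup if anything goes wrong (the backup path here is just an illustration):
sgdisk --backup=/root/sdb-gpt.bak /dev/sdb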
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY
The first command copies the partition table of sdX to sdY; the second randomises the GUIDs on the new disk so they are unique. So, to copy sdb’s partition table to each of the new disks:
sgdisk /dev/sdb -R /dev/sda
sgdisk /dev/sdb -R /dev/sdd
Now randomise the GUID of each device:
sgdisk -G /dev/sdd
sgdisk -G /dev/sda
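Worth a quick sanity check that the partition tables on the new disks now match the existing ones:
sgdisk -p /dev/sdb
sgdisk -p /dev/sda
sgdisk -p /dev/sdd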
Add new devices
mdadm --add /dev/md1 /dev/sdd1 /dev/sda1
mdadm: added /dev/sdd1
mdadm: added /dev/sda1
These are added as spares, since the number of active devices hasn’t changed. Let’s check:
# cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid10 sda1[8](S) sdd1[7](S) sdc1[5] sdg1[0] sde1[6] sdh1[3] sdb1[1] sdf1[4]
5860141056 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
bitmap: 4/22 pages [16KB], 131072KB chunk
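mdadm --detail shows the same thing a bit more readably; the (S) suffix above and the spare count here both confirm the new disks aren’t active yet:
mdadm --detail /dev/md1 | grep -E 'Devices|State'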
Increase the number of devices to include the new ones
mdadm --grow --raid-devices=8 --backup-file=/mnt/USB/grown_md1.bak /dev/md1
The --backup-file option writes out critical sections during the reshape in case the power goes; not essential as I have a UPS. The filesystem can also stay mounted throughout. However, to speed things up I turned off all services except the DNS/DHCP server: the less disk activity, the quicker the reshape finishes.
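You can keep an eye on progress in /proc/mdstat, and if you want the reshape to take priority over normal I/O you can raise the kernel’s rebuild speed floor (the 200000 KB/s figure is just an example):
watch -n 60 cat /proc/mdstat
sysctl -w dev.raid.speed_limit_min=200000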
The reshaping took about 20 hours. Much less than I thought.
Now we need to resize the filesystem. Unmounting is not essential for growing an ext4 filesystem (although it is for shrinking), but it’s a lot safer, so I shut everything off and unmounted it:
systemctl stop smbd
systemctl stop docker
umount /mnt/storage
resize2fs /dev/md1
This gave an error that the filesystem needed checking first.
e2fsck -f /dev/md1
resize2fs /dev/md1
This took about 30 minutes.
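Before mounting everything again it’s worth confirming the filesystem now fills the array, e.g. by comparing the ext4 block count against the array size:
dumpe2fs -h /dev/md1 | grep -i 'block count'
mdadm --detail /dev/md1 | grep 'Array Size'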
Finishing off
Now let’s get it all back up and running.
mount /mnt/storage
systemctl start smbd
systemctl start docker
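Then a couple of quick sanity checks that everything came back; the extra space should show up in df now it’s mounted:
df -h /mnt/storage
docker ps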
The ARRAY entry for the mdadm device in /etc/mdadm/mdadm.conf does not need updating. Previously it did, but I think that was when I was using the 0.9 metadata format.
mdadm --detail --scan
cat /etc/mdadm/mdadm.conf
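For the record, if the ARRAY line ever did need changing, the usual Debian approach is to replace it with the output of the scan above and then rebuild the initramfs:
update-initramfs -u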
One bizarre issue was that when I restarted all the Docker containers they pulled fresh images rather than using the existing ones. I have no idea why that happened.