Growing a RAID5 array – MDADM

With the release of kernel 2.6.17, there’s new functionality to add a device (partition) to a RAID 5 array and make this new device part of the actual array rather than a spare.

I came across this post a while ago on the LKML:

LKML: Sander: RAID5 grow success (was: Re: 2.6.16-rc6-mm2)

So I gave it a go. My HOME directory is mounted on a 3x70GB SCSI RAID5 array, so I tried adding a further drive.

Although with the release of mdadm > 2.4 the only really critical part of the process is safer (it backs up some of the live data while it is being copied), I didn't fancy risking growing a mounted array. So I did plenty of backups, then switched to the single-user runlevel.

Basically the process involves adding a disc to the array as a spare, then growing the array onto this device:

mdadm --add /dev/md1 /dev/sdf1
mdadm --grow /dev/md1 --raid-devices=4

Reshaping the array then took about 3 hours.
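
While the reshape runs, you can keep an eye on progress and the estimated finish time via /proc/mdstat. A couple of entirely optional ways to watch it:

cat /proc/mdstat
watch -n 60 cat /proc/mdstat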

The filesystem then needs to be expanded to fill the new space. resize2fs wants a freshly checked filesystem, so force a check first:

fsck.ext3 -f /dev/md1
resize2fs /dev/md1

I then remounted the drive and, wahey, lots of extra space! Cool or what.

EDIT  It’s over 18 months since I wrote this post, and Linux kernel RAID and mdadm have continued to move on. All the info here is still current, but for extra detail check out the Linux RAID Wiki.

EDIT 2  The Linux Raid Wiki has moved

127 Replies to “Growing a RAID5 array – MDADM”

  1. Hi, you might want to investigate GNU screen if you have a dodgy connection. You can keep a persistent session going with it. If your connection drops, you can start screen again and pick the session up where it left off.
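
    A minimal example (the session name is just an illustration):

    screen -S reshape   # start a named session and run the long job inside it
    # if the connection drops, or you detach with Ctrl-a d...
    screen -r reshape   # ...reattach and carry on where you left off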

  2. I didn’t have PuTTY set to send null packets to maintain connections. Just changed it… will see if it helps. I’ll check out screen, though… thanks!

    – Som

  3. Hmm, OK. I’m getting a hardware RAID controller that can grow the RAID5 array. Am I right in thinking this:
    I’ll need to use LVM in order to grow the partitions and then expand the filesystem?

  4. You may be able to grow the RAID5 array with your hardware RAID controller, but I know nothing about that. This post was about the software MD RAID driver in the Linux kernel, and using the mdadm utility to grow the array.

  5. I have just grown a RAID6 array from 6 to 8 disks on an alpha of Ubuntu 7.10, no real problems.
    The kernel is an Ubuntu-patched 2.6.22.
    Pretty much as listed here: I added the two drives as spares, then grew the raid onto them.
    Also updated mdadm.conf, since I specify the array by its devices. I’ve removed the two new drives and re-added them too. Interestingly, although I added them both at once, it rebuilt onto one disk and then onto the other. I can only assume that this is a holdover from RAID5.
    The LVM had to be resized on top, and the JFS filesystem unmounted and then remounted/resized, roughly as sketched below.
    Rebuilding was on the order of 2 hours @ 65MB/sec with no tweaking.
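
    In case it helps anyone, the LVM/JFS part boils down to something like this (the volume group, LV and mount point names here are invented for the example):

    pvresize /dev/md0                      # tell LVM the underlying PV has grown
    lvextend -l +100%FREE /dev/vg0/data    # extend the LV into the new space
    mount -o remount,resize /srv/data      # JFS then grows to fill the device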

  6. Spares, Spares every wherez!

    Quick Q:

    Does a spare need to be configured EXACTLY like the other drives in the array? I want one drive to serve as a spare for two arrays: one is a ~300GB RAID1, the other a 500GB RAID5. Can I just partition the whole drive and it’ll use what it needs? Thanks!

  7. I’ve added a 147GB partition to a RAID10 array with 70GB partition members. This worked fine. However, I’m not sure about RAID5. I would think it would work, but you might be best testing this before relying on it.

  8. Answered my own question above. It doesn’t need to be the exact size. You are warned if it is bigger (it won’t allow smaller), and the unused space is wasted until you can ‘grow’ the array to use it.

    BUT Q2 for you all.

    Can you remove a drive from an active array? Say you have a four-drive RAID5 array, which gives you lots of space. You decide you won’t need all that space after all and want a HDD back once all the drives have become active. Is it possible to tell it to stop using the drive? Maybe shrink the size of each partition and remove a device, or something?

  9. I don’t think that will work. You’ve two questions: can you shrink the filesystem you’re using, and can mdadm --grow shrink a RAID array? I don’t think either is possible, but I stand to be corrected. Try the Linux RAID mailing list.

  10. Well, you can shrink *some* filesystems – like ext3, which is the one I am using – although I believe you can’t shrink the likes of XFS.

  11. Hi, me again!

    Question for you and your viewers/readers…

    I have noticed that one of the two drives in my mirrored array always reports a couple of degrees hotter than the other.

    Is this (possibly) because it is the primary drive and is used for all the reads as well as the writes?

    Any suggestions?

  12. Well… I’ve got the itch to add another 500GB drive to my three-drive RAID5 array. Problem is that I have no place to back up the ~1TB of data on the RAID. How risky is this process? This is a fresh Debian lenny installation.

  13. Thanks for the help. I am adding 2x750GB drives one by one to my 3x750GB RAID5 device. It will take more than 24 hours minimum, but it works!!!

  14. Thank you very much for this. I’m currently putting together a 7x500GB NAS with Debian and RAID5. This information came in very handy because five of those disks are formatted as NTFS. Cheers, mate!

  15. Anyone know what the software RAID5 limit is, size-wise and disk-wise?
    I.e. is there a 5TB limit, a 2TB limit?
    Do you need a certain amount of memory to go past 2TB? After X drives, does the array become unstable?

  16. Thanks for this! I am about 21% complete on reshaping from 4 to 6 500GB SATAII drives in a single step, and your article has been brilliant.

    I’m amazed at how resilient this is – although I probably wouldn’t recommend it, I was watching a movie streamed from this raid array, copying data to it *and* growing at the same time.

    Awesome.

  17. This array was originally built using Debian Etch.
    The OS is not on the array.
    Hardware is/will be 10 half-terabyte Maxtor SATAII discs,
    configured as RAID6,
    mounted in two Icybox 5-disc SATA racks,
    connected by three no-name SiI quad-port SATA controllers.
    5 drives actually configured, number 6 installed.
    Could not grow the number of devices in the array using Etch (kernel too old),
    so the OS was deleted and replaced by Ubuntu 7.04 (desktop, actually).
    Installed mdadm and:

    root@Beijun:/# uname -a
    Linux Beijun 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007 i686 GNU/Linux

    root@Beijun:~# mdadm --assemble /dev/md0
    mdadm: failed to add /dev/sde1 to /dev/md0: Invalid argument
    mdadm: /dev/md0 has been started with 5 drives.

    root@Beijun:/# fdisk /dev/sde

    The number of cylinders for this disk is set to 60801.
    There is nothing wrong with that, but this is larger than 1024,
    and could in certain setups cause problems with:
    1) software that runs at boot time (e.g., old versions of LILO)
    2) booting and partitioning software from other OSs
    (e.g., DOS FDISK, OS/2 FDISK)

    Command (m for help): p

    Disk /dev/sde: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Device Boot Start End Blocks Id System
    /dev/sde1 1 60801 488384001 83 Linux

    Command (m for help): q

    root@Beijun:/# mdadm --add /dev/md0 /dev/sde
    mdadm: added /dev/sde
    root@Beijun:/# mdadm --grow /dev/md0 --raid-devices=6
    mdadm: Need to backup 768K of critical section..
    mdadm: ... critical section passed.
    root@Beijun:/#
    /dev/md0: 1397.28GiB raid6 6 devices, 0 spares. Use mdadm --detail for more detail.
    root@Beijun:/# mdadm --query --detail /dev/md0
    /dev/md0:
    Version : 00.91.03
    Creation Time : Tue Nov 20 00:26:12 2007
    Raid Level : raid6
    Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
    Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
    Raid Devices : 6
    Total Devices : 6
    Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Dec 13 23:01:44 2007
    State : clean, recovering
    Active Devices : 6
    Working Devices : 6
    Failed Devices : 0
    Spare Devices : 0

    Chunk Size : 64K

    Reshape Status : 2% complete
    Delta Devices : 1, (5->6)

    UUID : a562da4a:8beaefd3:ed4e5664:d2029d2e
    Events : 0.7860

    Number Major Minor RaidDevice State
    0 8 96 0 active sync /dev/sdg
    1 8 32 1 active sync /dev/sdc
    2 8 16 2 active sync /dev/sdb
    3 8 48 3 active sync /dev/sdd
    4 8 80 4 active sync /dev/sdf
    5 8 64 5 active sync /dev/sde

    Absolutely bl**dy marvelous software.

    Thanks for the hints; the man page is somewhat lacking in detail.

    Cheers Harry

  18. Hi,

    It’s really a cool thing, but how do you connect the drives?
    I’ve looked for some cheap SATA HBAs, but I couldn’t find any except the Promise SATA300 TX4 – and that doesn’t let you read out the S.M.A.R.T. info ;(

    Greets, Stefan

  19. I made a mistake and ran:
    mdadm --add /dev/md0 /dev/sdd
    instead of:
    mdadm --add /dev/md0 /dev/sdd1

    then I ran the grow command. Is this going to be a problem?

  20. I once did the same and created an entire array from the raw block devices rather than the partitions. It did not cause any problems for me. To be honest, I’m not too sure what the real ramifications are. Why not swap the disc out when you have some time, repartition the drive, and re-add it as a partition instead? Something like the sketch below should do it.
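
    A rough sequence, reusing the device names from your post (double-check against your own /proc/mdstat before running anything):

    mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd   # retire the whole-disk member
    fdisk /dev/sdd                                     # create /dev/sdd1, type fd (Linux raid autodetect)
    mdadm --add /dev/md0 /dev/sdd1                     # re-add as a partition; the array rebuilds onto it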

  21. Ah, just got done adding FOUR 500 gig drives to bring my total to EIGHT 500 gig drives. Added them one by one without having the array mounted. Had to do e2fsck -f before I resized it. The last drive took over 40 hours to add and another 2 hours to resize. Now have 3.36TB of space available.

    Thanks Ferg!

    -stile

  22. Happy to be of some help. Proper kudos goes to Neil Brown, the mdadm developer, though! Check out the Linux RAID mailing list for some even more cool stuff.

  23. Thanks for the great information!

    Question: anyone know what will happen if power is lost or a drive fails during the reshape part of “grow”?

  24. I would like to create a RAID1 volume from an existing single volume that isn’t an LVM volume…

    How can I add the existing disk to an LVM disk group with another disk to make a RAID1? Basically, I’m looking to create a mirror to migrate data to another location.

  25. I am a complete noob with Linux… but am using software RAID5 instead of Windows Server 2003, just because of how slow and limited that was.

    Took 22 hours to expand my 3x750 array to 4 drives, check the filesystem and expand it…

    Even though this guide was super simple, it was all I needed. Although I had to do a few things differently (Ubuntu), it worked out in the end.

    Thanks, I’ll probably come back to this site when I have to expand with more 750 drives… I’ve already forgotten the commands 😮

  26. Hey guys, I grew my RAID5 from 3x300GB to 4x300GB. This device (/dev/md2) was mounted as the root filesystem, with a small RAID0 for the /boot partition and lilo as the bootloader. Anyway, it won’t boot anymore after the grow…
    Lilo seems to load from the /boot partition, but after some time during the boot process it just says:

    md: md2 stopped.

    Do I have to tell lilo somehow that the md2 disk is bigger now? I think that is the problem, but I don’t know how to do it…

    thanks for any suggestions!
    samuel

  27. Hi guys, I have just grown my RAID5 from 7 to 8 500GB drives,
    and I am now running into a problem.
    When I try to run fsck /dev/md0:

    fsck.ext2: Device or resource busy while trying to open /dev/md0
    Filesystem mounted or opened exclusively by another program?

    Or when I try to run resize2fs /dev/md0

    resize2fs: Device or resource busy while trying to open /dev/md0
    Couldn’t find valid filesystem superblock.

    cat /proc/mdstat outputs:

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb1[0] sdi1[7] sdg1[6] sdh1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
    3418686208 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUUUUUUU]

    unused devices: <none>

    However, I can still access and mount my filesystem – I just cannot grow it, which I find very strange.
    I have tried init 1 to go to single-user mode, and I have made sure the filesystem is unmounted, but this ‘busy’ error still occurs.

    Any ideas? I’m kinda stuck. Google is no help…

  28. Hi Jeremy, I’m afraid I’ve no idea. I saw your post on the Linux RAID list; I’m sure you’ll get a decent response on there. BTW I replied to your email address but it bounced. Is it not asdf@asdf.com? 😉

  29. Hehe, I feel a bit silly.
    Since I am using LVM, all I had to do was:
    1. pvresize.

    2. Use system-config-lvm to grow into the new space 🙂

    The reason the ‘busy’ message kept appearing is that LVM had the device locked. The command-line equivalent is sketched below.
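
    For anyone without the GUI, the command-line version is roughly this (the volume group and LV names are made up, substitute your own):

    pvresize /dev/md0                     # let LVM see the grown array
    lvextend -l +100%FREE /dev/vg0/home   # extend the LV into the new space
    resize2fs /dev/vg0/home               # then grow the ext3 filesystem on top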

  30. I have 2x500GB in a RAID0 array. I have a spare 500GB lying around. Is there a way I can add the spare disk and convert it to RAID5?

    How would I do it?

    Daze

  31. I have just expanded a 4-disk RAID5 to a 6-disk RAID5 while it stayed online; it took roughly 12 hours and worked smoothly.

  32. Daze,

    It depends on how much data you have on your RAID0 and whether it’ll fit on one 500GB drive:
    – add your third disk
    – copy all the RAID0 data to the third disk, if it fits
    – break the RAID0
    – create a mirror/RAID1 with the third disk
    – then create a RAID5 with only 2 disks
    – then add the last disk
    – then resize the filesystem to use all of the new RAID5.
    (There’s a command-level sketch of one reading of these steps after the link below.)

    See this reference;
    http://scott.wallace.sh/node/1521
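
    Here is one possible command-level reading of those steps. The device names (sda/sdb being the old RAID0 disks, sdc the new one) and the ext3 filesystem are assumptions, and a 2-disk RAID5 behaves like a mirror until you grow it:

    mdadm --stop /dev/md0                     # break the old RAID0 (after copying the data to sdc)
    mdadm --create /dev/md1 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
    # copy the data from sdc back onto /dev/md1, then:
    mdadm --add /dev/md1 /dev/sdc1            # add the third disk as a spare
    mdadm --grow /dev/md1 --raid-devices=3    # reshape onto it
    resize2fs /dev/md1                        # finally, grow the filesystem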

  33. Hi there, I am using a 4x500GB RAID5 and would like to grow it using the described method. *BUT* I am using dm-crypt and no LVM.
    Q: Is dm-crypt transparent enough to make the growing possible? Any recommendations or experiences yet?

  34. Hi, thanks to this howto I’ve started to reshape my array 🙂
    md8 : active raid5 sdd8[3] sda8[0] sdc8[2] sdb8[1]
    608702464 blocks super 0.91 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    [>....................] reshape = 2.8% (8538112/304351232) finish=344.6min speed=14301K/sec

    Hope, everything will be OK. 🙂

  35. Hey,

    I am about to expand my existing 4x500GB RAID5 array. It has been a year since I created it and I can’t remember exactly what I did. Do you HAVE to create the Linux RAID partition type for mdadm? I have vague memories of that not being necessary. (Might have been a pure LVM system I read that about.)

    Also, I plan to add 2 new 750GB drives, the idea being that as the existing drives are replaced they will be bigger, and eventually all of them will be at least 750GB and I can grow the array to the size of the smallest. Does this sound OK?

    I also had the idea of decommissioning one of the existing 500GB drives before I add to the array and putting a 750 in its place, checking it works, and then adding the 500GB back as a spare to keep wear down. My rough plan in commands is sketched below. Does this sound crazy or sane? 🙂
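
    In case it clarifies the question, the plan in commands would be something like this (device names invented, and assuming I’ve read the man page right):

    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire one 500GB member
    mdadm --add /dev/md0 /dev/sdd1                       # the array rebuilds onto the new 750GB partition
    # later, once every member is at least 750GB:
    mdadm --grow /dev/md0 --size=max                     # grow each member to its full size
    resize2fs /dev/md0                                   # and expand the filesystem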

  36. That is what I have just read… and then you can do a repair. I am surprised that mdadm has no feature to automate this. Also, how regularly is ‘regularly’?

  37. Thanks for the info!
    I’m currently growing 😉

    md0 : active raid5 sdd1[4] sda1[0] sdc1[3] sdb1[1]
    976767488 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
    [>....................] reshape = 0.9% (4747520/488383744) finish=1071.8min speed=7517K/sec
    bitmap: 1/466 pages [4KB], 512KB chunk

    unused devices: <none>
