replace disks and expanding raid 6
#1
Hello,

I want to replace 7 x 2 TB disks running in RAID 6 with 7 x 4 TB WD Red. I know that I have to replace the disks one by one, and after that I have to expand the RAID. Up to that point I'm fine. But isn't there a limit at 16 TB? Or is it possible with the newest firmware to expand beyond 16 TB?
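For the numbers: RAID 6 usable capacity is (N - 2) x disk size, so 7 x 4 TB gives (7 - 2) x 4 = 20 TB, which is over the 16 TB mark. The current array size can be checked like this (assuming the array is /dev/md0; check /proc/mdstat for the real name on your box):
Code:
# current usable array capacity (array name /dev/md0 is a guess)
mdadm --detail /dev/md0 | grep 'Array Size'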

Frank
Reply
#2
I want to do basically the same thing (or at least I'm playing with the thought of doing so).
What I'd like to know is whether, once all the disks are replaced, it's possible to grow the RAID volume to use the whole disks, or whether I have to create a second RAID volume.

On the 16 TB limit: I have an ext4 RAID 5 volume with 7 x WD30EFRX (3 TB), so that shouldn't be a problem.
My firmware version is V2.04.01.cdv; I think that limitation only exists on 32-bit NAS models anyway.
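A quick way to verify that over SSH (just a sketch, assuming you have shell access):
Code:
# 32-bit NAS kernels report e.g. armv5tel or i686, 64-bit ones x86_64 or aarch64
uname -m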
Reply
#3
I want to do basically the same thing (or at least I'm playing with the thought of doing so).
What I'd like to know is whether it's possible to, once all disks are replaced, grow the RAID volume to use the whole disks or if I have to create a second RAID volume?

On the 16TB limit, I have an ext4 RAID 5 volume with 7 WD30EFRX (3TB) so that shouldn't be a problem.
My firmware version is V2.04.01.cdv, I think that limitation only exists for 32 bit NAS anyway.
Reply
#3
I have now expanded my RAID from 7x3TB to 7x6TB and thought I'd share my experience.

First I exchanged the disks, one at a time, letting the RAID rebuild in between. For every disk (sda..sdg, sda being the top disk and sdg the bottom one) I ran
Code:
# detach the disk from the kernel so it can spin down (run as root)
echo 1 > /sys/block/sda/device/delete
before actually pulling the disk, giving it time to park its heads and shut down properly.
Then just replace the old with the new disk and the RAID will start rebuilding, which takes about 30 hours (CPU bound).
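To keep an eye on the rebuild, something like this works (the array name md0 is an assumption; check /proc/mdstat for the real one):
Code:
# refresh the rebuild status every 60 seconds (Ctrl-C to stop)
watch -n 60 cat /proc/mdstat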

When the last disk is exchanged for the bigger one, the RAID will rebuild as usual.
After it's finished, it will notice that there is more space available and immediately start growing the RAID array to fill the whole disks, which takes another 30 hours or so (depending on the size difference).
After that finishes, you have a RAID array that fills the whole disks, but the filesystem is still the original size and needs to be expanded.
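At this point the mismatch is easy to see (device name and mount point are assumptions, adjust to your box):
Code:
# array capacity after the grow ...
mdadm --detail /dev/md0 | grep 'Array Size'
# ... vs. the still-unexpanded filesystem
df -h /mnt/raid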

Now this step is where I ran into problems.
You want to go to Storage - RAID Management, select your RAID - Edit, go to the Expand tab and hit Apply.
This will unmount the RAID volume (you can't use it while unmounted), run a filesystem check, expand it and mount it again, which will take approximately 1.5 hours.
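Under the hood this should be roughly equivalent to the following (just a sketch, not the NAS's actual script; device name and mount point are assumptions):
Code:
umount /mnt/raid          # volume is unavailable from here on
e2fsck -f /dev/md0        # forced filesystem check before resizing
resize2fs /dev/md0        # grow the ext4 filesystem to fill the array
mount /dev/md0 /mnt/raid  # remount; volume is usable again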

This didn't work for me the first time, because of a bug in the version of e2fsprogs installed.
I had firmware version 2.04.06a with e2fsprogs version 1.42.9 installed; its resize2fs has a bug that makes it spin on one core without making any progress.
After some time on Google (after letting it sit for 18 hours) I found out that this is fixed in version 1.42.12 of e2fsprogs.
Luckily, this is the version to which e2fsprogs was updated in firmware 2.05.08, which I then installed.
After that the expansion ran flawlessly and I now have 32TB of NAS goodness :-)

So if you have e2fsprogs version 1.42.9 through 1.42.11 (you can check by running `e2fsck -V`), you should do a firmware update before expanding.
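For example, a check along these lines (just a sketch) tells you whether you're in the affected range:
Code:
# e2fsck -V prints the e2fsprogs version on its first line (to stderr)
ver=$(e2fsck -V 2>&1 | awk 'NR==1 {print $2}')
case "$ver" in
  1.42.9|1.42.10|1.42.11)
    echo "e2fsprogs $ver has the resize2fs hang - update firmware first" ;;
  *)
    echo "e2fsprogs $ver should be fine" ;;
esac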
Reply
#5
I have now expanded my RAID from 7x3TB to 7x6TB and thought I'd share my experience.

First I exchanged the disks, one at a time, letting the RAID rebuild in between. For every disk (sda..sdg, sda being the top disk and sdg the bottom one) I ran
Code:
# echo 1 > /sys/block/sda/device/delete
before actually removing the disk, to give the disk time to park its head and shut down properly.
Then just replace the old with the new disk and the RAID will start rebuilding, which takes about 30 hours (CPU bound).

When the last disk is exchanged for the bigger one, the RAID will rebuild as usual.
After it's finished it will notice that there is more space available, immediately starting to grow the RAID array to the whole disks, which will take another 30 hours (of course depending on the size difference).
After that is finished you now have a RAID array which fills the whole disks, but the filesystem is still the original size and needs to be expanded.

Now this step is where I ran into problems.
You want to go to Storage - RAID Management, select your RAID - Edit, go to the Expand tab and hit Apply.
This will unmount the RAID volume (you can't use it while unmounted), run a filesystem check, expand it and mount it again, which will take approximately 1.5 hours.

This didn't work for me the first time, because of a bug in the version of e2fsprogs installed.
I had firmware version 2.04.06a with e2fsprogs version 1.42.9 installed, which has a bug in its resize2fs program making it spin on one core not doing anything.
After some time on Google (after letting it sit for 18 hours) I found out that this is fixed in version 1.42.12 of e2fsprogs.
Luckily, this is the version to which e2fsprogs was updated in firmware 2.05.08, which I then installed.
After that the expansion ran flawlessly and I now have 32TB of NAS goodness :-)

So if you have e2fsprogs version 1.42.9 through 1.42.11 (you can check that by running `e2fsck -V`) you should do a firmware update before expanding.
Reply

