* Grow array and convert from raid 5 to 6
From: William Morgan @ 2020-03-01 18:07 UTC
To: linux-raid
I have a healthy 4 disk raid 5 array (data only, non booting) that is
running out of space. I'd like to add 4 more disks for additional
space, as well as convert to raid 6 for additional fault tolerance.
I've read through the wiki page on converting an existing system, but
I'm still not sure how to proceed. Can anyone outline the steps for
me? Thanks for your help.
Cheers,
Bill
* Re: Grow array and convert from raid 5 to 6
From: antlists @ 2020-03-01 19:07 UTC
To: William Morgan, linux-raid
On 01/03/2020 18:07, William Morgan wrote:
> I have a healthy 4 disk raid 5 array (data only, non booting) that is
> running out of space. I'd like to add 4 more disks for additional
> space, as well as convert to raid 6 for additional fault tolerance.
> I've read through the wiki page on converting an existing system, but
> I'm still not sure how to proceed. Can anyone outline the steps for
> me? Thanks for your help.
Looks like you looked at the wrong section ... :-) Look at "a guide to mdadm".
The subsection "upgrading a mirror raid to a parity raid" contains the
information you need, just not laid out nicely for you.
You want to "grow" your raid, and you could do it in one hit, or in
several steps. I'd probably add the drives first, "--grow --add disk1
--add disk2 ..." to give an array with 4 active drives and 4 spares.
Then you can upgrade to raid 6 - "--grow --level=6 --raid-devices=8".
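As a rough sketch of that sequence - the device names and backup-file path
here are placeholders I've made up, not from the original mail, so adjust
them to your system:

  # add the four new disks as spares
  mdadm /dev/md0 --add /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
  # then reshape to raid6 across all eight members
  mdadm --grow /dev/md0 --level=6 --raid-devices=8 \
        --backup-file=/root/md0-grow.bak

mdadm will tell you if it actually needs the backup file for the critical
section of the reshape; with a recent mdadm it often doesn't.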
A little bit of advice - MAKE SURE you have the latest mdadm, and if
possible an up-to-date kernel. What mdadm and kernel do you have?
If your kernel/mdadm is older, the reshape might hang at 0%. The easiest
fix is to reboot into an up-to-date rescue disk and re-assemble the
array, at which point it'll kick off fine. The only snag, of course, is
that your system is not available for normal use until the reshape is
complete and you can reboot back into your normal setup.
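For reference, re-assembly from the rescue environment is normally just
the following (array and member device names here are placeholders):

  # let mdadm find and assemble whatever arrays it can see
  mdadm --assemble --scan
  # or name the array and its members explicitly
  mdadm --assemble /dev/md0 /dev/sd[a-h]1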
Cheers,
Wol
* Re: Grow array and convert from raid 5 to 6
From: Roman Mamedov @ 2020-03-01 19:48 UTC
To: antlists; +Cc: William Morgan, linux-raid
On Sun, 1 Mar 2020 19:07:56 +0000
antlists <antlists@youngman.org.uk> wrote:
> --add disk2 ..." to give an array with 4 active drives and 4 spares.
> Then you can upgrade to raid 6 - "--grow --level=6 --raid-devices=8".
This feels risky and unclear: for instance, what will the array state be if 2
disks fail during this conversion? Would it depend on which ones have failed?
Instead I'd suggest doing this in two steps. First and foremost, add one disk
and convert to RAID6 in a special way:
--grow --level=6 --raid-devices=5 --layout=preserve
...due to the last bit (see the man page), this will be really fast and will
not even rewrite existing data on the old 4 disks; but you will get a
full-blown RAID6 redundancy array, albeit with a weird on-disk layout.
Then add 3 remaining disks, and
--grow --level=6 --raid-devices=8 --layout=normalise
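Spelled out as complete commands, that would look roughly like this - the
device names and backup-file path are placeholders, substitute your own:

  mdadm /dev/md0 --add /dev/sde1
  mdadm --grow /dev/md0 --level=6 --raid-devices=5 --layout=preserve
  mdadm /dev/md0 --add /dev/sdf1 /dev/sdg1 /dev/sdh1
  mdadm --grow /dev/md0 --level=6 --raid-devices=8 --layout=normalise \
        --backup-file=/root/md0-grow.bak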
Additionally, before you even begin, consider whether you really want to go
with such a setup in the first place. I used to run large RAID6s with a
single filesystem on top, but then found them to be too much of a single
point of failure, and have since moved on to merging individual disks at
the FS level (using MergerFS or mhddfs) for convenience, and doing 100%
backups of everything for data loss protection. You have to do backups
anyway - as anyone will tell you, RAID is not a replacement for backup -
but with a RAID6 there's too much of a temptation to skimp on them, which
tends to bite badly in the end. Back then it also seemed way too expensive
to back up everything; with today's HDD sizes and prices that's no longer so.
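For what it's worth, a minimal MergerFS pooling setup looks roughly like
this (mount points and the create policy are only illustrative, check the
mergerfs docs for the options you actually want):

  # pool three independent filesystems into one view; new files go to
  # whichever branch has the most free space
  mergerfs -o allow_other,category.create=mfs \
      /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool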
--
With respect,
Roman
* Re: Grow array and convert from raid 5 to 6
From: antlists @ 2020-03-01 20:16 UTC
To: Roman Mamedov; +Cc: William Morgan, linux-raid
On 01/03/2020 19:48, Roman Mamedov wrote:
> On Sun, 1 Mar 2020 19:07:56 +0000
> antlists <antlists@youngman.org.uk> wrote:
>
>> --add disk2 ..." to give an array with 4 active drives and 4 spares.
>> Then you can upgrade to raid 6 - "--grow --level=6 --raid-devices=8".
> This feels risky and unclear: for instance, what will the array state be if 2
> disks fail during this conversion? Would it depend on which ones have failed?
>
> Instead I'd suggest doing this in two steps. First and foremost, add one disk
> and convert to RAID6 in a special way:
>
> --grow --level=6 --raid-devices=5 --layout=preserve
I didn't know about that ...
>
> ...due to the last bit (see the man page), this will be really fast and will
> not even rewrite existing data on the old 4 disks; but you will get a
> full-blown RAID6 redundancy array, albeit with a weird on-disk layout.
>
> Then add 3 remaining disks, and
>
> --grow --level=6 --raid-devices=8 --layout=normalise
>
> Additionally, before you even begin, consider whether you really want to go
> with such a setup in the first place. I used to run large RAID6s with a
> single filesystem on top, but then found them to be too much of a single
> point of failure, and have since moved on to merging individual disks at
> the FS level (using MergerFS or mhddfs) for convenience, and doing 100%
> backups of everything for data loss protection. You have to do backups
> anyway - as anyone will tell you, RAID is not a replacement for backup -
> but with a RAID6 there's too much of a temptation to skimp on them, which
> tends to bite badly in the end. Back then it also seemed way too expensive
> to back up everything; with today's HDD sizes and prices that's no longer so.
Or use lvm to claim the new space, move all the old partitions into lvm,
and use that to manage the raid. Okay, you still need to take backups to
protect against a raid failure, but that makes backups to protect
against accidental damage dead easy - lvm can take copy-on-write snapshots.
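As a rough illustration of the copy-on-write side (volume group and LV
names here are made up, substitute your own):

  # snapshot the data LV before a backup run or a risky change
  lvcreate --snapshot --size 50G --name data-snap /dev/vg0/data
  # ... back up from the snapshot, then drop it ...
  lvremove /dev/vg0/data-snap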
Cheers,
Wol
* Re: Grow array and convert from raid 5 to 6
From: William Morgan @ 2020-03-02 23:57 UTC
To: antlists; +Cc: Roman Mamedov, linux-raid
Well, while I was contemplating my options, a power outage caused my
array to fail to a degraded state. One of the drives is now marked as
a spare and I can't figure out how to get it back into the array. Does
it have to be completely rebuilt?
bill@bill-desk:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Sep 22 19:10:10 2018
Raid Level : raid5
Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Mar 2 17:41:32 2020
State : clean, degraded
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Name : bill-desk:0 (local to host bill-desk)
UUID : 06ad8de5:3a7a15ad:88116f44:fcdee150
Events : 10407
    Number   Major   Minor   RaidDevice State
       0       8      129        0      active sync   /dev/sdi1
       1       8      145        1      active sync   /dev/sdj1
       2       8      161        2      active sync   /dev/sdk1
       -       0        0        3      removed
       4       8      177        -      spare         /dev/sdl1
bill@bill-desk:~$ sudo mdadm --examine /dev/sdi1
/dev/sdi1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 06ad8de5:3a7a15ad:88116f44:fcdee150
Name : bill-desk:0 (local to host bill-desk)
Creation Time : Sat Sep 22 19:10:10 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 15627786240 (7451.91 GiB 8001.43 GB)
Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : ab1323e0:9c0426cf:3e168733:b73e9c5c
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Mar 2 17:41:16 2020
Bad Block Log : 512 entries available at offset 40 sectors - bad blocks present.
Checksum : f197b17c - correct
Events : 10407
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
bill@bill-desk:~$ sudo mdadm --examine /dev/sdj1
/dev/sdj1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 06ad8de5:3a7a15ad:88116f44:fcdee150
Name : bill-desk:0 (local to host bill-desk)
Creation Time : Sat Sep 22 19:10:10 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 15627786240 (7451.91 GiB 8001.43 GB)
Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : c875f246:ce25d947:a413e198:4100082e
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Mar 2 17:41:32 2020
Bad Block Log : 512 entries available at offset 40 sectors
Checksum : 7e0f516 - correct
Events : 10749
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
bill@bill-desk:~$ sudo mdadm --examine /dev/sdk1
/dev/sdk1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 06ad8de5:3a7a15ad:88116f44:fcdee150
Name : bill-desk:0 (local to host bill-desk)
Creation Time : Sat Sep 22 19:10:10 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 15627786240 (7451.91 GiB 8001.43 GB)
Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : fd0634e6:6943f723:0e30260e:e253b1f4
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Mar 2 17:41:32 2020
Bad Block Log : 512 entries available at offset 40 sectors
Checksum : bf2f13f2 - correct
Events : 10749
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
bill@bill-desk:~$ sudo mdadm --examine /dev/sdl1
/dev/sdl1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 06ad8de5:3a7a15ad:88116f44:fcdee150
Name : bill-desk:0 (local to host bill-desk)
Creation Time : Sat Sep 22 19:10:10 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 15627786240 (7451.91 GiB 8001.43 GB)
Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : 8c628aed:802a5dc8:9d8a8910:9794ec02
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Mar 2 17:41:32 2020
Bad Block Log : 512 entries available at offset 40 sectors - bad blocks present.
Checksum : 7b89f1e6 - correct
Events : 10749
Layout : left-symmetric
Chunk Size : 64K
Device Role : spare
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)