From: Roman Mamedov <rm@romanrm.net>
To: David T-G <davidtg-robot@justpickone.org>
Cc: Linux RAID list <linux-raid@vger.kernel.org>
Subject: Re: how do i fix these RAID5 arrays?
Date: Thu, 24 Nov 2022 03:28:21 +0500
Message-ID: <20221124032821.628cd042@nvm>
In-Reply-To: <20221123220736.GD19721@jpo>
On Wed, 23 Nov 2022 22:07:36 +0000
David T-G <davidtg-robot@justpickone.org> wrote:
> diskfarm:~ # mdadm -D /dev/md50
> /dev/md50:
> Version : 1.2
> Creation Time : Thu Nov 4 00:56:36 2021
> Raid Level : raid0
> Array Size : 19526301696 (18.19 TiB 19.99 TB)
> Raid Devices : 6
> Total Devices : 6
> Persistence : Superblock is persistent
>
> Update Time : Thu Nov 4 00:56:36 2021
> State : clean
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : -unknown-
> Chunk Size : 512K
>
> Consistency Policy : none
>
> Name : diskfarm:10T (local to host diskfarm)
> UUID : cccbe073:d92c6ecd:77ba5c46:5db6b3f0
> Events : 0
>
> Number Major Minor RaidDevice State
> 0 9 51 0 active sync /dev/md/51
> 1 9 52 1 active sync /dev/md/52
> 2 9 53 2 active sync /dev/md/53
> 3 9 54 3 active sync /dev/md/54
> 4 9 55 4 active sync /dev/md/55
> 5 9 56 5 active sync /dev/md/56
It feels like you haven't thought this through entirely. Sequential writes to
this RAID0 array will alternate across all member arrays, and since those are
not made of independent disks, but are instead "vertical" across partitions on
the same disks, the result will be a crazy seek load: the first 512K is written
to the array of the *51 partitions, the second 512K goes to *52, then to *53,
and so on, effectively requiring a full stroke of each drive's head across the
entire surface for every 3 *megabytes* written (6 members x 512K chunk).
mdraid in the "linear" mode, or LVM with one large LV across all PVs (which
are the individual RAID5 arrays), or multi-device Btrfs using "single" profile
for data, all of those would avoid the described effect.
But I should clarify that the entire idea of splitting drives like this seems
questionable to begin with. Drives more often fail entirely, not in part, so
you will not save any time on rebuilds; and the write-intent "bitmap" already
protects you against full rebuilds after hiccups such as a power cut. And even
if a drive did fail only in part, then in your current setup, or in any of the
alternatives I mentioned above, losing even one of these RAID5s would mean a
complete loss of data anyway. Not to mention that what you have seems like an
insane amount of complexity.
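
(As an aside: if any of the member RAID5s turns out not to have a write-intent
bitmap, one can be added to a live array, usually something like
"mdadm --grow --bitmap=internal /dev/md51", repeated for each array.)
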
To summarize, maybe it's better to blow the entire thing away and go back to
the drawing board, while it's not too late? :)
> diskfarm:~ # mdadm -D /dev/md5[13456] | egrep '^/dev|active|removed'
> /dev/md51:
> 0 259 9 0 active sync /dev/sdb51
> 1 259 2 1 active sync /dev/sdc51
> 3 259 16 2 active sync /dev/sdd51
> - 0 0 3 removed
> /dev/md53:
> 0 259 11 0 active sync /dev/sdb53
> 1 259 4 1 active sync /dev/sdc53
> 3 259 18 2 active sync /dev/sdd53
> - 0 0 3 removed
> /dev/md54:
> 0 259 12 0 active sync /dev/sdb54
> 1 259 5 1 active sync /dev/sdc54
> 3 259 19 2 active sync /dev/sdd54
> - 0 0 3 removed
> /dev/md55:
> 0 259 13 0 active sync /dev/sdb55
> 1 259 6 1 active sync /dev/sdc55
> 3 259 20 2 active sync /dev/sdd55
> - 0 0 3 removed
> /dev/md56:
> 0 259 14 0 active sync /dev/sdb56
> 1 259 7 1 active sync /dev/sdc56
> 3 259 21 2 active sync /dev/sdd56
> - 0 0 3 removed
>
> that are obviously the sdk (new disk) slice. If md52 were also broken,
> I'd figure that the disk was somehow unplugged, but I don't think I can
> plug in one sixth of a disk and leave the rest unhooked :-) So ... In
> addition to wondering how I got here, how do I remove the "removed" ones
> and then re-add them to build and grow and finalize this?
If you still want to fix it: without dmesg it's hard to say how this could
have happened, but what does

mdadm --re-add /dev/md51 /dev/sdk51

say?
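
(If --re-add is refused, for example because that member's event count has
fallen too far behind, a plain add should still work, at the cost of a full
recovery of that slice; roughly:

mdadm /dev/md51 --add /dev/sdk51

and the same for the other affected arrays with their respective sdk
partitions. I'd check dmesg for any I/O errors on sdk before doing that,
though.)
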
--
With respect,
Roman