From: Roman Mamedov <rm@romanrm.net>
To: Joel Parthemore <joel@parthemores.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: request for help on IMSM-metadata RAID-5 array
Date: Sat, 23 Sep 2023 20:35:12 +0500
Message-ID: <20230923203512.581fcd7d@nvm>
In-Reply-To: <4095b51a-1038-2fd0-6503-64c0daa913d8@parthemores.com>
On Sat, 23 Sep 2023 17:18:00 +0200
Joel Parthemore <joel@parthemores.com> wrote:
> I didn't want to try that again until I had confirmation that the
> out-of-sync wouldn't (or shouldn't) be an issue. (I had tried it once
> before, but the system had somehow swapped /dev/md126 and /dev/md127 so
> that /dev/md126 became the container and /dev/md127 the RAID-5 array,
> which confused me. So I stopped experimenting further until I had a
> chance to write to the list.)
>
> The array is assembled read only, and this time both /dev/md126 and
> /dev/md127 are looking like I expect them to. I started dd to make a
> backup image using dd if=/dev/md126 of=/dev/sdc bs=64K
> conv=noerror,sync. (The EXT4 file store on the 2TB RAID-5 array is about
> 900GB full.) At first, it was running most of the time and just
> occasionally in uninterruptible sleep, but the periods of
> uninterruptible sleep quickly started getting longer. Now it seems to be
> spending most but not quite all of its time in uninterruptible sleep. Is
> this some kind of race condition? Anyway, I'll leave it running
> overnight to see if it completes.
>
> Accessing the RAID array definitely isn't locking things up this time. I
> can go in and look at the partition table, for example, no problem.
> Access is awfully slow, but I assume that's because of whatever dd is or
> isn't doing.
>
> By the way, I'm using kernel 6.5.3, which isn't the latest (that would
> be 6.5.5) but is close.
Maybe it's an HDD issue: one of the drives did have some unreadable sectors in
the past, although the firmware has not decided to do anything about them, such
as reallocating them and recording that in SMART.
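As a side check (a hypothetical sketch, assuming the smartmontools package is installed), the reallocated/pending sector counters can be filtered out of the `smartctl -A` output. The awk filter below runs against a captured sample line so it can be tried anywhere; on the real system you would pipe the live smartctl output instead:

```shell
# Sketch, not from the original mail: filter the relevant SMART attributes.
# On the real system, replace the sample text with `smartctl -A /dev/sdX`
# (from smartmontools) piped into the same awk filter.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8'

# Print attribute name and raw value (last field) for the sector counters.
printf '%s\n' "$sample" | awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ { print $2, $NF }'
```

A nonzero Current_Pending_Sector raw value would line up with the unreadable-sectors history mentioned above.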
Check whether one of the drives is holding things up, with a command like:

  iostat -x 2 /dev/sd?

If you see 100% next to one of the drives, and much less for the others, that
one might be the culprit.
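To make the busy drive stand out, the last column of `iostat -x` (%util) can be pulled out per device. A hypothetical sketch with made-up sample output standing in for the live `iostat -x 2 /dev/sd?` data:

```shell
# Sketch, not from the original mail: print device name and %util (the
# last column of `iostat -x`). The sample text below is fabricated for
# illustration; pipe real iostat output into the same filter instead.
sample='Device            r/s     w/s     rkB/s     wkB/s   %util
sda              12.00    3.00    768.00     96.00    3.20
sdb               0.50    0.10     32.00      4.00   99.80'

# Skip the header row, then print name and %util for each device.
printf '%s\n' "$sample" | awk 'NR > 1 { print $1, $NF }'
```

In this made-up sample, sdb pinned near 100% while sda idles would mark sdb as the drive to suspect.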
--
With respect,
Roman
Thread overview: 14+ messages
2023-09-23 10:54 request for help on IMSM-metadata RAID-5 array Joel Parthemore
2023-09-23 11:24 ` Roman Mamedov
2023-09-23 15:18 ` Joel Parthemore
2023-09-23 15:35 ` Roman Mamedov [this message]
2023-09-23 15:45 ` Joel Parthemore
2023-09-23 18:49 ` Joel Parthemore
2023-09-25 1:43 ` Yu Kuai
2023-09-25 15:57 ` Joel Parthemore
2023-09-26 1:10 ` Yu Kuai
2023-09-29 19:44 ` Joel Parthemore
[not found] ` <a0b8a693-5d9c-d354-5afc-4500b78a983e@huaweicloud.com>
2023-10-05 7:28 ` Joel Parthemore
2023-09-25 9:44 ` Mariusz Tkaczyk
2023-09-25 15:52 ` Joel Parthemore
2023-09-25 16:43 ` Mariusz Tkaczyk