From: Neil Brown <neilb@suse.de>
To: "Kwolek, Adam" <adam.kwolek@intel.com>
Cc: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
"Williams, Dan J" <dan.j.williams@intel.com>,
"Ciechanowski, Ed" <ed.ciechanowski@intel.com>
Subject: Re: [md PATCH 4/5] md: Fix: BIO I/O Error during reshape for external metadata
Date: Wed, 16 Jun 2010 15:02:37 +1000 [thread overview]
Message-ID: <20100616150237.0009d6d1@notabene.brown> (raw)
In-Reply-To: <905EDD02F158D948B186911EB64DB3D11EECE8A6@irsmsx503.ger.corp.intel.com>
On Wed, 9 Jun 2010 15:22:27 +0100
"Kwolek, Adam" <adam.kwolek@intel.com> wrote:
> (md: Online Capacity Expansion for IMSM)
> When the sum of added disks and degraded disks is greater than max_degraded, reshape decides that the stripe is broken, so a BIO I/O error results.
> Added disks contain no data yet, so they have no impact on volume degradation. We therefore have to make sure that all disks used for the reshape have the In_sync flag set.
> We have to do this even for the disks without data.
Again, I'm not really following you.
I agree that devices that are added to make up numbers for a reshape should be
marked In_sync, but that is already happening, roughly in the middle of
raid5_start_reshape.
Again, can you give me a specific situation where the current code does the
wrong thing?
Thanks,
NeilBrown
> ---
>
> drivers/md/raid5.c | 17 ++++++++++++++++-
> 1 files changed, 16 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index dc25a32..cb74045 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -5468,7 +5468,7 @@ static int raid5_start_reshape(mddev_t *mddev)
> /* Add some new drives, as many as will fit.
> * We know there are enough to make the newly sized array work.
> */
> - list_for_each_entry(rdev, &mddev->disks, same_set)
> + list_for_each_entry(rdev, &mddev->disks, same_set) {
> if (rdev->raid_disk < 0 &&
> !test_bit(Faulty, &rdev->flags)) {
> if (raid5_add_disk(mddev, rdev) == 0) {
> @@ -5488,6 +5488,21 @@ static int raid5_start_reshape(mddev_t *mddev)
> } else
> break;
> }
> + /* if there is Online Capacity Expansion
> + * on degraded array for external meta
> + */
> + if (mddev->external &&
> + (conf->raid_disks <= (disk_count + conf->max_degraded))) {
> + /* check if not spare */
> + if (!(rdev->raid_disk < 0 &&
> + !test_bit(Faulty, &rdev->flags)))
> + /* make sure that all disks,
> + * even those added previously,
> + * have the In_sync flag set
> + */
> + set_bit(In_sync, &rdev->flags);
> + }
> + }
>
> /* When a reshape changes the number of devices, ->degraded
> * is measured against the large of the pre and post number of
>
Thread overview: 4+ messages
2010-06-09 14:22 [md PATCH 4/5] md: Fix: BIO I/O Error during reshape for external metadata Kwolek, Adam
2010-06-16 5:02 ` Neil Brown [this message]
2010-06-18 8:48 ` Kwolek, Adam
2010-06-29 2:09 ` Neil Brown