From: mwilck@arcor.de
To: neilb@suse.de, linux-raid@vger.kernel.org
Cc: mwilck@arcor.de
Subject: [PATCH 01/10] DDF: ddf_activate_spare: bugfix for 62ff3c40
Date: Tue, 30 Jul 2013 23:18:25 +0200 [thread overview]
Message-ID: <1375219114-5626-2-git-send-email-mwilck@arcor.de> (raw)
In-Reply-To: <51F82D3B.6060104@arcor.de>
From: Martin Wilck <mwilck@arcor.de>
Move the check for good drives into the dl loop; otherwise dl
may be NULL and mdmon may crash.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
---
super-ddf.c | 14 +++++++-------
1 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/super-ddf.c b/super-ddf.c
index 683f969..46262ce 100644
--- a/super-ddf.c
+++ b/super-ddf.c
@@ -4774,13 +4774,6 @@ static struct mdinfo *ddf_activate_spare(struct active_array *a,
/* For each slot, if it is not working, find a spare */
dl = ddf->dlist;
for (i = 0; i < a->info.array.raid_disks; i++) {
- be16 state = ddf->phys->entries[dl->pdnum].state;
- if (be16_and(state,
- cpu_to_be16(DDF_Failed|DDF_Missing)) ||
- !be16_and(state,
- cpu_to_be16(DDF_Online)))
- continue;
-
for (d = a->info.devs ; d ; d = d->next)
if (d->disk.raid_disk == i)
break;
@@ -4798,6 +4791,13 @@ static struct mdinfo *ddf_activate_spare(struct active_array *a,
int is_dedicated = 0;
struct extent *ex;
unsigned int j;
+ be16 state = ddf->phys->entries[dl->pdnum].state;
+ if (be16_and(state,
+ cpu_to_be16(DDF_Failed|DDF_Missing)) ||
+ !be16_and(state,
+ cpu_to_be16(DDF_Online)))
+ continue;
+
/* If in this array, skip */
for (d2 = a->info.devs ; d2 ; d2 = d2->next)
if (d2->state_fd >= 0 &&
--
1.7.1
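For context, the pattern being fixed can be sketched as follows. This is a minimal, hypothetical simplification (the struct, field, and function names below are illustrative stand-ins, not the real super-ddf.c types): the state of a list element must only be inspected inside the loop that walks the list, where the loop condition guarantees the pointer is non-NULL.

```c
#include <stddef.h>

/* Hypothetical, simplified stand-in for mdadm's dl list; 'failed'
 * plays the role of the DDF_Failed/DDF_Missing state bits. */
struct disk {
    int failed;              /* nonzero: skip this disk as a spare */
    struct disk *next;
};

/* Before the fix, the state check ran in an outer loop where the list
 * pointer was not yet validated, so a short list meant dereferencing
 * NULL. The safe shape: test each candidate's state only while the
 * loop condition has already established dl != NULL. */
struct disk *first_usable(struct disk *list)
{
    for (struct disk *dl = list; dl != NULL; dl = dl->next) {
        if (dl->failed)      /* safe: dl is non-NULL here by the loop test */
            continue;
        return dl;
    }
    return NULL;             /* no usable disk; caller must handle this */
}
```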
Thread overview: 22+ messages
2013-07-26 20:58 Suspicious test failure - mdmon misses recovery events on loop devices Martin Wilck
2013-07-29 6:55 ` NeilBrown
2013-07-29 20:39 ` Martin Wilck
2013-07-29 20:42 ` Martin Wilck
2013-07-30 0:42 ` NeilBrown
2013-07-30 21:16 ` Martin Wilck
2013-07-30 21:18 ` [PATCH 00/10] Two bug fixes and a lot of debug code mwilck
2013-07-31 3:10 ` NeilBrown
2013-07-30 21:18 ` mwilck [this message]
2013-07-30 21:18 ` [PATCH 02/10] DDF: log disk status changes more nicely mwilck
2013-07-30 21:18 ` [PATCH 03/10] DDF: ddf_process_update: log offsets for conf changes mwilck
2013-07-30 21:18 ` [PATCH 04/10] DDF: load_ddf_header: more error logging mwilck
2013-07-30 21:18 ` [PATCH 05/10] DDF: ddf_set_disk: add some debug messages mwilck
2013-07-30 21:18 ` [PATCH 06/10] monitor: read_and_act: log status when called mwilck
2013-07-31 2:59 ` NeilBrown
2013-07-31 5:28 ` Martin Wilck
2013-07-30 21:18 ` [PATCH 07/10] mdmon: wait_and_act: fix debug message for SIGUSR1 mwilck
2013-07-30 21:18 ` [PATCH 08/10] mdmon: manage_member: debug messages for array state mwilck
2013-07-30 21:18 ` [PATCH 09/10] mdmon: manage_member: fix race condition during slow meta data writes mwilck
2013-07-30 21:18 ` [PATCH 10/10] tests/10ddf-create-fail-rebuild: new unit test for DDF mwilck
2013-07-31 5:36 ` [PATCH] tests/env-ddf-template: helper for new unit test mwilck
2013-07-31 6:49 ` NeilBrown