From: Dan Williams <dan.j.williams@intel.com>
To: neilb@suse.de
Cc: linux-raid@vger.kernel.org
Subject: [PATCH 1/3] md/raid5: initialize conf->device_lock earlier
Date: Thu, 01 Oct 2009 18:18:26 -0700
Message-ID: <20091002011825.24095.44406.stgit@dwillia2-linux.ch.intel.com>
In-Reply-To: <20091002011747.24095.70355.stgit@dwillia2-linux.ch.intel.com>

Deallocating a raid5_conf_t structure requires taking 'device_lock'.
Ensure it is initialized before it is used, i.e. initialize the lock
before attempting any further initializations that might fail.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
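
For reviewers, a minimal sketch of the ordering hazard described above.
The demo_* names are illustrative stand-ins, not the actual raid5.c
symbols: demo_alloc_buffers() plays the role of any initialization step
that can fail (e.g. raid5_alloc_percpu() in the hunks below), and the
abort path models the deallocation that, per the changelog, must take
'device_lock'.

#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for raid5_conf_t. */
typedef struct {
	spinlock_t device_lock;
	/* ... stripe lists, counters, per-cpu buffers, ... */
} demo_conf_t;

/* Hypothetical fallible step (raid5_alloc_percpu() in the real code). */
static int demo_alloc_buffers(demo_conf_t *conf)
{
	return 0;	/* pretend success; a real allocator may fail */
}

static demo_conf_t *demo_setup_conf(void)
{
	demo_conf_t *conf = kzalloc(sizeof(*conf), GFP_KERNEL);

	if (!conf)
		return NULL;

	/*
	 * Initialize the lock (and other infallible state) right after
	 * the allocation, so every later failure can share one teardown.
	 */
	spin_lock_init(&conf->device_lock);

	if (demo_alloc_buffers(conf))
		goto abort;

	return conf;

abort:
	/*
	 * Teardown takes device_lock.  With the old ordering the lock
	 * could still be uninitialized here: undefined behavior, and a
	 * lockdep splat with CONFIG_DEBUG_SPINLOCK.
	 */
	spin_lock_irq(&conf->device_lock);
	/* ... release anything already attached to conf ... */
	spin_unlock_irq(&conf->device_lock);
	kfree(conf);
	return NULL;
}
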
 drivers/md/raid5.c | 25 ++++++++++++-------------
 1 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9482980..c21cc50 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4722,6 +4722,18 @@ static raid5_conf_t *setup_conf(mddev_t *mddev)
 	conf = kzalloc(sizeof(raid5_conf_t), GFP_KERNEL);
 	if (conf == NULL)
 		goto abort;
+	spin_lock_init(&conf->device_lock);
+	init_waitqueue_head(&conf->wait_for_stripe);
+	init_waitqueue_head(&conf->wait_for_overlap);
+	INIT_LIST_HEAD(&conf->handle_list);
+	INIT_LIST_HEAD(&conf->hold_list);
+	INIT_LIST_HEAD(&conf->delayed_list);
+	INIT_LIST_HEAD(&conf->bitmap_list);
+	INIT_LIST_HEAD(&conf->inactive_list);
+	atomic_set(&conf->active_stripes, 0);
+	atomic_set(&conf->preread_active_stripes, 0);
+	atomic_set(&conf->active_aligned_reads, 0);
+	conf->bypass_threshold = BYPASS_THRESHOLD;
 	conf->raid_disks = mddev->raid_disks;
 	conf->scribble_len = scribble_len(conf->raid_disks);
 
@@ -4744,19 +4756,6 @@ static raid5_conf_t *setup_conf(mddev_t *mddev)
 	if (raid5_alloc_percpu(conf) != 0)
 		goto abort;
 
-	spin_lock_init(&conf->device_lock);
-	init_waitqueue_head(&conf->wait_for_stripe);
-	init_waitqueue_head(&conf->wait_for_overlap);
-	INIT_LIST_HEAD(&conf->handle_list);
-	INIT_LIST_HEAD(&conf->hold_list);
-	INIT_LIST_HEAD(&conf->delayed_list);
-	INIT_LIST_HEAD(&conf->bitmap_list);
-	INIT_LIST_HEAD(&conf->inactive_list);
-	atomic_set(&conf->active_stripes, 0);
-	atomic_set(&conf->preread_active_stripes, 0);
-	atomic_set(&conf->active_aligned_reads, 0);
-	conf->bypass_threshold = BYPASS_THRESHOLD;
-
 	pr_debug("raid5: run(%s) called.\n", mdname(mddev));
 
 	list_for_each_entry(rdev, &mddev->disks, same_set) {