From: Yuri Tikhonov <yur@emcraft.com>
To: dan.j.williams@intel.com
Cc: linux-raid@vger.kernel.org, Wolfgang Denk <wd@denx.de>, dzu@denx.de
Subject: md raid acceleration and the async_tx api
Date: Mon, 27 Aug 2007 12:49:48 +0400
Message-ID: <200708271249.48684.yur@emcraft.com>
Hello,
I tested h/w accelerated RAID-5 using a kernel with PAGE_SIZE set to 64KB
and found that the bonnie++ application hangs during the "Re-writing" test.
After some investigation I discovered that the hang occurs because one of
the mpage_end_io_read() calls never happens (these are the callbacks
initiated from the ops_complete_biofill() function).
My low-level ADMA driver (the ppc440spe one) successfully initiated the
ops_complete_biofill() callback, but ops_complete_biofill() itself skipped
calling the bi_end_io() handler of the completed bio (the current dev->read),
because while this bio was being processed another request had arrived at the
sh (the current dev_q->toread). ops_complete_biofill() therefore scheduled
another biofill operation which overwrote the unacknowledged bio (dev->read
in ops_run_biofill()), so the previous dev->read bio was lost completely.
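To make the race easier to follow, here is a minimal user-space model of the
sequence described above (the toy_* types and names are illustrative only,
not the real raid5 structures):

#include <stdio.h>
#include <stddef.h>

struct toy_bio { const char *name; int completed; };

struct toy_dev {
	struct toy_bio *toread;	/* request waiting to be filled */
	struct toy_bio *read;	/* request whose fill is in flight */
};

/* start a biofill: move ->toread to ->read (as ops_run_biofill() does) */
static void run_biofill(struct toy_dev *dev)
{
	dev->read = dev->toread;	/* any unacknowledged ->read is lost here */
	dev->toread = NULL;
}

/* pre-patch completion: only acknowledge when no new request has arrived */
static void complete_biofill(struct toy_dev *dev)
{
	if (!dev->toread && dev->read) {
		dev->read->completed = 1;	/* stands in for bi_end_io() */
		dev->read = NULL;
	}
	/* otherwise the completed bio is skipped and left in ->read */
}

int main(void)
{
	struct toy_bio bio1 = { "bio1", 0 }, bio2 = { "bio2", 0 };
	struct toy_dev dev = { NULL, NULL };

	dev.toread = &bio1;
	run_biofill(&dev);	/* bio1 is now in flight */

	dev.toread = &bio2;	/* a new read arrives before the ack */
	complete_biofill(&dev);	/* ack skipped because ->toread is set */

	run_biofill(&dev);	/* bio1 is silently replaced by bio2 */
	complete_biofill(&dev);

	/* prints "bio1 completed: 0, bio2 completed: 1" - bio1 never ends */
	printf("bio1 completed: %d, bio2 completed: %d\n",
	       bio1.completed, bio2.completed);
	return 0;
}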
Here is a patch that solves this problem. Perhaps it could be implemented in
a more elegant and efficient way; what are your thoughts?
Regards, Yuri
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 08b4893..7abc96b 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -838,11 +838,24 @@ static void ops_complete_biofill(void *stripe_head_ref)
 		/* acknowledge completion of a biofill operation */
 		/* and check if we need to reply to a read request
 		 */
-		if (test_bit(R5_Wantfill, &dev_q->flags) && !dev_q->toread) {
+		if (test_bit(R5_Wantfill, &dev_q->flags)) {
 			struct bio *rbi, *rbi2;
 			struct r5dev *dev = &sh->dev[i];
-			clear_bit(R5_Wantfill, &dev_q->flags);
+			/* There is a chance that another fill operation
+			 * was scheduled for this dev while we processed
+			 * sh.  In that case do one of the following:
+			 * - if there is no active completed biofill for
+			 *   the dev, go to the next dev, leaving
+			 *   Wantfill set;
+			 * - if there is an active completed biofill for
+			 *   the dev, ack it but leave Wantfill set.
+			 */
+			if (dev_q->toread && !dev->read)
+				continue;
+
+			if (!dev_q->toread)
+				clear_bit(R5_Wantfill, &dev_q->flags);
 			/* The access to dev->read is outside of the
 			 * spin_lock_irq(&conf->device_lock), but is protected