From: Daniel Stodden <daniel.stodden@citrix.com>
To: "Vincent, Pradeep" <pradeepv@amazon.com>
Cc: Xen <xen-devel@lists.xensource.com>,
Jan Beulich <JBeulich@novell.com>,
"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [PATCH] xen/blkback: Don't let in-flight requests defer pending ones.
Date: Wed, 1 Jun 2011 11:07:20 -0700
Message-ID: <1306951640.13705.4.camel@agari.van.xensource.com>
In-Reply-To: <1306950593.32587.26.camel@ramone>
[-- Attachment #1: Type: text/plain, Size: 3058 bytes --]
I attached the event counting patch. It should make sense, but it was
written against XCP, so it may not apply entirely cleanly for you. If you
really want to play with it, and manage to find more interesting
storage/system/patch combinations than the one quoted below, that'd
probably be really useful.
Daniel
On Wed, 2011-06-01 at 13:49 -0400, Daniel Stodden wrote:
> >
> > I think it would make sense to enhance blkfront design to increment
> > req_prod as soon as it processes an I/O while batching irq generation.
> > When blkfront and blkback are busy processing continuous stream of I/O
> > requests, it would be great if blkfront-blkback pipeline is able to
> > process them without generating unnecessary interrupts while improving I/O
> > latency performance.
>
>
> > Thoughts? Any historical context on why this might
> > be bad?
>
> No historical context that I'm aware of; it's a good idea, and your
> assumptions are imho correct.
>
> The extreme case is a PUSH_REQUESTS_AND_CHECK_NOTIFY after each
> request. Even in this case, the majority of interrupts should be held
> off by looking at req_event. There are certainly variants, but I can
> show you some results for what you're asking, because I happened to try
> that just last week.
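>
> To make that concrete, here's a minimal sketch of the per-request
> variant, assuming the usual blkfront names (struct blkfront_info,
> blkif_queue_request, info->ring, info->irq) and the standard ring
> macros from xen/interface/io/ring.h; an illustration, not a literal
> patch:
>
>   /* assumes <xen/interface/io/ring.h> and <xen/events.h> */
>   static void blkif_queue_and_push(struct blkfront_info *info,
>                                    struct request *req)
>   {
>           int notify;
>
>           /* fill a ring slot and bump req_prod_pvt (elided) */
>           blkif_queue_request(info, req);
>
>           /*
>            * Publish req_prod right away.  'notify' is set only if
>            * the new req_prod crosses the backend's req_event, i.e.
>            * only if the backend actually asked to be woken.  While
>            * the backend is busy it keeps req_event ahead of us, so
>            * most of these per-request pushes raise no interrupt.
>            */
>           RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
>           if (notify)
>                   notify_remote_via_irq(info->irq);
>   }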
>
> Next, consider adding a couple of event counters in the backend. And
> you need the patch above, of course.
>
> req_event: Frontend event received
> rsp_event: Frontend notification sent
> req_again: FINAL_CHECK indicated more_to_do.
> boost-1, order=0:
>
> (This is the unmodified version)
>
> dd if=/dev/zero of=/dev/xvdc bs=1M count=1024
> 1073741824 bytes (1.1 GB) copied, 17.0105 s, 63.1 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.7566 s, 60.5 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.163 s, 62.6 MB/s
>
> rsp_event 6759
> req_event 6753
> req_again 16
>
> boost-2, order=0:
>
> (This was the aggressive one, one PUSH_NOTIFY per ring request).
>
> dd if=/dev/zero of=/dev/xvdc bs=1M count=1024
> 1073741824 bytes (1.1 GB) copied, 17.3208 s, 62.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.4851 s, 61.4 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.7333 s, 60.5 MB/s
>
> rsp_event 7122
> req_event 7141
> req_again 5497
>
>
> So the result is that even in the most aggressive case the event load
> will typically increase only moderately. Instead, the restored outer
> loop in the dispatcher just starts to play out (note req_again jumping
> from 16 to 5497).
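>
> For reference, that outer loop looks roughly like this, simplified
> from blkback's do_block_io_op (request parsing, overflow and error
> handling elided).  RING_FINAL_CHECK_FOR_REQUESTS re-arms req_event
> before the final emptiness test, so requests the frontend posts while
> we're still dispatching are picked up without another interrupt:
>
>   static int do_block_io_op(blkif_t *blkif)
>   {
>           union blkif_back_rings *blk_rings = &blkif->blk_rings;
>           RING_IDX rc, rp;
>           int more_to_do;
>
>           do {
>                   rc = blk_rings->common.req_cons;
>                   rp = blk_rings->common.sring->req_prod;
>                   rmb(); /* see req_prod before request contents */
>
>                   while (rc != rp) {
>                           /* fetch and dispatch one request (elided) */
>                           blk_rings->common.req_cons = ++rc;
>                   }
>
>                   /* re-arm req_event, then do a race-free re-check */
>                   RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common,
>                                                 more_to_do);
>           } while (more_to_do);
>
>           return more_to_do;
>   }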
>
> I'm not proposing this as the final solution; we might choose to be
> more careful and limit event emission to some stride instead (see the
> sketch below).
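>
> A stride could look like the following sketch; NOTIFY_STRIDE and the
> batch_done flag are hypothetical knobs, not existing blkfront code.
> The idea is to publish req_prod only every few requests, or when the
> current batch ends, and let req_event suppress the rest as before:
>
>   #define NOTIFY_STRIDE 4 /* hypothetical tunable */
>
>   static void blkif_push_stride(struct blkfront_info *info,
>                                 bool batch_done)
>   {
>           RING_IDX unpushed = info->ring.req_prod_pvt -
>                               info->ring.sring->req_prod;
>           int notify;
>
>           if (!batch_done && unpushed < NOTIFY_STRIDE)
>                   return; /* keep accumulating */
>
>           RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->ring, notify);
>           if (notify)
>                   notify_remote_via_irq(info->irq);
>   }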
>
> Don't be confused by the throughput values not going up; the problem I
> had with the array (iSCSI in this case) just turned out to be elsewhere.
> I'm pretty convinced there are workload/storage configurations which
> would benefit from this.
>
> In the case at hand, increasing the ring size was way more productive,
> at which point the queue depth multiplies as well. And I currently
> expect that the longer the ring gets, the more urgent the issue you
> describe will become.
>
> But I also think it needs some experiments and wants to be backed by
> numbers.
>
> Daniel
[-- Attachment #2: blkback-final-check-stats.diff --]
[-- Type: text/x-patch, Size: 2262 bytes --]
# HG changeset patch
# Parent 9c6a7fdebc7fcb76421a8f053be7f5dfcf30c741
diff -r 9c6a7fdebc7f drivers/xen/blkback/blkback.c
--- a/drivers/xen/blkback/blkback.c Fri May 27 15:32:58 2011 -0700
+++ b/drivers/xen/blkback/blkback.c Fri May 27 15:33:39 2011 -0700
@@ -543,6 +543,8 @@
if (blkif->xenblkd)
wake_up_process(blkif->xenblkd);
+ blkif->st_req_event++;
+
return IRQ_HANDLED;
}
@@ -641,6 +643,8 @@
break;
RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common, more_to_do);
+ if (more_to_do)
+ blkif->st_req_again++;
} while (more_to_do);
return more_to_do;
@@ -862,8 +866,10 @@
blk_rings->common.rsp_prod_pvt++;
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blk_rings->common, notify);
spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
- if (notify)
+ if (notify) {
notify_remote_via_irq(blkif->irq);
+ blkif->st_rsp_event++;
+ }
}
static int __init blkif_init(void)
diff -r 9c6a7fdebc7f drivers/xen/blkback/common.h
--- a/drivers/xen/blkback/common.h Fri May 27 15:32:58 2011 -0700
+++ b/drivers/xen/blkback/common.h Fri May 27 15:33:39 2011 -0700
@@ -117,6 +117,10 @@
s64 st_wr_sum_usecs;
s64 st_wr_max_usecs;
+ u64 st_req_event;
+ u64 st_req_again;
+ u64 st_rsp_event;
+
unsigned int nr_shared_pages;
grant_handle_t shmem_handle[BLKIF_MAX_RING_PAGES];
} blkif_t;
diff -r 9c6a7fdebc7f drivers/xen/blkback/xenbus.c
--- a/drivers/xen/blkback/xenbus.c Fri May 27 15:32:58 2011 -0700
+++ b/drivers/xen/blkback/xenbus.c Fri May 27 15:33:39 2011 -0700
@@ -337,6 +337,9 @@
VBD_SHOW(br_req, "%d\n", be->blkif->st_br_req);
VBD_SHOW(rd_sect, "%d\n", be->blkif->st_rd_sect);
VBD_SHOW(wr_sect, "%d\n", be->blkif->st_wr_sect);
+VBD_SHOW(req_event, "%llu\n", be->blkif->st_req_event);
+VBD_SHOW(rsp_event, "%llu\n", be->blkif->st_rsp_event);
+VBD_SHOW(req_again, "%llu\n", be->blkif->st_req_again);
VBD_SHOW_AVG(rd_usecs, be->blkif->st_rd_cnt,
be->blkif->st_rd_sum_usecs, be->blkif->st_rd_max_usecs);
VBD_SHOW_AVG(wr_usecs, be->blkif->st_wr_cnt,
@@ -351,6 +354,9 @@
&dev_attr_wr_sect.attr,
&dev_attr_rd_usecs.attr,
&dev_attr_wr_usecs.attr,
+ &dev_attr_req_event.attr,
+ &dev_attr_rsp_event.attr,
+ &dev_attr_req_again.attr,
NULL
};
[-- Attachment #3: Type: text/plain, Size: 138 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel