qemu-devel.nongnu.org archive mirror
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Tim Smith <tim.smith@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour
Date: Fri, 2 Nov 2018 11:14:28 +0000
Message-ID: <98e77c8dca544f50b831175d50b6d641@AMSPEX02CL03.citrite.net>
In-Reply-To: <154115285942.11300.11718576813181760505.stgit@dhcp-3-135.uk.xensource.com>

> -----Original Message-----
> From: Tim Smith [mailto:tim.smith@citrix.com]
> Sent: 02 November 2018 10:01
> To: xen-devel@lists.xenproject.org; qemu-devel@nongnu.org; qemu-block@nongnu.org
> Cc: Anthony Perard <anthony.perard@citrix.com>; Kevin Wolf <kwolf@redhat.com>;
> Paul Durrant <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Max Reitz <mreitz@redhat.com>
> Subject: [PATCH 1/3] Improve xen_disk batching behaviour
> 
> When I/O consists of many small requests, performance is improved by
> batching them together in a single io_submit() call. When there are
> relatively few requests, the extra overhead is not worth it. This
> patch introduces a check that starts batching I/O requests via
> blk_io_plug()/blk_io_unplug() in an amount proportional to the number
> that were already in flight at the time we started reading the ring.
> 
> Signed-off-by: Tim Smith <tim.smith@citrix.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
>  hw/block/xen_disk.c |   30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index 36eff94f84..cb2881b7e6 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -101,6 +101,9 @@ struct XenBlkDev {
>      AioContext          *ctx;
>  };
> 
> +/* Threshold of in-flight requests above which we will start using
> + * blk_io_plug()/blk_io_unplug() to batch requests */
> +#define IO_PLUG_THRESHOLD 1
>  /* ------------------------------------------------------------- */
> 
>  static void ioreq_reset(struct ioreq *ioreq)
> @@ -542,6 +545,8 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
>  {
>      RING_IDX rc, rp;
>      struct ioreq *ioreq;
> +    int inflight_atstart = blkdev->requests_inflight;
> +    int batched = 0;
> 
>      blkdev->more_work = 0;
> 
> @@ -550,6 +555,16 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
>      xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> 
>      blk_send_response_all(blkdev);
> +    /* If there were more than IO_PLUG_THRESHOLD ioreqs in flight
> +     * when we got here, this is an indication that the bottleneck
> +     * is below us, so it's worth beginning to batch up I/O requests
> +     * rather than submitting them immediately. The maximum number
> +     * of requests we're willing to batch is the number already in
> +     * flight, so it can grow up to max_requests when the bottleneck
> +     * is below us. */
> +    if (inflight_atstart > IO_PLUG_THRESHOLD) {
> +        blk_io_plug(blkdev->blk);
> +    }
>      while (rc != rp) {
>          /* pull request from ring */
>          if (RING_REQUEST_CONS_OVERFLOW(&blkdev->rings.common, rc)) {
> @@ -589,7 +604,22 @@ static void blk_handle_requests(struct XenBlkDev *blkdev)
>              continue;
>          }
> 
> +        if (inflight_atstart > IO_PLUG_THRESHOLD &&
> +            batched >= inflight_atstart) {
> +            blk_io_unplug(blkdev->blk);
> +        }
>          ioreq_runio_qemu_aio(ioreq);
> +        if (inflight_atstart > IO_PLUG_THRESHOLD) {
> +            if (batched >= inflight_atstart) {
> +                blk_io_plug(blkdev->blk);
> +                batched = 0;
> +            } else {
> +                batched++;
> +            }
> +        }
> +    }
> +    if (inflight_atstart > IO_PLUG_THRESHOLD) {
> +        blk_io_unplug(blkdev->blk);
>      }
> 
>      if (blkdev->more_work && blkdev->requests_inflight < blkdev->max_requests) {
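
For reference, this is the general usage pattern of the plug/unplug API the
patch relies on: between blk_io_plug() and blk_io_unplug(), backends that
support plugging (e.g. the Linux AIO backend) may defer submission and issue
the queued requests together. A minimal sketch follows; the helper name, the
offsets/qiovs arrays and the completion callback are illustrative, not part
of the patch:

    #include "sysemu/block-backend.h"

    /* Illustrative completion callback. */
    static void read_done(void *opaque, int ret)
    {
        /* handle completion; ret < 0 indicates an error */
    }

    /* Hypothetical helper: submit n reads as one batch. */
    static void submit_batched_reads(BlockBackend *blk, int64_t *offsets,
                                     QEMUIOVector *qiovs, int n)
    {
        int i;

        blk_io_plug(blk);       /* hint the backend to queue, not issue */
        for (i = 0; i < n; i++) {
            blk_aio_preadv(blk, offsets[i], &qiovs[i], 0, read_done, NULL);
        }
        blk_io_unplug(blk);     /* flush the batch, ideally one io_submit() */
    }

The patch applies this pattern conditionally: it plugs only when more than
IO_PLUG_THRESHOLD requests were already in flight, and it unplugs/replugs
every inflight_atstart requests so a batch never exceeds the observed depth.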


Thread overview: 28+ messages

2018-11-02 10:00 [Qemu-devel] [PATCH 0/3] Performance improvements for xen_disk v2 Tim Smith
2018-11-02 10:00 ` [Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour Tim Smith
2018-11-02 11:14   ` Paul Durrant [this message]
2018-11-02 13:53   ` Anthony PERARD
2018-11-02 10:01 ` [Qemu-devel] [PATCH 2/3] Improve xen_disk response latency Tim Smith
2018-11-02 11:14   ` Paul Durrant
2018-11-02 13:53   ` Anthony PERARD
2018-11-02 10:01 ` [Qemu-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk Tim Smith
2018-11-02 11:15   ` Paul Durrant
2018-11-02 13:53   ` Anthony PERARD
2018-11-02 11:04 ` [Qemu-devel] xen_disk qdevification (was: [PATCH 0/3] Performance improvements for xen_disk v2) Kevin Wolf
2018-11-02 11:13   ` Paul Durrant
2018-11-02 12:14     ` Kevin Wolf
2018-11-05 15:57     ` [Qemu-devel] xen_disk qdevification Markus Armbruster
2018-11-05 16:15       ` Paul Durrant
2018-11-08 14:00       ` Paul Durrant
2018-11-08 15:21         ` Kevin Wolf
2018-11-08 15:43           ` Paul Durrant
2018-11-08 16:44             ` Paul Durrant
2018-11-09 10:27               ` Paul Durrant
2018-11-09 10:40                 ` Kevin Wolf
2018-12-12  8:59   ` [Qemu-devel] [Xen-devel] xen_disk qdevification (was: [PATCH 0/3] Performance improvements for xen_disk v2) Olaf Hering
2018-12-12  9:22     ` Paul Durrant
2018-12-12 12:03     ` Kevin Wolf
2018-12-12 12:04     ` [Qemu-devel] [Xen-devel] xen_disk qdevification Markus Armbruster
  -- strict thread matches above, loose matches on Subject: below --
2018-11-02  9:29 [Qemu-devel] [PATCH 0/3] Performance improvements for xen_disk Tim Smith
2018-11-02  9:29 ` [Qemu-devel] [PATCH 1/3] Improve xen_disk batching behaviour Tim Smith
2018-09-07 10:21 Tim Smith
2018-09-07 14:11 ` Paul Durrant
