qemu-devel.nongnu.org archive mirror
From: Fam Zheng <famz@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: atheurer@redhat.com, Kevin Wolf <kwolf@redhat.com>,
	qemu-block@nongnu.org, psuriset@redhat.com,
	qemu-devel@nongnu.org, Ming Lei <tom.leiming@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] virtio-blk: use blk_io_plug/unplug for Linux AIO batching
Date: Tue, 21 Jul 2015 15:37:43 +0800	[thread overview]
Message-ID: <20150721073743.GA4364@ad.nay.redhat.com> (raw)
In-Reply-To: <1437407656-26726-1-git-send-email-stefanha@redhat.com>

On Mon, 07/20 16:54, Stefan Hajnoczi wrote:
> The raw-posix block driver implements Linux AIO batching so multiple
> requests can be submitted with a single io_submit(2) system call.
> Batching is currently only used by virtio-scsi and
> virtio-blk-data-plane.
> 
> Enable batching for regular virtio-blk so the number of io_submit(2)
> system calls is reduced for workloads with queue depth > 1.
> 
> In 4KB random read performance tests with queue depth 32, the CPU
> utilization on the host is reduced by 9.4%.  The fio job is as follows:
> 
>   [global]
>   bs=4k
>   ioengine=libaio
>   iodepth=32
>   direct=1
>   sync=0
>   time_based=1
>   runtime=30
>   clocksource=gettimeofday
>   ramp_time=5
> 
>   [job1]
>   rw=randread
>   filename=/dev/vdb
>   size=4096M
>   write_bw_log=fio
>   write_iops_log=fio
>   write_lat_log=fio
>   log_avg_msec=1000
> 
> This benchmark was run on a raw image on LVM.  The disk was an SSD
> drive and -drive cache=none,aio=native was used.
> 
> Tested-by: Pradeep Surisetty <psuriset@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/virtio-blk.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 6aefda4..a2137c8 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -600,6 +600,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>          return;
>      }
>  
> +    blk_io_plug(s->blk);
> +
>      while ((req = virtio_blk_get_request(s))) {
>          virtio_blk_handle_request(req, &mrb);
>      }
> @@ -607,6 +609,8 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>      if (mrb.num_reqs) {
>          virtio_blk_submit_multireq(s->blk, &mrb);
>      }
> +
> +    blk_io_unplug(s->blk);
>  }
>  
>  static void virtio_blk_dma_restart_bh(void *opaque)
> -- 
> 2.4.3
> 
> 

Reviewed-by: Fam Zheng <famz@redhat.com>


Thread overview: 3+ messages
2015-07-20 15:54 [Qemu-devel] [PATCH] virtio-blk: use blk_io_plug/unplug for Linux AIO batching Stefan Hajnoczi
2015-07-21  7:37 ` Fam Zheng [this message]
2015-09-08 13:48 ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
