From: Jens Axboe <axboe@kernel.dk>
To: Ming Lei <ming.lei@canonical.com>, linux-kernel@vger.kernel.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
linux-api@vger.kernel.org,
virtualization@lists.linux-foundation.org,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
Date: Fri, 13 Jun 2014 11:35:15 -0600 [thread overview]
Message-ID: <539B3653.6040408@kernel.dk> (raw)
In-Reply-To: <1402680562-8328-1-git-send-email-ming.lei@canonical.com>
On 06/13/2014 11:29 AM, Ming Lei wrote:
> Hi,
>
> This patch set tries to support multiple virtual queues (multi-vq) in one
> virtio-blk device, and maps each virtual queue (vq) to a blk-mq
> hardware queue.
>
> With this approach, both the scalability and performance of the
> virtio-blk device are improved.
>
> To verify the improvement, I implemented virtio-blk multi-vq on top of
> qemu's dataplane feature; both handling host notifications
> from each vq and processing host I/O are still kept in the per-device
> iothread context. The changes are based on the qemu v2.0.0 release and
> can be accessed from the tree below:
>
> git://kernel.ubuntu.com/ming/qemu.git #v2.0.0-virtblk-dataplane-mq
>
> To enable the multi-vq feature, 'num_queues=N' needs to be added to
> '-device virtio-blk-pci ...' on the qemu command line, and it is
> suggested to pass 'vectors=N+1' to keep one MSI irq vector per vq. The
> feature depends on x-data-plane.
>
> Fio(libaio, randread, iodepth=64, bs=4K, jobs=N) is run inside VM to
> verify the improvement.
>
> I just created a small quad-core VM and ran fio inside it, with
> num_queues of the virtio-blk device set to 2, but the
> improvement still looks obvious.
>
> 1) about scalability
> - without multi-vq feature
> -- jobs=2, throughput: 145K iops
> -- jobs=4, throughput: 100K iops
> - with multi-vq feature
> -- jobs=2, throughput: 186K iops
> -- jobs=4, throughput: 199K iops
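The setup described in the cover letter can be sketched roughly as follows. This is an illustrative reconstruction, not the exact command used: the disk image name, memory size, and drive id are placeholders, and the device properties assume the qemu v2.0.0-era syntax for 'num_queues', 'vectors', and 'x-data-plane'.

```shell
# Hypothetical qemu invocation for a quad-core VM with a 2-vq virtio-blk
# device, following the cover letter's suggestion of vectors = N + 1
# (one MSI irq vector per vq, plus one extra).
N=2
VECTORS=$((N + 1))

# Print the device/drive flags rather than launching qemu here; in a real
# run these would be part of a full qemu-system-x86_64 command line.
echo "-smp 4" \
     "-drive if=none,id=drive0,file=disk.img,format=raw,cache=none,aio=native" \
     "-device virtio-blk-pci,drive=drive0,num_queues=$N,vectors=$VECTORS,x-data-plane=on"
```

The in-guest benchmark matching the quoted parameters would then look something like 'fio --name=randread --ioengine=libaio --rw=randread --bs=4k --iodepth=64 --numjobs=2 --filename=/dev/vda --direct=1' (device path assumed).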
Awesome! I was hoping someone would do that, and make virtio-blk take
full advantage of blk-mq.
--
Jens Axboe
prev parent reply other threads:[~2014-06-13 17:35 UTC|newest]
Thread overview: 14+ messages
2014-06-13 17:29 [RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk Ming Lei
2014-06-13 17:29 ` [RFC PATCH 1/2] include/uapi/linux/virtio_blk.h: introduce feature of VIRTIO_BLK_F_MQ Ming Lei
2014-06-16 12:42 ` Rusty Russell
2014-06-13 17:29 ` [RFC PATCH 2/2] block: virtio-blk: support multi virt queues per virtio-blk device Ming Lei
2014-06-16 12:47 ` Rusty Russell
2014-06-17 2:40 ` Stefan Hajnoczi
2014-06-17 15:50 ` Ming Lei
[not found] ` <CACVXFVO0SybUKArQbKkj2+hbk6q_gt3SBooNNmw1jErRR8d07A@mail.gmail.com>
2014-06-17 15:53 ` Paolo Bonzini
2014-06-17 16:00 ` Ming Lei
2014-06-17 16:34 ` Paolo Bonzini
2014-06-18 4:04 ` Ming Lei
[not found] ` <CACVXFVMkV5R5GUjM7Y-U54SA-ENZZ26dbbv4K2_OYPdmmiuUqg@mail.gmail.com>
2015-12-14 10:31 ` Paolo Bonzini
[not found] ` <566E9A7E.3030203@redhat.com>
2015-12-15 1:26 ` Ming Lei
2014-06-13 17:35 ` Jens Axboe [this message]