From: Dongli Zhang <dongli.zhang@oracle.com>
To: Wei Li <wei.d.li@oracle.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Following up questions related to QEMU and I/O Thread
Date: Tue, 16 Apr 2019 07:23:38 +0800 [thread overview]
Message-ID: <898ef1d4-bfa2-9952-8ceb-f1282b85e29c@oracle.com> (raw)
In-Reply-To: <60340EAF-4C85-4798-9999-34F1A37E2086@oracle.com>
On 4/16/19 1:34 AM, Wei Li wrote:
> Hi @Paolo Bonzini & @Stefan Hajnoczi,
>
> Could you please confirm whether @Paolo Bonzini's multiqueue feature change will benefit virtio-scsi? Thanks!
>
> @Stefan Hajnoczi,
> I also spent some time exploring the virtio-scsi multi-queue feature via the num_queues parameter, as described below. Here is what we found:
>
> 1. Increasing the number of queues from one to the number of vCPUs yields a good IOPS increase.
> 2. Increasing the number of queues (e.g. 8) beyond the number of vCPUs (e.g. 2) yields an even larger IOPS increase.
As mentioned in the link below, when the number of hardware queues is larger than
nr_cpu_ids, the blk-mq layer limits the device to at most nr_cpu_ids queues
(visible under /sys/block/sda/mq/).
That is, with num_queues=4 and only 2 vcpus, there should be only 2 queues
available under /sys/block/sda/mq/.
https://lore.kernel.org/lkml/1553682995-5682-1-git-send-email-dongli.zhang@oracle.com/
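For example, on a guest with 2 vcpus and num_queues=4, I would expect something
like the below (sda is a placeholder device name; illustrative output, not
captured from the setup discussed in this thread):

# ls /sys/block/sda/mq/
0  1
# cat /sys/block/sda/mq/0/cpu_list
0
# cat /sys/block/sda/mq/1/cpu_list
1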
I am just curious how increasing num_queues from 2 to 4 could more than double
the IOPS gain (+11% -> +27%), while there are only 2 vcpus available...
Dongli Zhang
>
> In addition, it seems QEMU can get better IOPS when the attachment uses more queues than the number of vCPUs; how is that possible? Could you please help us better understand this behavior? Thanks a lot!
>
>
> Host CPU Configuration:
> CPU(s): 2
> Thread(s) per core: 2
> Core(s) per socket: 1
> Socket(s): 1
>
> Commands for multi-queue setup (see the disk-attach sketch after this list):
> (QEMU) device_add driver=virtio-scsi-pci num_queues=1 id=test1
> (QEMU) device_add driver=virtio-scsi-pci num_queues=2 id=test2
> (QEMU) device_add driver=virtio-scsi-pci num_queues=4 id=test4
> (QEMU) device_add driver=virtio-scsi-pci num_queues=8 id=test8
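> 
> For reference, a disk would then be attached to one of these controllers
> along these lines (drive0 is a placeholder for a block backend defined
> elsewhere; the exact command is not shown in this thread):
> (QEMU) device_add driver=scsi-hd drive=drive0 bus=test8.0 id=disk0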
>
>
> Result:
>      | 8 Queues | 4 Queues | 2 Queues | Single Queue
> IOPS |     +29% |     +27% |     +11% | Baseline
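> 
> (For context, a random-read fio job of the following shape is a common way
> to generate this kind of IOPS load; this is an illustrative sketch with a
> placeholder device path, not the exact job used for the numbers above:
> fio --name=randread --filename=/dev/sda --direct=1 --rw=randread --bs=4k \
>     --ioengine=libaio --iodepth=32 --numjobs=2 --runtime=60 --time_based)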
>
> Thanks,
> Wei
>
> On 4/5/19, 2:09 PM, "Wei Li" <wei.d.li@oracle.com> wrote:
>
> Thanks Stefan for your quick response!
>
> Hi Paolo,
> Could you please send us a link for the multiqueue feature you are working on, so that we can start getting some details about it?
>
> Thanks again,
> Wei
>
> On 4/1/19, 3:54 AM, "Stefan Hajnoczi" <stefanha@gmail.com> wrote:
>
> On Fri, Mar 29, 2019 at 08:16:36AM -0700, Wei Li wrote:
> > Thanks Stefan for your reply and guidance!
> >
> > We spent some time exploring the multiple-IOThread approach per your feedback. Based on the perf measurement data, we did see some IOPS improvement across multiple volumes, which is great. :)
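> > 
> > (A minimal sketch of such a setup, with placeholder ids -- one IOThread
> > per virtio-scsi controller, each controller backing a different volume:
> > -object iothread,id=iothread0 \
> > -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
> > -object iothread,id=iothread1 \
> > -device virtio-scsi-pci,id=scsi1,iothread=iothread1 )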
> >
> > In addition, IOPS for a single volume is still a bottleneck; it seems the multiqueue block layer feature which Paolo is working on may help improve IOPS for a single volume.
> >
> > @Paolo, @Stefan,
> > Would you mind sharing the multiqueue feature code branch with us, so that we can get a rough idea of the feature and maybe start doing some exploration?
>
> Paolo last worked on this code, so he may be able to send you a link.
>
> Stefan
>
>
>
>
>