From: Avi Kivity <avi@redhat.com>
To: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Steve Dobbelstein <steved@us.ibm.com>,
Anthony Liguori <aliguori@us.ibm.com>,
kvm@vger.kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, Khoa Huynh <khoa@us.ibm.com>,
Sridhar Samudrala <sri@us.ibm.com>
Subject: [Qemu-devel] Re: [PATCH] virtio: Use ioeventfd for virtqueue notify
Date: Sun, 03 Oct 2010 13:01:59 +0200
Message-ID: <4CA862A7.2080302@redhat.com>
In-Reply-To: <1285855312-11739-1-git-send-email-stefanha@linux.vnet.ibm.com>
On 09/30/2010 04:01 PM, Stefan Hajnoczi wrote:
> Virtqueue notify is currently handled synchronously in userspace virtio.
> This prevents the vcpu from executing guest code while hardware
> emulation code handles the notify.
>
> On systems that support KVM, the ioeventfd mechanism can be used to make
> virtqueue notify a lightweight exit by deferring hardware emulation to
> the iothread and allowing the VM to continue execution. This model is
> similar to how vhost receives virtqueue notifies.
Note that this is a tradeoff. If an idle core is available and the
scheduler places the iothread on that core, then the heavyweight exit is
replaced by a lightweight exit + IPI. If the iothread is co-located
with the vcpu, then we'll take a heavyweight exit in any case.
The first case is very likely if the host cpu is undercommitted and
there is heavy I/O activity. This is a typical subsystem benchmark
scenario (as opposed to a system benchmark like specvirt). My feeling
is that total system throughput will be decreased unless the scheduler
is clever enough to place the iothread and vcpu on the same host cpu
when the system is overcommitted.
We can't balance "feeling" against numbers, especially when we have a
precedent in vhost-net, so I think this should go in. But I think we
should also try to understand the effects of the extra IPIs and
cacheline bouncing that this creates. While virtio was designed to
minimize this, we know it has severe problems in this area.
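(For anyone following along without the kvm headers handy: registration is
a single ioctl on the VM fd. A minimal sketch against the raw KVM API; the
helper name and parameters are illustrative, not the patch's actual qemu
wrapper:)

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    /* Attach an eventfd to the virtio notify pio port, so the guest's outw
     * becomes a lightweight exit that just signals the fd. */
    static int attach_notify_eventfd(int vm_fd, uint64_t pio_addr,
                                     uint16_t vq_index)
    {
        int fd = eventfd(0, 0);
        struct kvm_ioeventfd kick = {
            .datamatch = vq_index,   /* the queue index the guest writes */
            .addr      = pio_addr,   /* the VIRTIO_PCI_QUEUE_NOTIFY port */
            .len       = 2,          /* 16-bit outw */
            .fd        = fd,
            .flags     = KVM_IOEVENTFD_FLAG_DATAMATCH
                         | KVM_IOEVENTFD_FLAG_PIO,
        };

        if (fd < 0) {
            return -1;
        }
        if (ioctl(vm_fd, KVM_IOEVENTFD, &kick) < 0) {
            close(fd);
            return -1;
        }
        return fd;                   /* the iothread polls this */
    }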
> The result of this change is improved performance for userspace virtio
> devices. Virtio-blk throughput increases, especially in multithreaded
> scenarios, and virtio-net transmit throughput increases substantially.
> Full numbers are below.
>
> This patch employs ioeventfd virtqueue notify for all virtio devices.
> Linux kernels before 2.6.34 allow only 6 ioeventfds per VM, and care
> must be taken so that vhost-net, the other ioeventfd user in QEMU, is
> still able to function. On such kernels ioeventfd virtqueue notify will
> not be used.
>
> Khoa Huynh <khoa@us.ibm.com> collected the following data for
> virtio-blk with cache=none,aio=native:
>
> FFSB Test          Threads  Unmodified  Patched
>                             (MB/s)      (MB/s)
> Large file create  1        21.7        21.8
>                    8        101.0       118.0
>                    16       119.0       157.0
>
> Sequential reads   1        21.9        23.2
>                    8        114.0       139.0
>                    16       143.0       178.0
>
> Random reads       1        3.3         3.6
>                    8        23.0        25.4
>                    16       43.3        47.8
>
> Random writes      1        22.2        23.0
>                    8        93.1        111.6
>                    16       110.5       132.0
Impressive numbers. Can you also provide efficiency (bytes per host cpu
second)?
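(Something as crude as sampling qemu's accumulated cpu time from /proc
before and after each run would do. A sketch; the stat field positions are
per proc(5), and it assumes the comm field contains no spaces:)

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    /* Sample before and after the benchmark; efficiency is then
     * bytes_transferred / (after - before). */
    static double cpu_seconds(pid_t pid)
    {
        char path[64];
        unsigned long utime, stime;   /* fields 14 and 15 of stat */
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
        f = fopen(path, "r");
        if (!f) {
            return -1.0;
        }
        /* Skip pid, comm, state and ten more fields to reach utime/stime. */
        if (fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u"
                      " %lu %lu", &utime, &stime) != 2) {
            fclose(f);
            return -1.0;
        }
        fclose(f);
        return (double)(utime + stime) / sysconf(_SC_CLK_TCK);
    }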
How many guest vcpus were used with this? With enough vcpus, there is
also a reduction in cacheline bouncing, since the virtio state in the
host gets to stay on one cpu (especially with aio=native).
> Sridhar Samudrala <sri@us.ibm.com> collected the following data for
> virtio-net with 2.6.36-rc1 on the host and 2.6.34 on the guest.
>
> Guest to Host TCP_STREAM throughput (Mb/sec)
> --------------------------------------------
> Msg Size   vhost-net   virtio-net   virtio-net/ioeventfd
> 65536      12755       6430         7590
> 16384      8499        3084         5764
> 4096       4723        1578         3659
> 1024       1827        981          2060
Even more impressive (expected since the copying, which isn't present
for block, is now shunted off into an iothread).
On the last test (1024 byte messages) you even exceeded vhost-net. Any
theories as to why?
Again, efficiency numbers would be interesting.
> Host to Guest TCP_STREAM throughput (Mb/sec)
> --------------------------------------------
> Msg Size   vhost-net   virtio-net   virtio-net/ioeventfd
> 65536      11156       5790         5853
> 16384      10787       5575         5691
> 4096       10452       5556         4277
> 1024       4437        3671         5277
Here you exceed vhost-net too, at the 1024 byte message size.
> +static int kvm_check_many_iobus_devs(void)
> +{
> +    /* Older kernels have a 6 device limit on the KVM io bus. In that case
> +     * creating many ioeventfds must be avoided. This test checks for the
> +     * limitation.
> +     */
> +    EventNotifier notifiers[7];
> +    int i, ret = 0;
> +    for (i = 0; i < ARRAY_SIZE(notifiers); i++) {
> +        ret = event_notifier_init(&notifiers[i], 0);
> +        if (ret < 0) {
> +            break;
> +        }
> +        ret = kvm_set_ioeventfd_pio_word(event_notifier_get_fd(&notifiers[i]), 0, i, true);
> +        if (ret < 0) {
> +            event_notifier_cleanup(&notifiers[i]);
> +            break;
> +        }
> +    }
> +
> +    /* Decide whether many devices are supported or not */
> +    ret = i == ARRAY_SIZE(notifiers);
> +
> +    while (i-- > 0) {
> +        kvm_set_ioeventfd_pio_word(event_notifier_get_fd(&notifiers[i]), 0, i, false);
> +        event_notifier_cleanup(&notifiers[i]);
> +    }
> +    return ret;
> +}
Sorry about that.
IIRC there was a problem (shared by vhost-net) with interrupts remaining
enabled in the window between the guest kicking the queue and the host
waking up and disabling them. Even more vaguely, IIRC mst had an idea for
fixing this?
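(If it is the issue I am thinking of, the point is that the host can only
suppress further notifications after the eventfd has woken it, so kicks
landing in that window still cost exits. A rough sketch of the shape of the
race; the types and names here are stubs of mine, only
VRING_USED_F_NO_NOTIFY is from the virtio spec:)

    #include <stdint.h>

    #define VRING_USED_F_NO_NOTIFY 1

    struct used_stub {
        uint16_t flags;               /* shared with the guest */
    };

    /* Hypothetical iothread handler, run when the ioeventfd fires. */
    static void handle_notify(struct used_stub *used,
                              uint16_t *last_avail, uint16_t avail_idx)
    {
        /* Window: until this store is visible to the guest, every kick the
         * guest issues still costs an exit (plus an IPI with ioeventfd). */
        used->flags |= VRING_USED_F_NO_NOTIFY;

        while (*last_avail != avail_idx) {
            (*last_avail)++;          /* stand-in for real buffer processing */
        }

        /* Re-enable notifications; a real implementation re-checks the
         * avail ring here to close the race in the other direction. */
        used->flags &= ~VRING_USED_F_NO_NOTIFY;
    }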
--
error compiling committee.c: too many arguments to function