From: Stefan Hajnoczi <stefanha@gmail.com>
To: Avi Kivity <avi@redhat.com>
Cc: Steve Dobbelstein <steved@us.ibm.com>,
Anthony Liguori <aliguori@us.ibm.com>,
Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
kvm@vger.kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, Khoa Huynh <khoa@us.ibm.com>,
Sridhar Samudrala <sri@us.ibm.com>
Subject: [Qemu-devel] Re: [PATCH] virtio: Use ioeventfd for virtqueue notify
Date: Mon, 4 Oct 2010 15:30:20 +0100
Message-ID: <AANLkTinQANMzznSam5P=d3MuTR9_2ajgBQK80fdqU2V_@mail.gmail.com>
In-Reply-To: <4CA862A7.2080302@redhat.com>

On Sun, Oct 3, 2010 at 12:01 PM, Avi Kivity <avi@redhat.com> wrote:
> On 09/30/2010 04:01 PM, Stefan Hajnoczi wrote:
>>
>> Virtqueue notify is currently handled synchronously in userspace virtio.
>> This prevents the vcpu from executing guest code while hardware
>> emulation code handles the notify.
>>
>> On systems that support KVM, the ioeventfd mechanism can be used to make
>> virtqueue notify a lightweight exit by deferring hardware emulation to
>> the iothread and allowing the VM to continue execution. This model is
>> similar to how vhost receives virtqueue notifies.
>
> Note that this is a tradeoff. If an idle core is available and the
> scheduler places the iothread on that core, then the heavyweight exit is
> replaced by a lightweight exit + IPI. If the iothread is co-located with
> the vcpu, then we'll take a heavyweight exit in any case.
>
> The first case is very likely if the host cpu is undercommitted and there is
> heavy I/O activity. This is a typical subsystem benchmark scenario (as
> opposed to a system benchmark like specvirt). My feeling is that total
> system throughput will be decreased unless the scheduler is clever enough to
> place the iothread and vcpu on the same host cpu when the system is
> overcommitted.
>
> We can't balance "feeling" against numbers, especially when we have a
> precedent in vhost-net, so I think this should go in. But I think we should
> also try to understand the effects of the extra IPIs and cacheline bouncing
> that this creates. While virtio was designed to minimize this, we know it
> has severe problems in this area.

Right, there is a danger of optimizing for subsystem benchmark cases
rather than real-world usage.  I have posted some results that we've
gathered, but more scrutiny is welcome.

>> Khoa Huynh <khoa@us.ibm.com> collected the following data for
>> virtio-blk with cache=none,aio=native:
>>
>> FFSB Test           Threads  Unmodified  Patched
>>                                  (MB/s)   (MB/s)
>> Large file create      1           21.7     21.8
>>                        8          101.0    118.0
>>                       16          119.0    157.0
>>
>> Sequential reads       1           21.9     23.2
>>                        8          114.0    139.0
>>                       16          143.0    178.0
>>
>> Random reads           1            3.3      3.6
>>                        8           23.0     25.4
>>                       16           43.3     47.8
>>
>> Random writes          1           22.2     23.0
>>                        8           93.1    111.6
>>                       16          110.5    132.0
>
> Impressive numbers. Can you also provide efficiency (bytes per host cpu
> seconds)?

Khoa, do you have the host CPU numbers for these benchmark runs?
> How many guest vcpus were used with this? With enough vcpus, there is also
> a reduction in cacheline bouncing, since the virtio state in the host gets
> to stay on one cpu (especially with aio=native).

Guest: 2 vcpus, 4 GB RAM
Host: 16 cpus, 12 GB RAM

Khoa, is this correct?

Stefan