From: Avi Kivity <avi@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
"mst@redhat.com" <mst@redhat.com>,
"gleb@redhat.com" <gleb@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 0/2] kvm: level irqfd and new eoifd
Date: Sun, 15 Jul 2012 11:33:55 +0300
Message-ID: <50028073.8000804@redhat.com>
In-Reply-To: <1342109982.10815.20.camel@ul30vt>

On 07/12/2012 07:19 PM, Alex Williamson wrote:
> On Thu, 2012-07-12 at 12:35 +0300, Avi Kivity wrote:
>> On 07/11/2012 10:57 PM, Alex Williamson wrote:
>> >>
>> >> > We still have classic KVM device assignment to provide fast-path INTx.
>> >> > But if we want to replace it midterm, I think it's necessary for VFIO to
>> >> > be able to provide such a path as well.
>> >>
>> >> I would like VFIO to have no regressions vs. kvm device assignment,
>> >> except perhaps in uncommon corner cases. So I agree.
>> >
>> > I ran a few TCP_RR netperf tests forcing a 1Gb tg3 nic to use INTx.
>> > Without irqchip support vfio gets a bit more than 60% of KVM device
>> > assignment. That's a little bit of an unfair comparison since it's more
>> > than just the I/O path. With the proposed interfaces here, enabling
>> > irqchip, vfio is within 10% of KVM device assignment for INTx. For MSI,
>> > I can actually make vfio come out more than 30% better than KVM device
>> > assignment if I send the eventfd from the hard irq handler. Using a
>> > threaded handler as the code currently does, vfio is still behind KVM.
>> > It's hard to beat a direct call chain.
>>
>> We can have a direct call chain with vfio too, using a custom eventfd
>> poll function, no? Assuming we set up a fast path for unicast msi.
>
> You'll have to help me out a little, eventfd_signal walks the wait_queue
> and calls each function. On the injection path that includes
> irqfd_wakeup.

This is what I meant, except I forgot that we already have a direct path
for MSI.
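
Roughly, the fast path works like this (a sketch paraphrased from memory
of the irqfd code in virt/kvm/eventfd.c, not copied verbatim, so field
and helper names may be slightly off):

  static int
  irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
  {
          struct _irqfd *irqfd = container_of(wait, struct _irqfd, wait);
          unsigned long flags = (unsigned long)key;
          struct kvm_kernel_irq_routing_entry *irq;

          if (flags & POLLIN) {
                  rcu_read_lock();
                  irq = rcu_dereference(irqfd->irq_entry);
                  if (irq)
                          /* cached MSI route: inject from this context */
                          kvm_set_msi(irq, irqfd->kvm,
                                      KVM_USERSPACE_IRQ_SOURCE_ID, 1);
                  else
                          /* no cached route: fall back to the workqueue */
                          schedule_work(&irqfd->inject);
                  rcu_read_unlock();
          }
          return 0;
  }

The wakeup function is registered on the eventfd's wait queue, so
eventfd_signal() calls it synchronously in the signaling context; for a
unicast MSI there is no extra hop at all.
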
> For an MSI that seems to already provide direct
> injection.

Ugh, even for a broadcast MSI into a 254-vcpu guest. That's going to be
one slow interrupt.

> For level we'll schedule_work, so that explains the overhead
> in that path, but it's not too dissimilar to a threaded irq. vfio
> does something very similar, so there's a schedule_work both on inject
> and on eoi. I'll have to check whether anything prevents the unmask
> from the wait_queue function in vfio, that could be a significant chunk
> of the gap. Where does the custom poll function come into play? Thanks,

So I don't understand where the gap comes from. The number of context
switches for kvm and vfio is the same, as long as both use MSI (and
either both use threaded irq handlers or both don't).
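
For comparison, the producer side in vfio is just an interrupt handler
that signals the irqfd's eventfd, something along these lines (a sketch,
not the actual vfio-pci code; the handler name is illustrative):

  static irqreturn_t vfio_msi_handler(int irq, void *arg)
  {
          struct eventfd_ctx *trigger = arg;   /* the irqfd's eventfd */

          /* -> eventfd_signal() -> irqfd_wakeup() -> kvm_set_msi() */
          eventfd_signal(trigger, 1);
          return IRQ_HANDLED;
  }

If that runs from the hard irq handler, the whole chain from host
interrupt to guest injection is one synchronous call path, just like kvm
device assignment; a threaded handler adds one wakeup on the host side
for either of them. So with the same handler type on both sides I'd
expect the numbers to match.
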
--
error compiling committee.c: too many arguments to function