public inbox for kvm@vger.kernel.org
From: Jan Kiszka <jan.kiszka@web.de>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: kvm <kvm@vger.kernel.org>, "Michael S. Tsirkin" <mst@redhat.com>,
	Avi Kivity <avi@redhat.com>
Subject: Re: [RFT] IRQ sharing for assigned devices - method selection
Date: Fri, 07 Jan 2011 20:03:19 +0100
Message-ID: <4D276377.2090000@web.de>
In-Reply-To: <1294426676.3214.20.camel@x201>


On 07.01.2011 19:57, Alex Williamson wrote:
> On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
>> Hi,
>>
>> to finally select the approach for adding overdue IRQ sharing support
>> for PCI pass-through, I hacked up two versions based on Thomas' patches
>> and his suggestion to use a timeout-based mode transition:
>>
>>     git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
>>     git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>>
>>     git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>>
>> Both approaches work, but I'm either lacking a sufficiently stressing
>> test environment to tease out a relevant delta, even between masking at
>> irqchip vs. PCI config space level - or there is none. Yes, there are
>> differences at the micro level, but they do not manifest in measurable
>> (i.e. above the noise level) load increase or throughput/latency
>> decrease in my limited tests here. If that actually turns out to be
>> true, I would happily bury all this dynamic mode switching again.
>>
>> So, if you have a good high-bandwidth test case at hand, I would
>> appreciate if you could give this a try and report your findings. Does
>> switching from exclusive to shared IRQ mode decrease the throughput or
>> increase the host load? Is there a difference to current kvm?
> 
> I think any sufficiently high bandwidth device will be using MSI and or
> NAPI, so I wouldn't expect we're going to see much change there.

That's also why I'm no longer sure it's worth worrying about irq_disable
vs. PCI disable. Anyone who cares about performance in a large
pass-through scenario will try to use MSI-capable hardware anyway (or
was so far unable to use tons of legacy IRQ-driven devices due to IRQ
conflicts).
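For reference, whether an assigned device actually ended up in MSI/MSI-X
or legacy INTx mode can be checked on the host; the BDF below is just a
placeholder for the device in question:

```shell
# Show IRQ/MSI capability state of a device (00:19.0 is an example BDF)
lspci -vs 00:19.0 | grep -iE 'msi|irq'

# Count interrupts per vector: MSI/MSI-X vectors appear with a
# "PCI-MSI" type, legacy INTx lines with "IO-APIC" on x86 hosts
grep -E 'MSI|APIC' /proc/interrupts
```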

> Perhaps you can simply force a 1GbE device to use INTx and do some
> netperf TCP_RR tests to try to expose any latency differences.  Thanks,

I had the same idea, but I'm lacking a 1GbE peer here. :(
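For the record, the kind of latency comparison Alex suggests would look
roughly like this (hostname is a placeholder; netserver has to be
started on the 1GbE peer first):

```shell
# On the bare-metal peer: start the netperf server daemon
netserver

# In the guest with the assigned NIC: request/response latency test.
# TCP_RR reports transactions per second, so higher rate = lower latency.
netperf -H peer-host -t TCP_RR -l 30

# Repeat with the device forced to legacy INTx (e.g. by booting the
# guest kernel with pci=nomsi) and compare the transaction rates.
```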

Jan



Thread overview: 4+ messages
2011-01-07  9:33 [RFT] IRQ sharing for assigned devices - method selection Jan Kiszka
2011-01-07 18:57 ` Alex Williamson
2011-01-07 19:03   ` Jan Kiszka [this message]
2011-01-09  9:42     ` Avi Kivity
