* [RFT] IRQ sharing for assigned devices - method selection
From: Jan Kiszka @ 2011-01-07 9:33 UTC (permalink / raw)
To: kvm; +Cc: Alex Williamson, Michael S. Tsirkin, Avi Kivity
Hi,
to finally select the approach for adding overdue IRQ sharing support
for PCI pass-through, I hacked up two versions based on Thomas' patches
and his suggestion to use a timeout-based mode transition:
git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
git://git.kiszka.org/qemu-kvm.git queues/dev-assign
Both approaches work, but either I lack a sufficiently stressful test
environment to tease out a relevant delta, even between masking at the
irqchip vs. the PCI config space level - or there is none... Yes, there
are differences at the micro level, but they do not manifest in a
measurable (i.e., above the noise level) load increase or
throughput/latency decrease in my limited tests here. If that actually
turns out to be true, I would happily bury all this dynamic mode
switching again.
So, if you have a good high-bandwidth test case at hand, I would
appreciate if you could give this a try and report your findings. Does
switching from exclusive to shared IRQ mode decrease the throughput or
increase the host load? Is there a difference to current kvm?
Thanks in advance,
Jan
* Re: [RFT] IRQ sharing for assigned devices - method selection
From: Alex Williamson @ 2011-01-07 18:57 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm, Michael S. Tsirkin, Avi Kivity
On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
> Hi,
>
> to finally select the approach for adding overdue IRQ sharing support
> for PCI pass-through, I hacked up two versions based on Thomas' patches
> and his suggestion to use a timeout-based mode transition:
>
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>
> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>
> Both approaches work, but either I lack a sufficiently stressful test
> environment to tease out a relevant delta, even between masking at the
> irqchip vs. the PCI config space level - or there is none... Yes, there
> are differences at the micro level, but they do not manifest in a
> measurable (i.e., above the noise level) load increase or
> throughput/latency decrease in my limited tests here. If that actually
> turns out to be true, I would happily bury all this dynamic mode
> switching again.
>
> So, if you have a good high-bandwidth test case at hand, I would
> appreciate if you could give this a try and report your findings. Does
> switching from exclusive to shared IRQ mode decrease the throughput or
> increase the host load? Is there a difference to current kvm?
I think any sufficiently high-bandwidth device will be using MSI and/or
NAPI, so I wouldn't expect we're going to see much change there.
Perhaps you can simply force a 1GbE device to use INTx and do some
netperf TCP_RR tests to try to expose any latency differences. Thanks,
Alex
* Re: [RFT] IRQ sharing for assigned devices - method selection
From: Jan Kiszka @ 2011-01-07 19:03 UTC (permalink / raw)
To: Alex Williamson; +Cc: kvm, Michael S. Tsirkin, Avi Kivity
On 07.01.2011 19:57, Alex Williamson wrote:
> On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
>> Hi,
>>
>> to finally select the approach for adding overdue IRQ sharing support
>> for PCI pass-through, I hacked up two versions based on Thomas' patches
>> and his suggestion to use a timeout-based mode transition:
>>
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>>
>> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>>
>> Both approaches work, but either I lack a sufficiently stressful test
>> environment to tease out a relevant delta, even between masking at the
>> irqchip vs. the PCI config space level - or there is none... Yes, there
>> are differences at the micro level, but they do not manifest in a
>> measurable (i.e., above the noise level) load increase or
>> throughput/latency decrease in my limited tests here. If that actually
>> turns out to be true, I would happily bury all this dynamic mode
>> switching again.
>>
>> So, if you have a good high-bandwidth test case at hand, I would
>> appreciate if you could give this a try and report your findings. Does
>> switching from exclusive to shared IRQ mode decrease the throughput or
>> increase the host load? Is there a difference to current kvm?
>
> I think any sufficiently high-bandwidth device will be using MSI and/or
> NAPI, so I wouldn't expect we're going to see much change there.
That's also why I'm no longer sure it's worth worrying about irq_disable
vs. PCI disable. Anyone who cares about performance in a large
pass-through scenario will try to use MSI-capable hardware anyway (or
was so far unable to use tons of legacy-IRQ-driven devices due to IRQ
conflicts).
> Perhaps you can simply force a 1GbE device to use INTx and do some
> netperf TCP_RR tests to try to expose any latency differences. Thanks,
I had the same idea, but I'm lacking a 1GbE peer here. :(
Jan
* Re: [RFT] IRQ sharing for assigned devices - method selection
From: Avi Kivity @ 2011-01-09 9:42 UTC (permalink / raw)
To: Jan Kiszka; +Cc: Alex Williamson, kvm, Michael S. Tsirkin
On 01/07/2011 09:03 PM, Jan Kiszka wrote:
> >>
> >> So, if you have a good high-bandwidth test case at hand, I would
> >> appreciate if you could give this a try and report your findings. Does
> >> switching from exclusive to shared IRQ mode decrease the throughput or
> >> increase the host load? Is there a difference to current kvm?
> >
> > I think any sufficiently high-bandwidth device will be using MSI and/or
> > NAPI, so I wouldn't expect we're going to see much change there.
>
> That's also why I'm no longer sure it's worth worrying about irq_disable
> vs. PCI disable. Anyone who cares about performance in a large
> pass-through scenario will try to use MSI-capable hardware anyway (or
> was so far unable to use tons of legacy-IRQ-driven devices due to IRQ
> conflicts).
Disabling INTx via PCI config space is probably only ridiculously slow
with cf8/cfc accesses, and significantly faster (though still slow) with
mmconfig. That needs to be taken into account as well.
--
error compiling committee.c: too many arguments to function