public inbox for kvm@vger.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Chris Wright <chrisw@sous-sol.org>
Cc: kvm@vger.kernel.org, Sheng Yang <sheng@linux.intel.com>,
	Alexander Graf <agraf@suse.de>
Subject: Re: MSI-X not enabled for ixgbe device-passthrough
Date: Mon, 29 Mar 2010 08:28:17 +0200
Message-ID: <4BB04881.7030708@suse.de>
In-Reply-To: <20100326194023.GC29241@sequoia.sous-sol.org>

Chris Wright wrote:
> * Hannes Reinecke (hare@suse.de) wrote:
>> Chris Wright wrote:
>>> * Hannes Reinecke (hare@suse.de) wrote:
>>>> Hi all,
>>>>
>>>> I'm trying to setup a system with device-passthrough for
>>>> an ixgbe NIC.
>>>> The device itself seems to work, but it isn't using MSI-X.
>>>> So some more advanced features like DCB offloading etc
>>>> won't work.
>>> Please send the relevant dmesg from the guest when it initializes the
>>> device.  BTW, more typical case for that NIC is to assign the VF to the
>>> guest, not the whole PF.
>>>
>> Yes, I know. But the kernel I'm testing with doesn't have a VF driver for ixgbe.
> 
> Ah, you mean it's an older (heh, ok, older actually means not brand new
> upstream) kernel, w/out the recent sr-iov additions, fair enough.
> 
>> So I tested that one. And KVM really should enable MSI-X here,
>> VFs notwithstanding.
> 
> Yeah, although it's not just KVM involved, it's very much driven by
> the guest too.  The guest will see (or at least should see) the MSI-X
> capability, and decide based on the number of queues whether to enable
> MSI-X (completely driver dependent here).
> 
> Did you have a chance to boot the guest again, and send the lspci -vvv from
> the guest POV?  You should see two PCI capabilities (MSI at 0x40 and
> MSI-X at 0x50).
> 
> Actually, I have one of these devices, let me give it a quick test...
> working fine here.  Here's some relevant information:
> 
> Host:
> 
> $ sudo lspci -v -s 04:00.0
> 04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit Network Connection (rev 01)
> 	Subsystem: Intel Corporation Ethernet Server Adapter X520-2
> 	Flags: bus master, fast devsel, latency 0, IRQ 16
> 	Memory at f8080000 (64-bit, prefetchable) [size=512K]
> 	I/O ports at d020 [size=32]
> 	Memory at f8104000 (64-bit, prefetchable) [size=16K]
> 	Capabilities: [40] Power Management version 3
> 	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> 	Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
> 	Capabilities: [a0] Express Endpoint, MSI 00
> 	Capabilities: [100] Advanced Error Reporting
> 	Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-3f-a1-b0
> 	Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
> 	Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
> 
> $ grep kvm /proc/interrupts 
>   95:        289          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      kvm_assigned_msix_device
>   96:         32          0          0          0        292          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      kvm_assigned_msix_device
>   97:          2          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      kvm_assigned_msix_device
> 
> Guest side:
> $ sudo lspci -vvv -s 00:04.0
> 00:04.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit Network Connection (rev 01)
> 	Subsystem: Intel Corporation Ethernet Server Adapter X520-2
> 	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
> 	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
> 	Latency: 0, Cache Line Size: 64 bytes
> 	Interrupt: pin A routed to IRQ 11
> 	Region 0: Memory at f2080000 (32-bit, non-prefetchable)
> 	Region 2: I/O ports at c200
> 	Region 4: Memory at f2100000 (32-bit, non-prefetchable)
> 	Capabilities: [40] MSI: Enable- Count=1/1 Maskable- 64bit-
> 		Address: 00000000  Data: 0000
> 	Capabilities: [50] MSI-X: Enable+ Count=64 Masked-
> 		Vector table: BAR=4 offset=00000000
> 		PBA: BAR=4 offset=00002000
> 
> $ grep eth1 /proc/interrupts
> 177:        387       PCI-MSI-X  eth1-rx-0
> 185:        421       PCI-MSI-X  eth1-tx-0
> 193:          2       PCI-MSI-X  eth1:lsc
> 
> $ dmesg | grep -e 00:04.0 -e ixgbe
> ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 2.0.8-k2
> ixgbe: Copyright (c) 1999-2009 Intel Corporation.
> ACPI: PCI Interrupt 0000:00:04.0[A] -> Link [LNKD] -> GSI 10 (level, high) -> IRQ 10
> PCI: Setting latency timer of device 0000:00:04.0 to 64
> ixgbe: 0000:00:04.0: ixgbe_init_interrupt_scheme: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
> ixgbe 0000:00:04.0: (PCI Express:2.5Gb/s:Width x8) 00:1b:21:3f:a1:b0
> ixgbe 0000:00:04.0: MAC: 2, PHY: 11, SFP+: 5, PBA No: e66562-003
> ixgbe 0000:00:04.0: Intel(R) 10 Gigabit Network Connection
> ixgbe: eth1 NIC Link is Up 10 Gbps, Flow Control: None
> 
Ah. So I'll have to shout at Alex Graf.

No problems there :-)

Thanks for your help.
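For anyone checking this on their own guest: the MSI-X state in an lspci
capability line like the ones above can be classified with a small shell
sketch. The msix_state helper below is mine for illustration, not part of
pciutils; the sample line is copied from the guest output in this mail.

```shell
# Hypothetical helper (not from pciutils): classify the MSI-X state
# reported by a single lspci capability line.
msix_state() {
  case "$1" in
    *"MSI-X"*"Enable+"*) echo "enabled"  ;;   # capability present, enabled
    *"MSI-X"*"Enable-"*) echo "disabled" ;;   # present but guest driver did not enable it
    *)                   echo "absent"   ;;   # no MSI-X capability on this line
  esac
}

# Sample line taken from the guest lspci -vvv output above:
msix_state "Capabilities: [50] MSI-X: Enable+ Count=64 Masked-"
```

In practice you would feed it the matching line from the guest, e.g.
msix_state "$(lspci -vvv -s 00:04.0 | grep 'MSI-X:')" -- "disabled" there
points at the guest driver, "absent" at the capability not being exposed.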

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)


Thread overview: 9+ messages
2010-03-24 15:54 MSI-X not enabled for ixgbe device-passthrough Hannes Reinecke
2010-03-25  1:39 ` Sheng Yang
2010-03-25 15:29   ` Hannes Reinecke
2010-03-25 17:54 ` Chris Wright
2010-03-26 16:05   ` Hannes Reinecke
2010-03-26 19:40     ` Chris Wright
2010-03-29  6:28       ` Hannes Reinecke [this message]
2010-03-29 16:46         ` Chris Wright
2010-03-29 17:09           ` Alexander Graf
