From: Jintack Lim <jintack@cs.columbia.edu>
To: QEMU Devel Mailing List <qemu-devel@nongnu.org>, vfio-users@redhat.com
Cc: Peter Xu <peterx@redhat.com>
Subject: [Qemu-devel] Assigning network devices to nested VMs results in driver errors in nested VMs
Date: Tue, 13 Feb 2018 23:44:09 -0500
Message-ID: <CAHyh4xiTDwB2xG_bg00bMA0042yTRq7-dJ-+y3muR+=9N3Vx-g@mail.gmail.com>
Hi,
I'm trying to assign network devices to nested VMs on x86 using KVM,
but I got network device driver errors in the nested VMs. (I tried
this about a year ago, before the vIOMMU patches were upstreamed, and
got similar errors at that time.) These could be network driver
issues, but I'd like to get some help if somebody has encountered
similar issues.
I'm using kernel v4.15.0 and QEMU v2.11.0, and I followed this [1]
guide. I had no problem assigning devices to the first-level VMs
(L1 VMs), and I also checked with the lspci command that the devices
showed up in the nested VMs, but the network device drivers failed to
initialize them. I tried two network cards: an Intel Corporation
82599ES 10-Gigabit SFI/SFP+ Network Connection and a Mellanox
Technologies MT27500 Family card.
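For reference, the setup followed [1]: L1 runs with a vIOMMU
(caching-mode=on is required for device assignment inside the guest)
and gets the physical NIC via vfio-pci, and L2 then gets the same
device re-assigned from L1. Roughly (most options omitted; the L1 BDF
00:03.0 below is just an example):

  # On L0: start L1 with a vIOMMU and the assigned NIC (host BDF 06:00.0)
  qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split -m 8G \
      -device intel-iommu,intremap=on,caching-mode=on \
      -device vfio-pci,host=06:00.0 \
      ...

  # On L1 (booted with intel_iommu=on): start L2 with the device re-assigned
  qemu-system-x86_64 -M q35,accel=kvm -m 4G \
      -device vfio-pci,host=00:03.0 \
      ...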
The Intel driver error in the nested VM looks like this:
[ 1.939552] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[ 1.949796] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 2.210024] ixgbe 0000:00:04.0: HW Init failed: -12
[ 2.218144] ixgbe: probe of 0000:00:04.0 failed with error -12
and I saw lots of these messages in the host (L0) kernel log while
booting the nested VM:
[ 1557.404173] DMAR: DRHD: handling fault status reg 102
[ 1557.409813] DMAR: [DMA Read] Request device [06:00.0] fault addr 90000 [fault reason 06] PTE Read access is not set
[ 1561.383957] DMAR: DRHD: handling fault status reg 202
[ 1561.389598] DMAR: [DMA Read] Request device [06:00.0] fault addr 90000 [fault reason 06] PTE Read access is not set
This is the Mellanox driver error in another nested VM:
[ 2.481694] mlx4_core: Initializing 0000:00:04.0
[ 3.519422] mlx4_core 0000:00:04.0: Installed FW has unsupported command interface revision 0
[ 3.537769] mlx4_core 0000:00:04.0: (Installed FW version is 0.0.000)
[ 3.551733] mlx4_core 0000:00:04.0: This driver version supports only revisions 2 to 3
[ 3.568758] mlx4_core 0000:00:04.0: QUERY_FW command failed, aborting
[ 3.582789] mlx4_core 0000:00:04.0: Failed to init fw, aborting.
The host (L0) showed DMAR fault messages similar to the ones above.
I wonder what could be the cause of these errors. Please let me know
if further information is needed.
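In case it matters, the NIC was bound to vfio-pci inside L1 before
launching L2 in the usual way, roughly as follows (example BDF again;
the vendor:device ID should match the lspci -nn output):

  # On L1: detach the NIC from ixgbe and hand it to vfio-pci
  modprobe vfio-pci
  echo 0000:00:03.0 > /sys/bus/pci/devices/0000:00:03.0/driver/unbind
  echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id  # 82599ES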
[1] https://wiki.qemu.org/Features/VT-d
Thanks,
Jintack