From: Alex Williamson <alex.williamson@redhat.com>
To: Chuck Zmudzinski <brchuckz@netscape.net>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org,
Stefano Stabellini <sstabellini@kernel.org>,
Anthony Perard <anthony.perard@citrix.com>,
Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
Richard Henderson <richard.henderson@linaro.org>,
Eduardo Habkost <eduardo@habkost.net>,
Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
xen-devel@lists.xenproject.org
Subject: Re: [PATCH v6] xen/pt: reserve PCI slot 2 for Intel igd-passthru
Date: Tue, 3 Jan 2023 08:14:56 -0700
Message-ID: <20230103081456.1d676b8e.alex.williamson@redhat.com>
In-Reply-To: <c21e933f-0539-9ffb-b2f8-f8e1a279b16f@netscape.net>

On Mon, 2 Jan 2023 18:10:24 -0500
Chuck Zmudzinski <brchuckz@netscape.net> wrote:
> On 1/2/23 12:46 PM, Michael S. Tsirkin wrote:
> > On Sun, Jan 01, 2023 at 06:52:03PM -0500, Chuck Zmudzinski wrote:
> > > Intel specifies that the Intel IGD must occupy slot 2 on the PCI bus,
> > > as noted in docs/igd-assign.txt in the Qemu source code.
> > >
> > > Currently, when the xl toolstack is used to configure a Xen HVM guest with
> > > Intel IGD passthrough to the guest with the Qemu upstream device model,
> > > a Qemu emulated PCI device will occupy slot 2 and the Intel IGD will occupy
> > > a different slot. This problem often prevents the guest from booting.
> > >
> > > The only available workaround is not good: Configure Xen HVM guests to use
> > > the old and no longer maintained Qemu traditional device model available
> > > from xenbits.xen.org which does reserve slot 2 for the Intel IGD.
> > >
> > > To implement this feature in the Qemu upstream device model for Xen HVM
> > > guests, introduce the following new functions, types, and macros:
> > >
> > > * XEN_PT_DEVICE_CLASS declaration, based on the existing TYPE_XEN_PT_DEVICE
> > > * XEN_PT_DEVICE_GET_CLASS macro helper function for XEN_PT_DEVICE_CLASS
> > > * typedef XenPTQdevRealize function pointer
> > > * XEN_PCI_IGD_SLOT_MASK, the value of slot_reserved_mask to reserve slot 2
> > > * xen_igd_reserve_slot and xen_igd_clear_slot functions
> > >
> > > The new xen_igd_reserve_slot function uses the existing slot_reserved_mask
> > > member of PCIBus to reserve PCI slot 2 for Xen HVM guests configured using
> > > the xl toolstack with the gfx_passthru option enabled, which passes the
> > > igd-passthru=on option to Qemu for the Xen HVM machine type.
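
(For reference, a minimal sketch of what such a helper might look like,
assuming the existing slot_reserved_mask member of PCIBus and QEMU's
xen_igd_gfx_pt_enabled() helper; illustrative only, not necessarily the
exact code in the patch:)

    /* hw/xen/xen_pt.c (sketch) */
    void xen_igd_reserve_slot(PCIBus *pci_bus)
    {
        /* Only act when the machine was started with igd-passthru=on */
        if (!xen_igd_gfx_pt_enabled()) {
            return;
        }

        /* Keep emulated devices out of slot 2 (bit 2 of the mask) */
        pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
    }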
> > >
> > > The new xen_igd_reserve_slot function also needs a stub implementation in
> > > hw/xen/xen_pt_stub.c to prevent FTBFS during the link stage when Qemu is
> > > configured with --enable-xen and --disable-xen-pci-passthrough; in that
> > > configuration the function does nothing.
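
(Presumably the stub then collapses to an empty body, along these lines;
again just a sketch:)

    /* hw/xen/xen_pt_stub.c (sketch): Xen PCI passthrough compiled out */
    void xen_igd_reserve_slot(PCIBus *pci_bus)
    {
    }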
> > >
> > > The new xen_igd_clear_slot function overrides qdev->realize of the parent
> > > PCI device class to enable the Intel IGD to occupy slot 2 on the PCI bus
> > > since slot 2 was reserved by xen_igd_reserve_slot when the PCI bus was
> > > created in hw/i386/pc_piix.c for the case when igd-passthru=on.
> > >
> > > Move the call to xen_host_pci_device_get, and the associated error
> > > handling, from xen_pt_realize to the new xen_igd_clear_slot function to
> > > initialize the device class and vendor values, which enables the checks for
> > > the Intel IGD to succeed. The verification that the host device is an
> > > Intel IGD to be passed through is done by checking the domain, bus, slot,
> > > and function values, as well as by checking that gfx_passthru is enabled,
> > > the device class is VGA, and the device vendor is Intel.
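
(A rough sketch of how the realize override and the host-device check
could fit together; the XenPTDeviceClass layout, the field names, and the
is_igd_vga_passthrough() helper below are assumptions based on the
description above and existing QEMU code, not necessarily the exact
patch:)

    /* hw/xen/xen_pt.h and hw/xen/xen_pt.c (sketch) */

    /* Pointer type for the saved realize of the parent PCI device class */
    typedef void (*XenPTQdevRealize)(DeviceState *qdev, Error **errp);

    struct XenPTDeviceClass {
        PCIDeviceClass parent_class;
        XenPTQdevRealize pci_qdev_realize;
    };

    static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
    {
        ERRP_GUARD();
        PCIDevice *pci_dev = PCI_DEVICE(qdev);
        XenPCIPassthroughState *s = XEN_PT_DEVICE(qdev);
        XenPTDeviceClass *xpdc = XEN_PT_DEVICE_GET_CLASS(s);
        PCIBus *pci_bus = pci_get_bus(pci_dev);

        /* Moved here from xen_pt_realize: read the host device's class
         * and vendor before the guest slot assignment is checked. */
        xen_host_pci_device_get(&s->real_device,
                                s->hostaddr.domain, s->hostaddr.bus,
                                s->hostaddr.slot, s->hostaddr.function,
                                errp);
        if (*errp) {
            error_append_hint(errp, "Failed to \"open\" the real pci device");
            return;
        }

        /* Only the Intel IGD at host 00:02.0 may take the reserved slot */
        if ((pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK) &&
            is_igd_vga_passthrough(&s->real_device) &&
            s->real_device.vendor_id == PCI_VENDOR_ID_INTEL &&
            s->real_device.domain == 0 && s->real_device.bus == 0 &&
            s->real_device.dev == 2 && s->real_device.func == 0) {
            pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
        }

        /* Chain to the original PCI realize, saved in class_init, e.g.:
         * xpdc->pci_qdev_realize = dc->realize;
         * dc->realize = xen_igd_clear_slot; */
        xpdc->pci_qdev_realize(qdev, errp);
    }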
> > >
> > > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> >
> > I'm not sure why the issue is xen specific. Can you explain?
> > Doesn't it affect kvm too?
>
> Recall from docs/igd-assign.txt that there are two modes for
> igd passthrough: legacy and upt, and the igd needs to be
> at slot 2 only when using legacy mode which gives one
> single guest exclusive access to the Intel igd.
>
> It's only xen specific insofar as xen does not have support
> for the upt mode so xen must use legacy mode which
> requires the igd to be at slot 2.
UPT mode never fully materialized for direct assignment; the folks at
Intel who were championing this scenario have left.
> I am not an expert with kvm, but if I understand correctly, with kvm
> one can use
> the upt mode with the Intel i915 kvmgt kernel module
> and in that case the guest will see a virtual Intel gpu
> that can be at any arbitrary slot when using kvmgt, and
> also, in that case, more than one guest can access the
> igd through the kvmgt kernel module.
This is true; IIRC an Intel vGPU does not need to be in slot 2.
> Again, I am not an expert and do not have as much
> experience with kvm, but if I understand correctly it is
> possible to use the legacy mode with kvm and I think you
> are correct that if one uses kvm in legacy mode and without
> using the Intel i915 kvmgt kernel module, then it would be
> necessary to reserve slot 2 for the igd on kvm.
It's necessary to configure the assigned IGD at slot 2 to make it
functional, yes, but I don't really understand this notion of
"reserving" slot 2. If something occupies address 00:02.0 in the
config, it's the user's or management tool's responsibility to move it
to make this configuration functional. Why does QEMU need to play a
part in reserving this bus address? IGD devices are not generally
hot-pluggable either, so it doesn't seem we need to reserve an address
in case an IGD device is added dynamically later.
> Your question makes me curious, and I have not been able
> to determine if anyone has tried igd passthrough using
> legacy mode on kvm with recent versions of linux and qemu.
Yes, it works.
> I will try reproducing the problem on kvm in legacy mode with
> current versions of linux and qemu and report my findings.
> With kvm, there might be enough flexibility to specify the
> slot number for every pci device in the guest. Such a
I think this is always the recommendation, libvirt will do this by
default in order to make sure the configuration is reproducible. This
is what we generally rely on for kvm/vfio IGD assignment to place the
GPU at the correct address.
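
For example, on the QEMU command line the assigned GPU can be pinned to
guest address 00:02.0 explicitly (illustrative fragment only; the host
address and the rest of the invocation will vary):

    -machine pc -device vfio-pci,host=0000:00:02.0,addr=0x2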
> capability is not available using the xenlight toolstack
> for managing xen guests, so I have been using this patch
> to ensure that the Intel igd is at slot 2 with xen guests
> created by the xenlight toolstack.
Seems like a deficiency in xenlight. I'm not sure why QEMU should take
on the burden of supporting tool stacks that lack such basic features.
> The patch as is will only fix the problem on xen, so if the
> problem exists on kvm also, I agree that the patch should
> be modified to also fix it on kvm.
AFAICT, it's not a problem on kvm/vfio because we generally make use of
invocations that specify bus addresses for each device by default,
making this a configuration requirement for the user or management tool
stack. Thanks,
Alex