From: Jason Gunthorpe <jgg@nvidia.com>
To: Danilo Krummrich <dakr@kernel.org>
Cc: Zhi Wang <zhiw@nvidia.com>,
kvm@vger.kernel.org, alex.williamson@redhat.com,
kevin.tian@intel.com, airlied@gmail.com, daniel@ffwll.ch,
acurrid@nvidia.com, cjia@nvidia.com, smitra@nvidia.com,
ankita@nvidia.com, aniketa@nvidia.com, kwankhede@nvidia.com,
targupta@nvidia.com, zhiwang@kernel.org, acourbot@nvidia.com,
joelagnelf@nvidia.com, apopple@nvidia.com, jhubbard@nvidia.com,
nouveau@lists.freedesktop.org
Subject: Re: [RFC v2 03/14] vfio/nvidia-vgpu: introduce vGPU type uploading
Date: Thu, 4 Sep 2025 10:58:34 -0300
Message-ID: <20250904135834.GN470103@nvidia.com>
In-Reply-To: <DCK0Y92W1QSY.1O2U2K3GV61QW@kernel.org>

On Thu, Sep 04, 2025 at 02:45:34PM +0200, Danilo Krummrich wrote:
> On Thu Sep 4, 2025 at 2:15 PM CEST, Jason Gunthorpe wrote:
> > On Thu, Sep 04, 2025 at 11:41:03AM +0200, Danilo Krummrich wrote:
> >
> >> > Another note: I don't see any use of the auxiliary bus in vGPU; any clients
> >> > should attach via the auxiliary bus API, since it provides proper matching where
> >> > there's more than one compatible GPU in the system. nova-core already registers
> >> > an auxiliary device for each bound PCI device.
> >
> > The driver here attaches to the SRIOV VF pci_device, it should obtain the
> > nova-core handle of the PF device through pci_iov_get_pf_drvdata().
> >
> > This is the expected design of VFIO drivers because the driver core
> > does not support a single driver binding to two devices (aux and VF)
> > today.
>
> Yeah, that's for the VF PCI devices, but I thought vGPU will also have some kind
> of "control instance" for each physical device through which it can control the
> creation of VFs?
I recall there is something on the PF that is independent of the VFs,
but it is hard to stick that in an aux device: it would make the
lifetime model very difficult, since aux devices can become unbound at
any time while the VF is still using them. It is a lot easier to make
it part of the PF driver somehow.
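
(For reference, the pci_iov_get_pf_drvdata() pattern mentioned above
looks roughly like this in a variant driver's probe. This is a sketch,
not code from the series; the nova_core_pci_driver and struct nova_core
names are hypothetical placeholders:)

```c
#include <linux/pci.h>

static int nvidia_vgpu_probe(struct pci_dev *vf_pdev,
			     const struct pci_device_id *id)
{
	struct nova_core *core;

	/*
	 * Only valid when vf_pdev is an SR-IOV VF and its PF is bound
	 * to the named driver. The driver core guarantees the PF
	 * driver outlives its VFs, so no extra lifetime tracking of
	 * the returned drvdata is needed here, unlike with an aux
	 * device that can be unbound at any time.
	 */
	core = pci_iov_get_pf_drvdata(vf_pdev, &nova_core_pci_driver);
	if (IS_ERR(core))
		return PTR_ERR(core);

	/* ... register the VFIO device using @core ... */
	return 0;
}
```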
For userspace activities like provisioning VFs, I hope we will see
that done through fwctl, as the networking drivers are doing. I see
this series is using request_firmware() to get VF profiles plus some
sysfs, which seems inconsistent with every other VF provisioning
scheme in the kernel.
Jason