From: Greg KH <greg@kroah.com>
To: Zhi Wang <zhiw@nvidia.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
"kevin.tian@intel.com" <kevin.tian@intel.com>,
Jason Gunthorpe <jgg@nvidia.com>,
"airlied@gmail.com" <airlied@gmail.com>,
"daniel@ffwll.ch" <daniel@ffwll.ch>,
Andy Currid <ACurrid@nvidia.com>, Neo Jia <cjia@nvidia.com>,
Surath Mitra <smitra@nvidia.com>,
Ankit Agrawal <ankita@nvidia.com>,
Aniket Agashe <aniketa@nvidia.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
"Tarun Gupta (SW-GPU)" <targupta@nvidia.com>,
"zhiwang@kernel.org" <zhiwang@kernel.org>
Subject: Re: [RFC 01/29] nvkm/vgpu: introduce NVIDIA vGPU support prelude
Date: Mon, 14 Oct 2024 13:36:18 +0200
Message-ID: <2024101408-splashed-criteria-6b1a@gregkh>
In-Reply-To: <bab2ee27-059e-4f9b-a5f8-87cee04630d1@nvidia.com>
On Mon, Oct 14, 2024 at 09:59:18AM +0000, Zhi Wang wrote:
> On 26/09/2024 12.20, Greg KH wrote:
> >
> >
> > On Sun, Sep 22, 2024 at 05:49:23AM -0700, Zhi Wang wrote:
> >> NVIDIA GPU virtualization is a technology that allows multiple virtual
> >> machines (VMs) to share the power of a single GPU, enabling greater
> >> flexibility, efficiency, and cost-effectiveness in data centers and cloud
> >> environments.
> >>
> >> The first step of supporting NVIDIA vGPU in nvkm is to introduce the
> >> necessary vGPU data structures and functions to hook into the
> >> (de)initialization path of nvkm.
> >>
> >> Introduce NVIDIA vGPU data structures and functions hooking into
> >> the (de)initialization path of nvkm to support the following patches.
> >>
> >> Cc: Neo Jia <cjia@nvidia.com>
> >> Cc: Surath Mitra <smitra@nvidia.com>
> >> Signed-off-by: Zhi Wang <zhiw@nvidia.com>
> >
> > Some minor comments that are a hint you all aren't running checkpatch on
> > your code...
> >
> >> --- /dev/null
> >> +++ b/drivers/gpu/drm/nouveau/include/nvkm/vgpu_mgr/vgpu_mgr.h
> >> @@ -0,0 +1,17 @@
> >> +/* SPDX-License-Identifier: MIT */
> >
> > Wait, what? Why? Ick. You all also forgot the copyright line :(
> >
>
> Will fix it accordingly.
>
> Back to the reason: I am trying to follow the majority of the existing
> nouveau code, since this is a change to nouveau.
>
> What are your guidelines about the files already in the code?
My "guidelines" are that your lawyers agree on what needs to be done,
and that you do that.
After that, my opinion is that you do the proper thing and follow the
kernel licenses here, ESPECIALLY as you will be talking to GPL-only
symbols (hint: MIT-licensed code doesn't make any sense there, so go get
your legal approval if you think it does...)
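For reference, a conventional kernel source header pairs the SPDX identifier with a copyright line. The license identifier shown below is purely illustrative; as noted above, the actual choice is a legal question, not an editorial one:

```c
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2024 NVIDIA Corporation */
```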
> >> +static bool support_vgpu_mgr = false;
> >
> > A global variable for the whole system? Are you sure that will work
> > well over time? Why isn't this a per-device thing?
> >
> >> +module_param_named(support_vgpu_mgr, support_vgpu_mgr, bool, 0400);
> >
> > This is not the 1990's, please never add new module parameters, use
> > per-device variables. And no documentation? That's not ok either even
> > if you did want to have this.
>
> Thanks for the comments. I am mostly collecting people's opinions on
> the means of enabling/disabling the vGPU; a kernel parameter is just
> one of the options. Even if it is chosen, a global kernel parameter is
> not expected to be in the non-RFC patch.
That wasn't explained anywhere I noticed, did I miss it?
Please do this properly; again, kernel module parameters are not the
proper way.
> >> +static inline struct pci_dev *nvkm_to_pdev(struct nvkm_device *device)
> >> +{
> >> + struct nvkm_device_pci *pci = container_of(device, typeof(*pci),
> >> + device);
> >> +
> >> + return pci->pdev;
> >> +}
> >> +
> >> +/**
> >> + * nvkm_vgpu_mgr_is_supported - check if a platform support vGPU
> >> + * @device: the nvkm_device pointer
> >> + *
> >> + * Returns: true on supported platform which is newer than ADA Lovelace
> >> + * with SRIOV support.
> >> + */
> >> +bool nvkm_vgpu_mgr_is_supported(struct nvkm_device *device)
> >> +{
> >> + struct pci_dev *pdev = nvkm_to_pdev(device);
> >> +
> >> + if (!support_vgpu_mgr)
> >> + return false;
> >> +
> >> + return device->card_type == AD100 && pci_sriov_get_totalvfs(pdev);
> >
> > checkpatch please.
> >
>
> I did run it before sending, but it doesn't complain about this line.
>
> My command line
> $ scripts/checkpatch.pl [this patch]
Then something is odd, as that ' ' should have been caught.
> > And "AD100" is an odd #define, as you know.
>
> I agree, and people commented about it in the internal review. But it
> comes from the nouveau driver and has been used in many other places
> in the nouveau driver. What would be your guidelines in this situation?
Something properly namespaced?
> >> +/**
> >> + * nvkm_vgpu_mgr_init - Initialize the vGPU manager support
> >> + * @device: the nvkm_device pointer
> >> + *
> >> + * Returns: 0 on success, -ENODEV on platforms that are not supported.
> >> + */
> >> +int nvkm_vgpu_mgr_init(struct nvkm_device *device)
> >> +{
> >> + struct nvkm_vgpu_mgr *vgpu_mgr = &device->vgpu_mgr;
> >> +
> >> + if (!nvkm_vgpu_mgr_is_supported(device))
> >> + return -ENODEV;
> >> +
> >> + vgpu_mgr->nvkm_dev = device;
> >> + vgpu_mgr->enabled = true;
> >> +
> >> + pci_info(nvkm_to_pdev(device),
> >> + "NVIDIA vGPU manager support is enabled.\n");
> >
> > When drivers work properly, they are quiet.
> >
>
> I totally understand the rule that a driver should be quiet. But this
> is not the same as a "driver is loaded" message; this is feature
> reporting, like many others.
And again, those "many others" need to be quiet too, we have many ways
to properly gather system information, and the kernel boot log is not
that.
> My concern is that, as nouveau is a kernel driver, when a user hits a
> kernel panic and offers a dmesg to analyze, it would be at least nice
> to know whether the vGPU feature is turned on. Sysfs is doable, but it
> helps in different scenarios.
A kernel panic is usually way way way after this initial boot time
message. Again, keep the boot fast, and quiet please.
thanks,
greg k-h