qemu-devel.nongnu.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: "Cédric Le Goater" <clg@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [PATCH v2] hw/vfio/pci-quirks: Support alternate offset for GPUDirect Cliques
Date: Wed, 14 Jun 2023 09:01:49 -0600	[thread overview]
Message-ID: <20230614090149.3e8d7d74.alex.williamson@redhat.com> (raw)
In-Reply-To: <41b86f7c-6d9d-8e05-fb26-2b613dca9c4a@redhat.com>

On Wed, 14 Jun 2023 14:37:08 +0200
Cédric Le Goater <clg@redhat.com> wrote:

> On 6/12/23 17:05, Alex Williamson wrote:
> > On Mon, 12 Jun 2023 16:07:33 +0200
> > Cédric Le Goater <clg@redhat.com> wrote:
> >   
> >> On 6/8/23 20:05, Alex Williamson wrote:  
> >>> NVIDIA Turing and newer GPUs implement the MSI-X capability at the offset
> >>> previously reserved for use by hypervisors to implement the GPUDirect
> >>> Cliques capability.  A revised specification provides an alternate
> >>> location.  Add a config space walk to the quirk to check for conflicts,
> >>> allowing us to fall back to the new location or generate an error at the
> >>> quirk setup rather than when the real conflicting capability is added
> >>> should there be no available location.
> >>>
> >>> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> >>> ---
> >>>    hw/vfio/pci-quirks.c | 41 ++++++++++++++++++++++++++++++++++++++++-
> >>>    1 file changed, 40 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> >>> index f0147a050aaa..0ed2fcd53152 100644
> >>> --- a/hw/vfio/pci-quirks.c
> >>> +++ b/hw/vfio/pci-quirks.c
> >>> @@ -1490,6 +1490,9 @@ void vfio_setup_resetfn_quirk(VFIOPCIDevice *vdev)
> >>>     * +---------------------------------+---------------------------------+
> >>>     *
> >>>     * https://lists.gnu.org/archive/html/qemu-devel/2017-08/pdfUda5iEpgOS.pdf
> >>> + *
> >>> + * Specification for Turning and later GPU architectures:  
> >>
> >> s/Turning/Turing/
> >>
> >> I will fix that.  
> > 
> > Yes, thanks!
> >     
> >>> + * https://lists.gnu.org/archive/html/qemu-devel/2023-06/pdf142OR4O4c2.pdf
> >>>     */
> >>>    static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
> >>>                                           const char *name, void *opaque,
> >>> @@ -1530,7 +1533,9 @@ const PropertyInfo qdev_prop_nv_gpudirect_clique = {
> >>>    static int vfio_add_nv_gpudirect_cap(VFIOPCIDevice *vdev, Error **errp)
> >>>    {
> >>>        PCIDevice *pdev = &vdev->pdev;
> >>> -    int ret, pos = 0xC8;
> >>> +    int ret, pos;
> >>> +    bool c8_conflict = false, d4_conflict = false;
> >>> +    uint8_t tmp;
> >>>    
> >>>        if (vdev->nv_gpudirect_clique == 0xFF) {
> >>>            return 0;
> >>> @@ -1547,6 +1552,40 @@ static int vfio_add_nv_gpudirect_cap(VFIOPCIDevice *vdev, Error **errp)
> >>>            return -EINVAL;
> >>>        }
> >>>    
> >>> +    /*
> >>> +     * Per the updated specification above, it's recommended to use offset
> >>> +     * D4h for Turing and later GPU architectures due to a conflict of the
> >>> +     * MSI-X capability at C8h.  We don't know how to determine the GPU  
> >>
> >> There is a way :
> >>
> >>     # nvidia-smi -q | grep Architecture
> >>         Product Architecture                  : Turing  
> > 
> > There are a few problems with that:
> > 
> >   1) nvidia-smi is a proprietary tool.
> > 
> >   2) Using nvidia-smi, or even the PCI IDs database, would require
> >      ongoing maintenance to update the string or IDs for future
> >      architectures.
> > 
> >   3) nvidia-smi requires the device to be managed by the nvidia driver,
> >      which becomes a chicken-and-egg problem when we require the
> >      device to be managed by a vfio-compatible driver at this point.  
> 
> For my education, could such information be exposed in a PCI
> vendor-specific capability? Maybe it already is?

Sure, nothing technically prevents it, but the vendor has to have a
need to do so, whereas NVIDIA probably has its own means to interrogate
the device to determine the architectural level, or doesn't mind
maintaining PCI IDs.  Probably a bit of both.  Thanks,

Alex




Thread overview: 7+ messages
2023-06-08 17:42 [PATCH] hw/vfio/pci-quirks: Support alternate offset for GPUDirect Cliques Alex Williamson
2023-06-08 17:46 ` Alex Williamson
2023-06-08 18:05 ` [PATCH v2] " Alex Williamson
2023-06-12 14:07   ` Cédric Le Goater
2023-06-12 15:05     ` Alex Williamson
2023-06-14 12:37       ` Cédric Le Goater
2023-06-14 15:01         ` Alex Williamson [this message]
