From: Alex Williamson <alex.williamson@redhat.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: "Christoph Hellwig" <hch@infradead.org>,
	"Jose Ricardo Ziviani" <joserz@linux.ibm.com>,
	kvm@vger.kernel.org, "Sam Bobroff" <sbobroff@linux.ibm.com>,
	"Alistair Popple" <alistair@popple.id.au>,
	"Daniel Henrique Barboza" <danielhb413@gmail.com>,
	linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"Piotr Jaroszynski" <pjaroszynski@nvidia.com>,
	"Leonardo Augusto Guimarães Garcia" <lagarcia@br.ibm.com>,
	"Reza Arbab" <arbab@linux.ibm.com>,
	"David Gibson" <david@gibson.dropbear.id.au>
Subject: Re: [PATCH kernel v7 20/20] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
Date: Thu, 20 Dec 2018 19:08:21 -0700
Message-ID: <20181220190821.5a408f93@x1.home>
In-Reply-To: <b296128d-be96-8683-b0c0-1eac0a7f18ca@ozlabs.ru>

On Fri, 21 Dec 2018 12:50:00 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:

> On 21/12/2018 12:37, Alex Williamson wrote:
> > On Fri, 21 Dec 2018 12:23:16 +1100
> > Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> >   
> >> On 21/12/2018 03:46, Alex Williamson wrote:  
> >>> On Thu, 20 Dec 2018 19:23:50 +1100
> >>> Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> >>>     
> >>>> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> >>>> pluggable PCIe devices but still have PCIe links used for config space
> >>>> and MMIO. In addition, the GPUs have 6 NVLinks each, connected to other
> >>>> GPUs and to the POWER9 CPU. POWER9 chips have a special on-die unit
> >>>> called an NPU which is an NVLink2 host bus adapter with p2p connections
> >>>> to 2 or 3 GPUs, with 3 or 2 NVLinks to each. These systems also support
> >>>> ATS (address translation services), which is part of the NVLink2
> >>>> protocol. Such GPUs also expose their on-board RAM (16GB or 32GB) to
> >>>> the system via the same NVLink2, so the CPU has cache-coherent access
> >>>> to the GPU RAM.
> >>>>
> >>>> This exports GPU RAM to userspace as a new VFIO device region and
> >>>> preregisters the new memory as device memory since it might be used
> >>>> for DMA. PFNs are inserted from the fault handler because the GPU
> >>>> memory is not onlined until the vendor driver has loaded and trained
> >>>> the NVLinks; doing this earlier causes low level errors which we fence
> >>>> in the firmware, so the host system is not hurt, but it is still
> >>>> better avoided. For the same reason this does not map GPU RAM into the
> >>>> host kernel (the usual thing for emulated access otherwise).
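
The insert-on-fault scheme above is worth a sketch. This is a minimal
illustration, not the patch's exact code: the struct and field names
(nvgpu_data, gpu_hpa) are invented here and error handling is trimmed.

static vm_fault_t nvgpu_mmap_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct nvgpu_data *data = vma->vm_private_data;
	/* page offset of the fault within the mapped window */
	unsigned long off = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
	/* GPU RAM has no struct pages at this point, so insert a raw PFN */
	unsigned long pfn = (data->gpu_hpa >> PAGE_SHIFT) + off;

	return vmf_insert_pfn(vma, vmf->address, pfn);
}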
> >>>>
> >>>> This exports an ATSD (Address Translation Shootdown) register of the
> >>>> NPU which allows TLB invalidations inside the GPU for an operating
> >>>> system. The register conveniently occupies a single 64k page. It is
> >>>> also presented to userspace as a new VFIO device region. One NPU has
> >>>> 8 ATSD registers, each of which can be used for TLB invalidation in a
> >>>> GPU linked to this NPU. This allocates one ATSD register per NVLink
> >>>> bridge, allowing up to 6 registers to be passed. Due to a host
> >>>> firmware bug (only recently fixed), only 1 ATSD register per NPU was
> >>>> actually advertised to the host system, so this passes that single
> >>>> register via the first NVLink bridge device in the group; that is
> >>>> still enough as QEMU collects them all back and presents them to the
> >>>> guest via a vPHB to mimic the emulated NPU PHB on the host.
> >>>>
> >>>> In order to provide userspace with information about GPU-to-NVLink
> >>>> connections, this exports an additional capability called "tgt"
> >>>> (which is an abbreviated host system bus address). The "tgt" property
> >>>> tells the GPU its own system address and allows the guest driver to
> >>>> conglomerate the routing information so each GPU knows how to get
> >>>> directly to the other GPUs.
> >>>>
> >>>> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> >>>> know the LPID (a logical partition ID, in other words a KVM guest
> >>>> hardware ID) and the PID (a memory context ID of a userspace process,
> >>>> not to be confused with a Linux pid). This assigns a GPU to an LPID
> >>>> in the NPU, which is why this adds a listener for KVM on an IOMMU
> >>>> group. A PID comes via NVLink from a GPU and the NPU uses a PID
> >>>> wildcard to pass it through.
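
A sketch of that listener, assuming the series' powernv hook
pnv_npu2_map_lpar_dev() and an illustrative private struct (the names
here are not necessarily the patch's):

static int nvgpu_group_notifier(struct notifier_block *nb,
		unsigned long action, void *opaque)
{
	struct kvm *kvm = opaque;
	struct nvgpu_data *data =
		container_of(nb, struct nvgpu_data, group_notifier);

	/* userspace attached the IOMMU group to a VM: tell the NPU
	 * which LPID the GPU's ATS traffic now belongs to */
	if (action == VFIO_GROUP_NOTIFY_SET_KVM && kvm &&
			pnv_npu2_map_lpar_dev(data->gpdev,
				kvm->arch.lpid, MSR_DR | MSR_PR))
		return NOTIFY_BAD;

	return NOTIFY_OK;
}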
> >>>>
> >>>> This requires coherent memory and ATSD to be available on the host as
> >>>> the GPU vendor only supports configurations with both features enabled
> >>>> and other configurations are known not to work. Because of this, and
> >>>> because of the way the features are advertised to the host system
> >>>> (a device tree with very platform specific properties), this requires
> >>>> the POWERNV platform to be enabled.
> >>>>
> >>>> The V100 GPUs do not advertise any of these capabilities via the
> >>>> config space, and there is more than one device ID, so this relies on
> >>>> the platform to tell whether these GPUs have special abilities such as
> >>>> NVLinks.
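
So detection keys off the device tree rather than PCI device IDs; a
rough sketch of such an init-time check, assuming the Witherspoon
property names and the powernv helper pnv_pci_get_npu_dev(), with
error handling trimmed:

	struct pci_dev *npu_dev = pnv_pci_get_npu_dev(pdev, 0);
	struct device_node *npu_node, *mem_node;
	u64 tgt;

	if (!npu_dev)
		return -ENODEV;	/* no NVLink bridge: an ordinary GPU */

	npu_node = pci_device_to_OF_node(npu_dev);
	mem_node = of_parse_phandle(npu_node, "memory-region", 0);
	if (!mem_node)
		return -ENODEV;	/* no coherent GPU RAM advertised */

	if (of_property_read_u64(npu_node, "ibm,device-tgt-addr", &tgt))
		return -EFAULT;	/* no "tgt": cannot describe routing */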
> >>>>
> >>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >>>> ---
> >>>> Changes:
> >>>> v6.1:
> >>>> * fixed outdated comment about VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD
> >>>>
> >>>> v6:
> >>>> * reworked capabilities - tgt for nvlink and gpu and link-speed
> >>>> for nvlink only
> >>>>
> >>>> v5:
> >>>> * do not memremap GPU RAM for emulation, map it only when it is needed
> >>>> * allocate 1 ATSD register per NVLink bridge, if none left, then expose
> >>>> the region with a zero size
> >>>> * separate caps per device type
> >>>> * addressed AW review comments
> >>>>
> >>>> v4:
> >>>> * added nvlink-speed to the NPU bridge capability as this turned out
> >>>> not to be a constant value
> >>>> * instead of looking at the exact device ID (which also changes from
> >>>> system to system), this now (indirectly) looks at the device tree to
> >>>> know if the GPU and NPU support NVLink
> >>>>
> >>>> v3:
> >>>> * reworded the commit log about tgt
> >>>> * added tracepoints (do we want them enabled for entire vfio-pci?)
> >>>> * added code comments
> >>>> * added write|mmap flags to the new regions
> >>>> * auto enabled VFIO_PCI_NVLINK2 config option
> >>>> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
> >>>> references; these are required by the NVIDIA driver
> >>>> * keep notifier registered only for short time
> >>>> ---
> >>>>  drivers/vfio/pci/Makefile           |   1 +
> >>>>  drivers/vfio/pci/trace.h            | 102 ++++++
> >>>>  drivers/vfio/pci/vfio_pci_private.h |  14 +
> >>>>  include/uapi/linux/vfio.h           |  37 +++
> >>>>  drivers/vfio/pci/vfio_pci.c         |  27 +-
> >>>>  drivers/vfio/pci/vfio_pci_nvlink2.c | 482 ++++++++++++++++++++++++++++
> >>>>  drivers/vfio/pci/Kconfig            |   6 +
> >>>>  7 files changed, 667 insertions(+), 2 deletions(-)
> >>>>  create mode 100644 drivers/vfio/pci/trace.h
> >>>>  create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
> >>>>    
> >>> ...    
> >>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> >>>> index 8131028..5562587 100644
> >>>> --- a/include/uapi/linux/vfio.h
> >>>> +++ b/include/uapi/linux/vfio.h
> >>>> @@ -353,6 +353,21 @@ struct vfio_region_gfx_edid {
> >>>>  #define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
> >>>>  };
> >>>>  
> >>>> +/*
> >>>> + * 10de vendor sub-type
> >>>> + *
> >>>> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> >>>> + */
> >>>> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
> >>>> +
> >>>> +/*
> >>>> + * 1014 vendor sub-type
> >>>> + *
> >>>> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> >>>> + * to do TLB invalidation on a GPU.
> >>>> + */
> >>>> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
> >>>> +
> >>>>  /*
> >>>>   * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> >>>>   * which allows direct access to non-MSIX registers which happened to be within
> >>>> @@ -363,6 +378,28 @@ struct vfio_region_gfx_edid {
> >>>>   */
> >>>>  #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
> >>>>  
> >>>> +/*
> >>>> + * Capability with compressed real address (aka SSA - small system address)
> >>>> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> >>>> + */
> >>>> +#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT	4
> >>>> +
> >>>> +struct vfio_region_info_cap_nvlink2_ssatgt {
> >>>> +	struct vfio_info_cap_header header;
> >>>> +	__u64 tgt;
> >>>> +};
> >>>> +
> >>>> +/*
> >>>> + * Capability with an NVLink link speed.
> >>>> + */    
> >>>
> >>> I was really hoping for something more like SSATGT above indicating the
> >>> intended users and purpose, and an update to SSATGT since it's now used
> >>> by both the GPU and NPU2.  This comment is correct, but it's basically
> >>> useless, it doesn't provide any information that isn't readily apparent
> >>> from the structure definition.  AIUI, SSATGT is used not only for the
> >>> GPU to determine where its RAM is mapped on the system bus, but also by
> >>> the NPU2 to associate itself to a GPU, right?    
> >>
> >> Correct. It could be improved by
> >>
> >> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> >> index 5562587..ff238ef9c 100644
> >> --- a/include/uapi/linux/vfio.h
> >> +++ b/include/uapi/linux/vfio.h
> >> @@ -380,7 +380,8 @@ struct vfio_region_gfx_edid {
> >>
> >>  /*
> >>   * Capability with compressed real address (aka SSA - small system address)
> >> - * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> >> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing
> >> + * and by userspace to associate an NVLink bridge with a GPU.
> >>   */
> >>  #define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT    4
> >>
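
As a usage note, userspace finds this capability by walking the
region's capability chain. A sketch, assuming "info" points at a
VFIO_DEVICE_GET_REGION_INFO result whose argsz was already grown so
the capabilities fit:

	__u64 tgt = 0;

	if ((info->flags & VFIO_REGION_INFO_FLAG_CAPS) && info->cap_offset) {
		struct vfio_info_cap_header *hdr =
			(void *)info + info->cap_offset;

		for (;;) {
			if (hdr->id == VFIO_REGION_INFO_CAP_NVLINK2_SSATGT) {
				struct vfio_region_info_cap_nvlink2_ssatgt *cap =
					(void *)hdr;

				/* pairs this NVLink bridge with its GPU */
				tgt = cap->tgt;
				break;
			}
			if (!hdr->next)
				break;
			hdr = (void *)info + hdr->next;
		}
	}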
> >>
> >>  
> >>> And the link speed here
> >>> is consumed by the NPU2 in order to fill in DT information for the
> >>> guest for compatibility and possibly routing optimizations?    
> >>
> >>
> >> It is just some speed number, 8 or 9; one works and the other does not,
> >> depending on the actual system. The NVIDIA driver handles it in the
> >> binary blob. The existing comment is not much use but I am really not
> >> sure what other comment could be useful here.
> > 
> > So why do we need to expose it?  "Exposed on NPU2 devices for userspace
> > to export to guest VM via DT(?) or else <something bad happens/doesn't  
> > work> in the guest".  Work with me, there must be some justification  
> > for why it gets exposed, not just what it is.  Thanks,  
> 
> 
> How about this?
> 
> /*
>  * Capability with an NVLink link speed. The value is read by
>  * the NVlink2 bridge driver from the bridge's "ibm,nvlink-speed"
>  * property in the device tree. The value is fixed in the hardware
>  * and failing to provide the correct value results in the link
>  * not working, with no indication from the driver as to why.
>  */
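
For reference, the capability this comment would sit above looks
roughly like this in the patch's uapi header:

#define VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD	5

struct vfio_region_info_cap_nvlink2_lnkspd {
	struct vfio_info_cap_header header;
	__u32 link_speed;	/* from the "ibm,nvlink-speed" DT property */
	__u32 __pad;
};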

I'll take it.  With the above two changes,

Acked-by: Alex Williamson <alex.williamson@redhat.com>

Thread overview: 30+ messages
2018-12-20  8:23 [PATCH kernel v7 00/20] powerpc/powernv/npu, vfio: NVIDIA V100 + P9 passthrough Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 01/20] powerpc/ioda/npu: Call skiboot's hot reset hook when disabling NPU2 Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 02/20] powerpc/mm/iommu/vfio_spapr_tce: Change mm_iommu_get to reference a region Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 03/20] powerpc/vfio/iommu/kvm: Do not pin device memory Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 04/20] powerpc/powernv: Move npu struct from pnv_phb to pci_controller Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 05/20] powerpc/powernv/npu: Move OPAL calls away from context manipulation Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 06/20] powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 07/20] powerpc/pseries/npu: Enable platform support Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 08/20] powerpc/pseries: Remove IOMMU API support for non-LPAR systems Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 09/20] powerpc/powernv/pseries: Rework device adding to IOMMU groups Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 10/20] powerpc/iommu_api: Move IOMMU groups setup to a single place Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 11/20] powerpc/powernv: Reference iommu_table while it is linked to a group Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 12/20] powerpc/powernv/npu: Move single TVE handling to NPU PE Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 13/20] powerpc/powernv/npu: Convert NPU IOMMU helpers to iommu_table_group_ops Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 14/20] powerpc/powernv/npu: Add compound IOMMU groups Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 15/20] powerpc/powernv/npu: Add release_ownership hook Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 16/20] powerpc/powernv/npu: Check mmio_atsd array bounds when populating Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 17/20] powerpc/powernv/npu: Fault user page into the hypervisor's pagetable Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 18/20] vfio_pci: Allow mapping extra regions Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 19/20] vfio_pci: Allow regions to add own capabilities Alexey Kardashevskiy
2018-12-20  8:23 ` [PATCH kernel v7 20/20] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver Alexey Kardashevskiy
2018-12-20 16:30   ` Murilo Opsfelder Araujo
2018-12-21  0:46     ` Michael Ellerman
2018-12-20 16:46   ` Alex Williamson
2018-12-21  1:23     ` Alexey Kardashevskiy
2018-12-21  1:37       ` Alex Williamson
2018-12-21  1:50         ` Alexey Kardashevskiy
2018-12-21  2:08           ` Alex Williamson [this message]
2018-12-20  9:38 ` [PATCH kernel v7 00/20] powerpc/powernv/npu, vfio: NVIDIA V100 + P9 passthrough Michael Ellerman
2018-12-20 11:28   ` Alexey Kardashevskiy
