From: Alex Williamson <alex.williamson@redhat.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: "Jose Ricardo Ziviani" <joserz@linux.ibm.com>,
"Sam Bobroff" <sbobroff@linux.ibm.com>,
"Alistair Popple" <alistair@popple.id.au>,
"Daniel Henrique Barboza" <danielhb413@gmail.com>,
linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org,
"Piotr Jaroszynski" <pjaroszynski@nvidia.com>,
"Oliver O'Halloran" <oohall@gmail.com>,
"Andrew Donnellan" <andrew.donnellan@au1.ibm.com>,
"Leonardo Augusto Guimarães Garcia" <lagarcia@br.ibm.com>,
"Reza Arbab" <arbab@linux.ibm.com>,
"David Gibson" <david@gibson.dropbear.id.au>
Subject: Re: [PATCH kernel v4 19/19] vfio_pci: Add NVIDIA GV100GL [Tesla V100 SXM2] subdriver
Date: Mon, 10 Dec 2018 17:08:13 -0700
Message-ID: <20181210170813.426e382b@x1.home>
In-Reply-To: <20181123055304.25116-20-aik@ozlabs.ru>

On Fri, 23 Nov 2018 16:53:04 +1100
Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
> POWER9 Witherspoon machines come with 4 or 6 V100 GPUs which are not
> pluggable PCIe devices but still have PCIe links used for config space
> and MMIO. In addition, the GPUs have 6 NVLinks which are connected to
> other GPUs and to the POWER9 CPU. POWER9 chips have a special on-die
> unit called an NPU which is an NVLink2 host bus adapter with p2p
> connections to 2 or 3 GPUs, with 3 or 2 NVLinks to each. These systems
> also support ATS (address translation services) which is part of the
> NVLink2 protocol. Such GPUs also share their on-board RAM (16GB or
> 32GB) with the system via the same NVLink2, so a CPU has cache-coherent
> access to the GPU RAM.
>
> This exports GPU RAM to userspace as a new VFIO device region. This
> preregisters the new memory as device memory as it might be used for DMA.
> This inserts pfns from the fault handler because the GPU memory is not
> onlined until the vendor driver is loaded and has trained the NVLinks;
> doing this earlier causes low level errors which we fence in the firmware
> so they do not hurt the host system, but it is still better avoided.
>
> This exports an ATSD (Address Translation Shootdown) register of the NPU
> which allows the operating system to do TLB invalidations inside the GPU.
> The register conveniently occupies a single 64k page. It is also presented
> to userspace as a new VFIO device region.
>
> In order to provide userspace with information about GPU-to-NVLink
> connections, this exports an additional capability called "tgt"
> (which is an abbreviated host system bus address). The "tgt" property
> tells the GPU its own system address and allows the guest driver to
> assemble the routing information so each GPU knows how to get directly
> to the other GPUs.
>
> For ATS to work, the nest MMU (an NVIDIA block in a P9 CPU) needs to
> know the LPID (a logical partition ID, in other words a KVM guest
> hardware ID) and the PID (a memory context ID of a userspace process,
> not to be confused with a Linux pid). This assigns a GPU to an LPID in
> the NPU, which is why this adds a KVM listener on the IOMMU group. A PID
> comes via NVLink from a GPU and the NPU uses a PID wildcard to pass it
> through.
>
> This requires coherent memory and ATSD to be available on the host as
> the GPU vendor only supports configurations with both features enabled
> and other configurations are known not to work. Because of this and
> because of the ways the features are advertised to the host system
> (which is a device tree with very platform-specific properties),
> this requires the POWERNV platform to be enabled.
>
> The V100 GPUs do not advertise none of these capabilities via the config
s/none/any/
> space and there is more than one device ID, so this relies on
> the platform to tell whether these GPUs have special abilities such as
> NVLinks.
>
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v4:
> * added nvlink-speed to the NPU bridge capability as this turned out
> not to be a constant value
> * instead of looking at the exact device ID (which also changes from system
> to system), now this (indirectly) looks at the device tree to know
> whether the GPU and NPU support NVLink
>
> v3:
> * reworded the commit log about tgt
> * added tracepoints (do we want them enabled for entire vfio-pci?)
> * added code comments
> * added write|mmap flags to the new regions
> * auto enabled VFIO_PCI_NVLINK2 config option
> * added 'tgt' capability to a GPU so QEMU can recreate ibm,npu and ibm,gpu
> references; these are required by the NVIDIA driver
> * keep notifier registered only for short time
> ---
> drivers/vfio/pci/Makefile | 1 +
> drivers/vfio/pci/trace.h | 102 +++++++
> drivers/vfio/pci/vfio_pci_private.h | 2 +
> include/uapi/linux/vfio.h | 27 ++
> drivers/vfio/pci/vfio_pci.c | 37 ++-
> drivers/vfio/pci/vfio_pci_nvlink2.c | 448 ++++++++++++++++++++++++++++
> drivers/vfio/pci/Kconfig | 6 +
> 7 files changed, 621 insertions(+), 2 deletions(-)
> create mode 100644 drivers/vfio/pci/trace.h
> create mode 100644 drivers/vfio/pci/vfio_pci_nvlink2.c
>
> diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
> index 76d8ec0..9662c06 100644
> --- a/drivers/vfio/pci/Makefile
> +++ b/drivers/vfio/pci/Makefile
> @@ -1,5 +1,6 @@
>
> vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
> vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
> +vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
>
> obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
...
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 93c1738..7639241 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -163,4 +163,6 @@ static inline int vfio_pci_igd_init(struct vfio_pci_device *vdev)
> return -ENODEV;
> }
> #endif
> +extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
> +extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
> #endif /* VFIO_PCI_PRIVATE_H */
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 8131028..547e71e 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -353,6 +353,20 @@ struct vfio_region_gfx_edid {
> #define VFIO_DEVICE_GFX_LINK_STATE_DOWN 2
> };
>
> +/* 10de vendor sub-type
> + *
> + * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
> + */
nit, prefer the comment style used below, i.e. leaving the first line of
a multi-line comment empty, per kernel coding style.
> +#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM (1)
> +
> +/*
> + * 1014 vendor sub-type
> + *
> + * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
> + * to do TLB invalidation on a GPU.
> + */
> +#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD (1)
> +
> /*
> * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> * which allows direct access to non-MSIX registers which happened to be within
> @@ -363,6 +377,19 @@ struct vfio_region_gfx_edid {
> */
> #define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE 3
>
> +/*
> + * Capability with compressed real address (aka SSA - small system address)
> + * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing.
> + */
> +#define VFIO_REGION_INFO_CAP_NPU2 4
> +
> +struct vfio_region_info_cap_npu2 {
> + struct vfio_info_cap_header header;
> + __u64 tgt;
> + __u32 link_speed;
> + __u32 __pad;
> +};
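
(For context, a rough sketch of how userspace might dig this capability
out of the region info cap chain -- hypothetical consumer code, not part
of this patch:)

	struct vfio_region_info *info;	/* from VFIO_DEVICE_GET_REGION_INFO */
	struct vfio_info_cap_header *hdr;
	struct vfio_region_info_cap_npu2 *cap = NULL;
	__u32 off;

	if (info->flags & VFIO_REGION_INFO_FLAG_CAPS)
		for (off = info->cap_offset; off; off = hdr->next) {
			hdr = (void *)info + off;
			if (hdr->id == VFIO_REGION_INFO_CAP_NPU2) {
				cap = (void *)hdr;
				break;	/* cap->tgt, cap->link_speed valid */
			}
		}
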
> +
> /**
> * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
> * struct vfio_irq_info)
> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 6cb70cf..b8a53f9 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -224,6 +224,16 @@ static bool vfio_pci_nointx(struct pci_dev *pdev)
> return false;
> }
>
> +int __weak vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
> +{
> + return -ENODEV;
> +}
> +
> +int __weak vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
> +{
> + return -ENODEV;
> +}
> +
Why not static inlines in vfio_pci_private.h like we do for igd hooks?
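
i.e. something along these lines in vfio_pci_private.h (untested, just
mirroring the existing igd pattern shown above):

#ifdef CONFIG_VFIO_PCI_NVLINK2
extern int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev);
extern int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev);
#else
static inline int vfio_pci_nvdia_v100_nvlink2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}

static inline int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev)
{
	return -ENODEV;
}
#endif
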
...
> static void vfio_pci_disable(struct vfio_pci_device *vdev)
> diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c
> new file mode 100644
> index 0000000..e8e06c3
> --- /dev/null
> +++ b/drivers/vfio/pci/vfio_pci_nvlink2.c
...
> +static int vfio_pci_nvgpu_mmap(struct vfio_pci_device *vdev,
> + struct vfio_pci_region *region, struct vm_area_struct *vma)
> +{
> + long ret;
> + struct vfio_pci_nvgpu_data *data = region->data;
> +
> + if (data->useraddr)
> + return -EPERM;
> +
> + if (vma->vm_end - vma->vm_start > data->size)
> + return -EINVAL;
> +
> + vma->vm_private_data = region;
> + vma->vm_flags |= VM_PFNMAP;
> + vma->vm_ops = &vfio_pci_nvgpu_mmap_vmops;
> +
> + /*
> + * Calling mm_iommu_newdev() here once as the region is not
> + * registered yet and therefore right initialization will happen now.
> + * Other places will use mm_iommu_find() which returns
> + * registered @mem and does not go gup().
> + */
> + data->useraddr = vma->vm_start;
> + data->mm = current->mm;
> +
> + atomic_inc(&data->mm->mm_count);
> + ret = mm_iommu_newdev(data->mm, data->useraddr,
> + (vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
> + data->gpu_hpa, &data->mem);
> +
> + trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
> + vma->vm_end - vma->vm_start, ret);
> +
> + return ret;
It's unfortunate that all these mm_iommu_foo functions return long while
this function returns int, which sent me down the rabbit hole to see what
mm_iommu_newdev(), and therefore mm_iommu_do_alloc(), can return.  Can
you do a translation somewhere so this doesn't look like a possible
overflow?
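
Something along these lines would do, for instance (completely untested,
just to illustrate the kind of translation I mean; alternatively the
mm_iommu_* helpers could be changed to return int):

	long lret;
	int ret;
	...
	lret = mm_iommu_newdev(data->mm, data->useraddr,
			(vma->vm_end - vma->vm_start) >> PAGE_SHIFT,
			data->gpu_hpa, &data->mem);
	/* mm_iommu_newdev() returns 0 or a negative errno, which fits in int */
	ret = (int) lret;

	trace_vfio_pci_nvgpu_mmap(vdev->pdev, data->gpu_hpa, data->useraddr,
			vma->vm_end - vma->vm_start, ret);

	return ret;

Thanks,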
Alex