From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
airlied@gmail.com, "Simona Vetter" <simona.vetter@ffwll.ch>,
felix.kuehling@amd.com,
"Christian König" <christian.koenig@amd.com>,
dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH 13/15] drm/xe: Support pcie p2p dma as a fast interconnect
Date: Wed, 29 Oct 2025 15:54:54 +0100
Message-ID: <d8133bc3d85a1284f73f97f535b729872c998a23.camel@linux.intel.com>
In-Reply-To: <aQF5RkxXtW+6GIy7@lstrano-desk.jf.intel.com>
On Tue, 2025-10-28 at 19:17 -0700, Matthew Brost wrote:
> On Sat, Oct 25, 2025 at 02:04:10PM +0200, Thomas Hellström wrote:
> > Mimic the dma-buf method using dma_[map|unmap]_resource to map
> > for pcie-p2p dma.
> >
> > There is ongoing work upstream to sort out how this should best
> > be done. One proposed method is to add an additional
> > pci_p2p_dma_pagemap aliasing the device_private pagemap, and to
> > use the corresponding pci_p2p_dma_pagemap page as input for
> > dma_map_page(). However, that would double both the memory
> > footprint and the setup latency of the drm_pagemap, and given the
> > huge amount of memory present on modern GPUs, that would really
> > not work. Hence the simple approach used in this patch.
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_svm.c | 44 ++++++++++++++++++++++++++++++++++---
> >  drivers/gpu/drm/xe/xe_svm.h |  1 +
> >  2 files changed, 42 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > index 9dd96dad2cca..9814f95cb212 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.c
> > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > @@ -3,6 +3,8 @@
> >   * Copyright © 2024 Intel Corporation
> >   */
> >  
> > +#include <linux/pci-p2pdma.h>
> > +
> >  #include <drm/drm_drv.h>
> >  #include <drm/drm_managed.h>
> >  #include <drm/drm_pagemap.h>
> > @@ -442,6 +444,24 @@ static u64 xe_page_to_dpa(struct page *page)
> >  	return dpa;
> >  }
> >
> > +static u64 xe_page_to_pcie(struct page *page)
> > +{
>
> This function looks almost exactly the same as xe_page_to_dpa, maybe
> extract out the common parts?
OK, I'll take a look at that.
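
Something like this, perhaps? (Untested sketch; xe_page_to_vr_offset()
is just a placeholder name, and it assumes xe_page_to_dpa() differs
from xe_page_to_pcie() only in adding vr->dpa_base rather than
vr->io_start to the same pagemap offset.)

static u64 xe_page_to_vr_offset(struct page *page,
				struct xe_vram_region **vr)
{
	struct xe_pagemap *xpagemap = xe_page_to_pagemap(page);
	u64 hpa_base = xpagemap->hpa_base;
	u64 pfn = page_to_pfn(page);

	*vr = xe_pagemap_to_vr(xpagemap);
	xe_assert((*vr)->xe, is_device_private_page(page));
	xe_assert((*vr)->xe, (pfn << PAGE_SHIFT) >= hpa_base);

	/* Page offset from the start of the pagemap, in bytes. */
	return (pfn << PAGE_SHIFT) - hpa_base;
}

static u64 xe_page_to_dpa(struct page *page)
{
	struct xe_vram_region *vr;
	u64 offset = xe_page_to_vr_offset(page, &vr);

	/* Device physical address: device base plus pagemap offset. */
	return vr->dpa_base + offset;
}

static u64 xe_page_to_pcie(struct page *page)
{
	struct xe_vram_region *vr;
	u64 offset = xe_page_to_vr_offset(page, &vr);

	/* PCIe bus address: BAR io_start plus the same offset. */
	return vr->io_start + offset;
}
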
/Thomas
>
> Everything else LGTM.
>
> Matt
>
> > +	struct xe_pagemap *xpagemap = xe_page_to_pagemap(page);
> > +	struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
> > +	u64 hpa_base = xpagemap->hpa_base;
> > +	u64 ioaddr;
> > +	u64 pfn = page_to_pfn(page);
> > +	u64 offset;
> > +
> > +	xe_assert(vr->xe, is_device_private_page(page));
> > +	xe_assert(vr->xe, (pfn << PAGE_SHIFT) >= hpa_base);
> > +
> > +	offset = (pfn << PAGE_SHIFT) - hpa_base;
> > +	ioaddr = vr->io_start + offset;
> > +
> > +	return ioaddr;
> > +}
> > +
> >  enum xe_svm_copy_dir {
> >  	XE_SVM_COPY_TO_VRAM,
> >  	XE_SVM_COPY_TO_SRAM,
> > @@ -793,7 +813,10 @@ static bool xe_has_interconnect(struct drm_pagemap_peer *peer1,
> >  	struct device *dev1 = xe_peer_to_dev(peer1);
> >  	struct device *dev2 = xe_peer_to_dev(peer2);
> >  
> > -	return dev1 == dev2;
> > +	if (dev1 == dev2)
> > +		return true;
> > +
> > +	return pci_p2pdma_distance(to_pci_dev(dev1), dev2, true) >= 0;
> >  }
> >
> >  static DRM_PAGEMAP_OWNER_LIST_DEFINE(xe_owner_list);
> > @@ -1530,13 +1553,27 @@ xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
> >  		addr = xe_page_to_dpa(page);
> >  		prot = XE_INTERCONNECT_VRAM;
> >  	} else {
> > -		addr = DMA_MAPPING_ERROR;
> > -		prot = 0;
> > +		addr = dma_map_resource(dev,
> > +					xe_page_to_pcie(page),
> > +					PAGE_SIZE << order, dir,
> > +					DMA_ATTR_SKIP_CPU_SYNC);
> > +		prot = XE_INTERCONNECT_P2P;
> >  	}
> >  
> >  	return drm_pagemap_addr_encode(addr, prot, order, dir);
> >  }
> >
> > +static void xe_drm_pagemap_device_unmap(struct drm_pagemap *dpagemap,
> > +					struct device *dev,
> > +					struct drm_pagemap_addr addr)
> > +{
> > +	if (addr.proto != XE_INTERCONNECT_P2P)
> > +		return;
> > +
> > +	dma_unmap_resource(dev, addr.addr, PAGE_SIZE << addr.order,
> > +			   addr.dir, DMA_ATTR_SKIP_CPU_SYNC);
> > +}
> > +
> >  static void xe_pagemap_destroy_work(struct work_struct *work)
> >  {
> >  	struct xe_pagemap *xpagemap = container_of(work, typeof(*xpagemap), destroy_work);
> > @@ -1573,6 +1610,7 @@ static void xe_pagemap_destroy(struct drm_pagemap *dpagemap, bool from_atomic_or
> >  
> >  static const struct drm_pagemap_ops xe_drm_pagemap_ops = {
> >  	.device_map = xe_drm_pagemap_device_map,
> > +	.device_unmap = xe_drm_pagemap_device_unmap,
> >  	.populate_mm = xe_drm_pagemap_populate_mm,
> >  	.destroy = xe_pagemap_destroy,
> >  };
> > diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> > index 7cd7932f56c8..f5ed48993b6d 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.h
> > +++ b/drivers/gpu/drm/xe/xe_svm.h
> > @@ -13,6 +13,7 @@
> >  #include <drm/drm_pagemap_util.h>
> >  
> >  #define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
> > +#define XE_INTERCONNECT_P2P (XE_INTERCONNECT_VRAM + 1)
> >  
> >  struct drm_device;
> >  struct drm_file;
> > --
> > 2.51.0
> >