intel-xe.lists.freedesktop.org archive mirror
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: "Dave Airlie" <airlied@gmail.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
	"Matthew Brost" <matthew.brost@intel.com>,
	oak.zeng@intel.com, "Daniel Vetter" <daniel.vetter@ffwll.ch>
Subject: Re: Cross-device and cross-driver HMM support
Date: Wed, 3 Apr 2024 09:57:12 -0300	[thread overview]
Message-ID: <20240403125712.GA1744080@nvidia.com> (raw)
In-Reply-To: <5495090e-dee1-4c8e-91bc-240632fd3e35@amd.com>

On Wed, Apr 03, 2024 at 11:16:36AM +0200, Christian König wrote:
> Am 03.04.24 um 00:57 schrieb Dave Airlie:
> > On Wed, 27 Mar 2024 at 19:52, Thomas Hellström
> > <thomas.hellstrom@linux.intel.com> wrote:
> > > Hi!
> > > 
> > > With our SVM mirror work we'll soon start looking at HMM cross-device
> > > support. The identified needs are
> > > 
> > > 1) Instead of migrating foreign device memory to system when the
> > > current device is faulting, leave it in place...
> > > 1a) for access using internal interconnect,
> > > 1b) for access using PCIE p2p (probably mostly as a reference)
> 
> I still agree with Sima that we won't see P2P based on HMM between devices
> anytime soon if ever.

We've got a team working on the subset of this problem where we can
have a GPU driver install DEVICE_PRIVATE pages and the RDMA driver use
hmm_range_fault() to take the DEVICE_PRIVATE and return an equivalent
P2P page for DMA.

We already have a working prototype that is not too bad code-wise.
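The consuming side of that prototype follows the standard mirror pattern that ODP already uses today: fault the range with hmm_range_fault() inside a mmu_interval_notifier retry loop, then pull struct pages out of the pfn array. A hedged sketch (the function name is made up; error handling trimmed):

```c
/* Sketch of the consuming (e.g. RDMA) side: fault a VA range and get
 * back struct pages, which may be P2P pages pointing at another
 * device's memory rather than system RAM. */
static int mirror_fault_range(struct mm_struct *mm,
			      struct mmu_interval_notifier *notifier,
			      unsigned long start, unsigned long end,
			      unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret == -EBUSY)
			continue;	/* raced an invalidation, retry */
		if (ret)
			return ret;
	} while (mmu_interval_read_retry(notifier, range.notifier_seq));

	/* pfns[] now holds HMM_PFN_VALID entries; hmm_pfn_to_page()
	 * yields the struct page for each, possibly a P2PDMA page. */
	return 0;
}
```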

> E.g. there is no common representation of DMA addresses with address spaces.
> In other words you need to know the device which does DMA for an address to
> make sense.

? Every device calls hmm_range_fault() on its own, to populate
its own private mirror page table, and gets a P2P page. The device can
DMA-map that P2P page for its own use to get a topologically appropriate
DMA address for its own private page table. The struct page for P2P
references the pgmap, which references the target struct device; the
DMA API provides the requesting struct device. The infrastructure for
all of this is already there.

There is a separate discussion about optimizing away the P2P pgmap,
but for the moment I'm focused on getting things working by relying on
it.
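So the per-device mapping step is just the normal DMA API path; a hedged sketch of what the requesting device does with the faulted pages (assumes a P2PDMA-aware dma_map_sgtable(), which is exactly the area the ongoing DMA API rework touches):

```c
/* Sketch: map pages obtained from hmm_range_fault() for this device's
 * own DMA. The DMA API, not the caller, picks the topologically
 * correct bus address for a P2PDMA page via the provider's pgmap. */
static int map_mirror_pages(struct device *dev, struct sg_table *sgt)
{
	/* sgt was built from hmm_pfn_to_page() results. Entries that
	 * are peer BAR memory (is_pci_p2pdma_page() is true for them)
	 * get provider-relative bus addresses; system pages get
	 * ordinary IOVA mappings. */
	return dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
}
```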

> Additional to that we don't have a representation for internal connections,
> e.g. the common kernel has no idea that device A and device B can talk
> directly to each other, but not with device C.

We do have this in the PCI P2P framework; it isn't very complete, but
it does handle the immediate cases I see people building, where we
have switches and ACS/!ACS paths with different addressing depending
on topology.
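Concretely, the P2P framework can already answer the "can A talk to B" question Christian raises, per provider/client pair. A sketch using the existing distance query (helper name is made up):

```c
/* Sketch: ask the PCI P2PDMA core whether a provider's BAR memory is
 * reachable from a client device, and at what cost. */
static bool peers_can_talk(struct pci_dev *provider, struct device *client)
{
	struct device *clients[] = { client };

	/* A negative distance means no usable P2P path (e.g. blocked
	 * by ACS or crossing an unsupported root complex); >= 0 is
	 * the hop cost through the fabric. */
	return pci_p2pdma_distance_many(provider, clients, 1, true) >= 0;
}
```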

> > > and we plan to add an infrastructure for this. Probably this can be
> > > done initially without too much (or any) changes to the hmm code
> > > itself.

It is essential that any work in this area is not tied to DRM.
hmm_range_fault() and DEVICE_PRIVATE are generic kernel concepts; we
need to make them work better, not build weird DRM side channels.
 
> > > So the question is basically whether anybody is interested in a
> > > drm-wide solution for this and in that case also whether anybody sees
> > > the need for cross-driver support?
> 
> We have use cases for this as well, yes.

Unfortunately this is a long journey. The immediate next steps are
Alistair's work to untangle the DAX refcounting mess from ZONE_DEVICE
pages:

https://lore.kernel.org/linux-mm/87ttlhmj9p.fsf@nvdebian.thelocal/

Leon is working on improving the DMA API and RDMA's ODP to
be better set up for this:

https://lore.kernel.org/linux-rdma/cover.1709635535.git.leon@kernel.org/

[Which is also the basis for fixing DMABUF's abuse of the DMA API]

Then it is pretty simple to teach hmm_range_fault() to convert a
DEVICE_PRIVATE page into a P2P page using a new pgmap op and from
there the rest already basically exists.
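To be clear about what "a new pgmap op" would look like: nothing like this exists today, so the op name and signature below are invented purely for illustration. The idea is that when hmm_range_fault() hits a DEVICE_PRIVATE entry owned by another device, it asks that page's pgmap for an equivalent P2P page instead of forcing a migration to system memory:

```c
/* Hypothetical addition to struct dev_pagemap_ops, alongside the
 * existing page_free and migrate_to_ram ops. Name and shape are
 * made up for illustration; this op does not exist yet. */
	/* Return the P2PDMA struct page aliasing the same device
	 * memory as @private_page, or NULL if that memory is not
	 * exposed over the bus (caller then falls back to migrating
	 * the page to system memory as today). */
	struct page *(*get_dma_page)(struct page *private_page);
```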

Folks doing non-PCIe topologies will need to teach the P2P layer how
address translation works on those buses.

Jason


Thread overview: 7+ messages
2024-03-27  9:52 Cross-device and cross-driver HMM support Thomas Hellström
2024-04-02 22:57 ` Dave Airlie
2024-04-03  9:16   ` Christian König
2024-04-03 12:57     ` Jason Gunthorpe [this message]
2024-04-03 14:06       ` Christian König
2024-04-03 15:09         ` Jason Gunthorpe
2024-04-09 10:18           ` Thomas Hellström
