From: Jason Gunthorpe <jgg@nvidia.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
intel-xe@lists.freedesktop.org,
"Matthew Brost" <matthew.brost@intel.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Kasireddy Vivek" <vivek.kasireddy@intel.com>,
"Simona Vetter" <simona.vetter@ffwll.ch>,
dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [RFC PATCH v2 1/2] dma-buf: Add support for private interconnects
Date: Mon, 29 Sep 2025 09:45:35 -0300
Message-ID: <20250929124535.GI2617119@nvidia.com>
In-Reply-To: <f33a4344-545a-43f4-9a3b-24bf070d559c@amd.com>
On Mon, Sep 29, 2025 at 10:16:30AM +0200, Christian König wrote:
> The point is that the exporter manages all accesses to its buffer
> and there can be more than one importer accessing it at the same
> time.
>
> So when an exporter sees that it already has an importer which can
> only do DMA to system memory it will expose only DMA addresses to all
> other importers as well.
I would rephrase that: if the exporter supports multiple placement
options for the memory (VRAM/CPU, for example) then it needs to track
which placement options all of its importers support and never place
the memory somewhere an active importer cannot reach.
I don't want to say that just because one importer wants to use
dma_addr_t all private interconnect options are disabled. If the
memory is in VRAM then multiple importers using a private interconnect
concurrently with dma_addr_t importers should be possible.
This seems like it is making the argument that the exporter does need
to know the importers' capabilities so it can figure out which
placement options are valid.
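
As a rough sketch of what I mean (importer_caps and the DMABUF_IC_*
bits are made up here, nothing like this exists in dma-buf today):

	/* Hypothetical interconnect capability bits, illustration only */
	#define DMABUF_IC_DMA_ADDR	BIT(0)	/* plain dma_addr_t access */
	#define DMABUF_IC_PRIVATE	BIT(1)	/* driver private interconnect */

	/*
	 * A placement is usable only if every active importer can reach
	 * it through at least one interconnect it supports. Note a single
	 * dma_addr_t-only importer doesn't stop others from using a
	 * private interconnect, it only constrains where the memory may
	 * be placed. Caller holds the dma-buf's reservation lock.
	 */
	static bool placement_reachable(struct dma_buf *dmabuf,
					u32 placement_ics)
	{
		struct dma_buf_attachment *attach;

		list_for_each_entry(attach, &dmabuf->attachments, node)
			if (!(attach->importer_caps & placement_ics))
				return false;
		return true;
	}

So VRAM that is reachable over p2p would pass DMABUF_IC_DMA_ADDR |
DMABUF_IC_PRIVATE, while VRAM without p2p would pass only
DMABUF_IC_PRIVATE and stop being a valid placement as soon as a
dma_addr_t-only importer attaches.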
> > I didn't sketch further, but I think the exporter and importer should
> > both be providing a compatible list and then in almost all cases the
> > core code should do the matching.
>
> More or less matches my idea. I would just start with the exporter
> providing a list of how its buffer is accessible because it knows
> about other importers and can pre-reduce the list if necessary.
I think the importer also has to advertise what it is able to support.
A big point of the private interconnect is that it won't use
scatterlist, so it needs to be a negotiated feature.
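
Something like this is the shape I have in mind, purely as a sketch
(supported_ics is a made-up field, it does not exist in
dma_buf_attach_ops today; DMABUF_IC_* are the made-up bits from
above):

	static void my_move_notify(struct dma_buf_attachment *attach)
	{
		/* invalidate cached mappings, as for any dynamic importer */
	}

	static const struct dma_buf_attach_ops my_importer_ops = {
		.allow_peer2peer = true,
		.move_notify = my_move_notify,
		/* Hypothetical: advertise what this importer can consume.
		 * An importer that sets nothing is treated as legacy
		 * scatterlist-only, so the private interconnect stays a
		 * strictly opt-in, negotiated feature. */
		.supported_ics = DMABUF_IC_DMA_ADDR | DMABUF_IC_PRIVATE,
	};

Then the core has both sides' lists and can do the matching without
either driver hard-coding assumptions about the other.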
> > For example, we have some systems with multipath PCI. This could
> > actually support those properly. The RDMA NIC has two struct devices
> > it operates with different paths, so it would write out two
> > &dmabuf_generic_dma_addr_t's - one for each.
>
> That is actually something we try rather hard to avoid. E.g. the
> exporter should offer only one path to each importer.
Real systems have multipath. We need to do an NxM negotiation where
both sides offer all their paths and the best-quality path is
selected.
Once the attachment is made it should be one interconnect and one
stable address within that interconnect.
In this example I'd expect the Xe GPU driver to always offer its
private interconnect and a dma_addr_t-based interconnect as both
exporter and importer. The core code should select one for the
attachment.
> We can of course do load balancing on a round-robin basis.
I'm not thinking about load balancing, more a 'quality of path'
metric.
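
To sketch the matching I have in mind (again, all of these names are
made up):

	/* Hypothetical path description both sides hand to the core code */
	struct ic_path {
		u32 type;	/* DMABUF_IC_* */
		int quality;	/* quality-of-path metric, higher is better */
	};

	/*
	 * NxM match: choose the common interconnect type with the best
	 * combined quality. Returns -1 if the sides share nothing, in
	 * which case the attach fails (or falls back to scatterlist).
	 */
	static int select_ic_path(const struct ic_path *exp, int n_exp,
				  const struct ic_path *imp, int n_imp,
				  u32 *chosen)
	{
		int best = -1, i, j;

		for (i = 0; i < n_exp; i++) {
			for (j = 0; j < n_imp; j++) {
				int q = min(exp[i].quality, imp[j].quality);

				if (exp[i].type == imp[j].type && q > best) {
					best = q;
					*chosen = exp[i].type;
				}
			}
		}
		return best;
	}

A multipath device just contributes more than one entry to its list,
and once the selection is made the attachment sticks to that single
interconnect and stable address as above.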
Jason