From: Jason Gunthorpe <jgg@mellanox.com>
To: Logan Gunthorpe <logang@deltatee.com>
Cc: Christoph Hellwig <hch@lst.de>,
Jerome Glisse <jglisse@redhat.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
"Rafael J . Wysocki" <rafael@kernel.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Christian Koenig <christian.koenig@amd.com>,
Felix Kuehling <Felix.Kuehling@amd.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Robin Murphy <robin.murphy@arm.com>,
Joerg Roedel <jroedel@suse.de>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>
Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
Date: Wed, 30 Jan 2019 21:50:25 +0000
Message-ID: <20190130215019.GL17080@mellanox.com>
In-Reply-To: <35bad6d5-c06b-f2a3-08e6-2ed0197c8691@deltatee.com>

On Wed, Jan 30, 2019 at 02:01:35PM -0700, Logan Gunthorpe wrote:
> And I feel the GUP->SGL->DMA flow should still be what we are aiming
> for. Even if we need a special GUP for special pages, and a special DMA
> map; and the SGL still has to be homogenous....

*shrug* so what if the special GUP called a VMA op instead of
traversing the VMA PTEs today? Why does it really matter? It could
easily change to a struct page flow tomorrow..

> > So, I see Jerome solving the GUP problem by replacing GUP entirely
> > using an API that is more suited to what these sorts of drivers
> > actually need.
>
> Yes, this is what I'm expecting and what I want. Not bypassing the whole
> thing by doing special things with VMAs.

IMHO struct page is a big pain for this application, and if we can
build flows that don't actually need it then we shouldn't require it
just because the old flows needed it.

HMM mirror is a new flow that doesn't need struct page.

Would you feel better if this also came along with a:

  struct dma_sg_table *sgl_dma_map_user(struct device *dma_device,
                                        void __user *ptr, size_t len)

flow which returns a *DMA MAPPED* sgl that does not have struct page
pointers as another interface?
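
Just to sketch the shape of what I mean (none of this exists today,
all the names are made up for illustration):

  /*
   * Sketch: a scatter/gather table carrying only bus addresses that
   * are already mapped for dma_device. There are no struct page or
   * kernel virtual address pointers, so the CPU cannot touch the
   * memory through this object and the __iomem distinction stops
   * mattering to the caller.
   */
  struct dma_sg_entry {
          dma_addr_t      addr;   /* device-visible (bus/IOVA) address */
          unsigned int    len;    /* length of this contiguous chunk */
  };

  struct dma_sg_table {
          struct device           *dma_device;    /* mapped for this device */
          unsigned int            nents;
          struct dma_sg_entry     entries[];
  };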

We can certainly call an API like this from RDMA for non-ODP MRs.

Eliminating the page pointers also eliminates the __iomem
problem. However this sgl object is not copyable or accessible from
the CPU, so the caller must be sure it doesn't need CPU access when
using this API.

For RDMA I'd include some flag in the struct ib_device if the driver
requires CPU accessible SGLs and call the right API. Maybe the block
layer could do the same trick for O_DIRECT?
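
Roughly, as a sketch only (the flag, the dma_sgt member and the
helpers below are invented names, not existing ib_core API):

  /* Sketch: pick the mapping flavour per device. */
  static int ib_map_user_buf(struct ib_device *ibdev, void __user *ptr,
                             size_t len, struct ib_umem *umem)
  {
          if (ibdev->needs_cpu_accessible_sgl)
                  /* classic GUP + page-based SGL path */
                  return ib_umem_map_pages(umem, ptr, len);

          /* DMA-only path: no struct page, no CPU access to the buffer */
          umem->dma_sgt = sgl_dma_map_user(ibdev->dma_device, ptr, len);
          return PTR_ERR_OR_ZERO(umem->dma_sgt);
  }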

This would also directly solve the P2P problem with hfi1/qib/rxe, as
I'd likely also say that pci_p2pdma_map_sg() returns the same DMA only
sgl thing.
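
ie something along these lines (the first prototype is roughly today's
interface, from memory; the second only sketches the proposed change):

  /* roughly today: maps a page-based scatterlist in place for P2P */
  int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg,
                        int nents, enum dma_data_direction dir);

  /* hypothetical shape of the change: hand back the DMA-only sgl
   * instead, so hfi1/qib/rxe never see struct pages for P2P memory
   */
  struct dma_sg_table *pci_p2pdma_map_sg(struct device *dev,
                                         struct scatterlist *sg, int nents,
                                         enum dma_data_direction dir);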

Jason