From: Jerome Glisse <j.glisse@gmail.com>
To: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Jerome Glisse <jglisse@redhat.com>,
"Deucher, Alexander" <Alexander.Deucher@amd.com>,
"'linux-kernel@vger.kernel.org'" <linux-kernel@vger.kernel.org>,
"'linux-rdma@vger.kernel.org'" <linux-rdma@vger.kernel.org>,
"'linux-nvdimm@lists.01.org'" <linux-nvdimm@ml01.01.org>,
"'Linux-media@vger.kernel.org'" <Linux-media@vger.kernel.org>,
"'dri-devel@lists.freedesktop.org'"
<dri-devel@lists.freedesktop.org>,
"'linux-pci@vger.kernel.org'" <linux-pci@vger.kernel.org>,
"Kuehling, Felix" <Felix.Kuehling@amd.com>,
"Sagalovitch, Serguei" <Serguei.Sagalovitch@amd.com>,
"Blinzer, Paul" <Paul.Blinzer@amd.com>,
"Koenig, Christian" <Christian.Koenig@amd.com>,
"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@amd.com>,
"Sander, Ben" <ben.sander@amd.com>,
hch@infradead.org, david1.zhou@amd.com, qiang.yu@amd.com
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Date: Thu, 5 Jan 2017 20:58:31 -0500
Message-ID: <20170106015831.GA2226@gmail.com>
In-Reply-To: <20170106003034.GB4670@obsidianresearch.com>
On Thu, Jan 05, 2017 at 05:30:34PM -0700, Jason Gunthorpe wrote:
> On Thu, Jan 05, 2017 at 06:23:52PM -0500, Jerome Glisse wrote:
>
> > > I still don't understand what you driving at - you've said in both
> > > cases a user VMA exists.
> >
> > In the former case no, there is no VMA directly, but if you want one then
> > a device can provide one. But such a VMA is useless as CPU access is not
> > expected.
>
> I disagree it is useless, the VMA is going to be necessary to support
> upcoming things like CAPI, you need it to support O_DIRECT from the
> filesystem, DPDK, etc. This is why I am opposed to any model that is
> not VMA based for setting up RDMA - that is short-sighted and does
> not seem to reflect where the industry is going.
>
> So focus on having VMA backed by actual physical memory that covers
> your GPU objects and ask how do we wire up the '__user *' to the DMA
> API in the best way so the DMA API still has enough information to
> setup IOMMUs and whatnot.
I am talking about 2 different things. The first is existing hardware and APIs
where you _do not_ have a VMA and you do not need one. This is just existing
stuff. Some closed-source drivers provide functionality on top of this design.
The question is: do we want to do the same? If yes, and you insist on having a
VMA, we could provide one, but it does not apply and is useless for where we
are going with new hardware.
With new hardware you just use malloc or mmap to allocate memory and then
you use it directly with the device. The device driver can migrate any part of
the process address space to device memory. In this scheme you have your
usual VMAs but there is nothing special about them.
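To make that concrete, here is a minimal userspace sketch of the model. The
GPU_SUBMIT ioctl number and the gpu_submit_args structure are made up for
illustration only; the point is that the buffer comes from plain malloc() and
the very same pointer is handed to the device, with any migration to device
memory done transparently by the driver:

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct gpu_submit_args {                /* hypothetical ioctl payload */
        void    *user_ptr;              /* ordinary process address */
        size_t   size;
};

#define GPU_SUBMIT _IOW('G', 1, struct gpu_submit_args)  /* made-up ioctl */

int submit_buffer(int gpu_fd, size_t size)
{
        struct gpu_submit_args args;
        void *buf = malloc(size);       /* plain anonymous memory, normal VMA */

        if (!buf)
                return -1;
        memset(buf, 0, size);

        args.user_ptr = buf;
        args.size = size;
        /* The driver may later migrate these pages to device memory;
         * userspace never notices and keeps using the same pointer. */
        return ioctl(gpu_fd, GPU_SUBMIT, &args);
}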
Now when you try to do get_user_pages() on any page that is inside the
device it will fail, because we do not allow any device memory to be pinned.
There are various reasons for that and they are not going away in any hardware
in the planning (so for the next few years).
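Sketched from the caller's side (illustrative code only, not taken from any
existing driver; the get_user_pages() signature is the one from roughly this
kernel era), the behaviour looks like this:

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Try to pin a user range the usual way.  Pages that currently live in
 * device memory cannot be pinned, so get_user_pages() comes back short and
 * the caller must pick a fallback (migrate the pages back to system memory,
 * or use an ODP-style faulting path instead).
 */
static int try_pin_user_range(unsigned long start, long npages,
                              struct page **pages)
{
        long pinned;
        long i;

        down_read(&current->mm->mmap_sem);
        pinned = get_user_pages(start, npages, FOLL_WRITE, pages, NULL);
        up_read(&current->mm->mmap_sem);

        if (pinned < npages) {
                /* Undo the partial pin and let the caller fall back. */
                for (i = 0; i < pinned; i++)
                        put_page(pages[i]);
                return -EAGAIN;
        }
        return 0;
}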
Still, we do want to support peer to peer mapping. The plan is to only do so
with ODP capable hardware. We still need to solve the IOMMU issue, and
it needs special handling inside the RDMA device. The way it works is
that RDMA asks for a GPU page, the GPU checks if it has room inside its PCI
BAR to map this page for the device, and this can fail. If it succeeds then
you need the IOMMU to let the RDMA device access the GPU PCI BAR.
So here we have 2 orthogonal problems. The first one is how to make 2 drivers
talk to each other to set up a mapping to allow peer to peer, and the second is
about the IOMMU.
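Roughly, and using made-up names (gpu_p2p_ops, map_to_bar() and
unmap_from_bar() do not exist anywhere, they only sketch the shape of a
cross-driver interface), the two steps would look something like this.
dma_map_resource() is a real helper that had just been added for slave DMA;
it is used here only to stand in for whatever IOMMU mapping step is needed,
not as an endorsed P2P API:

#include <linux/dma-mapping.h>
#include <linux/mm_types.h>

struct gpu_p2p_ops {                    /* hypothetical cross-driver interface */
        /* Make @page reachable through the GPU BAR and return the BAR's CPU
         * physical address.  May fail if no BAR space is available. */
        int (*map_to_bar)(struct device *gpu_dev, struct page *page,
                          phys_addr_t *bar_phys);
        void (*unmap_from_bar)(struct device *gpu_dev, struct page *page);
};

static int rdma_map_gpu_page(struct device *rdma_dev, struct device *gpu_dev,
                             const struct gpu_p2p_ops *ops,
                             struct page *page, dma_addr_t *dma_addr)
{
        phys_addr_t bar_phys;
        int ret;

        /* Step 1: driver-to-driver negotiation; this can fail. */
        ret = ops->map_to_bar(gpu_dev, page, &bar_phys);
        if (ret)
                return ret;             /* caller falls back to system memory */

        /* Step 2: IOMMU - the RDMA device needs a bus address through which
         * it is allowed to reach the GPU BAR. */
        *dma_addr = dma_map_resource(rdma_dev, bar_phys, PAGE_SIZE,
                                     DMA_BIDIRECTIONAL, 0);
        if (dma_mapping_error(rdma_dev, *dma_addr)) {
                ops->unmap_from_bar(gpu_dev, page);
                return -ENOMEM;
        }
        return 0;
}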
> > What I was trying to get across is that no matter what level you
> > consider, in the end you still need something at the DMA API level.
> > And that the 2 different use cases (device vma or regular vma) mean
> > 2 different APIs for the device driver.
>
> I agree we need new stuff at the DMA API level, but I am opposed to
> the idea we need two API paths that the *driver* has to figure out.
> That is fundamentally not what I want as a driver developer.
>
> Give me a common API to convert '__user *' to a scatter list and pin
> the pages. This needs to figure out your two cases. And Huge
> Pages. And ZONE_DEVICE... (a better get_user_pages)
Pinning is not going to happen; like I said, it would hinder the GPU to the
point it would become useless.
> Give me an API to take the scatter list and DMA map it, handling all
> the stuff associated with peer-peer. (a better dma_map_sg)
>
> Give me a notifier scheme to rework my scatter list when physical
> pages need to change (mmu notifiers)
>
> Use the scatter list memory to convey needed information from the
> first step to the second.
>
> Do not bother the driver with distinctions on what kind of memory is
> behind that VMA. Don't ask me to use get_user_pages or
> gpu_get_user_pages, do not ask me to use dma_map_sg or
> dma_map_sg_peer_direct. The Driver Doesn't Need To Know.
I understand you want it easy, but there must be a part that is aware of it,
at the very least the ODP logic. Creating a peer to peer mapping is a multi
step process and some of those steps can fail. The fallback is always to
migrate back to system memory as a default path that can not fail, except
if we are out of memory.
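Reusing the hypothetical rdma_map_gpu_page() from the earlier sketch, the
fallback policy is simply the following (hmm_migrate_to_system() is also made
up; it stands for "migrate this range back to system memory", the default path
that only fails on OOM):

static int rdma_setup_mapping(struct device *rdma_dev, struct device *gpu_dev,
                              const struct gpu_p2p_ops *ops,
                              struct page *page, dma_addr_t *dma_addr)
{
        int ret;

        /* Try the multi-step peer to peer mapping first. */
        ret = rdma_map_gpu_page(rdma_dev, gpu_dev, ops, page, dma_addr);
        if (!ret)
                return 0;

        /* Any failure falls back to migrating the page to system memory;
         * this only fails if we are out of memory. */
        ret = hmm_migrate_to_system(gpu_dev, page);     /* hypothetical */
        if (ret)
                return ret;

        /* Migration replaces the backing page, so tell the caller to retry
         * through the ordinary get_user_pages()/dma_map_page() path. */
        return -EAGAIN;
}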
> IMHO this is why GPU direct is not mergeable - it creates a crazy
> parallel mini-mm subsystem inside RDMA and uses that to connect to a
> GPU driver, everything is expected to have parallel paths for GPU
> direct and normal MM. No good at all.
Existing hardware and new hardware work differently. I am trying to
explain the two different designs needed for each one. You understandably
dislike the existing hardware, which has more stringent requirements,
can not be supported transparently, and needs dedicated communication
between the two drivers.
New hardware has a completely different API in userspace. We can
decide to only support the latter and forget about the former.
> > > So, how do you identify these GPU objects? How do you expect RDMA
> > > to convert them to scatter lists? How will ODP work?
> >
> > No ODP on those. If you want vma, the GPU device driver can provide
>
> You said you needed invalidate, that has to be done via ODP.
Invalidate is needed for both old and new hardware. With new hardware the
mmu_notifier is good enough. But you still need special handling when trying
to establish a mapping in the HMM case, where not all of the GPU memory can be
accessed through the BAR. So no matter what, it will need special handling,
but this can happen in the common infrastructure code (in the ODP fault path).
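For the invalidation side, a minimal sketch of what that common infrastructure
would hook up (mmu_notifier_ops and invalidate_range_start() are the real
kernel interface of this era; rdma_umem_odp and rdma_invalidate_range() are
made-up names for the device-mapping state and the teardown helper):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct rdma_umem_odp {
        struct mmu_notifier mn;
        /* ... device mapping state for the registered range ... */
};

static void rdma_odp_invalidate_range_start(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long start,
                                            unsigned long end)
{
        struct rdma_umem_odp *umem = container_of(mn, struct rdma_umem_odp, mn);

        /* Revoke the device mapping before the pages move or go away, e.g.
         * when HMM migrates them between system and device memory.  The
         * device will fault (ODP) and re-establish the mapping later. */
        rdma_invalidate_range(umem, start, end);        /* hypothetical */
}

static const struct mmu_notifier_ops rdma_odp_mn_ops = {
        .invalidate_range_start = rdma_odp_invalidate_range_start,
};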
Cheers,
Jérôme