From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dan Williams
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Date: Mon, 5 Dec 2016 09:40:38 -0800
Message-ID:
References: <20161125193252.GC16504@obsidianresearch.com>
 <20161128165751.GB28381@obsidianresearch.com>
 <1480357179.19407.13.camel@mellanox.com>
 <20161128190244.GA21975@obsidianresearch.com>
 <20161130162353.GA24639@obsidianresearch.com>
 <5f5b7989-84f5-737e-47c8-831f752d6280@deltatee.com>
 <61a2fb07344aacd81111449d222de66e.squirrel@webmail.raithlin.com>
 <20161205171830.GB27784@obsidianresearch.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <20161205171830.GB27784-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
Errors-To: linux-nvdimm-bounces-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
Sender: "Linux-nvdimm"
To: Jason Gunthorpe
Cc: Haggai Eran, "John.Bridgman-5C7GfCeVMHo@public.gmane.org",
 "linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org",
 "linux-nvdimm-y27Ovi1pjclAfugRpC6u6w@public.gmane.org",
 "Felix.Kuehling-5C7GfCeVMHo@public.gmane.org",
 "serguei.sagalovitch-5C7GfCeVMHo@public.gmane.org",
 "Paul.Blinzer-5C7GfCeVMHo@public.gmane.org",
 "linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org",
 "dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org",
 Stephen Bates, "ben.sander-5C7GfCeVMHo@public.gmane.org",
 "Suravee.Suthikulpanit-5C7GfCeVMHo@public.gmane.org",
 "linux-pci-u79uwXL29TY76Z2rM5mHXA@public.gmane.org",
 "Alexander.Deucher-5C7GfCeVMHo@public.gmane.org", Max Gurtovoy,
 "christian.koenig-5C7GfCeVMHo@public.gmane.org",
 "Linux-media-u79uwXL29TY76Z2rM5mHXA@public.gmane.org"
List-Id: dri-devel@lists.freedesktop.org

On Mon, Dec 5, 2016 at 9:18 AM, Jason Gunthorpe wrote:
> On Sun, Dec 04, 2016 at 07:23:00AM -0600, Stephen Bates wrote:
>> Hi All
>>
>> This has been a great thread (thanks to Alex for kicking it off) and I
>> wanted to jump in and maybe try and put some summary around the
>> discussion. I also wanted to propose we include this as a topic for LFS/MM
>> because I think we need more discussion on the best way to add this
>> functionality to the kernel.
>>
>> As far as I can tell the people looking for P2P support in the kernel fall
>> into two main camps:
>>
>> 1. Those who simply want to expose static BARs on PCIe devices that can be
>> used as the source/destination for DMAs from another PCIe device. This
>> group has no need for memory invalidation and are happy to use
>> physical/bus addresses and not virtual addresses.
>
> I didn't think there was much on this topic except for the CMB
> thing.. Even that is really a mapped kernel address..
>
>> I think something like the iopmem patches Logan and I submitted recently
>> come close to addressing use case 1. There are some issues around
>> routability but based on feedback to date that does not seem to be a
>> show-stopper for an initial inclusion.
>
> If it is kernel only with physical addresses we don't need a uAPI for
> it, so I'm not sure #1 is at all related to iopmem.
>
> Most people who want #1 probably can just mmap
> /sys/../pci/../resourceX to get a user handle to it, or pass around
> __iomem pointers in the kernel. This has been asked for before with
> RDMA.
>
> I'm still not really clear what iopmem is for, or why DAX should ever
> be involved in this..

Right, by default remap_pfn_range() does not establish DMA-capable
mappings.
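That is exactly the path behind the pci-sysfs resourceX mmap you
mention. Roughly, as a minimal sketch (the device path is hypothetical
and error handling is elided):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* resource0 is BAR 0 as exposed by pci-sysfs */
	int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
		      O_RDWR);

	/*
	 * pci-sysfs services this mmap() with (io_)remap_pfn_range(),
	 * so the mapping works for CPU loads/stores but has no struct
	 * pages behind it - nothing here can be handed to the DMA API.
	 */
	void *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);

	/* ... drive the device through 'bar' ... */
	munmap(bar, 4096);
	close(fd);
	return 0;
}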
You can think of iopmem as remap_pfn_range() converted to use
devm_memremap_pages(). Given the extra constraints of
devm_memremap_pages(), it seems reasonable to have those DMA-capable
mappings optionally be established via a separate driver.
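In other words, something along these lines. This is a very rough
sketch, not the actual iopmem patches: the probe routine and BAR choice
are hypothetical, and a real driver would also wire up a percpu_ref to
manage page lifetime rather than passing NULL:

#include <linux/pci.h>
#include <linux/memremap.h>

static int iopmem_probe(struct pci_dev *pdev,
			const struct pci_device_id *id)
{
	struct resource *res = &pdev->resource[0];	/* BAR 0, say */
	void *addr;

	/*
	 * Hotplug the BAR as ZONE_DEVICE memory. Unlike
	 * remap_pfn_range(), this allocates struct pages for the
	 * region, so its pfns can be fed to dma_map_page() and
	 * friends.
	 */
	addr = devm_memremap_pages(&pdev->dev, res, NULL, NULL);
	if (IS_ERR(addr))
		return PTR_ERR(addr);

	pci_set_drvdata(pdev, addr);
	return 0;
}

The separate-driver split keeps the struct page overhead opt-in:
devices that only ever need CPU access to the BAR never pay for it.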