To: Jerome Glisse , Benjamin Herrenschmidt
Cc: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-nvdimm@lists.01.org, linux-block@vger.kernel.org, Stephen Bates , Christoph Hellwig , Jens Axboe , Keith Busch , Sagi Grimberg , Bjorn Helgaas , Jason Gunthorpe , Max Gurtovoy , Dan Williams , Alex Williamson , Oliver OHalloran
References: <20180228234006.21093-1-logang@deltatee.com> <1519876489.4592.3.camel@kernel.crashing.org> <1519876569.4592.4.camel@au1.ibm.com> <8e808448-fc01-5da0-51e7-1a6657d5a23a@deltatee.com> <1519936195.4592.18.camel@au1.ibm.com> <20180301205548.GA6742@redhat.com>
From: Logan Gunthorpe
Message-ID:
Date: Thu, 1 Mar 2018 14:03:26 -0700
MIME-Version: 1.0
In-Reply-To: <20180301205548.GA6742@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
List-ID:

On 01/03/18 01:55 PM, Jerome Glisse wrote:
> Well, this is again a new user of struct page for device memory, just
> for one use case. I wanted HMM to be more versatile so that it could
> be used for this kind of thing too. I guess the message didn't go
> through. I will take some cycles tomorrow to look into this patchset
> to ascertain how struct page is used in this context.

We looked at it but didn't see how any of it was applicable to our needs.

Logan