From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Gunthorpe
Subject: Re: Unexpected issues with 2 NVME initiators using the same target
Date: Mon, 10 Jul 2017 15:32:51 -0600
Message-ID: <20170710213251.GA21908@obsidianresearch.com>
References: <7D1C540B-FEA0-4101-8B58-87BCB7DB5492@oracle.com>
 <66b1b8be-e506-50b8-c01f-fa0e3cea98a4@grimberg.me>
 <9D8C7BC8-7E18-405A-9017-9DB23A6B5C15@oracle.com>
 <11aa1a24-9f0b-dbb8-18eb-ad357c7727b2@grimberg.me>
 <9E30754F-464A-4B62-ADE7-F6B2F6D95763@oracle.com>
 <20170709164755.GB3058@obsidianresearch.com>
 <20170710212430.GA21721@obsidianresearch.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: linux-rdma-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
To: Chuck Lever
Cc: Sagi Grimberg, Leon Romanovsky, Robert LeBlanc, Marta Rybczynska,
 Max Gurtovoy, Christoph Hellwig, "Gruher, Joseph R", "shahar.salzman",
 Laurence Oberman, "Riches Jr, Robert M", linux-rdma,
 linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
 Liran Liss, Bart Van Assche
List-Id: linux-rdma@vger.kernel.org

On Mon, Jul 10, 2017 at 05:29:53PM -0400, Chuck Lever wrote:
>
> > On Jul 10, 2017, at 5:24 PM, Jason Gunthorpe wrote:
> >
> > On Mon, Jul 10, 2017 at 03:03:18PM -0400, Chuck Lever wrote:
> >
> >>>> Or I could revert all the "map page cache pages" logic and
> >>>> just use memcpy for small NFS WRITEs, and RDMA the rest of
> >>>> the time. That keeps everything simple, but means large
> >>>> inline thresholds can't use send-in-place.
> >>>
> >>> Don't you have the same problem with RDMA WRITE?
> >>
> >> The server side initiates RDMA Writes. The final RDMA Write in a WR
> >> chain is signaled, but a subsequent Send completion is used to
> >> determine when the server may release resources used for the Writes.
> >> We're already doing it the slow way there, and there's no ^C hazard
> >> on the server.
> >
> > Wait, I guess I meant the RDMA READ path.
> >
> > The same constraints apply to RKeys as to inline send - you cannot DMA
> > unmap rkey memory until the rkey is invalidated at the HCA.
> >
> > So posting an invalidate SQE and then immediately unmapping the DMA
> > pages is bad too..
> >
> > No matter how the data is transferred, the unmapping must follow the
> > same HCA-synchronous model.. DMA unmap must only be done from the send
> > completion handler (inline send or invalidate rkey), from the recv
> > completion handler (send with invalidate), or from QP error state
> > teardown.
> >
> > Anything that does DMA memory unmap from another thread is very, very
> > suspect, e.g. async from a ctrl-c trigger event.

> 4.13 server side is converted to use the rdma_rw API for
> handling RDMA Read. For non-iWARP cases, it's using the
> local DMA key for Read sink buffers. For iWARP it should
> be using Read-with-invalidate (IIRC).

The server sounds fine; how does the client work?

Jason