From: Jason Gunthorpe
Subject: Re: [PATCH v1 09/20] xprtrdma: Limit the number of rpcrdma_mws
Date: Tue, 7 Jun 2016 16:01:07 -0600
Message-ID: <20160607220107.GA9982@obsidianresearch.com>
References: <20160607194001.18401.88592.stgit@manet.1015granger.net> <20160607194732.18401.71941.stgit@manet.1015granger.net> <20160607204941.GA7991@obsidianresearch.com> <20160607212831.GA9192@obsidianresearch.com>
To: Chuck Lever
Cc: linux-rdma, Linux NFS Mailing List
List-Id: linux-rdma@vger.kernel.org

On Tue, Jun 07, 2016 at 05:51:04PM -0400, Chuck Lever wrote:

> There is a practical number of MRs that can be allocated
> per device, I thought. And each MR consumes some amount
> of host memory.
>
> xprtrdma is happy to allocate thousands of MRs per QP/PD
> pair, but that isn't practical as you start adding more
> transports/connections.

Yes, this is all sane and valid.

Typically you'd target a concurrency limit per QP and allocate
everything (# of MRs, sq/rq/cq depth, etc.) in accordance with that
goal. I'd also suggest that the target concurrency is probably a
user tunable.

Jason