From: cel@kernel.org
To: NeilBrown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: Chuck Lever, Christoph Hellwig
Subject: [PATCH v5 01/19] svcrdma: Reduce the number of rdma_rw contexts per-QP
Date: Fri, 9 May 2025 15:03:35 -0400
Message-ID: <20250509190354.5393-2-cel@kernel.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250509190354.5393-1-cel@kernel.org>
References: <20250509190354.5393-1-cel@kernel.org>
X-Mailing-List: linux-rdma@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Chuck Lever

There is an upper bound on the number of rdma_rw contexts that can be
created per QP. The upper bound is invisible because rdma_create_qp()
adds one or more additional SQEs for each ctxt that the ULP requests
via qp_attr.cap.max_rdma_ctxs: the QP's actual Send Queue length is
on the order of the sum of qp_attr.cap.max_send_wr and a factor times
qp_attr.cap.max_rdma_ctxs. The factor can be as large as three,
depending on whether MR operations are required before RDMA Reads.

This limit is not advertised to RDMA consumers via dev->attrs. When
the limit is exceeded, QP creation fails with -ENOMEM.
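To make the hidden arithmetic concrete, the core's sizing behaves
roughly like the sketch below. This is only a model, not the kernel's
code (see rdma_rw_init_qp() in drivers/infiniband/core/rw.c for the
authoritative logic); the function name and the needs_mrs parameter
are invented for illustration.

	/* Rough model of the Send Queue depth that rdma_create_qp()
	 * actually allocates for a ULP's requested capacities.
	 */
	static unsigned int effective_sq_depth(unsigned int max_send_wr,
					       unsigned int max_rdma_ctxs,
					       int needs_mrs)
	{
		/* Each ctxt consumes one SQE for the RDMA Read or
		 * Write itself ...
		 */
		unsigned int factor = 1;

		/* ... plus registration and invalidation WRs when
		 * MRs must front the RDMA Reads.
		 */
		if (needs_mrs)
			factor += 2;

		return max_send_wr + factor * max_rdma_ctxs;
	}

With this model, a request for 74 SQEs plus 3120 ctxts on a device
that needs MRs comes to 74 + 3 * 3120 = 9434 SQEs, matching the 4MB
example below.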
For example: svcrdma's estimate of the number of rdma_rw contexts it
needs is three times the number of pages in RPCSVC_MAXPAGES. When
RPCSVC_MAXPAGES is about 260, the internally-computed SQ length
should be:

   64 credits + 10 backlog + 3 * (3 * 260) = 2414

which is well below the advertised max_qp_wr of 32768.

If RPCSVC_MAXPAGES is increased to support a 4MB payload, that's 1040
pages:

   64 credits + 10 backlog + 3 * (3 * 1040) = 9434

However, QP creation fails. Dynamic printk for mlx5 shows:

   calc_sq_size:618:(pid 1514): send queue size (9326 * 256 / 64 -> 65536) exceeds limits(32768)

Although 9326 is still far below max_qp_wr, QP creation still fails.

Because the total SQ length calculation is opaque to RDMA consumers,
there doesn't seem to be much that can be done about this except for
consumers to try to keep the requested rdma_rw ctxt count low.

Fixes: 2da0f610e733 ("svcrdma: Increase the per-transport rw_ctx count")
Reviewed-by: NeilBrown
Reviewed-by: Christoph Hellwig
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 5940a56023d1..3d7f1413df02 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -406,12 +406,12 @@ static void svc_rdma_xprt_done(struct rpcrdma_notification *rn)
  */
 static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 {
+	unsigned int ctxts, rq_depth, maxpayload;
 	struct svcxprt_rdma *listen_rdma;
 	struct svcxprt_rdma *newxprt = NULL;
 	struct rdma_conn_param conn_param;
 	struct rpcrdma_connect_private pmsg;
 	struct ib_qp_init_attr qp_attr;
-	unsigned int ctxts, rq_depth;
 	struct ib_device *dev;
 	int ret = 0;
 	RPC_IFDEBUG(struct sockaddr *sap);
@@ -462,12 +462,14 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		newxprt->sc_max_bc_requests = 2;
 	}
 
-	/* Arbitrarily estimate the number of rw_ctxs needed for
-	 * this transport. This is enough rw_ctxs to make forward
-	 * progress even if the client is using one rkey per page
-	 * in each Read chunk.
+	/* Arbitrary estimate of the needed number of rdma_rw contexts.
 	 */
-	ctxts = 3 * RPCSVC_MAXPAGES;
+	maxpayload = min(xprt->xpt_server->sv_max_payload,
+			 RPCSVC_MAXPAYLOAD_RDMA);
+	ctxts = newxprt->sc_max_requests * 3 *
+		rdma_rw_mr_factor(dev, newxprt->sc_port_num,
+				  maxpayload >> PAGE_SHIFT);
+
 	newxprt->sc_sq_depth = rq_depth + ctxts;
 	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
-- 
2.49.0
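For illustration only, not part of the patch: with the new
computation, the requested ctxt count scales with the device's MR
capabilities rather than a worst-case page count. Assuming
sc_max_requests is 64 (the default credit limit), maxpayload works
out to 1MB (256 pages), and the device's fast-registration limits
let rdma_rw_mr_factor() return 1 -- all illustrative values -- the
new estimate is:

	ctxts = 64 * 3 * 1;	/* 192 rdma_rw contexts */

far below the former 3 * RPCSVC_MAXPAGES (3120 when RPCSVC_MAXPAGES
covers a 4MB payload), which keeps the computed sc_sq_depth well
under the device's SQ size limits.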