From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chuck Lever <cel@kernel.org>
To: NeilBrown, Jeff Layton, Olga Kornievskaia, Dai Ngo, Tom Talpey
Cc: , , Chuck Lever
Subject: [RFC PATCH 09/15] svcrdma: Release write chunk resources without re-queuing
Date: Tue, 10 Feb 2026 11:32:16 -0500
Message-ID: <20260210163222.2356793-10-cel@kernel.org>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260210163222.2356793-1-cel@kernel.org>
References: <20260210163222.2356793-1-cel@kernel.org>
Precedence: bulk
X-Mailing-List: linux-rdma@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Chuck Lever <cel@kernel.org>

Each RDMA Send completion triggers a cascade of work items on the
svcrdma_wq unbound workqueue:

  ib_cq_poll_work (on ib_comp_wq, per-CPU)
    -> svc_rdma_send_ctxt_put
       -> queue_work                      [work item 1]
          -> svc_rdma_write_info_free
             -> queue_work                [work item 2]

Every transition through queue_work contends on the unbound pool's
spinlock. Profiling an 8KB NFSv3 read/write workload over RDMA shows
about 4% of total CPU cycles spent on this lock, with the cascading
re-queue of write_info release contributing roughly 1%.

The initial queue_work in svc_rdma_send_ctxt_put is needed to move
release work off the CQ completion context (which runs on a per-CPU
bound workqueue). However, once executing on svcrdma_wq, there is no
need to re-queue for each write_info structure.
svc_rdma_reply_chunk_release already calls svc_rdma_cc_release
inline, confirming these operations are safe in workqueue and nfsd
thread context alike.

Release write chunk resources inline in svc_rdma_write_info_free,
removing the intermediate svc_rdma_write_info_free_async work item
and the wi_work field from struct svc_rdma_write_info.
Signed-off-by: Chuck Lever <cel@kernel.org>
---
 include/linux/sunrpc/svc_rdma.h   |  1 -
 net/sunrpc/xprtrdma/svc_rdma_rw.c | 13 ++-----------
 2 files changed, 2 insertions(+), 12 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index e6cb52285818..9691238df47f 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -232,7 +232,6 @@ struct svc_rdma_write_info {
 	unsigned int		wi_next_off;
 
 	struct svc_rdma_chunk_ctxt	wi_cc;
-	struct work_struct	wi_work;
 };
 
 struct svc_rdma_send_ctxt {
diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index b0bbebbecb3e..e62abdbf84f8 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -215,19 +215,10 @@ svc_rdma_write_info_alloc(struct svcxprt_rdma *rdma,
 	return info;
 }
 
-static void svc_rdma_write_info_free_async(struct work_struct *work)
-{
-	struct svc_rdma_write_info *info;
-
-	info = container_of(work, struct svc_rdma_write_info, wi_work);
-	svc_rdma_cc_release(info->wi_rdma, &info->wi_cc, DMA_TO_DEVICE);
-	kfree(info);
-}
-
 static void svc_rdma_write_info_free(struct svc_rdma_write_info *info)
 {
-	INIT_WORK(&info->wi_work, svc_rdma_write_info_free_async);
-	queue_work(svcrdma_wq, &info->wi_work);
+	svc_rdma_cc_release(info->wi_rdma, &info->wi_cc, DMA_TO_DEVICE);
+	kfree(info);
 }
 
 /**
-- 
2.52.0