From: Leon Romanovsky <leon@kernel.org>
To: "D. Wythe" <alibuda@linux.alibaba.com>
Cc: "David S. Miller" <davem@davemloft.net>,
Dust Li <dust.li@linux.alibaba.com>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Sidraya Jayagond <sidraya@linux.ibm.com>,
Wenjia Zhang <wenjia@linux.ibm.com>,
Mahanta Jambigi <mjambigi@linux.ibm.com>,
Simon Horman <horms@kernel.org>,
Tony Lu <tonylu@linux.alibaba.com>,
Wen Gu <guwen@linux.alibaba.com>,
linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-s390@vger.kernel.org, netdev@vger.kernel.org,
oliver.yang@linux.alibaba.com, pasic@linux.ibm.com
Subject: Re: [PATCH net-next v3] net/smc: transition to RDMA core CQ pooling
Date: Thu, 5 Mar 2026 10:55:05 +0200 [thread overview]
Message-ID: <20260305085505.GL12611@unreal> (raw)
In-Reply-To: <20260305022323.96125-1-alibuda@linux.alibaba.com>
On Thu, Mar 05, 2026 at 10:23:23AM +0800, D. Wythe wrote:
> The current SMC-R implementation relies on global per-device CQs
> and manual polling within tasklets, which introduces severe
> scalability bottlenecks due to global lock contention and tasklet
> scheduling overhead, resulting in poor performance as concurrency
> increases.
>
> Refactor the completion handling to utilize the ib_cqe API and
> standard RDMA core CQ pooling. This transition provides several key
> advantages:
>
> 1. Multi-CQ: Shift from a single shared per-device CQ to multiple
> link-specific CQs via the CQ pool. This allows completion processing
> to be parallelized across multiple CPU cores, effectively eliminating
> the global CQ bottleneck.
>
> 2. Leverage DIM: Using the standard CQ pool with IB_POLL_SOFTIRQ
> enables Dynamic Interrupt Moderation (DIM) from the RDMA core, optimizing
> interrupt frequency and reducing CPU load under heavy traffic.
>
> 3. O(1) Context Retrieval: Replace the expensive wr_id-based lookup
> logic (e.g., smc_wr_tx_find_pending_index) with direct context retrieval
> via container_of() on the embedded ib_cqe.
>
> 4. Code Simplification: This refactoring results in a reduction of
> ~150 lines of code. It removes redundant sequence tracking, complex lookup
> helpers, and manual CQ management, significantly improving maintainability.
>
> Performance Test: redis-benchmark with max 32 connections per QP
> Data format: Requests Per Second (RPS), Percentage in brackets
> represents the gain/loss compared to TCP.
>
> | Clients | TCP | SMC (original) | SMC (cq_pool) |
> |---------|----------|---------------------|---------------------|
> | c = 1 | 24449 | 31172 (+27%) | 34039 (+39%) |
> | c = 2 | 46420 | 53216 (+14%) | 64391 (+38%) |
> | c = 16 | 159673 | 83668 (-48%) <-- | 216947 (+36%) |
> | c = 32 | 164956 | 97631 (-41%) <-- | 249376 (+51%) |
> | c = 64 | 166322 | 118192 (-29%) <-- | 249488 (+50%) |
> | c = 128 | 167700 | 121497 (-27%) <-- | 249480 (+48%) |
> | c = 256 | 175021 | 146109 (-16%) <-- | 240384 (+37%) |
> | c = 512 | 168987 | 101479 (-40%) <-- | 226634 (+34%) |
>
> The results demonstrate that this optimization effectively resolves the
> scalability bottleneck, with RPS increasing by over 110% at c=64
> compared to the original implementation.
>
> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> ---
> v3:
> - Rebase to latest net-next tree.
> - Remove a redundant blank line in smc_wr_alloc_link_mem().
>
> v2:
> - Fix a logic bug in smc_wr_tx_process_cqe() where a zeroed field
> was checked instead of the saved pnd_snd copy. (Jakub)
> - Fix typo in comment: s/ib_draib_rq/ib_drain_rq/.
> - Minor comment alignment fix in struct smc_link.
> ---
> net/smc/smc_core.c | 9 +-
> net/smc/smc_core.h | 28 ++--
> net/smc/smc_ib.c | 113 +++++-----------
> net/smc/smc_ib.h | 7 -
> net/smc/smc_tx.c | 1 -
> net/smc/smc_wr.c | 312 +++++++++++++++++++--------------------------
> net/smc/smc_wr.h | 40 ++----
> 7 files changed, 193 insertions(+), 317 deletions(-)
Thanks a lot for this important conversion. For the RDMA API usage:

Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Thread overview: 4+ messages
2026-03-05 2:23 [PATCH net-next v3] net/smc: transition to RDMA core CQ pooling D. Wythe
2026-03-05 8:55 ` Leon Romanovsky [this message]
2026-03-06 12:07 ` Mahanta Jambigi
2026-03-07 10:07 ` D. Wythe