public inbox for netdev@vger.kernel.org
From: Mahanta Jambigi <mjambigi@linux.ibm.com>
To: "D. Wythe" <alibuda@linux.alibaba.com>,
	"David S. Miller" <davem@davemloft.net>,
	Dust Li <dust.li@linux.alibaba.com>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Sidraya Jayagond <sidraya@linux.ibm.com>,
	Wenjia Zhang <wenjia@linux.ibm.com>
Cc: Simon Horman <horms@kernel.org>,
	Tony Lu <tonylu@linux.alibaba.com>,
	Wen Gu <guwen@linux.alibaba.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-s390@vger.kernel.org, netdev@vger.kernel.org,
	oliver.yang@linux.alibaba.com, pasic@linux.ibm.com
Subject: Re: [PATCH RFC net-next] net/smc: transition to RDMA core CQ pooling
Date: Fri, 6 Feb 2026 16:58:23 +0530	[thread overview]
Message-ID: <daefb72f-398e-489f-bdbc-db997ef9c5ae@linux.ibm.com> (raw)
In-Reply-To: <20260202094800.30373-1-alibuda@linux.alibaba.com>



On 02/02/26 3:18 pm, D. Wythe wrote:
> The current SMC-R implementation relies on global per-device CQs
> and manual polling within tasklets, which introduces severe
> scalability bottlenecks due to global lock contention and tasklet
> scheduling overhead, resulting in poor performance as concurrency
> increases.
> 
> Refactor the completion handling to utilize the ib_cqe API and
> standard RDMA core CQ pooling. This transition provides several key
> advantages:
> 
> 1. Multi-CQ: Shift from a single shared per-device CQ to multiple
> link-specific CQs via the CQ pool. This allows completion processing
> to be parallelized across multiple CPU cores, effectively eliminating
> the global CQ bottleneck.
> 
> 2. Leverage DIM: Utilizing the standard CQ pool with IB_POLL_SOFTIRQ
> enables Dynamic Interrupt Moderation from the RDMA core, optimizing
> interrupt frequency and reducing CPU load under high pressure.
> 
> 3. O(1) Context Retrieval: Replaces the expensive wr_id based lookup
> logic (e.g., smc_wr_tx_find_pending_index) with direct context retrieval
> using container_of() on the embedded ib_cqe.
> 
> 4. Code Simplification: This refactoring results in a reduction of
> ~150 lines of code. It removes redundant sequence tracking, complex lookup
> helpers, and manual CQ management, significantly improving maintainability.
> 
> Performance Test: redis-benchmark with max 32 connections per QP
> Data format: Requests Per Second (RPS), Percentage in brackets
> represents the gain/loss compared to TCP.
> 
> | Clients | TCP      | SMC (original)      | SMC (cq_pool)       |
> |---------|----------|---------------------|---------------------|
> | c = 1   | 24449    | 31172  (+27%)       | 34039  (+39%)       |
> | c = 2   | 46420    | 53216  (+14%)       | 64391  (+38%)       |
> | c = 16  | 159673   | 83668  (-48%)  <--  | 216947 (+36%)       |
> | c = 32  | 164956   | 97631  (-41%)  <--  | 249376 (+51%)       |
> | c = 64  | 166322   | 118192 (-29%)  <--  | 249488 (+50%)       |
> | c = 128 | 167700   | 121497 (-27%)  <--  | 249480 (+48%)       |
> | c = 256 | 175021   | 146109 (-16%)  <--  | 240384 (+37%)       |
> | c = 512 | 168987   | 101479 (-40%)  <--  | 226634 (+34%)       |
> 
> The results demonstrate that this optimization effectively resolves the
> scalability bottleneck, with RPS increasing by over 110% at c=64
> compared to the original implementation.
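
Before getting to the numbers: just to confirm my reading of the new
completion path, below is a minimal, purely illustrative sketch (the
demo_* names are mine, not taken from your patch) of the
ib_cqe/container_of() pattern with a CQ obtained from the RDMA core
pool:

/*
 * Illustrative only, not code from the patch: each per-WR context
 * embeds an ib_cqe, and the completion handler recovers the context
 * via container_of() instead of a wr_id table lookup.
 */
#include <rdma/ib_verbs.h>

struct demo_tx_ctx {			/* hypothetical per-WR context */
	struct ib_cqe cqe;		/* embedded completion entry */
	/* ... buffer bookkeeping ... */
};

static void demo_tx_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct demo_tx_ctx *ctx =
		container_of(wc->wr_cqe, struct demo_tx_ctx, cqe);

	if (wc->status != IB_WC_SUCCESS) {
		/* error handling / link failover would go here */
	}
	/* release ctx, wake up any waiters, etc. */
}

static int demo_post_send(struct ib_qp *qp, struct demo_tx_ctx *ctx,
			  struct ib_sge *sge)
{
	struct ib_send_wr wr = {
		.wr_cqe     = &ctx->cqe,	/* replaces wr_id tracking */
		.sg_list    = sge,
		.num_sge    = 1,
		.opcode     = IB_WR_SEND,
		.send_flags = IB_SEND_SIGNALED,
	};

	ctx->cqe.done = demo_tx_done;
	return ib_post_send(qp, &wr, NULL);
}

/*
 * The CQ itself would come from the shared pool at link creation,
 * e.g. cq = ib_cq_pool_get(ibdev, nr_cqe, -1, IB_POLL_SOFTIRQ),
 * which is what enables the RDMA core's DIM and per-link parallelism.
 */

If that matches your intent, dropping the wr_id table and helpers like
smc_wr_tx_find_pending_index() makes sense to me.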

I applied your patch to the latest kernel (6.19-rc8) and observed the
following performance results:

1) In my evaluation, I ran several *uperf*-based workloads using a
request/response (RR) pattern and observed performance *degradation*
ranging from *4%* to *59%*, depending on the specific read/write sizes
used. For example, with a TCP RR workload using 50 parallel clients
(nprocs=50), each sending a 200-byte request and reading a 1000-byte
response over a 60-second run, I measured approximately 59% degradation
compared to the original SMC-R performance.

2) In contrast, with uperf *streaming-type* workloads, your patch shows
clear gains: I observed performance *improvements* ranging from *11%*
to *75%*, again depending on the specific streaming parameters. One
representative case is a TCP streaming/bulk-receive workload with 250
parallel clients (nprocs=250), each performing 640 reads per burst at
30 KB per read, running continuously for 60 seconds, where I measured
approximately a *75%* *improvement* over the original SMC-R
performance.

Note: I ran the above tests with the default WR (work request) buffer
counts and the default receive & transmit buffer sizes via smc_run.

I am also looking for additional details on the redis-benchmark
results you shared earlier. Did the workload behave more like a
traditional request/response (RR) pattern or like a streaming-type
workload, and which SMC-R configuration was used during the tests?
Specifically:

1) SMC work request (WR) settings - Did your test environment use the
default SMC-R work request buffer counts?
  net.smc.smcr_max_recv_wr = 48
  net.smc.smcr_max_send_wr = 16
2) SMC-R buffer sizes used via smc_run - Did you use the default
transmit & receive buffer sizes (smc_run -r <recv_size> -t <send_size>)?
3) Additional system or network tuning - e.g. CPU affinity, NIC
offload settings, etc.?
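
To make the questions concrete, this is the kind of setup I am asking
about; the command lines are only illustrative and the profile name is
a placeholder:

  # defaults on my system, as noted above
  sysctl net.smc.smcr_max_recv_wr    # = 48
  sysctl net.smc.smcr_max_send_wr    # = 16

  # run without -r/-t, i.e. with default receive/transmit buffer sizes
  smc_run uperf -m rr-profile.xml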


Thread overview: 13+ messages
2026-02-02  9:48 [PATCH RFC net-next] net/smc: transition to RDMA core CQ pooling D. Wythe
2026-02-02 12:30 ` Leon Romanovsky
2026-02-03  9:24   ` D. Wythe
2026-02-06 11:28 ` Mahanta Jambigi [this message]
2026-02-09  7:53   ` D. Wythe
2026-02-13 11:23     ` Mahanta Jambigi
2026-02-24  2:19       ` D. Wythe
2026-02-27  4:41         ` Mahanta Jambigi
2026-02-27  9:29           ` D. Wythe
2026-02-11 12:51 ` Dust Li
2026-02-12  9:35   ` D. Wythe
2026-02-23 12:31 ` Mahanta Jambigi
2026-02-24  6:56   ` D. Wythe
