From: Mahanta Jambigi <mjambigi@linux.ibm.com>
To: "D. Wythe" <alibuda@linux.alibaba.com>
Cc: "David S. Miller" <davem@davemloft.net>,
	Dust Li <dust.li@linux.alibaba.com>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Sidraya Jayagond <sidraya@linux.ibm.com>,
	Wenjia Zhang <wenjia@linux.ibm.com>,
	Simon Horman <horms@kernel.org>,
	Tony Lu <tonylu@linux.alibaba.com>,
	Wen Gu <guwen@linux.alibaba.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-s390@vger.kernel.org, netdev@vger.kernel.org,
	oliver.yang@linux.alibaba.com, pasic@linux.ibm.com
Subject: Re: [PATCH RFC net-next] net/smc: transition to RDMA core CQ pooling
Date: Fri, 13 Feb 2026 16:53:28 +0530
Message-ID: <2d71bab3-161d-414e-90e3-0e408ca931c2@linux.ibm.com>
In-Reply-To: <20260209075338.GA61095@j66a10360.sqa.eu95>



On 09/02/26 1:23 pm, D. Wythe wrote:
> On Fri, Feb 06, 2026 at 04:58:23PM +0530, Mahanta Jambigi wrote:
>>
>>
>> On 02/02/26 3:18 pm, D. Wythe wrote:
>>> The current SMC-R implementation relies on global per-device CQs
>>> and manual polling within tasklets, which introduces severe
>>> scalability bottlenecks due to global lock contention and tasklet
>>> scheduling overhead, resulting in poor performance as concurrency
>>> increases.
>>>
>>> Refactor the completion handling to utilize the ib_cqe API and
>>> standard RDMA core CQ pooling. This transition provides several key
>>> advantages:
>>>
>>> 1. Multi-CQ: Shift from a single shared per-device CQ to multiple
>>> link-specific CQs via the CQ pool. This allows completion processing
>>> to be parallelized across multiple CPU cores, effectively eliminating
>>> the global CQ bottleneck.
>>>
>>> 2. Leverage DIM: Utilizing the standard CQ pool with IB_POLL_SOFTIRQ
>>> enables Dynamic Interrupt Moderation from the RDMA core, optimizing
>>> interrupt frequency and reducing CPU load under high pressure.
>>>
>>> 3. O(1) Context Retrieval: Replaces the expensive wr_id based lookup
>>> logic (e.g., smc_wr_tx_find_pending_index) with direct context retrieval
>>> using container_of() on the embedded ib_cqe.
>>>
>>> 4. Code Simplification: This refactoring results in a reduction of
>>> ~150 lines of code. It removes redundant sequence tracking, complex lookup
>>> helpers, and manual CQ management, significantly improving maintainability.
>>>
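
For readers unfamiliar with the ib_cqe pattern that points 1-3 above
describe, here is a minimal sketch, assuming hypothetical names
(smcr_tx_ctx, smcr_tx_done, smcr_get_cq) rather than the symbols the
patch actually uses:

  #include <rdma/ib_verbs.h>

  /* Hypothetical per-WR context; the patch's real structures differ. */
  struct smcr_tx_ctx {
          struct ib_cqe cqe;      /* embedded completion entry */
          /* ... per-send bookkeeping ... */
  };

  /* Completion handler: O(1) context recovery, no wr_id lookup table. */
  static void smcr_tx_done(struct ib_cq *cq, struct ib_wc *wc)
  {
          struct smcr_tx_ctx *ctx =
                  container_of(wc->wr_cqe, struct smcr_tx_ctx, cqe);

          /* ... check wc->status, complete the send, release ctx ... */
  }

  static int smcr_post_send(struct ib_qp *qp, struct smcr_tx_ctx *ctx)
  {
          struct ib_send_wr wr = {};

          ctx->cqe.done = smcr_tx_done;
          wr.wr_cqe = &ctx->cqe;  /* replaces the old wr.wr_id scheme */
          wr.opcode = IB_WR_SEND;
          return ib_post_send(qp, &wr, NULL);
  }

  /*
   * Per-link CQ from the RDMA core pool; IB_POLL_SOFTIRQ lets the core
   * apply dynamic interrupt moderation on devices that support it.
   * Release with ib_cq_pool_put() when the link goes away.
   */
  static struct ib_cq *smcr_get_cq(struct ib_device *dev, int nr_cqe)
  {
          return ib_cq_pool_get(dev, nr_cqe, 0 /* vector hint */,
                                IB_POLL_SOFTIRQ);
  }

Other ULPs such as nvme-rdma already follow this completion pattern;
the patch applies it to SMC-R.
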
>>> Performance Test: redis-benchmark with max 32 connections per QP
>>> Data format: Requests Per Second (RPS), Percentage in brackets
>>> represents the gain/loss compared to TCP.
>>>
>>> | Clients | TCP      | SMC (original)      | SMC (cq_pool)       |
>>> |---------|----------|---------------------|---------------------|
>>> | c = 1   | 24449    | 31172  (+27%)       | 34039  (+39%)       |
>>> | c = 2   | 46420    | 53216  (+14%)       | 64391  (+38%)       |
>>> | c = 16  | 159673   | 83668  (-48%)  <--  | 216947 (+36%)       |
>>> | c = 32  | 164956   | 97631  (-41%)  <--  | 249376 (+51%)       |
>>> | c = 64  | 166322   | 118192 (-29%)  <--  | 249488 (+50%)       |
>>> | c = 128 | 167700   | 121497 (-27%)  <--  | 249480 (+48%)       |
>>> | c = 256 | 175021   | 146109 (-16%)  <--  | 240384 (+37%)       |
>>> | c = 512 | 168987   | 101479 (-40%)  <--  | 226634 (+34%)       |
>>>
>>> The results demonstrate that this optimization effectively resolves the
>>> scalability bottleneck, with RPS increasing by over 110% at c=64
>>> compared to the original implementation.
>>
>> I applied your patch to the latest kernel (6.19-rc8) and saw the
>> performance results below:
>>
>> 1) In my evaluation, I ran several *uperf* based workloads using a
>> request/response (RR) pattern, and I observed performance *degradation*
>> ranging from *4%* to *59%*, depending on the specific read/write sizes
>> used. For example, with a TCP RR workload using 50 parallel clients
>> (nprocs=50) sending a 200‑byte request and reading a 1000‑byte response
>> over a 60-second run, I measured approximately 59% degradation
>> compared to the original SMC-R performance.
>>
> 
> The only setting I changed was net.smc.smcr_max_conns_per_lgr = 32; all
> other parameters were left at their default values. redis-benchmark is a
> classic request/response (RR) workload, so your results contradict
> mine. Since I'm unable to reproduce them, it would be very helpful if
> you could share the specific test configuration for my analysis.
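
For reference, net.smc.smcr_max_conns_per_lgr is the per-net-namespace
SMC sysctl that caps how many connections are multiplexed onto one
link group (and hence one QP); the quoted value can be applied with,
for example:

  sysctl -w net.smc.smcr_max_conns_per_lgr=32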

I used a simple client–server setup connected via 25 Gb/s RoCE_Express2
adapters on the same LAN (connection established via SMC-R v1). After
running the commands below, I observed a performance degradation of up
to 59%.

Server: smc_run uperf -s
Client: smc_run uperf -m rr1c-200x1000-50.xml

cat rr1c-200x1000-50.xml

<?xml version="1.0"?>
<profile name="TCP_RR">
	<group nprocs="50">
		<transaction iterations="1">
			<flowop type="connect"
				options="remotehost=server_ip protocol=tcp tcp_nodelay"/>
		</transaction>
		<transaction duration="60">
			<flowop type="write" options="size=200"/>
			<flowop type="read" options="size=1000"/>
		</transaction>
		<transaction iterations="1">
			<flowop type="disconnect" />
		</transaction>
	</group>
</profile>

I installed redis-server on the server machine and redis-benchmark on
the client machine, and was able to establish SMC-R connections using
the commands below. If you could share the exact commands you used to
measure redis-benchmark performance, I can try the same on my setup.

Server: smc_run redis-server --port <port_num> --save "" --appendonly no
        --protected-mode no --bind 0.0.0.0
Client: smc_run redis-benchmark -h <server_ip> -p <port_num> -n 10000
        -c 50 -t ping_inline,ping_bulk -q


Thread overview: 13+ messages
2026-02-02  9:48 [PATCH RFC net-next] net/smc: transition to RDMA core CQ pooling D. Wythe
2026-02-02 12:30 ` Leon Romanovsky
2026-02-03  9:24   ` D. Wythe
2026-02-06 11:28 ` Mahanta Jambigi
2026-02-09  7:53   ` D. Wythe
2026-02-13 11:23     ` Mahanta Jambigi [this message]
2026-02-24  2:19       ` D. Wythe
2026-02-27  4:41         ` Mahanta Jambigi
2026-02-27  9:29           ` D. Wythe
2026-02-11 12:51 ` Dust Li
2026-02-12  9:35   ` D. Wythe
2026-02-23 12:31 ` Mahanta Jambigi
2026-02-24  6:56   ` D. Wythe
