From: Aaron Conole <aconole@redhat.com>
To: Paolo Abeni <pabeni@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
netdev@vger.kernel.org, linux-rt-devel@lists.linux.dev,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
Simon Horman <horms@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Eelco Chaudron <echaudro@redhat.com>,
Ilya Maximets <i.maximets@ovn.org>,
dev@openvswitch.org
Subject: Re: [PATCH net-next v2 12/18] openvswitch: Move ovs_frag_data_storage into the struct ovs_pcpu_storage
Date: Thu, 17 Apr 2025 11:07:11 -0400 [thread overview]
Message-ID: <f7t4iymg734.fsf@redhat.com> (raw)
In-Reply-To: <867bb4b6-df27-4948-ab51-9dcc11c04064@redhat.com> (Paolo Abeni's message of "Thu, 17 Apr 2025 10:01:17 +0200")
Paolo Abeni <pabeni@redhat.com> writes:
> On 4/16/25 6:45 PM, Sebastian Andrzej Siewior wrote:
>> On 2025-04-15 12:26:13 [-0400], Aaron Conole wrote:
>>> I'm going to reply here, but I need to bisect a bit more (though I
>>> suspect the results below are due to 11/18). When I tested with this
>>> patch there were lots of "unexplained" latency spikes during processing
>>> (note, I'm not doing PREEMPT_RT in my testing, but I guess it would
>>> smooth the spikes out at the cost of max performance).
>>>
>>> With the series:
>>> [SUM] 0.00-300.00 sec 3.28 TBytes 96.1 Gbits/sec 9417 sender
>>> [SUM] 0.00-300.00 sec 3.28 TBytes 96.1 Gbits/sec receiver
>>>
>>> Without the series:
>>> [SUM] 0.00-300.00 sec 3.26 TBytes 95.5 Gbits/sec 149 sender
>>> [SUM] 0.00-300.00 sec 3.26 TBytes 95.5 Gbits/sec receiver
>>>
>>> And while the 'final' numbers might look acceptable, one thing I'll note
>>> is that I saw multiple stalls like:
>>>
>>> [ 5] 57.00-58.00 sec 128 KBytes 903 Kbits/sec 0 4.02 MBytes
>>>
>>> But without the patch, I didn't see such stalls. My testing:
>>>
>>> 1. Install openvswitch userspace and ipcalc
>>> 2. Start the openvswitch userspace.
>>> 3. Set up two netns and connect them (I have a more complicated script to
>>> set up the flows, and I can send that to you)
>>> 4. Use iperf3 to test (-P5 -t 300)
>>>
>>> As I wrote, I suspect the locking in patch 11 is leading to these stalls,
>>> as the data I'm sending shouldn't be hitting the frag path.
>>>
>>> Do these results seem expected to you?
>>
>> You have slightly better throughput but way more retries. I wouldn't
>> expect that. And then the stall.
>>
>> Patches 10 & 12 move per-CPU variables around and make them "static"
>> rather than allocating them at module init time. I would not expect this
>> to have a negative impact.
>> Patch #11 assigns the current thread to a variable and clears it again.
>> The remaining lockdep code disappears. The whole thing runs with BH
>> disabled, so there is no preemption.
>>
>> I can't explain what you observe here. Unless it is a random glitch,
>> please send the script and I'll try to take a look.
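(For context, the pattern described above boils down to something like the
sketch below. local_lock_t and the local_lock_nested_bh()/
local_unlock_nested_bh() helpers are the real kernel primitives; the struct
layout, field names and function are guesses from the description, not the
actual diff.)

  #include <linux/local_lock.h>
  #include <linux/percpu.h>
  #include <linux/sched.h>

  struct ovs_pcpu_storage {
          local_lock_t bh_lock;
          struct task_struct *owner;  /* current holder, per the description above */
          /* ... other per-CPU OVS state omitted; see the fuller sketch below ... */
  };

  /* Defined statically instead of being allocated at module init time. */
  static DEFINE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage) = {
          .bh_lock = INIT_LOCAL_LOCK(bh_lock),
  };

  static void ovs_run_actions_locked(void)
  {
          struct ovs_pcpu_storage *pcpu = this_cpu_ptr(&ovs_pcpu_storage);

          /* Callers run with BH disabled; on !PREEMPT_RT this lock reduces
           * to lockdep annotations, on PREEMPT_RT it is a real per-CPU lock. */
          local_lock_nested_bh(&ovs_pcpu_storage.bh_lock);
          pcpu->owner = current;

          /* ... execute the action pipeline ... */

          pcpu->owner = NULL;
          local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock);
  }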
>
> I also think this series should not have any visible performance impact
> on non-RT OVS tests. @Aaron: could you please double check that the
> results (both the good ones on the unpatched kernel and the bad ones with
> the series applied) are reproducible and not due to some glitch?
I agree, it doesn't seem like it should. I guess a v3 is coming, so I
will retry with that. I planned to ack 10/18 and 12/18 anyway; even
without the lock restructure, it seems 'nicer' to have the pcpu
variables in a single location.
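For reference, the fuller shape of that single per-CPU object (the same one
sketched above, with the pieces 10/18 and 12/18 fold in) might look roughly
like the following; the member types follow the patch subject lines and are
assumptions, not the actual diff:

  /* Stand-ins for the existing OVS per-CPU pieces; the real definitions
   * live in net/openvswitch/actions.c. */
  struct action_fifo      { /* deferred-action ring */ };
  struct action_flow_keys { /* flow keys used for recirculation */ };
  struct ovs_frag_data    { /* fragmentation scratch state */ };

  struct ovs_pcpu_storage {
          struct action_fifo      action_fifos;  /* merged in 10/18 */
          struct action_flow_keys flow_keys;     /* merged in 10/18 */
          struct ovs_frag_data    frag_data;     /* moved in 12/18 */
          int                     exec_level;    /* action recursion depth */
          struct task_struct      *owner;        /* from 11/18 */
          local_lock_t            bh_lock;       /* from 11/18 */
  };

  /* Defined as a static DEFINE_PER_CPU() with INIT_LOCAL_LOCK(), as in the
   * earlier sketch, rather than alloc_percpu() at module init. */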
BTW, I am using a slightly modified version of:
https://gist.github.com/apconole/ed78c9a2e76add9942dc3d6cbcfff4ca
It sets things up similarly to an SDN deployment (although not perfectly
since I was testing something very special at the time), and I was just
doing netns->netns testing (so it would go through ct() calls but not
ct(nat) calls).
> @Sebastian: I think the 'owner' assignment could be optimized out at
> compile time for non-RT builds - it will likely not matter for
> performance, but I think it will be 'nicer'. Could you please update the
> patches to do that?
>
> Thanks!
>
> Paolo
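One way the compile-time removal Paolo suggests could look is sketched below;
this is only an illustration, the helper and field names are assumptions, and
the actual patch may do it differently:

  /* Sketch: keep the 'owner' field but let the compiler drop the stores on
   * !PREEMPT_RT. IS_ENABLED() is a real kernel macro; ovs_pcpu_set_owner()
   * is a hypothetical helper, not something in the series. */
  static inline void ovs_pcpu_set_owner(struct ovs_pcpu_storage *pcpu,
                                        struct task_struct *task)
  {
          /* On !PREEMPT_RT the condition is constant-false, so the store is
           * discarded at compile time and the non-RT fast path is untouched. */
          if (IS_ENABLED(CONFIG_PREEMPT_RT))
                  pcpu->owner = task;
  }

  /* usage, mirroring the lock/unlock pattern sketched earlier:
   *         local_lock_nested_bh(&ovs_pcpu_storage.bh_lock);
   *         ovs_pcpu_set_owner(pcpu, current);
   *         ...
   *         ovs_pcpu_set_owner(pcpu, NULL);
   *         local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock);
   */

The alternative would be to #ifdef the field itself out of the structure,
which also saves the per-CPU space on non-RT builds, at the cost of stubbing
out every reference to it.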
Thread overview: 30+ messages
2025-04-14 16:07 [PATCH net-next v2 00/18] net: Cover more per-CPU storage with local nested BH locking Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 01/18] net: page_pool: Don't recycle into cache on PREEMPT_RT Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 02/18] net: dst_cache: Use nested-BH locking for dst_cache::cache Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 03/18] ipv4/route: Use this_cpu_inc() for stats on PREEMPT_RT Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 04/18] ipv6: sr: Use nested-BH locking for hmac_storage Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 05/18] xdp: Use nested-BH locking for system_page_pool Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 06/18] netfilter: nf_dup{4, 6}: Move duplication check to task_struct Sebastian Andrzej Siewior
2025-04-29 9:23 ` Peter Zijlstra
2025-04-14 16:07 ` [PATCH net-next v2 07/18] netfilter: nft_inner: Use nested-BH locking for nft_pcpu_tun_ctx Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 08/18] netfilter: nf_dup_netdev: Move the recursion counter struct netdev_xmit Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 09/18] xfrm: Use nested-BH locking for nat_keepalive_sk_ipv[46] Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 10/18] openvswitch: Merge three per-CPU structures into one Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 11/18] openvswitch: Use nested-BH locking for ovs_pcpu_storage Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 12/18] openvswitch: Move ovs_frag_data_storage into the struct ovs_pcpu_storage Sebastian Andrzej Siewior
2025-04-15 16:26 ` Aaron Conole
2025-04-16 16:45 ` Sebastian Andrzej Siewior
2025-04-17 8:01 ` Paolo Abeni
2025-04-17 9:08 ` Sebastian Andrzej Siewior
2025-04-17 9:48 ` Paolo Abeni
2025-04-17 10:18 ` Sebastian Andrzej Siewior
2025-04-17 15:07 ` Aaron Conole [this message]
2025-04-14 16:07 ` [PATCH net-next v2 13/18] net/sched: act_mirred: Move the recursion counter struct netdev_xmit Sebastian Andrzej Siewior
2025-04-17 8:29 ` Paolo Abeni
2025-04-17 10:47 ` Sebastian Andrzej Siewior
2025-04-17 11:31 ` Paolo Abeni
2025-04-14 16:07 ` [PATCH net-next v2 14/18] net/sched: Use nested-BH locking for sch_frag_data_storage Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 15/18] mptcp: Use nested-BH locking for hmac_storage Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 16/18] rds: Disable only bottom halves in rds_page_remainder_alloc() Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 17/18] rds: Acquire per-CPU pointer within BH disabled section Sebastian Andrzej Siewior
2025-04-14 16:07 ` [PATCH net-next v2 18/18] rds: Use nested-BH locking for rds_page_remainder Sebastian Andrzej Siewior