From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: Sherry Yang <sherry.yang@oracle.com>,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-rt-users@vger.kernel.org, Tejun Heo <tj@kernel.org>,
Lai Jiangshan <jiangshanlai@gmail.com>,
Sebastian Siewior <bigeasy@linutronix.de>
Cc: Sebastian Siewior <bigeasy@linutronix.de>,
Jack Vogel <jack.vogel@oracle.com>,
Tariq Toukan <tariqt@nvidia.com>
Subject: Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
Date: Thu, 22 Sep 2022 00:32:49 +0200 [thread overview]
Message-ID: <YyuREcGAXV9828w5@zx2c4.com> (raw)
In-Reply-To: <BD03BFF6-C369-4D34-A38B-49653F1CBC53@oracle.com>
Hi Sherry (and Sebastian and Netdev and Tejun and whomever),
I'm top-replying so that I can provide an overview of what's going on to
other readers, and then I'll leave your email below for additional context.
random.c used to have a hard IRQ handler that did something like this:
    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()
That worked fine, but Sebastian pointed out that having spinlocks in a
hard IRQ handler was a big no-no for RT. Not wanting to make those into
raw spinlocks, he suggested we hoist things into a workqueue. So that's
what we did together, and now that function reads:
    do_some_stuff()
    queue_work_on(raw_smp_processor_id(), other_stuff_worker);
That seemed reasonable to me -- it's a pattern practiced a million times
all over the kernel -- and is currently how random.c's
add_interrupt_randomness() functions.
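(For readers who don't have the tree open: the hard-IRQ path now looks very
roughly like the sketch below. The helper names and the count/time thresholds
are paraphrased from memory rather than quoted from random.c, so treat this as
a sketch only -- the point is just that the sole non-trivial operation left in
hard IRQ context is the queue_work_on() call.)

    /* Rough paraphrase, not the verbatim random.c code. */
    void add_interrupt_randomness(int irq)
    {
            struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);

            /* Cheap, lockless per-cpu mixing; fine in hard IRQ even on RT. */
            fast_mix(fast_pool->pool, random_get_entropy() + irq);

            /* Only occasionally hand the pool off for the expensive,
             * lock-taking mixing into the input pool. */
            if (++fast_pool->count < 64 &&
                !time_is_before_jiffies(fast_pool->last + HZ))
                    return;

            queue_work_on(raw_smp_processor_id(), system_highpri_wq,
                          &fast_pool->mix);
    }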
Sherry, however, has reported a ~10% performance regression using qperf
with TCP over some heavy duty infiniband cards. According to Sherry's
tests, removing the call to queue_work_on() makes the performance
regression go away.
That leads me to suspect that queue_work_on() might actually not be as
cheap as I assumed? If so, is that surprising to anybody else? And what
should we do about this?
Unfortunately, as you'll see from reading below, I'm hopeless in trying
to recreate Sherry's test rig, and even Sherry was unable to reproduce
it on different hardware. Nonetheless, a 10% regression on fancy 40gbps
hardware seems like something worthy of wider concern.
What are our options? Investigate queue_work_on() bottlenecks? Move back
to the original pattern, but use raw spinlocks? Something else?
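(To make the raw spinlock option concrete, here's a purely hypothetical
sketch -- the lock name and mix_fast_pool_into_input() are stand-ins, not real
random.c symbols. The catch is that raw_spinlock_t stays a spinning lock even
on PREEMPT_RT, which is exactly the hard-IRQ latency Sebastian wanted to
avoid:)

    /* Hypothetical sketch only; the names below are stand-ins. */
    static DEFINE_RAW_SPINLOCK(fast_pool_lock);

    static void mix_in_hardirq(struct fast_pool *fast_pool)
    {
            unsigned long flags;

            /* Legal in hard IRQ context because raw_spin_lock_irqsave()
             * never sleeps, even with PREEMPT_RT enabled. */
            raw_spin_lock_irqsave(&fast_pool_lock, flags);
            mix_fast_pool_into_input(fast_pool);    /* stand-in for the mixing */
            raw_spin_unlock_irqrestore(&fast_pool_lock, flags);
    }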
Sherry -- are you able to do a bit of profiling to see which
instructions or which area of a function is the hottest or creating that
bottleneck? I think we probably need more information to do something
with this.
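(If it helps, something along these lines is usually a reasonable starting
point -- adjust the duration and which machine you record on to match your
setup:)

    # while the qperf tcp_lat run is in flight:
    perf record -a -g -- sleep 30
    perf report --sort symbol

    # or live, to see whether queue_work_on()/the worker floats to the top:
    perf top -g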
Also, because I still have no idea how I can reproduce this myself, you
might need to take the reins in helping to develop and test a patch,
since I'm kind of stabbing in the dark here.
Anyway, because this might be rather involved, I figure it's best to
move this conversation on list in case other folks have insights.
Regards,
Jason
On Wed, Sep 21, 2022 at 06:09:27PM +0000, Sherry Yang wrote:
> > On Sep 20, 2022, at 7:44 AM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> >
> > Anyway, a few questions:
> > 1) Does the regression disappear if you change this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + schedule_work_on(raw_smp_processor_id(), &fast_pool->mix);
>
> After applying this change, we still see the performance regression there on linux-stable v5.15.
>
> >
> > 2) Does the regression disappear if you remove this line:
> > - queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
> > + //queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
>
> After applying this change, we see the performance recover on linux-stable v5.15.
>
> >
> >> We could see performance regression there.
> >
> > Can you give me some detailed instructions on how I can reproduce
> > this? Can it be reproduced inside of a single VM using network
> > namespaces, for example? Something like that would greatly help me
> > nail this down. For example, if you can give me a bash script that
> > does everything entirely on a single host?
> We are doing a qperf TCP latency test there. All test results above were collected from an X7 server with Mellanox Technologies
> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
>         default gid:     fe80:0000:0000:0000:0010:e000:0178:9eb1
>         base lid:        0x6
>         sm lid:          0x1
>         state:           4: ACTIVE
>         phys state:      5: LinkUp
>         rate:            40 Gb/sec (4X QDR)
>         link_layer:      InfiniBand
>
> Cards are configured with IP addresses on a private subnet for IPoIB
> performance testing.
> The regression identified in this bug is in TCP latency in this stack, as reported
> by the qperf tcp_lat metric:
>
> We have one system listening as a qperf server:
> [root@yourQperfServer ~]# qperf
>
> Have the other system connect to the qperf server as a client (in this case, it's the X7 server with the Mellanox card):
> [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
>
> However, our test team ran other experiments yesterday:
> * Ran the benchmark on an X5-2 system over an ixgbe interface
> * Ran 8 processes of the benchmark on the original system over the Mellanox card
> Both of these experiments failed to reproduce the regression. This suggests that the regression is not seen over Ethernet network devices
> and only appears when running a single instance of the qperf benchmark.
Thread overview: 13+ messages
2022-09-21 22:32 ` Jason A. Donenfeld [this message]
2022-09-21 23:35 ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Jason A. Donenfeld
2022-09-21 23:54 ` Tejun Heo
2022-09-22 16:45 ` Jason A. Donenfeld
2022-09-22 16:55 ` [PATCH] random: use tasklet rather than workqueue for mixing fast pool Jason A. Donenfeld
2022-09-26 22:04 ` [PATCH v2] random: use immediate per-cpu timer " Jason A. Donenfeld
2022-09-27 7:41 ` David Laight
2022-09-27 8:23 ` Jason A. Donenfeld
2022-09-27 10:42 ` [PATCH v3] random: use expired per-cpu timer rather than wq " Jason A. Donenfeld
2022-09-28 12:06 ` Sebastian Andrzej Siewior
2022-09-28 16:15 ` Jason A. Donenfeld
2022-09-29 14:18 ` Sebastian Andrzej Siewior
2022-09-28 11:23 ` 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker" Sebastian Siewior