From: david laight <david.laight@runbox.com>
To: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>,
Nikolay Borisov <nik.borisov@suse.com>,
x86@kernel.org, David Kaplan <david.kaplan@amd.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Asit Mallick <asit.k.mallick@intel.com>,
Tao Zhang <tao1.zhang@intel.com>,
Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v4 04/11] x86/bhi: Make clear_bhb_loop() effective on newer CPUs
Date: Fri, 5 Dec 2025 09:21:40 +0000
Message-ID: <20251205092140.48fa5271@pumpkin>
In-Reply-To: <smt7yrupcypkjsfrtlwp6kznol3mrgrer63plubwfp2hcunoul@yi5rbq5r3w5j>

On Thu, 4 Dec 2025 13:56:02 -0800
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> wrote:
> On Thu, Dec 04, 2025 at 09:15:11AM +0000, david laight wrote:
> > On Wed, 3 Dec 2025 17:40:26 -0800
> > Pawan Gupta <pawan.kumar.gupta@linux.intel.com> wrote:
> >
> > > On Tue, Nov 25, 2025 at 11:34:07AM +0000, david laight wrote:
> > > > On Mon, 24 Nov 2025 11:31:26 -0800
> > > > Pawan Gupta <pawan.kumar.gupta@linux.intel.com> wrote:
> > > >
> > > > > On Sat, Nov 22, 2025 at 11:05:58AM +0000, david laight wrote:
> > > > ...
> > > > > > For subtle reasons one of the mitigations that slows kernel entry caused
> > > > > > a doubling of the execution time of a largely single-threaded task that
> > > > > > spends almost all its time in userspace!
> > > > > > (I thought I'd disabled it at compile time - but the config option
> > > > > > changed underneath me...)
> > > > >
> > > > > That is surprising. If its okay, could you please share more details about
> > > > > this application? Or any other way I can reproduce this?
> > > >
> > > > The 'trigger' program is a multi-threaded program that wakes up every 10ms
> > > > to process RTP and TDM audio data.
> > > > So we have a low RT priority process with one thread per cpu.
> > > > Since they are RT they usually get scheduled on the same cpu as last time.
> > > > I think this simple program will have the desired effect:
> > > > A main process that does:
> > > > syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &start_time);
> > > > start_time += 1sec;
> > > > for (n = 1; n < num_cpu; n++)
> > > > pthread_create(thread_code, start_time);
> > > > thread_code(start_time);
> > > > with:
> > > > thread_code(ts)
> > > > {
> > > > for (;;) {
> > > > ts += 10ms;
> > > > syscall(SYS_clock_nanosleep, CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
> > > > do_work();
> > > > 	}
> > > > }
> > > >
> > > > So all the threads wake up at exactly the same time every 10ms.
> > > > (You need to use syscall(), don't look at what glibc does.)
> > > >
> > > > On my system the program wasn't doing anything, so do_work() was empty.
> > > > What matters is whether all the threads end up running at the same time.
> > > > I managed that using pthread_cond_broadcast(), but the clock code above
> > > > ought to be worse (and I've since changed the daemon to work that way
> > > > to avoid all these issues with pthread_cond_broadcast() being sequential
> > > > and threads not running because the target cpu is running an ISR or
> > > > just looping in the kernel).
> > > >
> > > > The process that gets 'hit' is anything cpu bound.
> > > > Even a shell loop (eg while :; do :; done) with a counter will do.
> > > >
> > > > Without the 'trigger' program, it will (mostly) sit on one cpu and the
> > > > clock frequency of that cpu will increase to (say) 3GHz while the others
> > > > all run at 800MHz.
> > > > But the 'trigger' program runs threads on all the cpus at the same time.
> > > > So the 'hit' program is pre-empted and is later rescheduled on a
> > > > different cpu - running at 800MHz.
> > > > The cpu speed increases, but 10ms later it gets bounced again.
> > >
> > > Sorry I haven't tried creating this test yet.
> > >
> > > > The real issue is that the cpu speed is associated with the cpu, not
> > > > the process running on it.
> > >
> > > So if the 'hit' program gets scheduled to a CPU that is running at 3GHz
> > > then we don't expect a dramatic performance drop? Setting scaling_governor
> > > to "performance" would be an interesting test.
> >
> > I failed to find a way to lock the cpu frequency (for other testing) on
> > that system (an i7-7xxx) - and the system will start thermally throttling
> > if you aren't careful.
>
> i7-7xxx would be Kaby Lake gen, those shouldn't need to deploy BHB clear
> mitigation. I am guessing it is the legacy-IBRS mitigation in your case.
>
> What you described looks very similar to the issue fixed by commit:
>
> aa1567a7e644 ("intel_idle: Add ibrs_off module parameter to force-disable IBRS")
>
> Commit bf5835bcdb96 ("intel_idle: Disable IBRS during long idle")
> disables IBRS when the cstate is 6 or lower. However, there are
> some use cases where a customer may want to use max_cstate=1 to
> lower latency. Such use cases will suffer from the performance
> degradation caused by the enabling of IBRS in the sibling idle thread.
> Add a "ibrs_off" module parameter to force disable IBRS and the
> CPUIDLE_FLAG_IRQ_ENABLE flag if set.
>
> In the case of a Skylake server with max_cstate=1, this new ibrs_off
> option will likely increase the IRQ response latency as IRQ will now
> be disabled.
>
> When running SPECjbb2015 with cstates set to C1 on a Skylake system.
>
> First test when the kernel is booted with: "intel_idle.ibrs_off":
>
> max-jOPS = 117828, critical-jOPS = 66047
>
> Then retest when the kernel is booted without the "intel_idle.ibrs_off"
> added:
>
> max-jOPS = 116408, critical-jOPS = 58958
>
> That means booting with "intel_idle.ibrs_off" improves performance by:
>
> max-jOPS: +1.2%, which could be considered noise range.
> critical-jOPS: +12%, which is definitely a solid improvement.
No, it wasn't anything to do with sibling threads.
It was the simple issue of the single-threaded 'busy in userspace' program
getting migrated to an idle cpu running at a low frequency.
The IBRS mitigation just affected the timings of the other processes in the
system enough to force the user thread to be pre-empted and rescheduled.
So it was not directly related to this code - even though it caused it.
The real issue is the cpu speed being tied to the physical cpu, not the
thread running on it.
>
> > ISTR that the hardware does most of the work.
> > So I'm not sure what difference "performance" makes (and can't remember what
> > might be set for that system - could set it anyway.)
>
> > We did have to disable some of the low power states, waking the cpu from those
> > just takes far too long.
>
> Seems like you have a workaround in place already.
I just needed to find out why my fpga compile had gone up from 12 minutes
to over 20 with a kernel update.
Fixing that was easy, but the 'busy thread being migrated to an idle cpu'
is a separate issue that could affect a lot of workloads.
(Whether or not these mitigations are in place.)
Diagnosing it required looking at the scheduler ftrace events and then
realising what effect they would have.
It wouldn't surprise me if people have 'fixed' the problem by pinning
a process to a specific cpu; I couldn't try that because the fpga compiler
has some multithreaded parts.
David