* [RFC][PATCH 0/5] sched: Try and address some recent-ish regressions
@ 2025-05-20  9:45 Peter Zijlstra
From: Peter Zijlstra @ 2025-05-20  9:45 UTC
  To: mingo, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
	bsegall, mgorman, vschneid, clm
  Cc: linux-kernel, peterz


Hi!

So Chris poked me about a wee performance drop they're seeing after around
v6.11. He's extended his schbench tool to mimic the workload in question.

Specifically, the command line given:

  schbench -L -m 4 -M auto -t 128 -n 0 -r 60

This benchmark wants to stay on a single (large) LLC (Chris, perhaps add an
option to start the CPU mask with
/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list or something). Both
the machine Chris has (SKL, 20+ cores per LLC) and the machines I ran this on
(SKL, SPR, 20+ cores) are Intel; AMD has smaller LLCs and the problem wasn't as
pronounced there.
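
Until schbench grows such an option, something like the below should
approximate it -- taskset accepts the same cpu-list format that
shared_cpu_list provides (illustrative only):

  LLC=$(cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list)
  taskset -c "$LLC" schbench -L -m 4 -M auto -t 128 -n 0 -r 60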

Use the performance CPU governor (as always when benchmarking). Also, if the
test results are unstable as all heck, disable turbo.
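
For reference, on an Intel box with intel_pstate that amounts to something
like the below (with acpi-cpufreq, turbo is controlled through
/sys/devices/system/cpu/cpufreq/boost instead):

  echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo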

After a fair amount of tinkering I managed to reproduce it on my SPR and
Thomas' SKL. The SKL would only give usable numbers with the second socket
offline and turbo disabled -- YMMV.

Chris further provided a bisect into the DELAY_DEQUEUE patches and a bisect
leading to commit 5f6bd380c7bd ("sched/rt: Remove default bandwidth control")
-- which enables the dl_server by default.
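
(For A/B testing the dl_server contribution on its own, the fair server that
commit enables can be disabled per CPU by zeroing its runtime through its
debugfs interface -- layout as of ~v6.12, adjust to your tree:

  for d in /sys/kernel/debug/sched/fair_server/cpu*; do echo 0 > $d/runtime; done
)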


SKL (performance, no_turbo):

schbench-6.9.0-1.txt:average rps: 2040360.55
schbench-6.9.0-2.txt:average rps: 2038846.78
schbench-6.9.0-3.txt:average rps: 2037892.28

schbench-6.15.0-rc6+-1.txt:average rps: 1907718.18
schbench-6.15.0-rc6+-2.txt:average rps: 1906931.07
schbench-6.15.0-rc6+-3.txt:average rps: 1903190.38

schbench-6.15.0-rc6+-dirty-1.txt:average rps: 2002224.78
schbench-6.15.0-rc6+-dirty-2.txt:average rps: 2007116.80
schbench-6.15.0-rc6+-dirty-3.txt:average rps: 2005294.57

schbench-6.15.0-rc6+-dirty-delayed-1.txt:average rps: 2011282.15
schbench-6.15.0-rc6+-dirty-delayed-2.txt:average rps: 2016347.10
schbench-6.15.0-rc6+-dirty-delayed-3.txt:average rps: 2014515.47

schbench-6.15.0-rc6+-dirty-delayed-default-1.txt:average rps: 2042169.00
schbench-6.15.0-rc6+-dirty-delayed-default-2.txt:average rps: 2032789.77
schbench-6.15.0-rc6+-dirty-delayed-default-3.txt:average rps: 2040313.95


SPR (performance):

schbench-6.9.0-1.txt:average rps: 2975450.75
schbench-6.9.0-2.txt:average rps: 2975464.38
schbench-6.9.0-3.txt:average rps: 2974881.02

schbench-6.15.0-rc6+-1.txt:average rps: 2882537.37
schbench-6.15.0-rc6+-2.txt:average rps: 2881658.70
schbench-6.15.0-rc6+-3.txt:average rps: 2884293.37

schbench-6.15.0-rc6+-dl_server-1.txt:average rps: 2924423.18
schbench-6.15.0-rc6+-dl_server-2.txt:average rps: 2920422.63

schbench-6.15.0-rc6+-dirty-1.txt:average rps: 3011540.97
schbench-6.15.0-rc6+-dirty-2.txt:average rps: 3010124.10

schbench-6.15.0-rc6+-dirty-delayed-1.txt:average rps: 3030883.15
schbench-6.15.0-rc6+-dirty-delayed-2.txt:average rps: 3031627.05

schbench-6.15.0-rc6+-dirty-delayed-default-1.txt:average rps: 3053005.98
schbench-6.15.0-rc6+-dirty-delayed-default-2.txt:average rps: 3052972.80


As can be seen, the SPR is much easier to please than the SKL for whatever
reason. I'm thinking we can make TTWU_QUEUE_DELAYED default on, but I suspect
TTWU_QUEUE_DEFAULT might be a harder sell -- we'd need to run more than this
one benchmark.
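
Assuming these end up as regular sched_feat() knobs (which the default-on/off
discussion above implies), reproducing the -dirty-delayed and
-dirty-delayed-default variants should just be a matter of toggling them
through debugfs (CONFIG_SCHED_DEBUG assumed):

  echo TTWU_QUEUE_DELAYED > /sys/kernel/debug/sched/features
  echo TTWU_QUEUE_DEFAULT > /sys/kernel/debug/sched/features

with the NO_ prefixed names turning them back off.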

Anyway, the patches are stable (finally, I hope; knock on wood) but in a
somewhat rough state. At the very least the last patch is missing ttwu_stat();
I still need to figure out how to account it ;-)

Chris, I'm hoping your machine will agree with these numbers; it hasn't been
smooth sailing in that regard.


Thread overview: 33+ messages

2025-05-20  9:45 [RFC][PATCH 0/5] sched: Try and address some recent-ish regressions Peter Zijlstra
2025-05-20  9:45 ` [RFC][PATCH 1/5] sched/deadline: Less agressive dl_server handling Peter Zijlstra
2025-06-03 16:03   ` Juri Lelli
2025-06-13  9:43     ` Peter Zijlstra
2025-05-20  9:45 ` [RFC][PATCH 2/5] sched: Optimize ttwu() / select_task_rq() Peter Zijlstra
2025-06-09  5:01   ` Mike Galbraith
2025-06-13  9:40     ` Peter Zijlstra
2025-06-13 10:20       ` Mike Galbraith
2025-05-20  9:45 ` [RFC][PATCH 3/5] sched: Split up ttwu_runnable() Peter Zijlstra
2025-05-20  9:45 ` [RFC][PATCH 4/5] sched: Add ttwu_queue controls Peter Zijlstra
2025-05-20  9:45 ` [RFC][PATCH 5/5] sched: Add ttwu_queue support for delayed tasks Peter Zijlstra
2025-06-06 15:03   ` Vincent Guittot
2025-06-06 15:38     ` Peter Zijlstra
2025-06-06 16:55       ` Vincent Guittot
2025-06-11  9:39         ` Peter Zijlstra
2025-06-16 12:39           ` Vincent Guittot
2025-06-06 16:18     ` Phil Auld
2025-06-16 12:01     ` Peter Zijlstra
2025-06-16 16:37       ` Peter Zijlstra
2025-06-13  7:34   ` Dietmar Eggemann
2025-06-13  9:51     ` Peter Zijlstra
2025-06-13 10:46       ` Peter Zijlstra
2025-06-16  8:16         ` Dietmar Eggemann
2025-05-28 19:59 ` [RFC][PATCH 0/5] sched: Try and address some recent-ish regressions Peter Zijlstra
2025-05-29  1:41   ` Chris Mason
2025-06-14 10:04     ` Peter Zijlstra
2025-06-16  0:35       ` Chris Mason
2025-05-29 10:18   ` Beata Michalska
2025-05-30  9:00     ` Peter Zijlstra
2025-05-30 10:04   ` Chris Mason
2025-06-02  4:44 ` K Prateek Nayak
2025-06-13  3:28   ` K Prateek Nayak
2025-06-14 10:15     ` Peter Zijlstra
