* [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems
From: Jan Polensky @ 2025-07-09 20:38 UTC
To: adrian.hunter, irogers, namhyung; +Cc: linux-perf-users

On low-activity systems, the 'kernel lock contention analysis test'
often fails due to insufficient system load. To address this, the test
now increases load by using multiple groups and threads in 'perf bench
sched messaging', scaled to the number of available CPUs.

Signed-off-by: Jan Polensky <japo@linux.ibm.com>
---
 tools/perf/tests/shell/lock_contention.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/perf/tests/shell/lock_contention.sh b/tools/perf/tests/shell/lock_contention.sh
index 30d195d4c62f..e859e1503b5c 100755
--- a/tools/perf/tests/shell/lock_contention.sh
+++ b/tools/perf/tests/shell/lock_contention.sh
@@ -44,7 +44,8 @@ check() {
 test_record()
 {
 	echo "Testing perf lock record and perf lock contention"
-	perf lock record -o ${perfdata} -- perf bench sched messaging > /dev/null 2>&1
+	perf lock record -o ${perfdata} -- perf bench sched messaging \
+		--group "$(nproc)" --thread "$(nproc)" > /dev/null 2>&1
 	# the output goes to the stderr and we expect only 1 output (-E 1)
 	perf lock contention -i ${perfdata} -E 1 -q 2> ${result}
 	if [ "$(cat "${result}" | wc -l)" != "1" ]; then
--
2.48.1

* Re: [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems
From: Thomas Richter @ 2025-07-10 5:47 UTC
To: Jan Polensky, adrian.hunter, irogers, namhyung; +Cc: linux-perf-users

On 7/9/25 22:38, Jan Polensky wrote:
> On low-activity systems, the 'kernel lock contention analysis test'
> often fails due to insufficient system load. To address this, the test
> now increases load by using multiple groups and threads in 'perf bench
> sched messaging', scaled to the number of available CPUs.
>
> Signed-off-by: Jan Polensky <japo@linux.ibm.com>
> ---
>  tools/perf/tests/shell/lock_contention.sh | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/tests/shell/lock_contention.sh b/tools/perf/tests/shell/lock_contention.sh
> index 30d195d4c62f..e859e1503b5c 100755
> --- a/tools/perf/tests/shell/lock_contention.sh
> +++ b/tools/perf/tests/shell/lock_contention.sh
> @@ -44,7 +44,8 @@ check() {
>  test_record()
>  {
>  	echo "Testing perf lock record and perf lock contention"
> -	perf lock record -o ${perfdata} -- perf bench sched messaging > /dev/null 2>&1
> +	perf lock record -o ${perfdata} -- perf bench sched messaging \
> +		--group "$(nproc)" --thread "$(nproc)" > /dev/null 2>&1
>  	# the output goes to the stderr and we expect only 1 output (-E 1)
>  	perf lock contention -i ${perfdata} -E 1 -q 2> ${result}
>  	if [ "$(cat "${result}" | wc -l)" != "1" ]; then
> --
> 2.48.1
>

Reviewed-by: Thomas Richter <tmricht@linux.ibm.com>

--
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Wolfgang Wendt
Management: David Faller
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294

* Re: [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems
From: Namhyung Kim @ 2025-07-16 19:50 UTC
To: Jan Polensky; +Cc: adrian.hunter, irogers, linux-perf-users

Hello,

On Wed, Jul 09, 2025 at 10:38:09PM +0200, Jan Polensky wrote:
> On low-activity systems, the 'kernel lock contention analysis test'
> often fails due to insufficient system load. To address this, the test
> now increases load by using multiple groups and threads in 'perf bench
> sched messaging', scaled to the number of available CPUs.
>
> Signed-off-by: Jan Polensky <japo@linux.ibm.com>
> ---
>  tools/perf/tests/shell/lock_contention.sh | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/tests/shell/lock_contention.sh b/tools/perf/tests/shell/lock_contention.sh
> index 30d195d4c62f..e859e1503b5c 100755
> --- a/tools/perf/tests/shell/lock_contention.sh
> +++ b/tools/perf/tests/shell/lock_contention.sh
> @@ -44,7 +44,8 @@ check() {
>  test_record()
>  {
>  	echo "Testing perf lock record and perf lock contention"
> -	perf lock record -o ${perfdata} -- perf bench sched messaging > /dev/null 2>&1
> +	perf lock record -o ${perfdata} -- perf bench sched messaging \
> +		--group "$(nproc)" --thread "$(nproc)" > /dev/null 2>&1

  $ perf bench sched messaging -h
  # Running 'sched/messaging' benchmark:

   Usage: perf bench sched messaging <options>

      -g, --group <n>       Specify number of groups
      -l, --nr_loops <n>    Specify the number of loops to run (default: 100)
      -p, --pipe            Use pipe() instead of socketpair()
      -t, --thread          Be multi thread instead of multi process

The default value of the --group option is 10, so using $(nproc) can
actually make it smaller on tiny systems.  Note that it'd create 20
tasks for each group (I don't think we have an option to control this).

Also, the --thread option doesn't take an argument, so the number will
be ignored.

More importantly, there are several instances of this workload.
Probably we want to change all of them by introducing a shell variable.
But I'm a bit afraid of how much overhead it'd bring.

Thanks,
Namhyung

>  	# the output goes to the stderr and we expect only 1 output (-E 1)
>  	perf lock contention -i ${perfdata} -E 1 -q 2> ${result}
>  	if [ "$(cat "${result}" | wc -l)" != "1" ]; then
> --
> 2.48.1
>

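For illustration only, the snippet below follows the option semantics
quoted above: -g/--group takes a numeric count while -t/--thread is a
plain flag with no argument. The nr_groups variable name is hypothetical
and this is a sketch, not the proposed change itself:

  # Sketch: scale the group count with the CPU count; --thread merely
  # switches to thread mode and takes no value.
  nr_groups=$(nproc)
  perf bench sched messaging --group "${nr_groups}" --thread > /dev/null 2>&1
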
* Re: [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems
From: Jan Polensky @ 2025-07-25 16:23 UTC
To: Namhyung Kim; +Cc: adrian.hunter, irogers, linux-perf-users

Hello Namhyung,

On Wed, Jul 16, 2025 at 12:50:51PM -0700, Namhyung Kim wrote:
> Hello,
>
> On Wed, Jul 09, 2025 at 10:38:09PM +0200, Jan Polensky wrote:
> > On low-activity systems, the 'kernel lock contention analysis test'
> > often fails due to insufficient system load. To address this, the test
> > now increases load by using multiple groups and threads in 'perf bench
> > sched messaging', scaled to the number of available CPUs.
> >
> > Signed-off-by: Jan Polensky <japo@linux.ibm.com>
> > ---
[skip]
>
> More importantly, there are several instances of this workload.
> Probably we want to change all of them by introducing a shell variable.

You're right -- introducing a shell variable makes sense, especially
since there are 13 identical calls. However, I'm slightly concerned that
this might reduce readability. We might also consider an alternative
solution that avoids it.

> But I'm a bit afraid how much overhead it'd bring up.

Indeed, using the --group parameter on large systems can lead to issues.
For example:

  [root@localhost perf]# ./perf stat -- perf bench sched messaging --group $(nproc) --thread
  # Running 'sched/messaging' benchmark:
  perf: socketpair(): Too many open files

To avoid this, I suggest following the approach proposed by Thomas
Richter (tmricht@linux.ibm.com) and using the -p (pipe) mode instead:

  perf bench sched messaging -p

The pipe-based solution generates sufficient load and lock contention
for the test, while avoiding the use of sockets and their associated
file descriptor limits.

I'll incorporate these changes and send the next version shortly.

Thanks again for your feedback -- it's much appreciated.

Best regards,
Jan

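A minimal sketch of how the pipe-based workload would slot into the
recording step, assuming the same ${perfdata} and ${result} temporaries
the script already uses; this is illustrative only and the actual
follow-up patch may differ:

  # Sketch: -p makes the benchmark use pipe() instead of socketpair(),
  # sidestepping the "Too many open files" failure seen above while
  # still generating enough lock contention for the -E 1 check.
  perf lock record -o ${perfdata} -- perf bench sched messaging -p > /dev/null 2>&1
  perf lock contention -i ${perfdata} -E 1 -q 2> ${result}
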
* Re: [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems
From: Namhyung Kim @ 2025-07-26 5:35 UTC
To: Jan Polensky; +Cc: adrian.hunter, irogers, linux-perf-users

On Fri, Jul 25, 2025 at 06:23:17PM +0200, Jan Polensky wrote:
> Hello Namhyung,
>
> On Wed, Jul 16, 2025 at 12:50:51PM -0700, Namhyung Kim wrote:
> > Hello,
> >
> > On Wed, Jul 09, 2025 at 10:38:09PM +0200, Jan Polensky wrote:
> > > On low-activity systems, the 'kernel lock contention analysis test'
> > > often fails due to insufficient system load. To address this, the test
> > > now increases load by using multiple groups and threads in 'perf bench
> > > sched messaging', scaled to the number of available CPUs.
> > >
> > > Signed-off-by: Jan Polensky <japo@linux.ibm.com>
> > > ---
> [skip]
>
> > More importantly, there are several instances of this workload.
> > Probably we want to change all of them by introducing a shell variable.
>
> You're right -- introducing a shell variable makes sense, especially
> since there are 13 identical calls. However, I'm slightly concerned that
> this might reduce readability. We might also consider an alternative
> solution that avoids it.

We already have similar code in other places like record.sh.

Thanks,
Namhyung

> > But I'm a bit afraid how much overhead it'd bring up.
>
> Indeed, using the --group parameter on large systems can lead to issues.
> For example:
>
>   [root@localhost perf]# ./perf stat -- perf bench sched messaging --group $(nproc) --thread
>   # Running 'sched/messaging' benchmark:
>   perf: socketpair(): Too many open files
>
> To avoid this, I suggest following the approach proposed by Thomas
> Richter (tmricht@linux.ibm.com) and using the -p (pipe) mode instead:
>
>   perf bench sched messaging -p
>
> The pipe-based solution generates sufficient load and lock contention
> for the test, while avoiding the use of sockets and their associated
> file descriptor limits.
>
> I'll incorporate these changes and send the next version shortly.
>
> Thanks again for your feedback -- it's much appreciated.
>
> Best regards,
> Jan

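For completeness, one way the 13 call sites could share a single workload
definition is sketched below. The bench_workload variable name is
hypothetical, the result check is elided, and the exact pattern used in
record.sh may differ in detail:

  # Hypothetical helper near the top of lock_contention.sh; left unquoted
  # on use so it word-splits into the command and its arguments.
  bench_workload="perf bench sched messaging -p"

  test_record()
  {
  	echo "Testing perf lock record and perf lock contention"
  	perf lock record -o ${perfdata} -- ${bench_workload} > /dev/null 2>&1
  	perf lock contention -i ${perfdata} -E 1 -q 2> ${result}
  	# ... existing result check unchanged ...
  }
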
Thread overview: 5+ messages (newest: 2025-07-26 5:36 UTC)
  2025-07-09 20:38 [RFC PATCH v1 1/1] perf test: Increase load in lock contention test on low-activity systems Jan Polensky
  2025-07-10  5:47 ` Thomas Richter
  2025-07-16 19:50 ` Namhyung Kim
  2025-07-25 16:23   ` Jan Polensky
  2025-07-26  5:35     ` Namhyung Kim