* [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
@ 2025-10-01 19:50 Anubhav Shelat
2025-10-01 20:43 ` Ian Rogers
2025-10-02 6:55 ` Thomas Richter
0 siblings, 2 replies; 16+ messages in thread
From: Anubhav Shelat @ 2025-10-01 19:50 UTC (permalink / raw)
To: mpetlan, acme, namhyung, irogers, linux-perf-users
Cc: peterz, mingo, mark.rutland, alexander.shishkin, jolsa,
adrian.hunter, kan.liang, dapeng1.mi, james.clark, Anubhav Shelat
On aarch64 systems, when running the leader sampling test, the cycle
count on the leader is consistently ~30 cycles less than the cycle count
on the slave event, causing the test to fail. This looks like the result
of some hardware property of aarch64 processors, so allow for a small
difference in cycles between the leader and slave events on aarch64
systems.
Signed-off-by: Anubhav Shelat <ashelat@redhat.com>
---
tools/perf/tests/shell/record.sh | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/tools/perf/tests/shell/record.sh b/tools/perf/tests/shell/record.sh
index b1ad24fb3b33..dff83d64e970 100755
--- a/tools/perf/tests/shell/record.sh
+++ b/tools/perf/tests/shell/record.sh
@@ -280,7 +280,12 @@ test_leader_sampling() {
while IFS= read -r line
do
cycles=$(echo $line | awk '{for(i=1;i<=NF;i++) if($i=="cycles:") print $(i-1)}')
- if [ $(($index%2)) -ne 0 ] && [ ${cycles}x != ${prev_cycles}x ]
+ # On aarch64 systems the leader event gets stopped ~30 cycles before the slave, so allow some
+ # difference
+ if [ "$(uname -m)" = "aarch64" ] && (( cycles - prev_cycles < 50 ))
+ then
+ valid_counts=$(($valid_counts+1))
+ elif [ $(($index%2)) -ne 0 ] && [ ${cycles}x != ${prev_cycles}x ]
then
invalid_counts=$(($invalid_counts+1))
else
--
2.47.3
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-01 19:50 [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64 Anubhav Shelat
@ 2025-10-01 20:43 ` Ian Rogers
2025-10-02 6:55 ` Thomas Richter
1 sibling, 0 replies; 16+ messages in thread
From: Ian Rogers @ 2025-10-01 20:43 UTC (permalink / raw)
To: Anubhav Shelat
Cc: mpetlan, acme, namhyung, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi, james.clark
On Wed, Oct 1, 2025 at 12:52 PM Anubhav Shelat <ashelat@redhat.com> wrote:
>
> On aarch64 systems, when running the leader sampling test, the cycle
> count on the leader is consistently ~30 cycles less than the cycle count
> on the slave event causing the test to fail. This looks like the result
> of some hardware property of aarch64 processors, so allow for a small
> difference in cycles between the leader and slave events on aarch64
> systems.
>
> Signed-off-by: Anubhav Shelat <ashelat@redhat.com>
> ---
> tools/perf/tests/shell/record.sh | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/tests/shell/record.sh b/tools/perf/tests/shell/record.sh
> index b1ad24fb3b33..dff83d64e970 100755
> --- a/tools/perf/tests/shell/record.sh
> +++ b/tools/perf/tests/shell/record.sh
> @@ -280,7 +280,12 @@ test_leader_sampling() {
> while IFS= read -r line
> do
> cycles=$(echo $line | awk '{for(i=1;i<=NF;i++) if($i=="cycles:") print $(i-1)}')
> - if [ $(($index%2)) -ne 0 ] && [ ${cycles}x != ${prev_cycles}x ]
> + # On aarch64 systems the leader event gets stopped ~30 cycles before the slave, so allow some
> + # difference
> + if [ "$(uname -m)" = "aarch64" ] && (( cycles - prev_cycles < 50 ))
If cycles is 0 then this will always pass; should this be checking a range?
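Something like a bounded window would avoid that, e.g. (just a sketch,
the bounds are illustrative):
  # Reject zero counts and bound the difference on both sides,
  # rather than using the open-ended "< 50".
  diff=$((cycles - prev_cycles))
  if [ $((index % 2)) -ne 0 ] && [ "$cycles" -gt 0 ] &&
     [ "$diff" -ge 0 ] && [ "$diff" -lt 50 ]
  then
    valid_counts=$((valid_counts + 1))
  fi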
Thanks,
Ian
> + then
> + valid_counts=$(($valid_counts+1))
> + elif [ $(($index%2)) -ne 0 ] && [ ${cycles}x != ${prev_cycles}x ]
> then
> invalid_counts=$(($invalid_counts+1))
> else
> --
> 2.47.3
>
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-01 19:50 [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64 Anubhav Shelat
2025-10-01 20:43 ` Ian Rogers
@ 2025-10-02 6:55 ` Thomas Richter
[not found] ` <CA+G8DhL49FWD47bkbcXYeb9T=AbxNhC-ypqjkNxRnW0JqmYnPw@mail.gmail.com>
1 sibling, 1 reply; 16+ messages in thread
From: Thomas Richter @ 2025-10-02 6:55 UTC (permalink / raw)
To: Anubhav Shelat, mpetlan, acme, namhyung, irogers,
linux-perf-users
Cc: peterz, mingo, mark.rutland, alexander.shishkin, jolsa,
adrian.hunter, kan.liang, dapeng1.mi, james.clark
On 10/1/25 21:50, Anubhav Shelat wrote:
> On aarch64 systems, when running the leader sampling test, the cycle
> count on the leader is consistently ~30 cycles less than the cycle count
> on the slave event causing the test to fail. This looks like the result
> of some hardware property of aarch64 processors, so allow for a small
> difference in cycles between the leader and slave events on aarch64
> systems.
I have observed the same behavior on s390 too and I guess other
platforms run into similar issues as well.
Can we use a larger range to allow the test to pass?
I suggest we 'ignore' a small percentage of hits which
violate the range, let's say 10%.
So if 90% of the cycles are in the allowed range, the test is good.
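Roughly like this at the end of the loop (a sketch, the numbers are
illustrative):
  # Pass if at most 10% of the sampled pairs violate the range.
  total=$((valid_counts + invalid_counts))
  if [ "$total" -gt 0 ] && [ $((invalid_counts * 100 / total)) -le 10 ]
  then
    echo "Basic leader sampling test [Success]"
  fi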
Just my 2 cents from debugging this on about 5 different s390
machine generations, with and without virtualization and different
workloads.
[...]
--
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
[not found] ` <CA+G8DhL49FWD47bkbcXYeb9T=AbxNhC-ypqjkNxRnW0JqmYnPw@mail.gmail.com>
@ 2025-10-02 17:44 ` Anubhav Shelat
2025-10-07 5:47 ` Thomas Richter
1 sibling, 0 replies; 16+ messages in thread
From: Anubhav Shelat @ 2025-10-02 17:44 UTC (permalink / raw)
To: Thomas Richter
Cc: mpetlan, acme, namhyung, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi, james.clark
On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
> If cycles is 0 then this will always pass, should this be checking a range?
Yes you're right this will be better.
On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
> Can we use a larger range to allow the test to pass?
What range do you get on s390? When I do group measurements using
"perf record -e "{cycles,cycles}:Su" perf test -w brstack" like in the
test I always get somewhere between 20 and 50 cycles difference. I
haven't tested on s390x, but I see no cycle count difference when
testing the same command on x86. I have observed much larger, more
varied differences when using software events.
Anubhav
On Thu, Oct 2, 2025 at 2:39 PM Anubhav Shelat <ashelat@redhat.com> wrote:
>
> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
> > If cycles is 0 then this will always pass, should this be checking a range?
>
> Yes you're right this will be better.
>
> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
> > Can we use a larger range to allow the test to pass?
>
> What range do you get on s390? When I do group measurements using "perf record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I always get somewhere between 20 and 50 cycles difference. I haven't tested on s390x, but I see no cycle count difference when testing the same command on x86. I have observed much larger, more varied differences when using software events.
>
> Anubhav
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
[not found] ` <CA+G8DhL49FWD47bkbcXYeb9T=AbxNhC-ypqjkNxRnW0JqmYnPw@mail.gmail.com>
2025-10-02 17:44 ` Anubhav Shelat
@ 2025-10-07 5:47 ` Thomas Richter
2025-10-07 12:34 ` James Clark
1 sibling, 1 reply; 16+ messages in thread
From: Thomas Richter @ 2025-10-07 5:47 UTC (permalink / raw)
To: Anubhav Shelat
Cc: mpetlan, acme, namhyung, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi, james.clark
On 10/2/25 15:39, Anubhav Shelat wrote:
> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>> If cycles is 0 then this will always pass, should this be checking a
> range?
>
> Yes you're right this will be better.
>
> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>> Can we use a larger range to allow the test to pass?
>
> What range do you get on s390? When I do group measurements using "perf
> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
> always get somewhere between 20 and 50 cycles difference. I haven't tested
> on s390x, but I see no cycle count difference when testing the same command
> on x86. I have observed much larger, more varied differences when using
> software events.
>
> Anubhav
>
Here is the output of the commands:
# perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
# perf script | grep brstack
perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
The values are usually identical, apart from one or two which are way off.
Ignoring those would be good.
--
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-07 5:47 ` Thomas Richter
@ 2025-10-07 12:34 ` James Clark
2025-10-08 7:52 ` Namhyung Kim
2025-10-08 10:48 ` Thomas Richter
0 siblings, 2 replies; 16+ messages in thread
From: James Clark @ 2025-10-07 12:34 UTC (permalink / raw)
To: Thomas Richter, Anubhav Shelat
Cc: mpetlan, acme, namhyung, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi
On 07/10/2025 6:47 am, Thomas Richter wrote:
> On 10/2/25 15:39, Anubhav Shelat wrote:
>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>> If cycles is 0 then this will always pass, should this be checking a
>> range?
>>
>> Yes you're right this will be better.
>>
>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>> Can we use a larger range to allow the test to pass?
>>
>> What range do you get on s390? When I do group measurements using "perf
>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>> on s390x, but I see no cycle count difference when testing the same command
>> on x86. I have observed much larger, more varied differences when using
>> software events.
>>
>> Anubhav
>>
>
> Here is the output of the
>
> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
> # perf script | grep brstack
>
> commands:
>
> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>
> They are usual identical values beside one or two which are way off. Ignoring those would
> be good.
>
FWIW I ran 100+ iterations on my Arm Juno and N1SDP boards and the test
passed every time.
Are we sure there isn't some kind of race condition or bug that the test
has found, rather than a bug in the test?
At least "This looks like the result of some hardware property of
aarch64 processors" in the commit message can't be accurate, as this
isn't the case everywhere.
James
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-07 12:34 ` James Clark
@ 2025-10-08 7:52 ` Namhyung Kim
2025-10-08 10:48 ` Thomas Richter
1 sibling, 0 replies; 16+ messages in thread
From: Namhyung Kim @ 2025-10-08 7:52 UTC (permalink / raw)
To: James Clark
Cc: Thomas Richter, Anubhav Shelat, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
Hello,
On Tue, Oct 07, 2025 at 01:34:46PM +0100, James Clark wrote:
>
>
> On 07/10/2025 6:47 am, Thomas Richter wrote:
> > On 10/2/25 15:39, Anubhav Shelat wrote:
> > > On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
> > > > If cycles is 0 then this will always pass, should this be checking a
> > > range?
> > >
> > > Yes you're right this will be better.
> > >
> > > On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
> > > > Can we use a larger range to allow the test to pass?
> > >
> > > What range do you get on s390? When I do group measurements using "perf
> > > record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
> > > always get somewhere between 20 and 50 cycles difference. I haven't tested
> > > on s390x, but I see no cycle count difference when testing the same command
> > > on x86. I have observed much larger, more varied differences when using
> > > software events.
> > >
> > > Anubhav
> > >
> >
> > Here is the output of the
> >
> > # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
> > # perf script | grep brstack
> >
> > commands:
> >
> > perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
> > perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
> > perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
> > perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
> > perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
> > perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
> > perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> > perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
> > perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
> > perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
> > perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> > perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
> > perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
> > perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
> >
> > They are usual identical values beside one or two which are way off. Ignoring those would
> > be good.
> >
>
> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed
> every time.
>
> Are we sure there isn't some kind of race condition or bug that the test has
> found? Rather than a bug in the test?
I suspect this too.
>
> At least "This looks like the result of some hardware property of aarch64
> processors" in the commit message can't be accurate as this isn't the case
> everywhere.
I guess this depends on the hardware's capability to start/stop the PMU
globally rather than doing it for each counter.
Maybe we can have two sets of checks depending on the hardware. For
example, we keep the existing test on x86 (and selected ARM machines?)
and add a range test for others. It'd be great if the kernel could
expose that information to userspace.
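Something like this, for example (a sketch; which machines belong in
the strict list is the open question):
  # Strict equality where the PMU is known to start/stop the whole
  # group atomically, a bounded range elsewhere. The arch list and
  # the 50-cycle bound are hypothetical.
  case "$(uname -m)" in
  x86_64)
    [ "$cycles" = "$prev_cycles" ] || invalid_counts=$((invalid_counts + 1))
    ;;
  *)
    diff=$((cycles - prev_cycles))
    if [ "$diff" -lt 0 ] || [ "$diff" -ge 50 ]
    then
      invalid_counts=$((invalid_counts + 1))
    fi
    ;;
  esac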
Thanks,
Namhyung
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-07 12:34 ` James Clark
2025-10-08 7:52 ` Namhyung Kim
@ 2025-10-08 10:48 ` Thomas Richter
2025-10-08 11:24 ` James Clark
1 sibling, 1 reply; 16+ messages in thread
From: Thomas Richter @ 2025-10-08 10:48 UTC (permalink / raw)
To: James Clark, Anubhav Shelat
Cc: mpetlan, acme, namhyung, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi
On 10/7/25 14:34, James Clark wrote:
>
>
> On 07/10/2025 6:47 am, Thomas Richter wrote:
>> On 10/2/25 15:39, Anubhav Shelat wrote:
>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>>> If cycles is 0 then this will always pass, should this be checking a
>>> range?
>>>
>>> Yes you're right this will be better.
>>>
>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>>> Can we use a larger range to allow the test to pass?
>>>
>>> What range do you get on s390? When I do group measurements using "perf
>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>>> on s390x, but I see no cycle count difference when testing the same command
>>> on x86. I have observed much larger, more varied differences when using
>>> software events.
>>>
>>> Anubhav
>>>
>>
>> Here is the output of the
>>
>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>> # perf script | grep brstack
>>
>> commands:
>>
>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
>> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
>> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
>> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
>> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
>> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>>
>> They are usual identical values beside one or two which are way off. Ignoring those would
>> be good.
>>
>
> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed every time.
>
> Are we sure there isn't some kind of race condition or bug that the test has found? Rather than a bug in the test?
There is always the possibility of a bug; that cannot be ruled out for certain.
However, as LPARs on s390 run on top of a hypervisor, there is a chance of the
Linux guest being stopped while the hardware keeps running.
I see these runoff values time and again; roughly every second run fails with
one runoff value.
Hope this helps
--
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-08 10:48 ` Thomas Richter
@ 2025-10-08 11:24 ` James Clark
2025-10-09 12:14 ` Thomas Richter
[not found] ` <CA+G8Dh+Odf40jdY4h1knjU+3sSjZokMx6OdzRT3o9v1=ndKORQ@mail.gmail.com>
0 siblings, 2 replies; 16+ messages in thread
From: James Clark @ 2025-10-08 11:24 UTC (permalink / raw)
To: Thomas Richter, Anubhav Shelat, Namhyung Kim
Cc: mpetlan, acme, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi
On 08/10/2025 11:48 am, Thomas Richter wrote:
> On 10/7/25 14:34, James Clark wrote:
>>
>>
>> On 07/10/2025 6:47 am, Thomas Richter wrote:
>>> On 10/2/25 15:39, Anubhav Shelat wrote:
>>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>>>> If cycles is 0 then this will always pass, should this be checking a
>>>> range?
>>>>
>>>> Yes you're right this will be better.
>>>>
>>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>>>> Can we use a larger range to allow the test to pass?
>>>>
>>>> What range do you get on s390? When I do group measurements using "perf
>>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>>>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>>>> on s390x, but I see no cycle count difference when testing the same command
>>>> on x86. I have observed much larger, more varied differences when using
>>>> software events.
>>>>
>>>> Anubhav
>>>>
>>>
>>> Here is the output of the
>>>
>>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>>> # perf script | grep brstack
>>>
>>> commands:
>>>
>>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
>>> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
>>> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
>>> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
>>> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
>>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
>>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
>>> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
>>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
>>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>>>
>>> They are usual identical values beside one or two which are way off. Ignoring those would
>>> be good.
>>>
>>
>> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed every time.
>>
>> Are we sure there isn't some kind of race condition or bug that the test has found? Rather than a bug in the test?
> There is always a possibility of a bug, that can not be ruled out for certain.
> However as LPARs on s390 run on top of a hypervisor, there is a chance for the
> linux guest being stopped while hardware keeps running.
>
I have no idea what's going on or how that works, so maybe this question
is useless, but doesn't that mean that guests can determine/guess the
counter values from other guests? If the hardware keeps the counter
running when the guest isn't, that sounds like something is leaking from
one guest to another? Should the hypervisor not be saving and restoring
context?
> I see these runoff values time and again, roughly every second run fails with
> one runoff value
>
> Hope this helps
>
That may explain the issue for s390 then, but I'm assuming it doesn't
explain the issues on Arm if the failures there aren't in a VM. But even
if they were in a VM, the PMU is fully virtualised and the events would
be stopped and resumed when the guest is switched out.
James
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-08 11:24 ` James Clark
@ 2025-10-09 12:14 ` Thomas Richter
[not found] ` <CA+G8Dh+Odf40jdY4h1knjU+3sSjZokMx6OdzRT3o9v1=ndKORQ@mail.gmail.com>
1 sibling, 0 replies; 16+ messages in thread
From: Thomas Richter @ 2025-10-09 12:14 UTC (permalink / raw)
To: James Clark, Anubhav Shelat, Namhyung Kim
Cc: mpetlan, acme, irogers, linux-perf-users, peterz, mingo,
mark.rutland, alexander.shishkin, jolsa, adrian.hunter, kan.liang,
dapeng1.mi
On 10/8/25 13:24, James Clark wrote:
>
>
> On 08/10/2025 11:48 am, Thomas Richter wrote:
>> On 10/7/25 14:34, James Clark wrote:
>>>
>>>
>>> On 07/10/2025 6:47 am, Thomas Richter wrote:
>>>> On 10/2/25 15:39, Anubhav Shelat wrote:
>>>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>>>>> If cycles is 0 then this will always pass, should this be checking a
>>>>> range?
>>>>>
>>>>> Yes you're right this will be better.
>>>>>
>>>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>>>>> Can we use a larger range to allow the test to pass?
>>>>>
>>>>> What range do you get on s390? When I do group measurements using "perf
>>>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>>>>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>>>>> on s390x, but I see no cycle count difference when testing the same command
>>>>> on x86. I have observed much larger, more varied differences when using
>>>>> software events.
>>>>>
>>>>> Anubhav
>>>>>
>>>>
>>>> Here is the output of the
>>>>
>>>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>>>> # perf script | grep brstack
>>>>
>>>> commands:
>>>>
>>>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
>>>> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
>>>> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
>>>> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
>>>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
>>>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
>>>> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
>>>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
>>>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>>>>
>>>> They are usual identical values beside one or two which are way off. Ignoring those would
>>>> be good.
>>>>
>>>
>>> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed every time.
>>>
>>> Are we sure there isn't some kind of race condition or bug that the test has found? Rather than a bug in the test?
>> There is always a possibility of a bug, that can not be ruled out for certain.
>> However as LPARs on s390 run on top of a hypervisor, there is a chance for the
>> linux guest being stopped while hardware keeps running.
>>
>
> I have no idea what's going on or how that works, so maybe this question is useless, but doesn't that mean that guests can determine/guess the counter values from other guests? If the hardware keeps the counter running when the guest isn't, that sounds like something is leaking from one guest to another? Should the hypervisor not be saving and restoring context?
I don't have enough knowledge myself to answer that, but I'll try to find out.
However, I guess that the hypervisor saves and restores context with respect to other guests.
Maybe the hypervisor does some work on behalf of the currently running guest?
I'll start digging...
>
>> I see these runoff values time and again, roughly every second run fails with
>> one runoff value
>>
>> Hope this helps
>>
>
> That may explain the issue for s390 then, but I'm assuming it doesn't explain the issues on Arm if the failures there aren't in a VM. But even if they were in a VM, the PMU is fully virtualised and the events would be stopped and resumed when the guest is switched out.
>
> James
>
--
Thomas Richter, Dept 3303, IBM s390 Linux Development, Boeblingen, Germany
--
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
[not found] ` <CA+G8Dh+Odf40jdY4h1knjU+3sSjZokMx6OdzRT3o9v1=ndKORQ@mail.gmail.com>
@ 2025-10-09 13:55 ` Anubhav Shelat
2025-10-09 14:17 ` James Clark
2025-10-09 14:08 ` James Clark
1 sibling, 1 reply; 16+ messages in thread
From: Anubhav Shelat @ 2025-10-09 13:55 UTC (permalink / raw)
To: James Clark
Cc: Thomas Richter, Namhyung Kim, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
I tested on a new arm machine and I'm getting a similar issue to
Thomas's, but the test fails every 20 or so runs and I'm not getting the
issue that I previously mentioned.
Running test #15
10bc60-10bcc4 g test_loop
perf does have symbol 'test_loop'
10c354-10c418 l brstack
perf does have symbol 'brstack'
Basic leader sampling test
Basic leader sampling test [Success]
Invalid Counts: 1
Valid Counts: 27
Running test #16
10bc60-10bcc4 g test_loop
perf does have symbol 'test_loop'
10c354-10c418 l brstack
perf does have symbol 'brstack'
Basic leader sampling test
Basic leader sampling test [Success]
Invalid Counts: 1
Valid Counts: 27
Running test #17
10bc60-10bcc4 g test_loop
perf does have symbol 'test_loop'
10c354-10c418 l brstack
perf does have symbol 'brstack'
Basic leader sampling test
Leader sampling [Failed inconsistent cycles count]
Invalid Counts: 8
Valid Counts: 28
Initially I thought it was the throttling issue mentioned in the
comment in test_leader_sampling, but there's another thread that says
it's fixed:
https://lore.kernel.org/lkml/20250520181644.2673067-2-kan.liang@linux.intel.com/
On Thu, Oct 9, 2025 at 2:43 PM Anubhav Shelat <ashelat@redhat.com> wrote:
>
> I tested on a new arm machine and I'm getting a similar issue as Thomas, but the test fails every 20 or so runs and I'm not getting the issue that I previously mentioned.
>
> Running test #15
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #16
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #17
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Leader sampling [Failed inconsistent cycles count]
> Invalid Counts: 8
> Valid Counts: 28
>
> Initially I thought it was the throttling issue mentioned in the comment in test_leadership_sampling, but there's another thread says that it's fixed:
> https://lore.kernel.org/lkml/20250520181644.2673067-2-kan.liang@linux.intel.com/
>
>
> On Wed, Oct 8, 2025 at 12:24 PM James Clark <james.clark@linaro.org> wrote:
>>
>>
>>
>> On 08/10/2025 11:48 am, Thomas Richter wrote:
>> > On 10/7/25 14:34, James Clark wrote:
>> >>
>> >>
>> >> On 07/10/2025 6:47 am, Thomas Richter wrote:
>> >>> On 10/2/25 15:39, Anubhav Shelat wrote:
>> >>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>> >>>>> If cycles is 0 then this will always pass, should this be checking a
>> >>>> range?
>> >>>>
>> >>>> Yes you're right this will be better.
>> >>>>
>> >>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>> >>>>> Can we use a larger range to allow the test to pass?
>> >>>>
>> >>>> What range do you get on s390? When I do group measurements using "perf
>> >>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>> >>>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>> >>>> on s390x, but I see no cycle count difference when testing the same command
>> >>>> on x86. I have observed much larger, more varied differences when using
>> >>>> software events.
>> >>>>
>> >>>> Anubhav
>> >>>>
>> >>>
>> >>> Here is the output of the
>> >>>
>> >>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>> >>> # perf script | grep brstack
>> >>>
>> >>> commands:
>> >>>
>> >>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
>> >>> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
>> >>> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
>> >>> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
>> >>> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
>> >>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
>> >>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> >>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
>> >>> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
>> >>> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
>> >>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> >>> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
>> >>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>> >>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>> >>>
>> >>> They are usual identical values beside one or two which are way off. Ignoring those would
>> >>> be good.
>> >>>
>> >>
>> >> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed every time.
>> >>
>> >> Are we sure there isn't some kind of race condition or bug that the test has found? Rather than a bug in the test?
>> > There is always a possibility of a bug, that can not be ruled out for certain.
>> > However as LPARs on s390 run on top of a hypervisor, there is a chance for the
>> > linux guest being stopped while hardware keeps running.
>> >
>>
>> I have no idea what's going on or how that works, so maybe this question
>> is useless, but doesn't that mean that guests can determine/guess the
>> counter values from other guests? If the hardware keeps the counter
>> running when the guest isn't, that sounds like something is leaking from
>> one guest to another? Should the hypervisor not be saving and restoring
>> context?
>>
>> > I see these runoff values time and again, roughly every second run fails with
>> > one runoff value
>> >
>> > Hope this helps
>> >
>>
>> That may explain the issue for s390 then, but I'm assuming it doesn't
>> explain the issues on Arm if the failures there aren't in a VM. But even
>> if they were in a VM, the PMU is fully virtualised and the events would
>> be stopped and resumed when the guest is switched out.
>>
>> James
>>
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
[not found] ` <CA+G8Dh+Odf40jdY4h1knjU+3sSjZokMx6OdzRT3o9v1=ndKORQ@mail.gmail.com>
2025-10-09 13:55 ` Anubhav Shelat
@ 2025-10-09 14:08 ` James Clark
1 sibling, 0 replies; 16+ messages in thread
From: James Clark @ 2025-10-09 14:08 UTC (permalink / raw)
To: Anubhav Shelat
Cc: Thomas Richter, Namhyung Kim, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
On 09/10/2025 2:43 pm, Anubhav Shelat wrote:
> I tested on a new arm machine and I'm getting a similar issue as Thomas,
Which are your new and old Arm machines exactly? And which kernel
versions did you run the test on?
> but the test fails every 20 or so runs and I'm not getting the issue that I
> previously mentioned.
>
What do you mean here? Below I see the leader sampling test failure,
which I thought was the same issue that was previously mentioned?
> Running test #15
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #16
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #17
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Leader sampling [Failed inconsistent cycles count]
> Invalid Counts: 8
> Valid Counts: 28
>
> Initially I thought it was the throttling issue mentioned in the comment in
> test_leadership_sampling, but there's another thread says that it's fixed:
> https://lore.kernel.org/lkml/20250520181644.2673067-2-kan.liang@linux.intel.com/
>
>
>
> On Wed, Oct 8, 2025 at 12:24 PM James Clark <james.clark@linaro.org> wrote:
>
>>
>>
>> On 08/10/2025 11:48 am, Thomas Richter wrote:
>>> On 10/7/25 14:34, James Clark wrote:
>>>>
>>>>
>>>> On 07/10/2025 6:47 am, Thomas Richter wrote:
>>>>> On 10/2/25 15:39, Anubhav Shelat wrote:
>>>>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>>>>>> If cycles is 0 then this will always pass, should this be checking a
>>>>>> range?
>>>>>>
>>>>>> Yes you're right this will be better.
>>>>>>
>>>>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>>>>>> Can we use a larger range to allow the test to pass?
>>>>>>
>>>>>> What range do you get on s390? When I do group measurements using
>> "perf
>>>>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test
>> I
>>>>>> always get somewhere between 20 and 50 cycles difference. I haven't
>> tested
>>>>>> on s390x, but I see no cycle count difference when testing the same
>> command
>>>>>> on x86. I have observed much larger, more varied differences when
>> using
>>>>>> software events.
>>>>>>
>>>>>> Anubhav
>>>>>>
>>>>>
>>>>> Here is the output of the
>>>>>
>>>>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>>>>> # perf script | grep brstack
>>>>>
>>>>> commands:
>>>>>
>>>>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e
>> brstack_bench+0xae (/r>
>>>>> perf 1110782 426394.696875: 1377000 cycles: 116fb98
>> brstack_foo+0x0 (/root>
>>>>> perf 1110782 426394.696877: 1377000 cycles: 116fb48
>> brstack_bar+0x0 (/root>
>>>>> perf 1110782 426394.696878: 1377000 cycles: 116fc94
>> brstack_bench+0xa4 (/r>
>>>>> perf 1110782 426394.696880: 1377000 cycles: 116fc84
>> brstack_bench+0x94 (/r>
>>>>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696888: 1377000 cycles: 116fc98
>> brstack_bench+0xa8 (/r>
>>>>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e
>> brstack_bench+0xae (/r>
>>>>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c
>> brstack_bar+0x34 (/roo>
>>>>> perf 1110782 426394.703543: 1377000 cycles: 116fc76
>> brstack_bench+0x86 (/r>
>>>>> perf 1110782 426394.703545: 1377000 cycles: 116fc06
>> brstack_bench+0x16 (/r>
>>>>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e
>> brstack_bench+0xae (/r>
>>>>> perf 1110782 426394.703547: 1377000 cycles: 116fc20
>> brstack_bench+0x30 (/r>
>>>>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e
>> brstack_bench+0xae (/r>
>>>>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc
>> brstack_bench+0xcc
>>>>>
>>>>> They are usual identical values beside one or two which are way off.
>> Ignoring those would
>>>>> be good.
>>>>>
>>>>
>>>> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test
>> passed every time.
>>>>
>>>> Are we sure there isn't some kind of race condition or bug that the
>> test has found? Rather than a bug in the test?
>>> There is always a possibility of a bug, that can not be ruled out for
>> certain.
>>> However as LPARs on s390 run on top of a hypervisor, there is a chance
>> for the
>>> linux guest being stopped while hardware keeps running.
>>>
>>
>> I have no idea what's going on or how that works, so maybe this question
>> is useless, but doesn't that mean that guests can determine/guess the
>> counter values from other guests? If the hardware keeps the counter
>> running when the guest isn't, that sounds like something is leaking from
>> one guest to another? Should the hypervisor not be saving and restoring
>> context?
>>
>>> I see these runoff values time and again, roughly every second run fails
>> with
>>> one runoff value
>>>
>>> Hope this helps
>>>
>>
>> That may explain the issue for s390 then, but I'm assuming it doesn't
>> explain the issues on Arm if the failures there aren't in a VM. But even
>> if they were in a VM, the PMU is fully virtualised and the events would
>> be stopped and resumed when the guest is switched out.
>>
>> James
>>
>>
>
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-09 13:55 ` Anubhav Shelat
@ 2025-10-09 14:17 ` James Clark
[not found] ` <CA+G8DhKQkTKoNer5GfZedPUj4xMizWVJUWFocP2eQ_cmPJtBOQ@mail.gmail.com>
0 siblings, 1 reply; 16+ messages in thread
From: James Clark @ 2025-10-09 14:17 UTC (permalink / raw)
To: Anubhav Shelat
Cc: Thomas Richter, Namhyung Kim, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
On 09/10/2025 2:55 pm, Anubhav Shelat wrote:
> I tested on a new arm machine and I'm getting a similar issue as
> Thomas, but the test fails every 20 or so runs and I'm not getting the
> issue that I previously mentioned.
>
> Running test #15
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #16
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Basic leader sampling test [Success]
> Invalid Counts: 1
> Valid Counts: 27
> Running test #17
> 10bc60-10bcc4 g test_loop
> perf does have symbol 'test_loop'
> 10c354-10c418 l brstack
> perf does have symbol 'brstack'
> Basic leader sampling test
> Leader sampling [Failed inconsistent cycles count]
> Invalid Counts: 8
> Valid Counts: 28
>
> Initially I thought it was the throttling issue mentioned in the
> comment in test_leadership_sampling, but there's another thread says
> that it's fixed:
> https://lore.kernel.org/lkml/20250520181644.2673067-2-kan.liang@linux.intel.com/
>
>
After reading that patch it seems like we should actually be removing
the 80% tolerance from the leader sampling test. Both instances of the
cycles count should be the same now.
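Something like this at the end of the loop (a sketch, assuming a
kernel with that fix):
  # With group throttling fixed, drop the tolerance entirely and fail
  # on any mismatched leader/slave pair ("err" is assumed to be the
  # script's failure flag).
  if [ "$invalid_counts" -gt 0 ]
  then
    echo "Leader sampling [Failed inconsistent cycles count]"
    err=1
  fi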
(Excluding s390) I'm starting to think you were hitting this bug on an
older kernel, or that something else is going wrong that we should get
to the bottom of. The test could have found something and we shouldn't
ignore it yet.
> On Thu, Oct 9, 2025 at 2:43 PM Anubhav Shelat <ashelat@redhat.com> wrote:
>>
>> I tested on a new arm machine and I'm getting a similar issue as Thomas, but the test fails every 20 or so runs and I'm not getting the issue that I previously mentioned.
>>
>> Running test #15
>> 10bc60-10bcc4 g test_loop
>> perf does have symbol 'test_loop'
>> 10c354-10c418 l brstack
>> perf does have symbol 'brstack'
>> Basic leader sampling test
>> Basic leader sampling test [Success]
>> Invalid Counts: 1
>> Valid Counts: 27
>> Running test #16
>> 10bc60-10bcc4 g test_loop
>> perf does have symbol 'test_loop'
>> 10c354-10c418 l brstack
>> perf does have symbol 'brstack'
>> Basic leader sampling test
>> Basic leader sampling test [Success]
>> Invalid Counts: 1
>> Valid Counts: 27
>> Running test #17
>> 10bc60-10bcc4 g test_loop
>> perf does have symbol 'test_loop'
>> 10c354-10c418 l brstack
>> perf does have symbol 'brstack'
>> Basic leader sampling test
>> Leader sampling [Failed inconsistent cycles count]
>> Invalid Counts: 8
>> Valid Counts: 28
>>
>> Initially I thought it was the throttling issue mentioned in the comment in test_leadership_sampling, but there's another thread says that it's fixed:
>> https://lore.kernel.org/lkml/20250520181644.2673067-2-kan.liang@linux.intel.com/
>>
>>
>> On Wed, Oct 8, 2025 at 12:24 PM James Clark <james.clark@linaro.org> wrote:
>>>
>>>
>>>
>>> On 08/10/2025 11:48 am, Thomas Richter wrote:
>>>> On 10/7/25 14:34, James Clark wrote:
>>>>>
>>>>>
>>>>> On 07/10/2025 6:47 am, Thomas Richter wrote:
>>>>>> On 10/2/25 15:39, Anubhav Shelat wrote:
>>>>>>> On Oct 1, 2025 at 9:44 PM, Ian Rogers wrote:
>>>>>>>> If cycles is 0 then this will always pass, should this be checking a
>>>>>>> range?
>>>>>>>
>>>>>>> Yes you're right this will be better.
>>>>>>>
>>>>>>> On Oct 2, 2025 at 7:56 AM, Thomas Richter wrote:
>>>>>>>> Can we use a larger range to allow the test to pass?
>>>>>>>
>>>>>>> What range do you get on s390? When I do group measurements using "perf
>>>>>>> record -e "{cycles,cycles}:Su" perf test -w brstack" like in the test I
>>>>>>> always get somewhere between 20 and 50 cycles difference. I haven't tested
>>>>>>> on s390x, but I see no cycle count difference when testing the same command
>>>>>>> on x86. I have observed much larger, more varied differences when using
>>>>>>> software events.
>>>>>>>
>>>>>>> Anubhav
>>>>>>>
>>>>>>
>>>>>> Here is the output of the
>>>>>>
>>>>>> # perf record -e "{cycles,cycles}:Su" -- ./perf test -w brstack
>>>>>> # perf script | grep brstack
>>>>>>
>>>>>> commands:
>>>>>>
>>>>>> perf 1110782 426394.696874: 6885000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>>>> perf 1110782 426394.696875: 1377000 cycles: 116fb98 brstack_foo+0x0 (/root>
>>>>>> perf 1110782 426394.696877: 1377000 cycles: 116fb48 brstack_bar+0x0 (/root>
>>>>>> perf 1110782 426394.696878: 1377000 cycles: 116fc94 brstack_bench+0xa4 (/r>
>>>>>> perf 1110782 426394.696880: 1377000 cycles: 116fc84 brstack_bench+0x94 (/r>
>>>>>> perf 1110782 426394.696881: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696883: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696884: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696885: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696887: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696888: 1377000 cycles: 116fc98 brstack_bench+0xa8 (/r>
>>>>>> perf 1110782 426394.696890: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.696891: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>>>> perf 1110782 426394.703542: 1377000 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.703542: 30971975 cycles: 116fb7c brstack_bar+0x34 (/roo>
>>>>>> perf 1110782 426394.703543: 1377000 cycles: 116fc76 brstack_bench+0x86 (/r>
>>>>>> perf 1110782 426394.703545: 1377000 cycles: 116fc06 brstack_bench+0x16 (/r>
>>>>>> perf 1110782 426394.703546: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>>>> perf 1110782 426394.703547: 1377000 cycles: 116fc20 brstack_bench+0x30 (/r>
>>>>>> perf 1110782 426394.703549: 1377000 cycles: 116fc9e brstack_bench+0xae (/r>
>>>>>> perf 1110782 426394.703550: 1377000 cycles: 116fcbc brstack_bench+0xcc
>>>>>>
>>>>>> They are usual identical values beside one or two which are way off. Ignoring those would
>>>>>> be good.
>>>>>>
>>>>>
>>>>> FWIW I ran 100+ iterations my Arm Juno and N1SDP boards and the test passed every time.
>>>>>
>>>>> Are we sure there isn't some kind of race condition or bug that the test has found? Rather than a bug in the test?
>>>> There is always a possibility of a bug, that can not be ruled out for certain.
>>>> However as LPARs on s390 run on top of a hypervisor, there is a chance for the
>>>> linux guest being stopped while hardware keeps running.
>>>>
>>>
>>> I have no idea what's going on or how that works, so maybe this question
>>> is useless, but doesn't that mean that guests can determine/guess the
>>> counter values from other guests? If the hardware keeps the counter
>>> running when the guest isn't, that sounds like something is leaking from
>>> one guest to another? Should the hypervisor not be saving and restoring
>>> context?
>>>
>>>> I see these runoff values time and again, roughly every second run fails with
>>>> one runoff value
>>>>
>>>> Hope this helps
>>>>
>>>
>>> That may explain the issue for s390 then, but I'm assuming it doesn't
>>> explain the issues on Arm if the failures there aren't in a VM. But even
>>> if they were in a VM, the PMU is fully virtualised and the events would
>>> be stopped and resumed when the guest is switched out.
>>>
>>> James
>>>
>
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
[not found] ` <CA+G8DhKQkTKoNer5GfZedPUj4xMizWVJUWFocP2eQ_cmPJtBOQ@mail.gmail.com>
@ 2025-10-09 14:59 ` James Clark
2025-10-09 15:22 ` Anubhav Shelat
2025-10-13 15:36 ` Arnaldo Carvalho de Melo
0 siblings, 2 replies; 16+ messages in thread
From: James Clark @ 2025-10-09 14:59 UTC (permalink / raw)
To: Anubhav Shelat
Cc: Thomas Richter, Namhyung Kim, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
On 09/10/2025 3:43 pm, Anubhav Shelat wrote:
> The first machine was running kernel 6.12.0-55.37.1.el10_0.aarch64 on a KVM
> virtual machine.
> The second machine was running kernel 6.12.0-119.el10.aarch64 also on a KVM.
>
That's quite old. Make sure you test on the latest kernel before sending
patches. The tests in mainline should be targeting the latest kernel,
especially in this case because the throttling fix didn't have a Fixes
tag, so it won't be backported.
That change to fix throttling and group sampling is only from v6.16.
Also what hardware is the VM running on?
> On Thu, Oct 9, 2025 at 3:17 PM James Clark <james.clark@linaro.org> wrote:
>> After reading that patch it seems like we should actually be removing
>> the 80% tolerance from the leader sampling test. Both instances of the
>> cycles counts should be the same now.
>
> If there's no tolerance then the leader sampling test would fail much more
> often. In most of the runs there's at least one case where the leader event
> has far fewer cycles.
>
That's assuming that we've agreed that any difference in cycle counts is
expected and valid. I don't agree that's the case yet and I think it's a
bug. I only see identical counts, and the commit message in Kan's fix
describes that the values should be the same for all architectures.
>> (Excluding s390) I'm starting to think you were hitting this bug on an
>> older kernel? Or something else is going wrong that we should get to
>> the bottom of. The test could have found something and we shouldn't
>> ignore it yet.
>
> I agree that the first bug I mentioned might be from an older kernel, but
> there's still the case here where the cycle counts don't match. I'll keep
> looking into it.
>
> Anubhav
>
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-09 14:59 ` James Clark
@ 2025-10-09 15:22 ` Anubhav Shelat
2025-10-13 15:36 ` Arnaldo Carvalho de Melo
1 sibling, 0 replies; 16+ messages in thread
From: Anubhav Shelat @ 2025-10-09 15:22 UTC (permalink / raw)
To: James Clark
Cc: Thomas Richter, Namhyung Kim, mpetlan, acme, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
On Thu, Oct 9, 2025 at 4:00 PM James Clark <james.clark@linaro.org> wrote:
> Also what hardware is the VM running on?
Both were running on an HPE Apollo CN99XX server.
> That's assuming that we've agreed that any difference in cycle counts is
> expected and valid. I don't agree that's the case yet and I think it's a
> bug. I only see identical counts, and the commit message in Kan's fix
> describes that the values should be the same for all architectures.
True
* Re: [PATCH] perf tests record: allow for some difference in cycle count in leader sampling test on aarch64
2025-10-09 14:59 ` James Clark
2025-10-09 15:22 ` Anubhav Shelat
@ 2025-10-13 15:36 ` Arnaldo Carvalho de Melo
1 sibling, 0 replies; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2025-10-13 15:36 UTC (permalink / raw)
To: James Clark
Cc: Anubhav Shelat, Thomas Richter, Namhyung Kim, mpetlan, irogers,
linux-perf-users, peterz, mingo, mark.rutland, alexander.shishkin,
jolsa, adrian.hunter, kan.liang, dapeng1.mi
On Thu, Oct 09, 2025 at 03:59:54PM +0100, James Clark wrote:
>
>
> On 09/10/2025 3:43 pm, Anubhav Shelat wrote:
> > The first machine was running kernel 6.12.0-55.37.1.el10_0.aarch64 on a KVM
> > virtual machine.
> > The second machine was running kernel 6.12.0-119.el10.aarch64 also on a KVM.
> That's quite old. Make sure you test on the latest kernel before sending
While I agree with you, I think that 6.12 is not really that 6.12ish, as
it's a distro kernel, an enterprise one at that, so tons of backports.
Having said that, yeah, the right thing is to build the latest upstream
kernel and see if it works; if it works, try to identify backports, and
only when it's determined that it is something present upstream,
report it publicly.
> patches. The tests in mainline should be targeting the latest kernel,
> especially in this case because the throttling fix didn't have a fixes tag
> so won't be backported.
Right.
> That change to fix throttling and group sampling is only from v6.16.
> Also what hardware is the VM running on?
Anubhav, please provide this info,
Thanks,
- Arnaldo
> > On Thu, Oct 9, 2025 at 3:17 PM James Clark <james.clark@linaro.org> wrote:
> > > After reading that patch it seems like we should actually be removing
> > > the 80% tolerance from the leader sampling test. Both instances of the
> > > cycles counts should be the same now.
> > If there's no tolerance then the leader sampling test would fail much more
> > often. In most of the runs there's at least one case where the leader event
> > has much fewer cycles.
> That's assuming that we've agreed that any difference in cycle counts is
> expected and valid. I don't agree that's the case yet and I think it's a
> bug. I only see identical counts, and the commit message in Kan's fix
> describes that the values should be the same for all architectures.
> > > (Excluding s390) I'm starting to think you were hitting this bug on an
> > > older kernel? Or something else is going wrong that we should get to
> > > the bottom of. The test could have found something and we shouldn't
> > > ignore it yet.
> > I agree that the first bug I mentioned might be from an older kernel, but
> > there's still the case here where the cycle counts don't match. I'll keep
> > looking into it.