From: Ravi Bangoria <ravi.bangoria@amd.com>
To: Ian Rogers <irogers@google.com>,
	"Liang, Kan" <kan.liang@linux.intel.com>,
	"Xing, Zhengjun" <zhengjun.xing@intel.com>,
	sedat.dilek@gmail.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	llvm@lists.linux.dev, Ben Hutchings <benh@debian.org>,
	James Clark <james.clark@arm.com>,
	Stephane Eranian <eranian@google.com>,
	Ravi Bangoria <ravi.bangoria@amd.com>
Subject: Re: [6.1.7][6.2-rc5] perf all metrics test: FAILED!
Date: Wed, 1 Feb 2023 12:21:32 +0530
Message-ID: <6871e91f-997b-8558-e87b-7bf147f2750c@amd.com>
In-Reply-To: <CAP-5=fWcb8m9vkfqSC9H2Gqi2dBjbPuGb2F27Fq-ejmR25foQw@mail.gmail.com>

Hi Ian,

> So I think this is a kernel bug triggering a perf tool bug. The kernel
> bug can be worked around in the perf tool. I only had an Ivy Bridge to
> test with (hence the slightly different events), but what I see is both
> tma_dram_bound and tma_l3_bound using the same 4 events. I could work
> around the "<not counted>" by adding the --metric-no-group flag:
> 
> ```
> $ perf stat -M tma_l3_bound --metric-no-group -a sleep 1
> 
> Performance counter stats for 'system wide':
> 
>           400,404      MEM_LOAD_UOPS_RETIRED.LLC_HIT    #      4.3 % tma_l3_bound  (74.99%)
>       128,937,891      CYCLE_ACTIVITY.STALLS_L2_PENDING                            (87.46%)
>           167,459      MEM_LOAD_UOPS_RETIRED.LLC_MISS                              (74.99%)
>       759,574,967      CPU_CLK_UNHALTED.THREAD                                     (87.47%)
> 
>       1.001526438 seconds time elapsed
> 
> $ perf stat -M tma_dram_bound -a --metric-no-group sleep 1
> 
> Performance counter stats for 'system wide':
> 
>           259,954      MEM_LOAD_UOPS_RETIRED.LLC_HIT    #     15.2 % tma_dram_bound  (74.99%)
>       118,807,043      CYCLE_ACTIVITY.STALLS_L2_PENDING                              (87.46%)
>           111,699      MEM_LOAD_UOPS_RETIRED.LLC_MISS                                (74.95%)
>       587,571,060      CPU_CLK_UNHALTED.THREAD                                       (87.45%)
> 
>       1.001518093 seconds time elapsed
> ```
> 
> The issue is that perf metrics use weak groups of events. A weak group
> is initially the same as a regular group of events. We want to use
> groups of events with metrics so that all the counters are scheduled in
> and out at the same time, and not multiplexed independently. Imagine
> measuring IPC where the counts for instructions and cycles are measured
> over different periods; the resultant IPC value would be unlikely to be
> accurate. If perf_event_open fails then the perf tool retries the
> events without the group. If I try just 3 of the events in a weak
> group then the failure can be seen:
> 
> ```
> $ perf stat -e "{MEM_LOAD_UOPS_RETIRED.LLC_HIT,MEM_LOAD_UOPS_RETIRED.LLC_MISS,CYCLE_ACTIVITY.STALLS_L2_PENDING}:W"
> -a sleep 1
> 
> Performance counter stats for 'system wide':
> 
>     <not counted>      MEM_LOAD_UOPS_RETIRED.LLC_HIT                               (0.00%)
>     <not counted>      MEM_LOAD_UOPS_RETIRED.LLC_MISS                              (0.00%)
>     <not counted>      CYCLE_ACTIVITY.STALLS_L2_PENDING                            (0.00%)
> 
>       1.001458485 seconds time elapsed
> ```
> 
> The kernel should have failed the perf_event_open on opening the third
> event and then measured without the group,

IIUC, the kernel should not fail opening of the 3rd event, because there
are 4 general-purpose counters on Intel and all three events can be
scheduled on any of the 4 counters (I checked Ivy Bridge).

However, what I don't understand is why the kernel failed to schedule the
group. Unless something has already occupied 2 or more GP counters, the
group should get scheduled fine.
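
For anyone who wants to poke at this outside the perf tool, here is a
minimal sketch (not code from this thread, and using generic hardware
events rather than the raw Ivy Bridge encodings above) that opens three
events as one group via perf_event_open() and checks whether the group
ever got scheduled:

```
/*
 * Minimal sketch: open three generic hardware events as a single group
 * and report whether the group ever ran. time_running staying at 0 is
 * what perf stat reports as "<not counted>".
 */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
        return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

static int open_hw_event(uint64_t config, int group_fd)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = config;
        attr.disabled = (group_fd == -1);  /* only the leader starts disabled */
        attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                           PERF_FORMAT_TOTAL_TIME_RUNNING;

        /* If the group as a whole can never be scheduled, the kernel
         * could reject this open instead of silently never counting. */
        return perf_event_open(&attr, 0, -1, group_fd, 0);
}

int main(void)
{
        uint64_t buf[3];  /* value, time_enabled, time_running */
        int leader = open_hw_event(PERF_COUNT_HW_CPU_CYCLES, -1);
        int fd2 = open_hw_event(PERF_COUNT_HW_INSTRUCTIONS, leader);
        int fd3 = open_hw_event(PERF_COUNT_HW_CACHE_MISSES, leader);

        if (leader < 0 || fd2 < 0 || fd3 < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(leader, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
        for (volatile unsigned long i = 0; i < 100000000UL; i++)
                ;  /* burn some cycles so the group has work to count */
        ioctl(leader, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        if (read(leader, buf, sizeof(buf)) != sizeof(buf)) {
                perror("read");
                return 1;
        }
        printf("cycles=%" PRIu64 " enabled=%" PRIu64 "ns running=%" PRIu64 "ns\n",
               buf[0], buf[1], buf[2]);
        if (buf[2] == 0)
                printf("group was never scheduled (<not counted>)\n");
        return 0;
}
```

If time_running stays at 0, the group never made it onto the PMU; a
kernel that validates the group at open time would presumably fail the
third perf_event_open instead (typically with EINVAL), letting the tool
fall back to ungrouped events.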

> which it can do with
> multiplexing as in the following:
> 
> ```
> $ perf stat -e "MEM_LOAD_UOPS_RETIRED.LLC_HIT,MEM_LOAD_UOPS_RETIRED.LLC_MISS,CYCLE_ACTIVITY.STALLS_L2_PENDING"
> -a sleep 1
> 
> Performance counter stats for 'system wide':
> 
>         1,239,397      MEM_LOAD_UOPS_RETIRED.LLC_HIT                               (79.06%)
>           174,826      MEM_LOAD_UOPS_RETIRED.LLC_MISS                              (64.60%)
>       124,026,024      CYCLE_ACTIVITY.STALLS_L2_PENDING                            (81.16%)
> 
>       1.001483434 seconds time elapsed
> ```
> 
> When the --metric-no-group flag is given to perf, it doesn't produce
> the initial weak group, which works around the kernel bug of not
> failing on the 3rd perf_event_open. I've added Kan and Zhengjun to
> the e-mail as they work on the Intel kernel PMU code.
> 
> There's a question about what we should do about this in the perf
> test. I have a few possible solutions:
> 
> 1) try metric tests again with the --metric-no-group flag and don't
> fail the test if this succeeds. This allows kernel bugs to hide, so
> I'm not a huge fan.
> 
> 2) add a new metric flag/constraint to say not to group; this way the
> metric will automatically apply the "--metric-no-group" flag. It is a
> bit of work to wire this up, but this kind of failure is common enough
> in PMUs that it is probably worthwhile. We would also need to add the
> flag to metrics, and I'm not sure how to get a good list of the
> metrics that currently fail and require it. This is okay but
> error-prone.
> 
> 3) fix the kernel bug and let the perf test fail until an adequate
> kernel is installed. Probably the best option.
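
For what it's worth, option (2) above could presumably reuse the
existing MetricConstraint field in the tools/perf/pmu-events JSON
metric definitions. A hypothetical sketch (the NO_GROUP_EVENTS value is
made up here, not an existing perf field value, and the MetricExpr is
elided):

```
{
    "MetricName": "tma_l3_bound",
    "MetricExpr": "...",
    "MetricConstraint": "NO_GROUP_EVENTS"
}
```

The metric code would then skip the weak-group wrapping for such
metrics, much as --metric-no-group does globally today.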

Thanks,
Ravi

