From: Ian Rogers
Date: Wed, 19 Apr 2023 06:19:29 -0700
Subject: Re: [PATCH v2] perf stat: Introduce skippable evsels
To: "Liang, Kan"
Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
 Alexander Shishkin, Jiri Olsa, Namhyung Kim, Adrian Hunter,
 Florian Fischer, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <84b19053-2e9f-5251-6816-26d2475894c0@linux.intel.com>
References: <20230414051922.3625666-1-irogers@google.com>
 <56ac61a0-ccf0-210e-e429-11062a07b8bf@linux.intel.com>
 <5031f492-9734-be75-3283-5961771d87c8@linux.intel.com>
 <99150cb1-fe50-97cf-ad80-cceb9194eb9a@linux.intel.com>
 <84b19053-2e9f-5251-6816-26d2475894c0@linux.intel.com>
On Wed, Apr 19, 2023 at 5:31 AM Liang, Kan wrote:
>
>
>
> On 2023-04-18 9:00 p.m., Ian Rogers wrote:
> > On Tue, Apr 18, 2023 at 5:12 PM Ian Rogers wrote:
> >>
> >> On Tue, Apr 18, 2023 at 2:51 PM Liang, Kan wrote:
> >>>
> >>>
> >>>
> >>> On 2023-04-18 4:08 p.m., Ian Rogers wrote:
> >>>> On Tue, Apr 18, 2023 at 11:19 AM Liang, Kan wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 2023-04-18 11:43 a.m., Ian Rogers wrote:
> >>>>>> On Tue, Apr 18, 2023 at 6:03 AM Liang, Kan wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On 2023-04-17 2:13 p.m., Ian Rogers wrote:
> >>>>>>>> The json TopdownL1 is enabled if present unconditionally for perf stat
> >>>>>>>> default. Enabling it on Skylake has multiplexing as TopdownL1 on
> >>>>>>>> Skylake has multiplexing unrelated to this change - at least on the
> >>>>>>>> machine I was testing on. We can remove the metric group TopdownL1 on
> >>>>>>>> Skylake so that we don't enable it by default, there is still the
> >>>>>>>> group TmaL1. To me, disabling TopdownL1 seems less desirable than
> >>>>>>>> running with multiplexing - previously to get into topdown analysis
> >>>>>>>> there has to be knowledge that "perf stat -M TopdownL1" is the way to
> >>>>>>>> do this.
> >>>>>>>
> >>>>>>> To be honest, I don't think it's a good idea to remove the TopdownL1. We
> >>>>>>> cannot remove it just because the new way cannot handle it. The perf
> >>>>>>> stat default works well until 6.3-rc7. It's a regression issue of the
> >>>>>>> current perf-tools-next.
> >>>>>>
> >>>>>> I'm not so clear it is a regression to consistently add TopdownL1 for
> >>>>>> all architectures supporting it.
> >>>>>
> >>>>>
> >>>>> Breaking the perf stat default is a regression.
> >>>>
> >>>> Breaking is overstating the use of multiplexing. The impact is less
> >>>> accuracy in the IPC and branch misses default metrics,
> >>>
> >>> Inaccuracy is a breakage for the default.
> >>
> >> Can you present a case where this matters? The events are already not
> >> grouped and so inaccurate for metrics.
> >
> > Removing CPUs without perf metrics from the TopdownL1 metric group is
> > implemented here:
> > https://lore.kernel.org/lkml/20230419005423.343862-6-irogers@google.com/
> > Note, this applies to pre-Icelake and atom CPUs as these also lack
> > perf metric (aka topdown) events.
> >
>
> That may give the end user the impression that the pre-Icelake doesn't
> support the Topdown Level1 events, which is not true.
>
> I think perf should either keep it for all Intel platforms which
> supports tma_L1_group, or remove the TopdownL1 name entirely for Intel
> platform (let the end user use the tma_L1_group and the name exposed by
> the kernel as before.).

How does this work on hybrid systems? We will enable TopdownL1 because
of the presence of perf metric (aka topdown) events but this will also
enable TopdownL1 on the atom core.

>
> > With that change I don't have a case that requires skippable evsels,
> > and so we can take that series with patch 6 over the v1 of that series
> > with this change.
> >
>
> I'm afraid this is not the only problem the commit 94b1a603fca7 ("perf
> stat: Add TopdownL1 metric as a default if present") in the
> perf-tools-next branch introduced.
>
> The topdown L2 in the perf stat default on SPR and big core of the ADL
> is still missed. I don't see a possible fix for this on the current
> perf-tools-next branch.
I thought that, in its current state, the json metrics for TopdownL2 on
SPR have multiplexing. Given L1 is used to drill down to L2, it seems
odd to start on L2; but given L1 is also used to compute the thresholds
for L2, the fix should be to have both L1 and L2 on these platforms.
However, that doesn't work as you don't want multiplexing. This all
seems backward just to avoid potential multiplexing on branch miss rate
and IPC; always having TopdownL1 seems cleanest, with the skippable
evsels working around the permissions issue - as put forward in this
patch. We could possibly add L2 metrics on ADL/SPR, but only once the
multiplexing issue is resolved.

Thanks,
Ian

> Thanks,
> Kan
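
For reference, a minimal way to observe the behavior under discussion
(a sketch, assuming a perf built from perf-tools-next and an Intel CPU
whose JSON metrics include the TopdownL1 metric group):

  # Default perf stat; on perf-tools-next this also computes the
  # TopdownL1 metrics when the metric group exists for the CPU.
  perf stat -a -- sleep 1

  # Explicitly requesting the metric group, the pre-existing way to
  # start a topdown drill-down.
  perf stat -M TopdownL1 -a -- sleep 1

When the requested events cannot all be scheduled on the counters at
once, perf stat prints a percentage in parentheses next to the affected
counts (the fraction of time each event was actually running); that is
the multiplexing referred to above.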