From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 28 Apr 2023 22:34:25 -0700
In-Reply-To: <20230429053506.1962559-1-irogers@google.com>
Message-Id: <20230429053506.1962559-6-irogers@google.com>
Mime-Version: 1.0
References: <20230429053506.1962559-1-irogers@google.com>
X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog
Subject: [PATCH v3 05/46] perf parse-events: Don't reorder ungrouped events by pmu
From: Ian Rogers
To: Arnaldo Carvalho de Melo, Kan Liang, Ahmad Yasin, Peter Zijlstra,
	Ingo Molnar, Stephane Eranian, Andi Kleen, Perry Taylor,
	Samantha Alt, Caleb Biggers, Weilin Wang, Edward Baker,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Adrian Hunter, Florian Fischer, Rob Herring, Zhengjun Xing,
	John Garry, Kajol Jain, Sumanth Korikkar, Thomas Richter,
	Tiezhu Yang, Ravi Bangoria, Leo Yan, Yang Jihong, James Clark,
	Suzuki Poulouse, Kang Minchul, Athira Rajeev,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Ian Rogers
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-perf-users@vger.kernel.org

pmu_group_name by default returns "cpu", which on non-hybrid/ARM means
that ungrouped software and hardware events all sort by their original
insertion index. However, on hybrid and ARM, wildcard expansion may
mean the PMU name is set and events will be unnecessarily reordered,
triggering the reordering warning.

Signed-off-by: Ian Rogers
---
 tools/perf/util/parse-events.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index d71019dcd614..34ba840ae19a 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -2140,25 +2140,32 @@ static int evlist__cmp(void *state, const struct list_head *l, const struct list
 	int *leader_idx = state;
 	int lhs_leader_idx = *leader_idx, rhs_leader_idx = *leader_idx, ret;
 	const char *lhs_pmu_name, *rhs_pmu_name;
+	bool lhs_has_group = false, rhs_has_group = false;
 
 	/*
 	 * First sort by grouping/leader. Read the leader idx only if the evsel
 	 * is part of a group, as -1 indicates no group.
 	 */
-	if (lhs_core->leader != lhs_core || lhs_core->nr_members > 1)
+	if (lhs_core->leader != lhs_core || lhs_core->nr_members > 1) {
+		lhs_has_group = true;
 		lhs_leader_idx = lhs_core->leader->idx;
-	if (rhs_core->leader != rhs_core || rhs_core->nr_members > 1)
+	}
+	if (rhs_core->leader != rhs_core || rhs_core->nr_members > 1) {
+		rhs_has_group = true;
 		rhs_leader_idx = rhs_core->leader->idx;
+	}
 
 	if (lhs_leader_idx != rhs_leader_idx)
 		return lhs_leader_idx - rhs_leader_idx;
 
-	/* Group by PMU. Groups can't span PMUs. */
-	lhs_pmu_name = evsel__group_pmu_name(lhs);
-	rhs_pmu_name = evsel__group_pmu_name(rhs);
-	ret = strcmp(lhs_pmu_name, rhs_pmu_name);
-	if (ret)
-		return ret;
+	/* Group by PMU if there is a group. Groups can't span PMUs. */
+	if (lhs_has_group && rhs_has_group) {
+		lhs_pmu_name = evsel__group_pmu_name(lhs);
+		rhs_pmu_name = evsel__group_pmu_name(rhs);
+		ret = strcmp(lhs_pmu_name, rhs_pmu_name);
+		if (ret)
+			return ret;
+	}
 
 	/* Architecture specific sorting. */
 	return arch_evlist__cmp(lhs, rhs);
-- 
2.40.1.495.gc816e09b53d-goog
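
The behaviour change is easiest to see with a small standalone sketch of the
sorting policy. The program below is not perf code: struct fake_evsel, its
fields and the sample events are illustrative stand-ins for struct evsel and
for a hybrid wildcard expansion (cpu_core/cpu_atom). It only demonstrates the
rule the patch adopts: compare PMU names when both events are grouped,
otherwise keep the original command-line order.

/*
 * Standalone sketch (not perf code) of the comparator policy: only
 * grouped events are gathered by PMU name; ungrouped events fall back
 * to their insertion index, so a hybrid wildcard expansion such as
 * cycles -> cpu_core/cycles/ + cpu_atom/cycles/ is not reordered.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_evsel {
	const char *name;
	const char *pmu_name;	/* e.g. "cpu_core" or "cpu_atom" on hybrid */
	int idx;		/* original insertion (command-line) order */
	int leader_idx;		/* -1 means the event is not in a group */
};

static int fake_evsel_cmp(const void *a, const void *b)
{
	const struct fake_evsel *lhs = a, *rhs = b;
	int ret;

	/* Grouped events sort by their leader; ungrouped ones tie here. */
	if (lhs->leader_idx != rhs->leader_idx)
		return lhs->leader_idx - rhs->leader_idx;

	/*
	 * Compare PMU names only when both events are grouped, mirroring
	 * the lhs_has_group/rhs_has_group check added by the patch.
	 * Comparing unconditionally would move the cpu_atom events ahead
	 * of the cpu_core event in main() below.
	 */
	if (lhs->leader_idx != -1 && rhs->leader_idx != -1) {
		ret = strcmp(lhs->pmu_name, rhs->pmu_name);
		if (ret)
			return ret;
	}

	/* Fall back to the original insertion order. */
	return lhs->idx - rhs->idx;
}

int main(void)
{
	/* Three ungrouped events, as wildcard expansion might create them. */
	struct fake_evsel evsels[] = {
		{ "cycles",       "cpu_core", 0, -1 },
		{ "instructions", "cpu_atom", 1, -1 },
		{ "cycles",       "cpu_atom", 2, -1 },
	};
	size_t n = sizeof(evsels) / sizeof(evsels[0]);

	qsort(evsels, n, sizeof(evsels[0]), fake_evsel_cmp);

	/* Prints the events in their original 0, 1, 2 order. */
	for (size_t i = 0; i < n; i++)
		printf("%s/%s/ (idx %d)\n",
		       evsels[i].pmu_name, evsels[i].name, evsels[i].idx);
	return 0;
}

Dropping the leader_idx != -1 check in the sketch reproduces the pre-patch
behaviour: strcmp("cpu_atom", "cpu_core") < 0 pulls both cpu_atom events
ahead of the first cycles event, which corresponds to the unnecessary
reordering (and the reordering warning) described in the commit message.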