From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 28 Apr 2023 22:34:46 -0700
In-Reply-To: <20230429053506.1962559-1-irogers@google.com>
Message-Id: <20230429053506.1962559-27-irogers@google.com>
Mime-Version: 1.0
References: <20230429053506.1962559-1-irogers@google.com>
X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog
Subject: [PATCH v3 26/46] perf parse-events: Wildcard legacy cache events
From: Ian Rogers <irogers@google.com>
To: Arnaldo Carvalho de Melo, Kan Liang, Ahmad Yasin, Peter Zijlstra,
    Ingo Molnar, Stephane Eranian, Andi Kleen, Perry Taylor, Samantha Alt,
    Caleb Biggers, Weilin Wang, Edward Baker, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Adrian Hunter,
    Florian Fischer, Rob Herring, Zhengjun Xing, John Garry, Kajol Jain,
    Sumanth Korikkar, Thomas Richter, Tiezhu Yang,
    Ravi Bangoria, Leo Yan, Yang Jihong, James Clark, Suzuki Poulouse,
    Kang Minchul, Athira Rajeev, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Ian Rogers <irogers@google.com>
Content-Type: text/plain; charset="UTF-8"

It is inconsistent that "perf stat -e instructions-retired" wildcard
opens on all PMUs while legacy cache events like "perf stat -e
L1-dcache-load-miss" do not. A behavior introduced by hybrid is that a
legacy cache event like L1-dcache-load-miss should wildcard open on
all hybrid PMUs.

Previously hybrid would call is_event_supported() for each PMU and a
failure would result in the event not being added. That check isn't
done here, as the parser should just create the perf_event_attr; the
later open should then fail, or the counter read "<not supported>".
If that is not wanted, the PMU can be named explicitly with the event.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/parse-events-hybrid.c | 33 -------------
 tools/perf/util/parse-events-hybrid.h |  7 ---
 tools/perf/util/parse-events.c        | 68 ++++++++++++++-------------
 tools/perf/util/parse-events.h        |  3 +-
 tools/perf/util/parse-events.y        |  2 +-
 5 files changed, 37 insertions(+), 76 deletions(-)

diff --git a/tools/perf/util/parse-events-hybrid.c b/tools/perf/util/parse-events-hybrid.c
index 7c9f9150bad5..d2c0be051d46 100644
--- a/tools/perf/util/parse-events-hybrid.c
+++ b/tools/perf/util/parse-events-hybrid.c
@@ -179,36 +179,3 @@ int parse_events__add_numeric_hybrid(struct parse_events_state *parse_state,
 	return add_raw_hybrid(parse_state, list, attr, name, metric_id,
 			      config_terms);
 }
-
-int parse_events__add_cache_hybrid(struct list_head *list, int *idx,
-				   struct perf_event_attr *attr,
-				   const char *name,
-				   const char *metric_id,
-				   struct list_head *config_terms,
-				   bool *hybrid,
-				   struct parse_events_state *parse_state)
-{
-	struct perf_pmu *pmu;
-	int ret;
-
-	*hybrid = false;
-	if (!perf_pmu__has_hybrid())
-		return 0;
-
-	*hybrid = true;
-	perf_pmu__for_each_hybrid_pmu(pmu) {
-		LIST_HEAD(terms);
-
-		if (pmu_cmp(parse_state, pmu))
-			continue;
-
-		copy_config_terms(&terms, config_terms);
-		ret = create_event_hybrid(PERF_TYPE_HW_CACHE, idx, list,
-					  attr, name, metric_id, &terms, pmu);
-		free_config_terms(&terms);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
diff --git a/tools/perf/util/parse-events-hybrid.h b/tools/perf/util/parse-events-hybrid.h
index cbc05fec02a2..bc2966e73897 100644
--- a/tools/perf/util/parse-events-hybrid.h
+++ b/tools/perf/util/parse-events-hybrid.h
@@ -15,11 +15,4 @@ int parse_events__add_numeric_hybrid(struct parse_events_state *parse_state,
 				     struct list_head *config_terms,
 				     bool *hybrid);
 
-int parse_events__add_cache_hybrid(struct list_head *list, int *idx,
-				   struct perf_event_attr *attr,
-				   const char *name, const char *metric_id,
-				   struct list_head *config_terms,
-				   bool *hybrid,
-				   struct parse_events_state *parse_state);
-
 #endif /* __PERF_PARSE_EVENTS_HYBRID_H */
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index f692dd953593..9f2bbf8f3a81 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -472,46 +472,48 @@ static int parse_events__decode_legacy_cache(const char *name, int pmu_type, __u
 
 int parse_events_add_cache(struct list_head *list, int *idx, const char *name,
 			   struct parse_events_error *err,
-			   struct list_head *head_config,
-			   struct parse_events_state *parse_state)
+			   struct list_head *head_config)
 {
-	struct perf_event_attr attr;
-	LIST_HEAD(config_terms);
-	const char *config_name, *metric_id;
-	int ret;
-	bool hybrid;
+	struct perf_pmu *pmu = NULL;
+	bool found_supported = false;
+	const char *config_name = get_config_name(head_config);
+	const char *metric_id = get_config_metric_id(head_config);
+	while ((pmu = perf_pmu__scan(pmu)) != NULL) {
+		LIST_HEAD(config_terms);
+		struct perf_event_attr attr;
+		int ret;
 
-	memset(&attr, 0, sizeof(attr));
-	attr.type = PERF_TYPE_HW_CACHE;
-	ret = parse_events__decode_legacy_cache(name, /*pmu_type=*/0, &attr.config);
-	if (ret)
-		return ret;
+		/* Skip unsupported PMUs. */
+		if (!perf_pmu__supports_legacy_cache(pmu))
+			continue;
 
-	if (head_config) {
-		if (config_attr(&attr, head_config, err,
-				config_term_common))
-			return -EINVAL;
+		memset(&attr, 0, sizeof(attr));
+		attr.type = PERF_TYPE_HW_CACHE;
 
-		if (get_config_terms(head_config, &config_terms))
-			return -ENOMEM;
-	}
+		ret = parse_events__decode_legacy_cache(name, pmu->type, &attr.config);
+		if (ret)
+			return ret;
 
-	config_name = get_config_name(head_config);
-	metric_id = get_config_metric_id(head_config);
-	ret = parse_events__add_cache_hybrid(list, idx, &attr,
-					     config_name ? : name,
-					     metric_id,
-					     &config_terms,
-					     &hybrid, parse_state);
-	if (hybrid)
-		goto out_free_terms;
+		found_supported = true;
 
-	ret = add_event(list, idx, &attr, config_name ? : name, metric_id,
-			&config_terms);
-out_free_terms:
-	free_config_terms(&config_terms);
-	return ret;
+		if (head_config) {
+			if (config_attr(&attr, head_config, err,
+					config_term_common))
+				return -EINVAL;
+
+			if (get_config_terms(head_config, &config_terms))
+				return -ENOMEM;
+		}
+
+		if (__add_event(list, idx, &attr, /*init_attr*/true, config_name ?: name,
+				metric_id, pmu, &config_terms, /*auto_merge_stats=*/false,
+				/*cpu_list=*/NULL) == NULL)
+			return -ENOMEM;
+
+		free_config_terms(&config_terms);
+	}
+	return found_supported ? 0 : -EINVAL;
 }
 
 #ifdef HAVE_LIBTRACEEVENT
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index 5acb62c2e00a..0c26303f7f63 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -172,8 +172,7 @@ int parse_events_add_tool(struct parse_events_state *parse_state,
 			  int tool_event);
 int parse_events_add_cache(struct list_head *list, int *idx, const char *name,
 			   struct parse_events_error *error,
-			   struct list_head *head_config,
-			   struct parse_events_state *parse_state);
+			   struct list_head *head_config);
 int parse_events_add_breakpoint(struct list_head *list, int *idx,
 				u64 addr, char *type, u64 len);
 int parse_events_add_pmu(struct parse_events_state *parse_state,
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index f84fa1b132b3..cc7528558845 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -476,7 +476,7 @@ PE_LEGACY_CACHE opt_event_config
 
 	list = alloc_list();
 	ABORT_ON(!list);
-	err = parse_events_add_cache(list, &parse_state->idx, $1, error, $2, parse_state);
+	err = parse_events_add_cache(list, &parse_state->idx, $1, error, $2);
 	parse_events_terms__delete($2);
 	free($1);
 
-- 
2.40.1.495.gc816e09b53d-goog
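
As a usage illustration of the intended behavior (a sketch only, not
verified output; the PMU names cpu_core and cpu_atom are assumed, as
on a typical Intel hybrid system): with this change a legacy cache
event wildcard-opens on every PMU that supports legacy cache events,
and naming the PMU with the event restricts it to that PMU again:

  # Wildcard: the event is opened once per supporting PMU
  # (e.g. cpu_core and cpu_atom).
  perf stat -e L1-dcache-load-miss -a sleep 1

  # Restrict to a single PMU by naming it with the event.
  perf stat -e cpu_core/L1-dcache-load-miss/ -a sleep 1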