From: Ian Rogers <irogers@google.com>
Date: Wed, 18 May 2022 20:20:04 -0700
Subject: [PATCH 4/5] perf bpf_counter: Tidy use of CPU map index
To: Michael Petlan, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, James Clark,
    Kan Liang, Quentin Monnet, Dave Marchevsky, Zhengjun Xing, Lv Ruyi,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers
Message-Id: <20220519032005.1273691-5-irogers@google.com>
In-Reply-To: <20220519032005.1273691-1-irogers@google.com>
References: <20220519032005.1273691-1-irogers@google.com>
BPF counters typically run across all CPUs, so the CPU map index and the
CPU number are usually the same. With offline CPUs this no longer holds,
so ensure the index passed to perf_counts is valid by explicitly
iterating over the CPU map. This also makes it clearer that users of
perf_counts are using an index. Collapse multiple uses of perf_counts
into single uses.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/bpf_counter.c | 61 ++++++++++++++++++++---------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 3ce8d03cb7ec..d4931f54e1dd 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -224,25 +224,25 @@ static int bpf_program_profiler__disable(struct evsel *evsel)
 
 static int bpf_program_profiler__read(struct evsel *evsel)
 {
-	// perf_cpu_map uses /sys/devices/system/cpu/online
-	int num_cpu = evsel__nr_cpus(evsel);
 	// BPF_MAP_TYPE_PERCPU_ARRAY uses /sys/devices/system/cpu/possible
 	// Sometimes possible > online, like on a Ryzen 3900X that has 24
 	// threads but its possible showed 0-31 -acme
 	int num_cpu_bpf = libbpf_num_possible_cpus();
 	struct bpf_perf_event_value values[num_cpu_bpf];
 	struct bpf_counter *counter;
+	struct perf_counts_values *counts;
 	int reading_map_fd;
 	__u32 key = 0;
-	int err, cpu;
+	int err, idx, bpf_cpu;
 
 	if (list_empty(&evsel->bpf_counter_list))
 		return -EAGAIN;
 
-	for (cpu = 0; cpu < num_cpu; cpu++) {
-		perf_counts(evsel->counts, cpu, 0)->val = 0;
-		perf_counts(evsel->counts, cpu, 0)->ena = 0;
-		perf_counts(evsel->counts, cpu, 0)->run = 0;
+	perf_cpu_map__for_each_idx(idx, evsel__cpus(evsel)) {
+		counts = perf_counts(evsel->counts, idx, 0);
+		counts->val = 0;
+		counts->ena = 0;
+		counts->run = 0;
 	}
 	list_for_each_entry(counter, &evsel->bpf_counter_list, list) {
 		struct bpf_prog_profiler_bpf *skel = counter->skel;
@@ -256,10 +256,15 @@ static int bpf_program_profiler__read(struct evsel *evsel)
 			return err;
 		}
 
-		for (cpu = 0; cpu < num_cpu; cpu++) {
-			perf_counts(evsel->counts, cpu, 0)->val += values[cpu].counter;
-			perf_counts(evsel->counts, cpu, 0)->ena += values[cpu].enabled;
-			perf_counts(evsel->counts, cpu, 0)->run += values[cpu].running;
+		for (bpf_cpu = 0; bpf_cpu < num_cpu_bpf; bpf_cpu++) {
+			idx = perf_cpu_map__idx(evsel__cpus(evsel),
+						(struct perf_cpu){.cpu = bpf_cpu});
+			if (idx == -1)
+				continue;
+			counts = perf_counts(evsel->counts, idx, 0);
+			counts->val += values[bpf_cpu].counter;
+			counts->ena += values[bpf_cpu].enabled;
+			counts->run += values[bpf_cpu].running;
 		}
 	}
 	return 0;
@@ -621,6 +626,7 @@ static int bperf__read(struct evsel *evsel)
 	struct bperf_follower_bpf *skel = evsel->follower_skel;
 	__u32 num_cpu_bpf = cpu__max_cpu().cpu;
 	struct bpf_perf_event_value values[num_cpu_bpf];
+	struct perf_counts_values *counts;
 	int reading_map_fd, err = 0;
 	__u32 i;
 	int j;
@@ -639,29 +645,32 @@ static int bperf__read(struct evsel *evsel)
 		case BPERF_FILTER_GLOBAL:
 			assert(i == 0);
 
-			perf_cpu_map__for_each_cpu(entry, j, all_cpu_map) {
-				cpu = entry.cpu;
-				perf_counts(evsel->counts, cpu, 0)->val = values[cpu].counter;
-				perf_counts(evsel->counts, cpu, 0)->ena = values[cpu].enabled;
-				perf_counts(evsel->counts, cpu, 0)->run = values[cpu].running;
+			perf_cpu_map__for_each_cpu(entry, j, evsel__cpus(evsel)) {
+				counts = perf_counts(evsel->counts, j, 0);
+				counts->val = values[entry.cpu].counter;
+				counts->ena = values[entry.cpu].enabled;
+				counts->run = values[entry.cpu].running;
 			}
 			break;
 		case BPERF_FILTER_CPU:
-			cpu = evsel->core.cpus->map[i].cpu;
-			perf_counts(evsel->counts, i, 0)->val = values[cpu].counter;
-			perf_counts(evsel->counts, i, 0)->ena = values[cpu].enabled;
-			perf_counts(evsel->counts, i, 0)->run = values[cpu].running;
+			cpu = perf_cpu_map__cpu(evsel__cpus(evsel), i).cpu;
+			assert(cpu >= 0);
+			counts = perf_counts(evsel->counts, i, 0);
+			counts->val = values[cpu].counter;
+			counts->ena = values[cpu].enabled;
+			counts->run = values[cpu].running;
 			break;
 		case BPERF_FILTER_PID:
 		case BPERF_FILTER_TGID:
-			perf_counts(evsel->counts, 0, i)->val = 0;
-			perf_counts(evsel->counts, 0, i)->ena = 0;
-			perf_counts(evsel->counts, 0, i)->run = 0;
+			counts = perf_counts(evsel->counts, 0, i);
+			counts->val = 0;
+			counts->ena = 0;
+			counts->run = 0;
 
 			for (cpu = 0; cpu < num_cpu_bpf; cpu++) {
-				perf_counts(evsel->counts, 0, i)->val += values[cpu].counter;
-				perf_counts(evsel->counts, 0, i)->ena += values[cpu].enabled;
-				perf_counts(evsel->counts, 0, i)->run += values[cpu].running;
+				counts->val += values[cpu].counter;
+				counts->ena += values[cpu].enabled;
+				counts->run += values[cpu].running;
 			}
 			break;
 		default:
-- 
2.36.1.124.g0e6072fb45-goog
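
As an aside for readers following along, below is a minimal standalone
sketch of the CPU-number vs. map-index skew the patch guards against. It
is not part of the patch: the "0,2,3" map is a hypothetical online set,
the build line assumes an installed libperf, and because the in-tree
perf_cpu_map__idx() helper is internal to the perf tools, an equivalent
lookup is open-coded here from the public libperf cpumap API.

	/*
	 * sketch.c - illustrates why a raw CPU number is not a valid
	 * perf_counts index once CPUs are offline.
	 * Hypothetical build line: gcc sketch.c -o sketch -lperf
	 */
	#include <stdio.h>
	#include <perf/cpumap.h>

	/* Return the map index holding @cpu, or -1 if @cpu is absent,
	 * mirroring what perf_cpu_map__idx() does inside the perf tree. */
	static int cpu_to_idx(struct perf_cpu_map *map, int cpu)
	{
		int idx;

		for (idx = 0; idx < perf_cpu_map__nr(map); idx++) {
			if (perf_cpu_map__cpu(map, idx).cpu == cpu)
				return idx;
		}
		return -1;
	}

	int main(void)
	{
		/* Hypothetical online set with CPU 1 offline. */
		struct perf_cpu_map *map = perf_cpu_map__new("0,2,3");

		if (!map)
			return 1;

		/*
		 * CPU 2 sits at index 1 and CPU 3 at index 2, so indexing
		 * a 3-entry counts array by raw CPU number would run off
		 * the end for CPU 3.  Offline CPU 1 has no index at all,
		 * which is what the patch's "if (idx == -1) continue;"
		 * guard skips over.
		 */
		printf("cpu 2 -> idx %d\n", cpu_to_idx(map, 2)); /* 1 */
		printf("cpu 3 -> idx %d\n", cpu_to_idx(map, 3)); /* 2 */
		printf("cpu 1 -> idx %d\n", cpu_to_idx(map, 1)); /* -1 */

		perf_cpu_map__put(map);
		return 0;
	}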