From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 Apr 2022 18:46:40 -0700
Message-Id: <20220414014642.3308206-1-irogers@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.36.0.rc0.470.gd361397f0d-goog
Subject: [PATCH v2 1/3] perf record: Fix per-thread option.
From: Ian Rogers
To: Alexey Bayduraev, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Alexey Bayduraev, Andi Kleen, Riccardo Mancini,
    linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Cc: Stephane Eranian
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-perf-users@vger.kernel.org

From: Alexey Bayduraev

Per-thread mode doesn't have specific CPUs for events; add checks for
this case. Minor fix to a pr_debug by Ian Rogers to avoid an
out-of-bounds array access.
Reported-by: Ian Rogers
Fixes: 7954f71689f9 ("perf record: Introduce thread affinity and mmap masks")
Signed-off-by: Ian Rogers
Signed-off-by: Alexey Bayduraev
---
 tools/perf/builtin-record.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index ba74fab02e62..069825c48d40 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -989,8 +989,11 @@ static int record__thread_data_init_maps(struct record_thread *thread_data, stru
 	struct mmap *overwrite_mmap = evlist->overwrite_mmap;
 	struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
 
-	thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits,
-					      thread_data->mask->maps.nbits);
+	if (cpu_map__is_dummy(cpus))
+		thread_data->nr_mmaps = nr_mmaps;
+	else
+		thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits,
+						      thread_data->mask->maps.nbits);
 	if (mmap) {
 		thread_data->maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
 		if (!thread_data->maps)
@@ -1007,16 +1010,17 @@ static int record__thread_data_init_maps(struct record_thread *thread_data, stru
 		  thread_data->nr_mmaps, thread_data->maps, thread_data->overwrite_maps);
 
 	for (m = 0, tm = 0; m < nr_mmaps && tm < thread_data->nr_mmaps; m++) {
-		if (test_bit(cpus->map[m].cpu, thread_data->mask->maps.bits)) {
+		if (cpu_map__is_dummy(cpus) ||
+		    test_bit(cpus->map[m].cpu, thread_data->mask->maps.bits)) {
 			if (thread_data->maps) {
 				thread_data->maps[tm] = &mmap[m];
 				pr_debug2("thread_data[%p]: cpu%d: maps[%d] -> mmap[%d]\n",
-					  thread_data, cpus->map[m].cpu, tm, m);
+					  thread_data, perf_cpu_map__cpu(cpus, m).cpu, tm, m);
 			}
 			if (thread_data->overwrite_maps) {
 				thread_data->overwrite_maps[tm] = &overwrite_mmap[m];
 				pr_debug2("thread_data[%p]: cpu%d: ow_maps[%d] -> ow_mmap[%d]\n",
-					  thread_data, cpus->map[m].cpu, tm, m);
+					  thread_data, perf_cpu_map__cpu(cpus, m).cpu, tm, m);
 			}
 			tm++;
 		}
@@ -3329,6 +3333,9 @@ static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_c
 {
 	int c;
 
+	if (cpu_map__is_dummy(cpus))
+		return;
+
 	for (c = 0; c < cpus->nr; c++)
 		set_bit(cpus->map[c].cpu, mask->bits);
 }
@@ -3680,6 +3687,11 @@ static int record__init_thread_masks(struct record *rec)
 	if (!record__threads_enabled(rec))
 		return record__init_thread_default_masks(rec, cpus);
 
+	if (cpu_map__is_dummy(cpus)) {
+		pr_err("--per-thread option is mutually exclusive to parallel streaming mode.\n");
+		return -EINVAL;
+	}
+
 	switch (rec->opts.threads_spec) {
 	case THREAD_SPEC__CPU:
 		ret = record__init_thread_cpu_masks(rec, cpus);
-- 
2.36.0.rc0.470.gd361397f0d-goog