From: Jiri Olsa <jolsa@redhat.com>
To: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Namhyung Kim <namhyung@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
Date: Thu, 10 Jan 2019 10:54:50 +0100 [thread overview]
Message-ID: <20190110095450.GB25764@krava> (raw)
In-Reply-To: <93fedb49-f4fc-8153-2920-5b6b107bbca2@linux.intel.com>
On Thu, Jan 10, 2019 at 12:41:55PM +0300, Alexey Budankov wrote:
> On 09.01.2019 19:53, Jiri Olsa wrote:
> > On Wed, Jan 09, 2019 at 12:38:23PM +0300, Alexey Budankov wrote:
> >
> > SNIP
> >
> >> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> >> index e5220790f1fb..ee0230eed635 100644
> >> --- a/tools/perf/util/mmap.c
> >> +++ b/tools/perf/util/mmap.c
> >> @@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
> >> auxtrace_mmap__munmap(&map->auxtrace_mmap);
> >> }
> >>
> >> +static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
> >> +{
> >> + int c, cpu, nr_cpus, node;
> >> +
> >> + CPU_ZERO(&map->affinity_mask);
> >> + if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
> >> + nr_cpus = cpu_map__nr(mp->cpu_map);
> >> + node = cpu__get_node(map->cpu);
> >> + for (c = 0; c < nr_cpus; c++) {
> >> + cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
> >> + if (cpu__get_node(cpu) == node)
> >> + CPU_SET(cpu, &map->affinity_mask);
> >
> > should we do that for all possible cpus the task (perf record)
> > can run on, instead of mp->cpu_map, which might be only a subset
> > (-C ... option)?
>
> Right, that is how it should be. Because mp->cpu_map depends on the -C option
> value in this version of the patch set, it needs to be corrected, possibly like this:
>
> struct mmap_params mp = {
> .nr_cblocks = nr_cblocks,
> .affinity = affinity,
> .cpu_map = cpu_map__new(NULL) /* builds struct cpu_map from /sys/devices/system/cpu/online */
> };
> and
> if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1 && mp->cpu_map)
>
> Thanks!
>
> >
> > also, the node -> cpu_map is a static configuration; we could prepare
> > this map ahead of time (like cpunode_map) and just assign it here
> > based on the node index
>
> That makes sense, and either way is possible. However, the static configuration
> looks a bit trickier because it duplicates the mask objects, while the conversion
> from struct cpu_map to cpu_set_t remains the same either way.
ok, please at least put that node mask creation into a separate function

thanks,
jirka
Thread overview: 14+ messages
2019-01-09 9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
2019-01-09 9:35 ` [PATCH v3 1/4] perf record: allocate affinity masks Alexey Budankov
2019-01-09 9:37 ` [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes Alexey Budankov
2019-01-09 15:58 ` Jiri Olsa
2019-01-09 16:58 ` Alexey Budankov
2019-01-09 9:38 ` [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers Alexey Budankov
2019-01-09 16:53 ` Jiri Olsa
2019-01-10 9:41 ` Alexey Budankov
2019-01-10 9:54 ` Jiri Olsa [this message]
2019-01-10 10:19 ` Alexey Budankov
2019-01-09 9:40 ` [PATCH v3 4/4] perf record: implement --affinity=node|cpu option Alexey Budankov
2019-01-09 14:41 ` [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Jiri Olsa
2019-01-09 15:51 ` Jiri Olsa
2019-01-09 16:11 ` Alexey Budankov