From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
Cc: sathnaga@linux.vnet.ibm.com, mingo@kernel.org,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
srikar@linux.vnet.ibm.com, bala24@linux.vnet.ibm.com
Subject: Re: [PATCH v3 2/2] perf/bench/numa: Handle discontiguous/sparse numa nodes
Date: Tue, 31 Oct 2017 12:26:57 -0300
Message-ID: <20171031152657.GU7045@kernel.org>
In-Reply-To: <20171031151658.clq6qmdfw3gj6afg@naverao1-tp.localdomain>

On Tue, Oct 31, 2017 at 08:46:58PM +0530, Naveen N. Rao wrote:
> On 2017/08/21 10:17AM, sathnaga@linux.vnet.ibm.com wrote:
> > From: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
> >
> > Certain systems are designed to have sparse/discontiguous nodes.
> > On such systems, perf bench numa hangs, shows the wrong number of
> > nodes and shows values for non-existent nodes. Handle this by only
> > taking nodes that are exposed by the kernel to userspace.
> >
> > Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> > Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> > Signed-off-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
> > Signed-off-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
> > ---
> > tools/perf/bench/numa.c | 17 ++++++++++-------
> > 1 file changed, 10 insertions(+), 7 deletions(-)
> >
> > diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
> > index 2483174..d4cccc4 100644
> > --- a/tools/perf/bench/numa.c
> > +++ b/tools/perf/bench/numa.c
> > @@ -287,12 +287,12 @@ static cpu_set_t bind_to_cpu(int target_cpu)
> >
> > static cpu_set_t bind_to_node(int target_node)
> > {
> > - int cpus_per_node = g->p.nr_cpus/g->p.nr_nodes;
> > + int cpus_per_node = g->p.nr_cpus/nr_numa_nodes();
> > cpu_set_t orig_mask, mask;
> > int cpu;
> > int ret;
> >
> > - BUG_ON(cpus_per_node*g->p.nr_nodes != g->p.nr_cpus);
> > + BUG_ON(cpus_per_node*nr_numa_nodes() != g->p.nr_cpus);
> > BUG_ON(!cpus_per_node);
> >
> > ret = sched_getaffinity(0, sizeof(orig_mask), &orig_mask);
> > @@ -692,7 +692,7 @@ static int parse_setup_node_list(void)
> > int i;
> >
> > for (i = 0; i < mul; i++) {
> > - if (t >= g->p.nr_tasks) {
> > + if (t >= g->p.nr_tasks || !node_has_cpus(bind_node)) {
> > printf("\n# NOTE: ignoring bind NODEs starting at NODE#%d\n", bind_node);
> > goto out;
> > }
> > @@ -973,6 +973,7 @@ static void calc_convergence(double runtime_ns_max, double *convergence)
> > int node;
> > int cpu;
> > int t;
> > + int processes;
> >
> > if (!g->p.show_convergence && !g->p.measure_convergence)
> > return;
> > @@ -1007,13 +1008,14 @@ static void calc_convergence(double runtime_ns_max, double *convergence)
> > sum = 0;
> >
> > for (node = 0; node < g->p.nr_nodes; node++) {
> > + if (!is_node_present(node))
> > + continue;
> > nr = nodes[node];
> > nr_min = min(nr, nr_min);
> > nr_max = max(nr, nr_max);
> > sum += nr;
> > }
> > BUG_ON(nr_min > nr_max);
> > -
>
> Looks like an unnecessary change there.

Right, and I would leave the 'int processes' declaration where it is, as
it is not used outside that loop.

The move of that declaration to the top of the calc_convergence()
function made me spend some cycles trying to figure out why that was
done, only to realize that it was an unnecessary change :-\
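
I.e., keeping the new is_node_present() check but leaving the
declaration scoped to the loop, something like (untested sketch):

	for (node = 0; node < g->p.nr_nodes; node++) {
		int processes;	/* only used inside this loop body */

		if (!is_node_present(node))
			continue;

		processes = count_node_processes(node);
		nr = nodes[node];
		tprintf(" %2d/%-2d", nr, processes);
		/* ... rest of the loop body unchanged ... */
	}

That keeps the scope as tight as before and makes the hunk smaller.
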
> - Naveen
>
> > BUG_ON(sum > g->p.nr_tasks);
> >
> > if (0 && (sum < g->p.nr_tasks))
> > @@ -1027,8 +1029,9 @@ static void calc_convergence(double runtime_ns_max, double *convergence)
> > process_groups = 0;
> >
> > for (node = 0; node < g->p.nr_nodes; node++) {
> > - int processes = count_node_processes(node);
> > -
> > + if (!is_node_present(node))
> > + continue;
> > + processes = count_node_processes(node);
> > nr = nodes[node];
> > tprintf(" %2d/%-2d", nr, processes);
> >
> > @@ -1334,7 +1337,7 @@ static void print_summary(void)
> >
> > printf("\n ###\n");
> > printf(" # %d %s will execute (on %d nodes, %d CPUs):\n",
> > - g->p.nr_tasks, g->p.nr_tasks == 1 ? "task" : "tasks", g->p.nr_nodes, g->p.nr_cpus);
> > + g->p.nr_tasks, g->p.nr_tasks == 1 ? "task" : "tasks", nr_numa_nodes(), g->p.nr_cpus);
> > printf(" # %5dx %5ldMB global shared mem operations\n",
> > g->p.nr_loops, g->p.bytes_global/1024/1024);
> > printf(" # %5dx %5ldMB process shared mem operations\n",
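
For reference, the nr_numa_nodes(), is_node_present() and
node_has_cpus() helpers used above are introduced by patch 1/2 of this
series, which is not quoted here. Roughly, helpers like these can be
built on libnuma's numa_nodes_ptr bitmask; an untested sketch (assumes
<numa.h> and linking with -lnuma):

	#include <stdbool.h>
	#include <numa.h>

	/* Count only the nodes the kernel actually exposes */
	static int nr_numa_nodes(void)
	{
		int node, nr_nodes = 0;

		for (node = 0; node <= numa_max_node(); node++) {
			if (numa_bitmask_isbitset(numa_nodes_ptr, node))
				nr_nodes++;
		}

		return nr_nodes;
	}

	/* A node is "present" if the kernel exposes it to userspace */
	static bool is_node_present(int node)
	{
		return numa_bitmask_isbitset(numa_nodes_ptr, node);
	}

	/* True if at least one CPU belongs to the given node */
	static bool node_has_cpus(int node)
	{
		struct bitmask *cpumask = numa_allocate_cpumask();
		bool ret = false;
		unsigned int cpu;

		if (cpumask && !numa_node_to_cpus(node, cpumask)) {
			for (cpu = 0; cpu < cpumask->size; cpu++) {
				if (numa_bitmask_isbitset(cpumask, cpu)) {
					ret = true;
					break;
				}
			}
		}

		if (cpumask)
			numa_free_cpumask(cpumask);

		return ret;
	}

With helpers along these lines, every loop over g->p.nr_nodes can skip
the holes in a sparse node numbering, which is what this patch relies on.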