From: Valentin Schneider <vschneid@redhat.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org, Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Mel Gorman <mgorman@suse.de>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Heiko Carstens <hca@linux.ibm.com>,
Tony Luck <tony.luck@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Gal Pressman <gal@nvidia.com>, Tariq Toukan <tariqt@nvidia.com>,
Jesse Brandeburg <jesse.brandeburg@intel.com>
Subject: Re: [PATCH v4 5/7] sched/topology: Introduce sched_numa_hop_mask()
Date: Tue, 27 Sep 2022 17:45:15 +0100 [thread overview]
Message-ID: <xhsmhfsgc4vhg.mognet@vschneid.remote.csb> (raw)
In-Reply-To: <YzCYXEytXy8UJQFv@yury-laptop>
On 25/09/22 11:05, Yury Norov wrote:
> On Fri, Sep 23, 2022 at 04:55:40PM +0100, Valentin Schneider wrote:
>> +const struct cpumask *sched_numa_hop_mask(int node, int hops)
>> +{
>> + struct cpumask ***masks = rcu_dereference(sched_domains_numa_masks);
>> +
>> + if (node == NUMA_NO_NODE && !hops)
>> + return cpu_online_mask;
>> +
>> + if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
>> + return ERR_PTR(-EINVAL);
>
> This looks like a sanity check. If so, it should go before the snippet
> above, so that client code would behave consistently.
>
nr_node_ids is unsigned, so NUMA_NO_NODE (-1) gets promoted to a huge
unsigned value in the comparison and -1 >= nr_node_ids evaluates to true.
That's why the NUMA_NO_NODE check has to come before the range check.
>> +
>> + if (!masks)
>> + return NULL;
>
> In (node == NUMA_NO_NODE && !hops) case you return online cpus. Here
> you return NULL just to convert it to cpu_online_mask in the caller.
> This looks inconsistent. So, together with the above comment, this
> makes me feel that you'd do it like this:
>
> const struct cpumask *sched_numa_hop_mask(int node, int hops)
> {
> struct cpumask ***masks;
>
> if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
> {
> #ifdef CONFIG_SCHED_DEBUG
> pr_err(...);
> #endif
> return ERR_PTR(-EINVAL);
> }
>
> if (node == NUMA_NO_NODE && !hops)
> return cpu_online_mask; /* or NULL */
>
> masks = rcu_dereference(sched_domains_numa_masks);
> if (!masks)
> return cpu_online_mask; /* or NULL */
>
> return masks[hops][node];
> }
If we're being pedantic, sched_numa_hop_mask() shouldn't return
cpu_online_mask in those cases, but that was the least horrible
option I found to get something sensible for the NUMA_NO_NODE /
!CONFIG_NUMA case. I might be able to handle this better with your
suggestion of having a mask iterator.
Thread overview: 25+ messages
2022-09-23 13:25 [PATCH v4 0/7] sched, net: NUMA-aware CPU spreading interface Valentin Schneider
2022-09-23 13:25 ` [PATCH v4 1/7] lib/find_bit: Introduce find_next_andnot_bit() Valentin Schneider
2022-09-23 15:44 ` [PATCH v4 0/7] sched, net: NUMA-aware CPU spreading interface Yury Norov
2022-09-23 15:49 ` Valentin Schneider
2022-09-23 15:55 ` [PATCH v4 2/7] cpumask: Introduce for_each_cpu_andnot() Valentin Schneider
2022-09-25 15:23 ` Yury Norov
2022-09-27 16:45 ` Valentin Schneider
2022-09-27 20:02 ` Yury Norov
2022-09-23 15:55 ` [PATCH v4 3/7] lib/test_cpumask: Add for_each_cpu_and(not) tests Valentin Schneider
2022-09-23 15:55 ` [PATCH v4 4/7] sched/core: Merge cpumask_andnot()+for_each_cpu() into for_each_cpu_andnot() Valentin Schneider
2022-09-23 15:55 ` [PATCH v4 5/7] sched/topology: Introduce sched_numa_hop_mask() Valentin Schneider
2022-09-25 15:00 ` Yury Norov
2022-09-25 15:24 ` Yury Norov
2022-09-27 16:45 ` Valentin Schneider
2022-09-27 19:30 ` Yury Norov
2022-09-25 18:05 ` Yury Norov
2022-09-25 18:13 ` Yury Norov
2022-09-27 16:45 ` Valentin Schneider [this message]
2022-09-23 15:55 ` [PATCH v4 6/7] sched/topology: Introduce for_each_numa_hop_cpu() Valentin Schneider
2022-09-25 14:58 ` Yury Norov
2022-09-27 16:45 ` Valentin Schneider
2022-09-23 15:55 ` [PATCH v4 7/7] net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity hints Valentin Schneider
2022-09-25 7:48 ` [PATCH v4 0/7] sched, net: NUMA-aware CPU spreading interface Tariq Toukan
2022-10-18 6:36 ` Tariq Toukan
2022-10-18 16:50 ` Valentin Schneider