From: Waiman Long <waiman.long@hpe.com>
To: Boqun Feng <boqun.feng@gmail.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
"J. Bruce Fields" <bfields@fieldses.org>,
Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@linux-foundation.org>,
<linux-fsdevel@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Andi Kleen <andi@firstfloor.org>,
Dave Chinner <dchinner@redhat.com>,
Scott J Norton <scott.norton@hpe.com>,
Douglas Hatch <doug.hatch@hpe.com>
Subject: Re: [RFC PATCH v2 6/7] lib/persubnode: Introducing a simple per-subnode APIs
Date: Tue, 12 Jul 2016 14:37:51 -0400
Message-ID: <578538FF.7040306@hpe.com>
In-Reply-To: <20160712031456.GA5885@insomnia>
On 07/11/2016 11:14 PM, Boqun Feng wrote:
> On Mon, Jul 11, 2016 at 01:32:11PM -0400, Waiman Long wrote:
>> +/*
>> + * Initialize the subnodes
>> + *
>> + * All the sibling CPUs will be in the same subnode. On top of that, we will
>> + * put at most 2 sibling groups into the same subnode. The percpu
>> + * topology_sibling_cpumask() and topology_core_cpumask() are used for
>> + * grouping CPUs into subnodes. The subnode ID is the CPU number of the
>> + * first CPU in the subnode.
>> + */
>> +static int __init subnode_init(void)
>> +{
>> + int cpu;
>> + int nr_subnodes = 0;
>> + const int subnode_nr_cpus = 2;
>> +
>> + /*
>> + * Some of the bits in the subnode_mask will be cleared as we proceed.
>> + */
>> + for_each_cpu(cpu, subnode_mask) {
>> + int ccpu, scpu;
>> + int cpucnt = 0;
>> +
>> + cpumask_var_t core_mask = topology_core_cpumask(cpu);
>> + cpumask_var_t sibling_mask;
>> +
>> + /*
>> + * Put subnode_nr_cpus of CPUs and their siblings into each
>> + * subnode.
>> + */
>> + for_each_cpu_from(cpu, ccpu, core_mask) {
>> + sibling_mask = topology_sibling_cpumask(ccpu);
>> + for_each_cpu_from(ccpu, scpu, sibling_mask) {
>> + /*
>> + * Clear the bits of the higher CPUs.
>> + */
>> + if (scpu > cpu)
>> + cpumask_clear_cpu(scpu, subnode_mask);
> Do we also need to clear the 'core_mask' here? Consider a core consist
> of two sibling groups and each sibling group consist of two cpus. At the
> beginning of the outer loop(for_each_cpu_from(cpu, ccpu, core_mask)):
>
> 'core_mask' is 0b1111
>
> so at the beginning of the inner loop first time:
>
> 'ccpu' is 0, therefore 'sibling_mask' is 0b1100, in this loop we set the
> 'cpu_subnode_id' of cpu 0 and 1 to 0.
>
> at the beginning of the inner loop second time:
>
> 'ccpu' is 1 because we don't clear cpu 1 from 'core_mask'. Therefore
> sibling_mask is still 0b1100, so in this loop we still do the setting on
> 'cpu_subnode_id' of cpu 0 and 1.
>
> Am I missing something here?
>
You are right. The current code works in my test only because the 2 sibling
CPUs occupy a lower and a higher CPU number, like (0, 72) on a 72-core system.
It may not work for other sibling CPU numbering schemes.
The core_mask, however, is a global data variable that we cannot modify.
I will make the following change instead:
diff --git a/lib/persubnode.c b/lib/persubnode.c
index 9febe7c..d1c8c29 100644
--- a/lib/persubnode.c
+++ b/lib/persubnode.c
@@ -94,6 +94,8 @@ static int __init subnode_init(void)
* subnode.
*/
for_each_cpu_from(cpu, ccpu, core_mask) {
+ if (!cpumask_test_cpu(ccpu, subnode_mask))
+ continue; /* Skip allocated CPU */
sibling_mask = topology_sibling_cpumask(ccpu);
for_each_cpu_from(ccpu, scpu, sibling_mask) {
/*
Thanks for catching this bug.
Cheers,
Longman
Thread overview: 24+ messages
2016-07-11 17:32 [PATCH v2 0/7] vfs: Use dlock list for SB's s_inodes list Waiman Long
2016-07-11 17:32 ` [PATCH v2 1/7] lib/dlock-list: Distributed and lock-protected lists Waiman Long
2016-07-13 16:08 ` Tejun Heo
2016-07-14 2:54 ` Waiman Long
2016-07-14 11:50 ` Tejun Heo
2016-07-14 14:35 ` Jan Kara
2016-07-14 14:51 ` Tejun Heo
2016-07-14 16:16 ` Waiman Long
2016-07-14 17:13 ` Waiman Long
2016-07-14 17:41 ` Tejun Heo
2016-07-11 17:32 ` [PATCH v2 2/7] lib/dlock-list: Add __percpu modifier for parameters Waiman Long
2016-07-13 16:08 ` Tejun Heo
2016-07-14 2:54 ` Waiman Long
2016-07-11 17:32 ` [PATCH v2 3/7] fsnotify: Simplify inode iteration on umount Waiman Long
2016-07-11 17:32 ` [PATCH v2 4/7] vfs: Remove unnecessary list_for_each_entry_safe() variants Waiman Long
2016-07-11 17:32 ` [PATCH v2 5/7] vfs: Use dlock list for superblock's inode list Waiman Long
2016-07-11 17:32 ` [RFC PATCH v2 6/7] lib/persubnode: Introducing a simple per-subnode APIs Waiman Long
2016-07-12 3:14 ` Boqun Feng
2016-07-12 18:37 ` Waiman Long [this message]
2016-07-12 14:27 ` Tejun Heo
2016-07-12 18:51 ` Waiman Long
2016-07-12 18:57 ` Tejun Heo
2016-07-12 19:42 ` Waiman Long
2016-07-11 17:32 ` [RFC PATCH v2 7/7] lib/dlock-list: Use the per-subnode APIs for managing lists Waiman Long