From: Waiman Long <waiman.long@hpe.com>
To: Jan Kara <jack@suse.cz>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
"J. Bruce Fields" <bfields@fieldses.org>,
Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@linux-foundation.org>,
<linux-fsdevel@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Andi Kleen <andi@firstfloor.org>,
Dave Chinner <dchinner@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
Scott J Norton <scott.norton@hpe.com>,
Douglas Hatch <doug.hatch@hpe.com>
Subject: Re: [PATCH v6 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks
Date: Mon, 21 Mar 2016 22:31:24 -0400 [thread overview]
Message-ID: <56F0AE7C.9040204@hpe.com> (raw)
In-Reply-To: <20160321094918.GG30819@quack.suse.cz>
On 03/21/2016 05:49 AM, Jan Kara wrote:
> On Fri 18-03-16 15:44:01, Waiman Long wrote:
>> +static __always_inline bool
>> +__pcpu_list_next_cpu(struct pcpu_list_head *head, struct pcpu_list_state *state)
>> +{
>> +	if (state->lock)
>> +		spin_unlock(state->lock);
>> +next_cpu:
>> +	/*
>> +	 * for_each_possible_cpu(cpu)
>> +	 */
>> +	state->cpu = cpumask_next(state->cpu, cpu_possible_mask);
>> +	if (state->cpu >= nr_cpu_ids)
>> +		return false;	/* All the per-cpu lists iterated */
>> +
>> +	state->head = &per_cpu_ptr(head, state->cpu)->list;
>> +	if (list_empty(state->head))
>> +		goto next_cpu;
>> +
>> +	state->lock = &per_cpu_ptr(head, state->cpu)->lock;
>> +	spin_lock(state->lock);
>> +	state->curr = list_entry(state->head->next,
>> +				 struct pcpu_list_node, list);
>> +	return true;
> Waiman, I repeat it for the third time as you keep ignoring it: This is
> *racy*. The list for state->cpu can become empty by the time you acquire
> state->lock, and then state->curr will point somewhere around the head of
> the list but definitely not to a valid list member as it should.
>
> Honza
I am sorry for missing your previous comment. Yes, it is possible that
the list becomes empty after the lock is acquired, so I should re-check
for that before returning. Thanks for reminding me. I will fix that in
the next version.
Cheers,
Longman
2016-03-18 19:44 [PATCH v6 0/4] vfs: Use per-cpu list for SB's s_inodes list Waiman Long
2016-03-18 19:44 ` [PATCH v6 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks Waiman Long
2016-03-21 9:49 ` Jan Kara
2016-03-22 2:31 ` Waiman Long [this message]
2016-03-18 19:44 ` [PATCH v6 2/4] fsnotify: Simplify inode iteration on umount Waiman Long
2016-03-18 19:44 ` [PATCH v6 3/4] vfs: Remove unnecessary list_for_each_entry_safe() variants Waiman Long
2016-03-18 19:44 ` [PATCH v6 4/4] vfs: Use per-cpu list for superblock's inode list Waiman Long