From: Ben Blum <bblum@andrew.cmu.edu>
To: NeilBrown <neilb@suse.de>
Cc: Paul Menage <menage@google.com>, Ben Blum <bblum@andrew.cmu.edu>,
Li Zefan <lizf@cn.fujitsu.com>, Oleg Nesterov <oleg@tv-sign.ru>,
containers@lists.linux-foundation.org,
"Paul E.McKenney" <paulmck@linux.vnet.ibm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: Possible race between cgroup_attach_proc and de_thread, and questionable code in de_thread.
Date: Wed, 27 Jul 2011 11:07:10 -0400
Message-ID: <20110727150710.GB5242@unix33.andrew.cmu.edu>
In-Reply-To: <20110727171101.5e32d8eb@notabene.brown>
On Wed, Jul 27, 2011 at 05:11:01PM +1000, NeilBrown wrote:
>
> Hi,
> I've been exploring the use of RCU in the kernel, particularly looking for
> things that don't quite look right. I found cgroup_attach_proc which was
> added a few months ago.
Awesome, thanks! :)
>
> It contains:
>
>         rcu_read_lock();
>         if (!thread_group_leader(leader)) {
>                 /*
>                  * a race with de_thread from another thread's exec() may strip
>                  * us of our leadership, making while_each_thread unsafe to use
>                  * on this task. if this happens, there is no choice but to
>                  * throw this task away and try again (from cgroup_procs_write);
>                  * this is "double-double-toil-and-trouble-check locking".
>                  */
>                 rcu_read_unlock();
>                 retval = -EAGAIN;
>                 goto out_free_group_list;
>         }
>
> (and having the comment helps a lot!)
>
> The comment acknowledges a race with de_thread but seems to assume that
> rcu_read_lock() will protect against that race. It won't.
> It could possibly protect if the racy code in de_thread() contained a call
> to synchronize_rcu(), but it doesn't so there is no obvious exclusion
> between the two.
> I note that some other locks are held and maybe some other lock provides
> the required exclusion - I haven't explored that too deeply - but if that is
> the case, then the use of rcu_read_lock() here is pointless - it isn't
> needed just to call thread_group_leader().
I wrote this code, and I admit to not having a full understanding of RCU
myself. The code was once more complicated (before the patches went in,
mind you): it had a series of checks like that one leading up to a
list_for_each_entry over the ->thread_group list (in "step 3", instead
of iterating over the flex_array), all with read_lock(&tasklist_lock)
held around it. (...)
(The other locks held are just cgroup_mutex and threadgroup_fork_lock,
which wouldn't provide the exclusion.)
>
> The race as I understand it is with this code:
>
>
>         list_replace_rcu(&leader->tasks, &tsk->tasks);
>         list_replace_init(&leader->sibling, &tsk->sibling);
>
>         tsk->group_leader = tsk;
>         leader->group_leader = tsk;
>
>
> which seems to be called with only tasklist_lock held, which doesn't seem to
> be held in the cgroup code.
>
> If the "thread_group_leader(leader)" call in cgroup_attach_proc() runs before
> this chunk is run with the same value for 'leader', but the
> while_each_thread is run after, then the while_read_thread() might loop
> forever. rcu_read_lock doesn't prevent this from happening.
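To make sure I'm reading the race the same way you are, the interleaving
would be something like:

        cgroup_attach_proc()                de_thread() (another thread execs)
        ------------------------            ---------------------------------
        rcu_read_lock();
        thread_group_leader(leader) /* true */
                                            leader loses its leadership
                                            (group_leader and list pointers
                                             are switched to the exec'ing
                                             thread)
        ...
        while_each_thread(leader, t);       /* may never terminate, since
                                               'leader' is no longer a safe
                                               anchor for the walk */
        rcu_read_unlock();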
Somehow I was under the impression that holding tasklist_lock (for
writing) provided exclusion from code that holds rcu_read_lock -
probably because there are other points in the kernel which do
while_each_thread with only RCU-read held (and not tasklist):
- kernel/hung_task.c, check_hung_uninterruptible_tasks()
- kernel/posix-cpu-timers.c, thread_group_cputime()
- fs/ioprio.c, ioprio_set() and ioprio_get()
(There are also places, like kernel/signal.c, where code does
while_each_thread with only sighand->siglock held. This also seems
sketchy, since de_thread only takes that lock after the code quoted
above. There's a big comment in fs/exec.c where this is also done, but I
don't quite understand it.)
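The pattern at those call sites is roughly this (paraphrasing from
memory, the actual files differ in the details):

        rcu_read_lock();
        t = task;
        do {
                /* read or poke at t */
        } while_each_thread(task, t);
        rcu_read_unlock();

i.e. RCU is the only thing held across the walk.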
You seem to imply that rcu_read_lock() doesn't exclude against
write_lock(&tasklist_lock). If that's true, then we can fix the cgroup
code simply by replacing rcu_read_lock/rcu_read_unlock with
read_lock and read_unlock on tasklist_lock. (I can put together a bugfix
patch for this in a hurry if so.)
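Concretely, the change I have in mind looks roughly like this (untested,
just to show the shape of it):

-       rcu_read_lock();
+       read_lock(&tasklist_lock);
        if (!thread_group_leader(leader)) {
                ...
-               rcu_read_unlock();
+               read_unlock(&tasklist_lock);
                retval = -EAGAIN;
                goto out_free_group_list;
        }

(with the matching unlock after the thread-group walk converted the same
way).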
Wouldn't this mean that the three places listed above are also wrong?
>
> The code in de_thread() is actually questionable by itself.
> "list_replace_rcu" cannot really be used on the head of a list - it is only
> meant to be used on a member of a list.
> To move a list from one head to another you should be using
> list_splice_init_rcu().
> The ->tasks list doesn't seem to have a clearly distinguished 'head', but
> whatever is passed as 'g' to while_each_thread() is effectively a head, and
> removing it from the list means a loop using while_each_thread() can never
> find the head and so never completes.
>
> I'm not sure how best to fix this, though possibly changing
> while_each_thread to:
>
> while ((t = next_task(t)) != g && !thread_group_leader(t))
>
> might be part of it. We would also need to move
> tsk->group_leader = tsk;
> in the above up to the top, and probably add some memory barrier.
> However I don't know enough about how the list is used to be sure.
>
> Comments?
>
> Thanks,
> NeilBrown
>
>
I barely understand de_thread() from the reader's perspective, let alone
from the author's perspective, so I can't speak for that one.
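(For reference, the macro itself is just

        #define while_each_thread(g, t) \
                while ((t = next_thread(t)) != g)

so if 'g' ever becomes unreachable from the walk, the loop has no exit
condition. That seems to be exactly the failure mode you're describing.)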
Thanks for pointing this out!
-- Ben
Thread overview: 36+ messages
[not found] <20110727171101.5e32d8eb@notabene.brown>
2011-07-27 15:07 ` Ben Blum [this message]
2011-07-27 23:42 ` Possible race between cgroup_attach_proc and de_thread, and questionable code in de_thread Paul E. McKenney
2011-07-28 1:08 ` NeilBrown
2011-07-28 6:26 ` Ben Blum
2011-07-28 7:13 ` NeilBrown
2011-07-29 14:28 ` [PATCH][BUGFIX] cgroups: more safe tasklist locking in cgroup_attach_proc Ben Blum
2011-08-01 19:31 ` Paul Menage
2011-08-15 18:49 ` Oleg Nesterov
2011-08-15 22:50 ` Frederic Weisbecker
2011-08-15 23:04 ` Ben Blum
2011-08-15 23:09 ` Ben Blum
2011-08-15 23:19 ` Frederic Weisbecker
2011-08-15 23:11 ` [PATCH][BUGFIX] cgroups: fix ordering of calls " Ben Blum
2011-08-15 23:20 ` Frederic Weisbecker
2011-08-15 23:31 ` Paul Menage
2011-09-01 21:46 ` [PATCH][BUGFIX] cgroups: more safe tasklist locking " Ben Blum
2011-09-02 12:32 ` Oleg Nesterov
2011-09-08 2:11 ` Ben Blum
2011-10-14 0:31 ` [PATCH 1/2] cgroups: use sighand lock instead of tasklist_lock " Ben Blum
2011-10-14 12:15 ` Frederic Weisbecker
2011-10-14 0:36 ` [PATCH 2/2] cgroups: convert ss->attach to use whole threadgroup flex_array (cpuset, memcontrol) Ben Blum
2011-10-14 12:21 ` Frederic Weisbecker
2011-10-14 13:53 ` Ben Blum
2011-10-14 13:54 ` Ben Blum
2011-10-14 15:22 ` Frederic Weisbecker
2011-10-17 19:11 ` Ben Blum
2011-10-14 15:21 ` Frederic Weisbecker
2011-10-19 5:43 ` Paul Menage
2011-07-28 12:17 ` Possible race between cgroup_attach_proc and de_thread, and questionable code in de_thread Paul E. McKenney
2011-08-14 17:51 ` Oleg Nesterov
2011-08-14 23:58 ` NeilBrown
2011-08-15 18:01 ` Paul E. McKenney
2011-08-14 17:45 ` Oleg Nesterov
2011-08-14 17:40 ` Oleg Nesterov
2011-08-15 0:11 ` NeilBrown
2011-08-15 19:09 ` Oleg Nesterov