From: Ben Blum <bblum@andrew.cmu.edu>
To: Oleg Nesterov <oleg@redhat.com>
Cc: Ben Blum <bblum@andrew.cmu.edu>, NeilBrown <neilb@suse.de>,
	paulmck@linux.vnet.ibm.com, Paul Menage <menage@google.com>,
	Li Zefan <lizf@cn.fujitsu.com>,
	containers@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Frederic Weisbecker <fweisbec@gmail.com>
Subject: Re: [PATCH][BUGFIX] cgroups: more safe tasklist locking in cgroup_attach_proc
Date: Thu, 1 Sep 2011 17:46:43 -0400
Message-ID: <20110901214643.GD10401@unix33.andrew.cmu.edu>
In-Reply-To: <20110815184957.GA16588@redhat.com>

On Mon, Aug 15, 2011 at 08:49:57PM +0200, Oleg Nesterov wrote:
> > -	rcu_read_lock();
> > +	read_lock(&tasklist_lock);
> >  	if (!thread_group_leader(leader)) {
> 
> Agreed, this should work.
> 
> But can't we avoid the global list? thread_group_leader() or not, we do
> not really care. We only need to ensure we can safely find all threads.
> 
> How about the patch below?

I was content with the tasklist_lock because cgroup_attach_proc is
already a pretty heavyweight operation, and it's probably rare for a
user to want to run several of them in quick succession. I asked Andrew
to take the simple tasklist_lock patch just now, since it does at least
fix the bug.

Anyway, looking at this, hmm, I'm not sure it protects adequately. In
de_thread, the sighand lock is held only around the first half (around
zap_other_threads), not around the later section where leadership is
transferred (especially around the list_replace calls). tasklist_lock
is held across that section, though, so it seems like the right lock to
hold.
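
For reference, here is a heavily abridged sketch of the de_thread()
sequence I mean, from my reading of fs/exec.c; the waiting logic and
pid juggling are elided, so treat it as a sketch rather than a faithful
excerpt:

static int de_thread(struct task_struct *tsk)
{
	spinlock_t *lock = &tsk->sighand->siglock;

	/* first half: only the sighand lock is held */
	spin_lock_irq(lock);
	zap_other_threads(tsk);
	spin_unlock_irq(lock);

	/* ... wait for the other threads to get through exit_notify ... */

	if (!thread_group_leader(tsk)) {
		struct task_struct *leader = tsk->group_leader;

		/* second half: leadership transfer under tasklist_lock,
		 * with no sighand lock held */
		write_lock_irq(&tasklist_lock);

		/* ... detach_pid/transfer_pid calls elided ... */
		list_replace_rcu(&leader->tasks, &tsk->tasks);
		list_replace_init(&leader->sibling, &tsk->sibling);

		tsk->group_leader = tsk;
		leader->group_leader = tsk;
		leader->exit_state = EXIT_DEAD;

		write_unlock_irq(&tasklist_lock);
	}

	/* ... */
	return 0;
}

So a reader walking the thread list under read_lock(&tasklist_lock)
can't observe the list_replace calls half-done, whereas holding only
the sighand lock doesn't exclude that section at all.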

> 
> 
> With or without this/your patch this leader can die right after we
> drop the lock. ss->can_attach(leader) and ss->attach(leader) look
> suspicious. If a sub-thread execs, this task_struct has nothing to
> do with the threadgroup.

Hmm. I thought I had this case covered, but it's been so long since I
actually wrote the code that, if I did, I can't remember how. I think
exit is not a problem, since we hold a reference on the task_struct,
but exec may still be one. I'm thinking:

- cgroup_attach_proc drops the tasklist_lock
- a sub-thread execs, and in exec_mmap (after de_thread) changes the mm
- ss->attach, for example in memcg, wants to use leader->mm, which is
  now wrong (see the sketch below)
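
To be concrete about the memcg example, the pattern I'm worried about
looks roughly like this; it's a schematic with a simplified signature,
not the actual mem_cgroup attach code:

/*
 * Schematic only: a subsystem callback that derives the mm from the
 * task_struct it was handed by cgroup_attach_proc().
 */
static void example_attach(struct cgroup *cgrp, struct task_struct *leader)
{
	struct mm_struct *mm = get_task_mm(leader);

	/*
	 * If a sub-thread exec'd after cgroup_attach_proc() dropped its
	 * lock, de_thread() has already retired this task_struct (it is
	 * no longer the group leader) and exec_mmap() installed a fresh
	 * mm in the surviving thread.  Whatever we see here -- NULL or
	 * the pre-exec mm -- is not the threadgroup's current mm.
	 */
	if (!mm)
		return;

	/* ... migrate charges against the wrong (or missing) mm ... */
	mmput(mm);
}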

This seems to be possible as the code currently stands. I wonder if the
best fix is just to have exec (maybe around de_thread) bounce off of,
or hold, threadgroup_fork_read_lock somewhere?
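
Something like the following is what I'm picturing -- purely a
hypothetical sketch, not a patch; the placement and the __de_thread()
name are made up, and the only real pieces are the existing
threadgroup_fork_read_lock()/unlock() helpers that fork already uses:

static int de_thread(struct task_struct *tsk)
{
	int err;

	/*
	 * Exclude a concurrent cgroup_attach_proc() (whose caller holds
	 * the write side) across the whole leadership change.  A trylock
	 * variant returning -EAGAIN would be the "bounce off" option.
	 */
	threadgroup_fork_read_lock(tsk);
	err = __de_thread(tsk);		/* hypothetical: the current body */
	threadgroup_fork_read_unlock(tsk);

	return err;
}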

> 
> 
> 
> Also. This is off-topic, but... Why cgroup_attach_proc() and
> cgroup_attach_task() do ->attach_task() + cgroup_task_migrate()
> in the different order? cgroup_attach_proc() looks wrong even
> if currently doesn't matter.

(already submitted a patch for this)

Thanks,
Ben

> 
> 
> Oleg.
> 
> --- x/kernel/cgroup.c
> +++ x/kernel/cgroup.c
> @@ -2000,6 +2000,7 @@ int cgroup_attach_proc(struct cgroup *cg
>  	/* threadgroup list cursor and array */
>  	struct task_struct *tsk;
>  	struct flex_array *group;
> +	unsigned long flags;
>  	/*
>  	 * we need to make sure we have css_sets for all the tasks we're
>  	 * going to move -before- we actually start moving them, so that in
> @@ -2027,19 +2028,10 @@ int cgroup_attach_proc(struct cgroup *cg
>  		goto out_free_group_list;
>  
>  	/* prevent changes to the threadgroup list while we take a snapshot. */
> -	rcu_read_lock();
> -	if (!thread_group_leader(leader)) {
> -		/*
> -		 * a race with de_thread from another thread's exec() may strip
> -		 * us of our leadership, making while_each_thread unsafe to use
> -		 * on this task. if this happens, there is no choice but to
> -		 * throw this task away and try again (from cgroup_procs_write);
> -		 * this is "double-double-toil-and-trouble-check locking".
> -		 */
> -		rcu_read_unlock();
> -		retval = -EAGAIN;
> +	retval = -EAGAIN;
> +	if (!lock_task_sighand(leader, &flags))
>  		goto out_free_group_list;
> -	}
> +
>  	/* take a reference on each task in the group to go in the array. */
>  	tsk = leader;
>  	i = 0;
> @@ -2055,9 +2047,9 @@ int cgroup_attach_proc(struct cgroup *cg
>  		BUG_ON(retval != 0);
>  		i++;
>  	} while_each_thread(leader, tsk);
> +	unlock_task_sighand(leader, &flags);
>  	/* remember the number of threads in the array for later. */
>  	group_size = i;
> -	rcu_read_unlock();
>  
>  	/*
>  	 * step 1: check that we can legitimately attach to the cgroup.
> 
> 
