linux-kernel.vger.kernel.org archive mirror
From: Ben Blum <bblum@andrew.cmu.edu>
To: Oleg Nesterov <oleg@redhat.com>
Cc: Ben Blum <bblum@andrew.cmu.edu>, NeilBrown <neilb@suse.de>,
	paulmck@linux.vnet.ibm.com, Paul Menage <paul@paulmenage.org>,
	Li Zefan <lizf@cn.fujitsu.com>,
	containers@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Frederic Weisbecker <fweisbec@gmail.com>
Subject: Re: [PATCH][BUGFIX] cgroups: more safe tasklist locking in cgroup_attach_proc
Date: Wed, 7 Sep 2011 22:11:42 -0400	[thread overview]
Message-ID: <20110908021141.GB22545@unix33.andrew.cmu.edu> (raw)
In-Reply-To: <20110902123251.GA26764@redhat.com>

On Fri, Sep 02, 2011 at 02:32:51PM +0200, Oleg Nesterov wrote:
> On 09/01, Ben Blum wrote:
> >
> > On Mon, Aug 15, 2011 at 08:49:57PM +0200, Oleg Nesterov wrote:
> > >
> > > But can't we avoid the global list? thread_group_leader() or not, we do
> > > not really care. We only need to ensure we can safely find all threads.
> > >
> > > How about the patch below?
> >
> > I was content with the tasklist_lock because cgroup_attach_proc is
> > already a pretty heavyweight operation, and probably pretty rare that a
> > user would want to do multiple of them at once quickly.
> 
> Perhaps. But this is the global lock, no matter what. Why do you prefer
> to take it instead of per-process ->siglock?

Basically just because I wanted the simplest race-free implementation -
at that point I was just aiming to have a complete set of patches, and
didn't judge the global lock to be bad enough to be worth replacing. I
could yet be convinced that siglock is better.

> 
> > I asked Andrew
> > to take the simple tasklist_lock patch just now, since it does fix the
> > bug at least.
> 
> I disagree.
> 
> > Anyway, looking at this, hmm. I am not sure if this protects adequately?
> > In de_thread, the sighand lock is held only around the first half
> > (around zap_other_threads),
> 
> We only need this lock to find all threads,
> 
> > and not around the following section where
> > leadership is transferred (esp. around the list_replace calls).
> > tasklist_lock is held here, though, so it seems like the right lock to
> > hold.
> 
> This doesn't matter at all, I think. The code under while_each_thread()
> doesn't need the stable ->group_leader, and it can be changed right after
> you drop tasklist. list_replace() calls do not play with ->thread_group
> list.
> 
> How can tasklist make any difference?

Oops. I misread the list_replace calls as being on ->thread_group,
instead of on ->tasks as they actually are. Hm, and I see __exit_signal
takes the sighand lock around the call to __unhash_process.

OK, I believe it will work. Oh, and I guess the sighand lock also
subsumes the group_leader check that you were objecting to in the other
thread.
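
For concreteness, the find-all-threads step under discussion would look
roughly like this. This is only a hand-written sketch of the siglock
approach, not the actual patch; the flex_array bookkeeping and the rest
of the error paths are elided:

```c
/* Sketch only: find all threads under ->siglock instead of tasklist_lock.
 * lock_task_sighand() fails iff the group is already dead (->sighand is
 * NULL after release_task()), so the caller can bail out and retry. */
struct task_struct *t = leader;
unsigned long flags;

if (!lock_task_sighand(leader, &flags))
	return -EAGAIN;		/* leader was released; caller retries */

do {
	/* collect t (e.g. into the flex_array); the ->thread_group
	 * list cannot change under us, because __exit_signal() takes
	 * ->siglock around __unhash_process() */
} while_each_thread(leader, t);

unlock_task_sighand(leader, &flags);
```

The point being that ->siglock pins the ->thread_group list for the
duration of the walk, which is all this code needs.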

Sorry for the delay. Whichever way this change goes in, though, the
comment should make clear how de_thread() interacts with it. Something
like:

/* If the leader exits, its links on the thread_group list become
 * invalid. One way this can happen is if a sub-thread execs: de_thread()
 * calls release_task(leader), and leader->sighand gets set to NULL, in
 * which case lock_task_sighand() will fail. Since the threadgroup is
 * still alive in that case, cgroup_procs_write() should find the new
 * leader and try again, which is what returning EAGAIN here indicates.
 * This is "double-double-toil-and-trouble-check locking". */

(I rather like the phrase "double-double-..." to describe the overall
approach, and would prefer if it stayed in the comment.)

> 
> > > With or without this/your patch this leader can die right after we
> > > drop the lock. ss->can_attach(leader) and ss->attach(leader) look
> > > suspicious. If a sub-thread execs, this task_struct has nothing to
> > > do with the threadgroup.
> >
> > hmm. I thought I had this case covered, but it's been so long since I
> > actually wrote the code that if I did I can't remember how.
> 
> This all doesn't look right... Hopefully addressed by Tejun patches.
> 

Will look into those separately.

Thanks,
Ben

> 
> Oleg.
> 
> 


Thread overview: 36+ messages
     [not found] <20110727171101.5e32d8eb@notabene.brown>
2011-07-27 15:07 ` Possible race between cgroup_attach_proc and de_thread, and questionable code in de_thread Ben Blum
2011-07-27 23:42   ` Paul E. McKenney
2011-07-28  1:08     ` NeilBrown
2011-07-28  6:26       ` Ben Blum
2011-07-28  7:13         ` NeilBrown
2011-07-29 14:28           ` [PATCH][BUGFIX] cgroups: more safe tasklist locking in cgroup_attach_proc Ben Blum
2011-08-01 19:31             ` Paul Menage
2011-08-15 18:49             ` Oleg Nesterov
2011-08-15 22:50               ` Frederic Weisbecker
2011-08-15 23:04                 ` Ben Blum
2011-08-15 23:09                   ` Ben Blum
2011-08-15 23:19                     ` Frederic Weisbecker
2011-08-15 23:11                 ` [PATCH][BUGFIX] cgroups: fix ordering of calls " Ben Blum
2011-08-15 23:20                   ` Frederic Weisbecker
2011-08-15 23:31                   ` Paul Menage
2011-09-01 21:46               ` [PATCH][BUGFIX] cgroups: more safe tasklist locking " Ben Blum
2011-09-02 12:32                 ` Oleg Nesterov
2011-09-08  2:11                   ` Ben Blum [this message]
2011-10-14  0:31               ` [PATCH 1/2] cgroups: use sighand lock instead of tasklist_lock " Ben Blum
2011-10-14 12:15                 ` Frederic Weisbecker
2011-10-14  0:36               ` [PATCH 2/2] cgroups: convert ss->attach to use whole threadgroup flex_array (cpuset, memcontrol) Ben Blum
2011-10-14 12:21                 ` Frederic Weisbecker
2011-10-14 13:53                   ` Ben Blum
2011-10-14 13:54                     ` Ben Blum
2011-10-14 15:22                       ` Frederic Weisbecker
2011-10-17 19:11                         ` Ben Blum
2011-10-14 15:21                     ` Frederic Weisbecker
2011-10-19  5:43                 ` Paul Menage
2011-07-28 12:17       ` Possible race between cgroup_attach_proc and de_thread, and questionable code in de_thread Paul E. McKenney
2011-08-14 17:51         ` Oleg Nesterov
2011-08-14 23:58           ` NeilBrown
2011-08-15 18:01           ` Paul E. McKenney
2011-08-14 17:45       ` Oleg Nesterov
2011-08-14 17:40 ` Oleg Nesterov
2011-08-15  0:11   ` NeilBrown
2011-08-15 19:09     ` Oleg Nesterov
