From: Will Schmidt <will_schmidt@vnet.ibm.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linuxppc-dev@ozlabs.org, Anton Blanchard <anton@samba.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] [PATCH i386] during VM oom condition, kill all threads in process group
Date: Fri, 08 Jun 2007 14:19:18 -0500
Message-ID: <1181330358.21409.31.camel@farscape.rchland.ibm.com>
In-Reply-To: <20070607171018.d51fc5da.akpm@linux-foundation.org>

On Thu, 2007-06-07 at 17:10 -0700, Andrew Morton wrote:
> On Thu, 7 Jun 2007 18:16:21 -0500
> Anton Blanchard <anton@samba.org> wrote:
> 
> >  
> > Hi,
> > 
> > > zap_other_threads() requires tasklist_lock.

Yup, I missed that.  Thanks for pointing it out.

> > > 
> > > If we're going to do this then we should probably create some new function
> > > (with a better name) which takes tasklist_lock and then calls
> > > zap_other_threads().

I expect this will be a write_lock_irq(), since zap_other_threads()
will be doing a bit more than just reading the task info.
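
Something along these lines, maybe (the helper name is made up, and
this is only a sketch of the locking, not a proposal for where the
function should live):

	/*
	 * Illustrative sketch only: take tasklist_lock for writing,
	 * then let zap_other_threads() shoot down the rest of the
	 * thread group.  zap_other_threads() must be called with the
	 * lock held.
	 */
	static void kill_thread_group(struct task_struct *tsk)
	{
		write_lock_irq(&tasklist_lock);
		zap_other_threads(tsk);
		write_unlock_irq(&tasklist_lock);
	}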

This will be down in a do_page_fault() failure path (see
arch/*/mm/fault.c).  I wonder whether calling write_lock there is
safe, or whether it's possible to deadlock.  That is, should I branch
back up to the survive: label if I can't take the lock?  Would that
even be sufficient, or is it not an issue here?
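
For reference, the failure path I mean looks roughly like this
(paraphrased from the i386 fault handler of this vintage, so the
details are approximate):

	/* in do_page_fault(), arch/i386/mm/fault.c */
	out_of_memory:
		up_read(&mm->mmap_sem);
		if (is_init(tsk)) {
			yield();
			down_read(&mm->mmap_sem);
			goto survive;	/* init retries the fault */
		}
		printk("VM: killing process %s\n", tsk->comm);
		if (error_code & 4)		/* fault from user mode */
			do_exit(SIGKILL);	/* kills only this thread */
		goto no_context;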

> > > 
> > > Does this patch fix any observed-in-the-real-world problem?  If so, please
> > > describe it.
> > 
> > Yeah we have had complaints where threaded apps have only one thread
> > shot down instead of the entire process. This leaves the application in
> > a bad state, whereas if it had been killed cleanly the application could
> > have restarted.
> > 
> > My understanding is that fatal signals should kill all threads in the
> > group.
> > 
> 
> OK, well could we please get all that info appropriately captured in #2's
> changelog?
Yup, next spin I'll add more to the changelog. 
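
As a quick userspace sanity check of the semantics Anton describes (a
fatal signal should take down every thread in the group), something
like this can be used; build with "gcc -pthread", and once the SIGKILL
lands neither worker should print again:

	#include <pthread.h>
	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static void *worker(void *arg)
	{
		for (;;) {
			printf("worker %ld alive\n", (long)arg);
			sleep(1);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, worker, (void *)1L);
		pthread_create(&t2, NULL, worker, (void *)2L);
		sleep(3);
		/* Fatal signal to the group: both workers must die. */
		kill(getpid(), SIGKILL);
		return 0;	/* never reached */
	}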

> 
> Other architectures will probably need to implement this.

-Will

