linux-mm.kvack.org archive mirror
From: Minchan Kim <minchan.kim@gmail.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Neil Brown <neilb@suse.de>, Wu Fengguang <fengguang.wu@intel.com>,
	Rik van Riel <riel@redhat.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Li, Shaohua" <shaohua.li@intel.com>
Subject: Re: Deadlock possibly caused by too_many_isolated.
Date: Tue, 19 Oct 2010 11:16:17 +0900
Message-ID: <AANLkTi=1j5ejRyki+2wmKvOitorteW6uL53wfAWiPeAs@mail.gmail.com>
In-Reply-To: <20101019105257.A1C6.A69D9226@jp.fujitsu.com>

On Tue, Oct 19, 2010 at 11:03 AM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
>> On Tue, Oct 19, 2010 at 10:21 AM, KOSAKI Motohiro
>> <kosaki.motohiro@jp.fujitsu.com> wrote:
>> >> On Tue, Oct 19, 2010 at 9:57 AM, KOSAKI Motohiro
>> >> <kosaki.motohiro@jp.fujitsu.com> wrote:
>> >> >> > I think there are two bugs here.
>> >> >> > The raid1 bug that Torsten mentions is certainly real (and has been around
>> >> >> > for an embarrassingly long time).
>> >> >> > The bug that I identified in too_many_isolated is also a real bug and can be
>> >> >> > triggered without md/raid1 in the mix.
>> >> >> > So this is not a 'full fix' for every bug in the kernel :-), but it could
>> >> >> > well be a full fix for this particular bug.
>> >> >> >
>> >> >>
>> >> >> Can we just delete the too_many_isolated() logic?  (Crappy comment
>> >> >> describes what the code does but not why it does it).
>> >> >
>> >> > If I remember correctly, we got a bug report about 1-2 years ago that
>> >> > LTP could trigger mysterious OOM killer invocations: when too many
>> >> > processes are in the reclaim path, all reclaimable pages can be
>> >> > isolated, so the last reclaimer finds no reclaimable pages left in the
>> >> > system and invokes the OOM killer. We had a strong motivation to avoid
>> >> > such false positive OOMs, and some discussion produced this patch.
>> >> >
>> >> > If I remember incorrectly, I hope Wu or Rik will correct me.
>> >>
>> >> AFAIR, that's right.
>> >>
>> >> How about this?
>> >>
>> >> It throttles more aggressively than the old code (i.e. it works at
>> >> zone granularity rather than per LRU type), but I think it can prevent
>> >> the unnecessary-OOM problem and solve the deadlock problem.
>> >
> Can you please elaborate on your intention? Do you think Wu's approach is wrong?
>>
>> No. I think Wu's patch may work well, but I agree with Andrew:
>> couldn't we remove the too_many_isolated() logic? If we could, the
>> problem would be solved simply. But if we remove the logic, we will
>> run into the old problem again. So my patch's intention is to prevent
>> both the OOM and the deadlock problem with a simple patch, without
>> adding a new heuristic to too_many_isolated().
>
> But your patch has a much higher chance of false positives/negatives,
> because the point where pages are isolated and the
> too_many_isolated_zone() call site are far apart.

Yes.
How about having the returned *did_some_progress signal a
too_many_isolated() failure, via its MSB or a new variable?
Then the page allocator could check whether it was a real reclaim
failure or just parallel reclaim.
The point is to throttle without holding FS/IO locks.
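
A rough sketch of the idea (hypothetical and untested; the flag name
and call sites are invented for illustration):

/*
 * Reserve the top bit of *did_some_progress to mean "reclaim backed
 * off because too many pages were already isolated", instead of
 * sleeping inside shrink_inactive_list() while FS/IO locks may be held.
 */
#define RECLAIM_ISOLATION_STALL	(1UL << (BITS_PER_LONG - 1))

	/* reclaim side: report the stall instead of waiting here */
	if (too_many_isolated(zone, file, sc)) {
		*did_some_progress |= RECLAIM_ISOLATION_STALL;
		return 0;
	}

	/* allocator side: throttle here, where no FS/IO locks are held */
	if (did_some_progress & RECLAIM_ISOLATION_STALL) {
		congestion_wait(BLK_RW_ASYNC, HZ/50);
		goto rebalance;
	}

That way the waiting moves into the page allocator's slow path, where
the task cannot be holding locks that writeback completion depends on.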

> So, unless anyone says Wu's one is wrong, I like his.
>

I am not against it; I just want to solve the problem without adding new logic.
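
For reference, the check being debated looks roughly like this in
mm/vmscan.c of this era (paraphrased from memory; details may differ).
Direct reclaimers that see it return true sleep in congestion_wait()
steps, which is how a task holding FS/IO state can stall forever:

static int too_many_isolated(struct zone *zone, int file,
			     struct scan_control *sc)
{
	unsigned long inactive, isolated;

	/* kswapd must never be throttled here */
	if (current_is_kswapd())
		return 0;

	/* memcg reclaim is not throttled either */
	if (!scanning_global_lru(sc))
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	/* direct reclaimers wait while isolated pages outnumber inactive */
	return isolated > inactive;
}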



-- 
Kind regards,
Minchan Kim


Thread overview: 58+ messages
2010-09-14 23:11 Deadlock possibly caused by too_many_isolated Neil Brown
2010-09-15  0:30 ` Rik van Riel
2010-09-15  2:23   ` Neil Brown
2010-09-15  2:37     ` Wu Fengguang
2010-09-15  2:54       ` Wu Fengguang
2010-09-15  3:06         ` Wu Fengguang
2010-09-15  3:13           ` Wu Fengguang
2010-09-15  3:18             ` Shaohua Li
2010-09-15  3:31               ` Wu Fengguang
2010-09-15  3:17           ` Neil Brown
2010-09-15  3:47             ` Wu Fengguang
2010-09-15  8:28     ` Wu Fengguang
2010-09-15  8:44       ` Neil Brown
2010-10-18  4:14         ` Neil Brown
2010-10-18  5:04           ` KOSAKI Motohiro
2010-10-18 10:58           ` Torsten Kaiser
2010-10-18 23:11             ` Neil Brown
2010-10-19  8:43               ` Torsten Kaiser
2010-10-19 10:06                 ` Torsten Kaiser
2010-10-20  5:57                   ` Wu Fengguang
2010-10-20  7:05                     ` KOSAKI Motohiro
2010-10-20  9:27                       ` Wu Fengguang
2010-10-20 13:03                         ` Jens Axboe
2010-10-22  5:37                           ` Wu Fengguang
2010-10-22  8:07                             ` Wu Fengguang
2010-10-22  8:09                               ` Jens Axboe
2010-10-24 16:52                                 ` Wu Fengguang
2010-10-25  6:40                                   ` Neil Brown
2010-10-25  7:26                                     ` Wu Fengguang
2010-10-20  7:25                     ` Torsten Kaiser
2010-10-20  9:01                       ` Wu Fengguang
2010-10-20 10:07                         ` Torsten Kaiser
2010-10-20 14:23                       ` Minchan Kim
2010-10-20 15:35                         ` Torsten Kaiser
2010-10-20 23:31                           ` Minchan Kim
2010-10-18 16:15           ` Wu Fengguang
2010-10-18 21:58             ` Andrew Morton
2010-10-18 22:31               ` Neil Brown
2010-10-18 22:41                 ` Andrew Morton
2010-10-19  0:57                   ` KOSAKI Motohiro
2010-10-19  1:15                     ` Minchan Kim
2010-10-19  1:21                       ` KOSAKI Motohiro
2010-10-19  1:32                         ` Minchan Kim
2010-10-19  2:03                           ` KOSAKI Motohiro
2010-10-19  2:16                             ` Minchan Kim [this message]
2010-10-19  2:54                               ` KOSAKI Motohiro
2010-10-19  2:35                       ` Wu Fengguang
2010-10-19  2:52                         ` Minchan Kim
2010-10-19  3:05                           ` Wu Fengguang
2010-10-19  3:09                             ` Minchan Kim
2010-10-19  3:13                               ` KOSAKI Motohiro
2010-10-19  5:11                                 ` Minchan Kim
2010-10-19  3:21                               ` Shaohua Li
2010-10-19  7:15                                 ` Shaohua Li
2010-10-19  7:34                                   ` Minchan Kim
2010-10-19  2:24                   ` Wu Fengguang
2010-10-19  2:37                     ` KOSAKI Motohiro
2010-10-19  2:37                     ` Minchan Kim
