From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: david@fromorbit.com, dchinner@redhat.com, hch@lst.de,
	mgorman@suse.de, viro@ZenIV.linux.org.uk, linux-mm@kvack.org,
	hannes@cmpxchg.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 1/2] mm, vmscan: account the number of isolated pages per zone
Date: Fri, 3 Feb 2017 15:41:11 +0100
Message-ID: <20170203144111.GA19325@dhcp22.suse.cz>
In-Reply-To: <201702031957.AGH86961.MLtOQVFOSHJFFO@I-love.SAKURA.ne.jp>

On Fri 03-02-17 19:57:39, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Mon 30-01-17 09:55:46, Michal Hocko wrote:
> > > On Sun 29-01-17 00:27:27, Tetsuo Handa wrote:
> > [...]
> > > > Regarding [1], it helped avoid the too_many_isolated() issue. I can't
> > > > tell whether it has any negative effect, but on the first trial I found
> > > > that all allocating threads were blocked on wait_for_completion() from
> > > > flush_work() in drain_all_pages() introduced by "mm, page_alloc: drain
> > > > per-cpu pages from workqueue context". There was no warn_alloc() stall
> > > > warning message afterwards.
> > > 
> > > That patch is buggy and there is a follow-up [1] which is not yet in
> > > mmotm (and thus linux-next). I didn't get to review it properly and
> > > I cannot say I would be too happy about using a WQ from the page
> > > allocator. I believe even the follow-up needs a WQ_MEM_RECLAIM WQ.
> > > 
> > > [1] http://lkml.kernel.org/r/20170125083038.rzb5f43nptmk7aed@techsingularity.net
> > 
> > Did you get a chance to test with this follow-up patch? It would be
> > interesting to see whether an OOM situation can still starve the
> > waiter. The current linux-next should contain this patch.
> 
> So far I can't reproduce problems other than the two listed below (the
> cond_resched() trap in printk() and the IDLE priority trap are excluded
> from the list). But I agree that the follow-up patch needs to use a
> WQ_MEM_RECLAIM WQ. It is theoretically possible that an allocation
> request which can trigger the OOM killer waits on the system_wq while a
> work item already queued on the system_wq is looping forever inside the
> page allocator without ever triggering the OOM killer.
Well, this shouldn't happen AFAICS, because a new worker would be
requested in that case, creating it would certainly require a memory
allocation, and that allocation would trigger the OOM killer. On the
other hand I agree that it would be safer not to depend on a memory
allocation from within the page allocator.
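A dedicated WQ_MEM_RECLAIM workqueue would come with a rescuer thread,
so queueing the drain work would never depend on forking a new worker.
Something along these lines (a completely untested sketch, the name of
the WQ is just illustrative):

#include <linux/bug.h>
#include <linux/init.h>
#include <linux/workqueue.h>

/* rescuer-backed WQ so that draining doesn't need a new worker */
static struct workqueue_struct *mm_drain_wq;

static int __init mm_drain_wq_init(void)
{
	mm_drain_wq = alloc_workqueue("mm_drain", WQ_MEM_RECLAIM, 0);
	BUG_ON(!mm_drain_wq);
	return 0;
}
early_initcall(mm_drain_wq_init);

drain_all_pages() would then queue_work_on() this WQ rather than the
system one.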
> Maybe the follow-up patch can share the vmstat WQ?
Yes, this would be an option.
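The vmstat WQ is allocated with WQ_MEM_RECLAIM (and WQ_FREEZABLE)
already, so the drain path would mostly need it exported. Roughly
(again untested, and assuming the per-cpu work item from Mel's patch
is called pcpu_drain):

/* mm/vmstat.c - drop the static qualifier */
struct workqueue_struct *vmstat_wq;

/* mm/internal.h */
extern struct workqueue_struct *vmstat_wq;

/* mm/page_alloc.c - in drain_all_pages() */
for_each_cpu(cpu, &cpus_with_pcps)
	queue_work_on(cpu, vmstat_wq, &per_cpu(pcpu_drain, cpu));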
-- 
Michal Hocko
SUSE Labs