From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: guro@fb.com, rientjes@google.com, hannes@cmpxchg.org,
vdavydov.dev@gmail.com, tj@kernel.org, linux-mm@kvack.org,
akpm@linux-foundation.org, torvalds@linux-foundation.org
Subject: Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
Date: Fri, 25 May 2018 10:31:18 +0200
Message-ID: <20180525083118.GI11881@dhcp22.suse.cz>
In-Reply-To: <201805250117.w4P1HgdG039943@www262.sakura.ne.jp>

On Fri 25-05-18 10:17:42, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 24-05-18 19:51:24, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > Look. I am fed up with this discussion. You are fiddling with the code
> > > > and moving hacks around with a lot of hand waving, rather than trying
> > > > to look at the underlying problem. Your patch completely ignores PREEMPT
> > > > as I've mentioned in previous versions.
> > >
> > > I'm not ignoring PREEMPT. Fixing this OOM lockup problem properly will
> > > require as much effort as fixing the Spectre/Meltdown problems. This patch
> > > is a mitigation for the regression introduced by fixing CVE-2018-1000200.
> > > Nothing is gained by deferring this patch.
> > >
> > > > I would be OK with removing the sleep from the out_of_memory path based
> > > > on your argumentation that we have a _proper_ synchronization with the
> > > > exit path now.
> > >
> > > Such an attempt should be made in a separate patch.
> > >
> > > You suggested removing this sleep from my patch without realizing that
> > > we need explicit schedule_timeout_*() for PF_WQ_WORKER threads.
> >
> > And that sleep is in should_reclaim_retry. If that is not sufficient, it
> > should be addressed there rather than by spilling more of that crud all
> > over the place.
>
> Then, please show me (by writing a patch yourself) how to tell whether
> we should sleep there. What I can come up with is shown below.
>
> -@@ -4241,6 +4240,12 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> - 	/* Retry as long as the OOM killer is making progress */
> - 	if (did_some_progress) {
> - 		no_progress_loops = 0;
> -+		/*
> -+		 * This schedule_timeout_*() serves as a guaranteed sleep for
> -+		 * PF_WQ_WORKER threads when __zone_watermark_ok() == false.
> -+		 */
> -+		if (!tsk_is_oom_victim(current))
> -+			schedule_timeout_uninterruptible(1);
> - 		goto retry;
> - 	}
> +@@ -3927,6 +3926,14 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
> + 		(*no_progress_loops)++;
> +
> + 	/*
> ++	 * We do a short sleep here if the OOM killer/reaper/victims are
> ++	 * holding oom_lock, in order to try to give them some CPU resources
> ++	 * for releasing memory.
> ++	 */
> ++	if (mutex_is_locked(&oom_lock) && !tsk_is_oom_victim(current))
> ++		schedule_timeout_uninterruptible(1);
> ++
> ++	/*
> + 	 * Make sure we converge to OOM if we cannot make any progress
> + 	 * several times in the row.
> + 	 */
>
> As far as I know, should_reclaim_retry() has no way of telling whether
> the domain the current thread belongs to is already OOM. Therefore,
> sleeping there can become a pointless delay whenever the domain of the
> current thread differs from the domain of the oom_lock owner (which can
> be a random thread inside out_of_memory() or exit_mmap()).
>
> But if you insist on sleeping there, does that mean you don't care about
> such a pointless delay?

What is wrong with the following? should_reclaim_retry should be a
natural reschedule point. PF_WQ_WORKER is a special case which needs a
stronger rescheduling policy. Doing that unconditionally seems more
straightforward than making it depend on a zone being a good candidate
for further reclaim.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c6f4008ea55..b01b19d3d596 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3925,6 +3925,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 {
 	struct zone *zone;
 	struct zoneref *z;
+	bool ret = false;
 
 	/*
 	 * Costly allocations might have made a progress but this doesn't mean
@@ -3988,25 +3989,26 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 			}
 		}
 
-			/*
-			 * Memory allocation/reclaim might be called from a WQ
-			 * context and the current implementation of the WQ
-			 * concurrency control doesn't recognize that
-			 * a particular WQ is congested if the worker thread is
-			 * looping without ever sleeping. Therefore we have to
-			 * do a short sleep here rather than calling
-			 * cond_resched().
-			 */
-			if (current->flags & PF_WQ_WORKER)
-				schedule_timeout_uninterruptible(1);
-			else
-				cond_resched();
-
-			return true;
+			ret = true;
+			goto out;
 		}
 	}
 
-	return false;
+out:
+	/*
+	 * Memory allocation/reclaim might be called from a WQ
+	 * context and the current implementation of the WQ
+	 * concurrency control doesn't recognize that
+	 * a particular WQ is congested if the worker thread is
+	 * looping without ever sleeping. Therefore we have to
+	 * do a short sleep here rather than calling
+	 * cond_resched().
+	 */
+	if (current->flags & PF_WQ_WORKER)
+		schedule_timeout_uninterruptible(1);
+	else
+		cond_resched();
+	return ret;
 }
 
 static inline bool
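
To make the resulting control flow easier to see, here is a sketch of the
tail of should_reclaim_retry() with the hunks above applied (an
illustrative reconstruction, not the literal resulting code; the
available/reclaimable bookkeeping inside the loop is condensed into the
"wmark" check):

	bool ret = false;

	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->high_zoneidx, ac->nodemask) {
		/* watermark accounting elided */
		if (wmark) {
			/* reclaim is still likely to make progress */
			ret = true;
			goto out;
		}
	}
out:
	/*
	 * Both the retry and the failure path funnel through this point
	 * now, so every caller reschedules. WQ concurrency control only
	 * treats a worker as congested when it actually sleeps, hence
	 * the short sleep instead of cond_resched() for WQ workers.
	 */
	if (current->flags & PF_WQ_WORKER)
		schedule_timeout_uninterruptible(1);
	else
		cond_resched();
	return ret;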
--
Michal Hocko
SUSE Labs