linux-mm.kvack.org archive mirror
* [PATCH] mm,oom: do not loop !__GFP_FS allocation if the OOM killer is disabled.
@ 2016-01-11  5:07 Tetsuo Handa
  2016-01-11 15:45 ` Michal Hocko
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Tetsuo Handa @ 2016-01-11  5:07 UTC (permalink / raw)
  To: mhocko, hannes, rientjes; +Cc: linux-mm, Tetsuo Handa

After the OOM killer is disabled during a suspend operation,
any !__GFP_NOFAIL && __GFP_FS allocations are forced to fail.
Thus, any !__GFP_NOFAIL && !__GFP_FS allocations, which would
otherwise keep looping for progress that the disabled OOM killer
can never make, should be forced to fail as well.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c3a5c5..214f824 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2766,7 +2766,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 			 * and the OOM killer can't be invoked, but
 			 * keep looping as per tradition.
 			 */
-			*did_some_progress = 1;
+			*did_some_progress = !oom_killer_disabled;
 			goto out;
 		}
 		if (pm_suspended_storage())
-- 
1.8.3.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

* [PATCH] mm,oom: do not loop !__GFP_FS allocation if the OOM killer is disabled.
@ 2016-01-23 15:38 Tetsuo Handa
  2016-01-25 14:55 ` Michal Hocko
  0 siblings, 1 reply; 12+ messages in thread
From: Tetsuo Handa @ 2016-01-23 15:38 UTC (permalink / raw)
  To: linux-mm; +Cc: Tetsuo Handa, Johannes Weiner

After the OOM killer is disabled during a suspend operation,
any !__GFP_NOFAIL && __GFP_FS allocations are forced to fail.
Thus, any !__GFP_NOFAIL && !__GFP_FS allocations, which would
otherwise keep looping for progress that the disabled OOM killer
can never make, should be forced to fail as well.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Rientjes <rientjes@google.com>
---
 mm/page_alloc.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6463426..2f71caa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2749,8 +2749,12 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 			 * XXX: Page reclaim didn't yield anything,
 			 * and the OOM killer can't be invoked, but
 			 * keep looping as per tradition.
+			 *
+			 * But do not keep looping if oom_killer_disable()
+			 * was already called, for the system is trying to
+			 * enter a quiescent state during suspend.
 			 */
-			*did_some_progress = 1;
+			*did_some_progress = !oom_killer_disabled;
 			goto out;
 		}
 		if (pm_suspended_storage())
-- 
1.8.3.1



end of thread, other threads:[~2016-01-25 14:55 UTC | newest]

Thread overview: 12+ messages
2016-01-11  5:07 [PATCH] mm,oom: do not loop !__GFP_FS allocation if the OOM killer is disabled Tetsuo Handa
2016-01-11 15:45 ` Michal Hocko
2016-01-11 17:00 ` Johannes Weiner
2016-01-11 17:20   ` Michal Hocko
2016-01-11 17:43     ` Johannes Weiner
2016-01-11 17:49       ` Michal Hocko
2016-01-11 21:30         ` Tetsuo Handa
2016-01-11 22:02           ` Johannes Weiner
2016-01-12  8:17             ` Michal Hocko
2016-01-19 23:22 ` David Rientjes
  -- strict thread matches above, loose matches on Subject: below --
2016-01-23 15:38 Tetsuo Handa
2016-01-25 14:55 ` Michal Hocko
