* [RFC 1/3] mm, oom: refactor oom detection
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
2015-12-11 16:16 ` Johannes Weiner
2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
To: linux-mm
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
Michal Hocko
From: Michal Hocko <mhocko@suse.com>
__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent a
premature OOM killer invocation - the LRU might be full of dirty or
writeback pages and direct reclaim cannot clean those up.
zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and
it might lead to a livelock when a single freed page keeps the
allocator looping but the current task is not able to allocate that
single page. The OOM killer would be more appropriate than looping
without any progress for an unbounded amount of time.
This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is too low a level for high level decisions such as
OOM, which is a per-zonelist property. It is __alloc_pages_slowpath
which knows how many attempts have been made and what progress has been
achieved so far, so it is the more appropriate place to implement this
logic.
The new heuristic tries to be more deterministic and easier to follow.
It builds on the assumption that retrying makes sense only if the
currently reclaimable memory + free pages would allow the current
allocation request to succeed (as per __zone_watermark_ok) for at least
one zone in the usable zonelist.
This alone wouldn't be sufficient, though, because writeback might get
stuck and reclaimable pages might be pinned for a really long time or
even depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should
eventually converge to only NR_FREE_PAGES as the target, fail the wmark
check and proceed to OOM. The backoff is simple and linear, discounting
1/16 of the reclaimable pages for each round without any progress. We
are optimistic and reset the counter after successful reclaim rounds.
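For illustration (not part of the patch; the names match the hunk
below): with MAX_STALL_BACKOFF == 16, each no-progress round discounts
another 1/16 of the reclaimable pages, so the per-zone estimate fed to
the watermark check shrinks roughly as
	stall_backoff  0:	target = reclaimable + free
	stall_backoff  8:	target = reclaimable/2 + free
	stall_backoff 16:	target = free only
and once only NR_FREE_PAGES is left the watermark check is bound to
fail and we proceed to OOM.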
Costly high-order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away while those which have the flag
set will back off after the number of reclaimed pages reaches the
equivalent of the requested order. The only difference is that if there
was no progress during the reclaim we rely on the zone watermark check.
This is a more logical thing to do than the previous 1<<order attempts,
which were a result of zone_reclaimable faking the progress.
[rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
[rientjes@google.com: shrink_zones doesn't need to return anything]
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
include/linux/swap.h | 1 +
mm/page_alloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
mm/vmscan.c | 25 ++++----------------
3 files changed, 64 insertions(+), 28 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 457181844b6e..738ae2206635 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
struct vm_area_struct *vma);
/* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask, nodemask_t *mask);
extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..af221067de6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
}
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
struct alloc_context *ac)
@@ -2996,6 +3003,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
enum migrate_mode migration_mode = MIGRATE_ASYNC;
bool deferred_compaction = false;
int contended_compaction = COMPACT_CONTENDED_NONE;
+ struct zone *zone;
+ struct zoneref *z;
+ int stall_backoff = 0;
/*
* In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (gfp_mask & __GFP_NORETRY)
goto noretry;
- /* Keep reclaiming pages as long as there is reasonable progress */
+ /*
+ * Do not retry high order allocations unless they are __GFP_REPEAT
+ * and even then do not retry endlessly unless explicitly told so
+ */
pages_reclaimed += did_some_progress;
- if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
- ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
- /* Wait for some write requests to complete then retry */
- wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
- goto retry;
+ if (order > PAGE_ALLOC_COSTLY_ORDER) {
+ if (!(gfp_mask & __GFP_NOFAIL) &&
+ (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
+ goto noretry;
+
+ if (did_some_progress)
+ goto retry;
+ }
+
+ /*
+ * Be optimistic and consider all pages on reclaimable LRUs as usable
+ * but make sure we converge to OOM if we cannot make any progress after
+ * multiple consecutive failed attempts.
+ */
+ if (did_some_progress)
+ stall_backoff = 0;
+ else
+ stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+ /*
+ * Keep reclaiming pages while there is a chance this will lead somewhere.
+ * If none of the target zones can satisfy our allocation request even
+ * if all reclaimable pages are considered then we are screwed and have
+ * to go OOM.
+ */
+ for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+ unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
+ unsigned long target;
+
+ target = zone_reclaimable_pages(zone);
+ target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
+ target += free;
+
+ /*
+ * Would the allocation succeed if we reclaimed the whole target?
+ */
+ if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+ ac->high_zoneidx, alloc_flags, target)) {
+ /* Wait for some write requests to complete then retry */
+ wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+ goto retry;
+ }
}
/* Reclaim has failed us, start killing things */
@@ -3170,8 +3220,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
goto got_pg;
/* Retry as long as the OOM killer is making progress */
- if (did_some_progress)
+ if (did_some_progress) {
+ stall_backoff = 0;
goto retry;
+ }
noretry:
/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4589cfdbe405..489212252cd6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
}
#endif
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
{
unsigned long nr;
@@ -2516,10 +2516,8 @@ static inline bool compaction_ready(struct zone *zone, int order)
*
* If a zone is deemed to be full of pinned pages then just give it a light
* scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
*/
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
{
struct zoneref *z;
struct zone *zone;
@@ -2527,7 +2525,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
unsigned long nr_soft_scanned;
gfp_t orig_mask;
enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
- bool reclaimable = false;
/*
* If the number of buffer_heads in the machine exceeds the maximum
@@ -2592,17 +2589,10 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
&nr_soft_scanned);
sc->nr_reclaimed += nr_soft_reclaimed;
sc->nr_scanned += nr_soft_scanned;
- if (nr_soft_reclaimed)
- reclaimable = true;
/* need some check for avoid more shrink_zone() */
}
- if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
- reclaimable = true;
-
- if (global_reclaim(sc) &&
- !reclaimable && zone_reclaimable(zone))
- reclaimable = true;
+ shrink_zone(zone, sc, zone_idx(zone));
}
/*
@@ -2610,8 +2600,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
* promoted it to __GFP_HIGHMEM.
*/
sc->gfp_mask = orig_mask;
-
- return reclaimable;
}
/*
@@ -2636,7 +2624,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
int initial_priority = sc->priority;
unsigned long total_scanned = 0;
unsigned long writeback_threshold;
- bool zones_reclaimable;
retry:
delayacct_freepages_start();
@@ -2647,7 +2634,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
sc->priority);
sc->nr_scanned = 0;
- zones_reclaimable = shrink_zones(zonelist, sc);
+ shrink_zones(zonelist, sc);
total_scanned += sc->nr_scanned;
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2694,10 +2681,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
goto retry;
}
- /* Any of the zones still reclaimable? Don't OOM. */
- if (zones_reclaimable)
- return 1;
-
return 0;
}
--
2.6.2
* Re: [RFC 1/3] mm, oom: refactor oom detection
2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-12-11 16:16 ` Johannes Weiner
2015-12-14 18:34 ` Michal Hocko
0 siblings, 1 reply; 11+ messages in thread
From: Johannes Weiner @ 2015-12-11 16:16 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
Michal Hocko
On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> __alloc_pages_slowpath has traditionally relied on the direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress to prevent
> from a premature OOM killer invocation - the LRU might be full of dirty
> or writeback pages and direct reclaim cannot clean those up.
>
> zone_reclaimable allows to rescan the reclaimable lists several
> times and restart if a page is freed. This is really subtle behavior
> and it might lead to a livelock when a single freed page keeps allocator
> looping but the current task will not be able to allocate that single
> page. OOM killer would be more appropriate than looping without any
> progress for unbounded amount of time.
>
> This patch changes OOM detection logic and pulls it out from shrink_zone
> which is too low to be appropriate for any high level decisions such as OOM
> which is per zonelist property. It is __alloc_pages_slowpath which knows
> how many attempts have been done and what was the progress so far
> therefore it is more appropriate to implement this logic.
>
> The new heuristic tries to be more deterministic and easier to follow.
> It builds on an assumption that retrying makes sense only if the
> currently reclaimable memory + free pages would allow the current
> allocation request to succeed (as per __zone_watermark_ok) at least for
> one zone in the usable zonelist.
>
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context. Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress. This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM. The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.
>
> Costly high order pages mostly preserve their semantic and those without
> __GFP_REPEAT fail right away while those which have the flag set will
> back off after the amount of reclaimable pages reaches equivalent of the
> requested order. The only difference is that if there was no progress
> during the reclaim we rely on zone watermark check. This is more logical
> thing to do than previous 1<<order attempts which were a result of
> zone_reclaimable faking the progress.
>
> [rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
> [rientjes@google.com: shrink_zones doesn't need to return anything]
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
This makes sense to me and the patch looks good. Just a few nitpicks.
Could you change the word "refactor" in the title? This is not a
non-functional change.
> @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
> return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
> }
>
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
"stall backoff" is a fairly non-descript and doesn't give a good clue
at what exactly the variable is going to be doing.
How about MAX_DISCOUNT_RECLAIMABLE?
> @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> if (gfp_mask & __GFP_NORETRY)
> goto noretry;
>
> - /* Keep reclaiming pages as long as there is reasonable progress */
> + /*
> + * Do not retry high order allocations unless they are __GFP_REPEAT
> + * and even then do not retry endlessly unless explicitly told so
> + */
> pages_reclaimed += did_some_progress;
> - if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> - ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> - /* Wait for some write requests to complete then retry */
> - wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> - goto retry;
> + if (order > PAGE_ALLOC_COSTLY_ORDER) {
> + if (!(gfp_mask & __GFP_NOFAIL) &&
> + (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> + goto noretry;
> +
> + if (did_some_progress)
> + goto retry;
> + }
I'm a bit bothered by this change as goto noretry is not the inverse
of not doing goto retry: goto noretry jumps over _may_oom().
Of course, _may_oom() would filter a higher-order allocation anyway,
and we could say that it's such a fundamental concept that will never
change in the kernel that it's not a problem to repeat this clause
here. But you could probably say the same thing about not invoking OOM
for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
wary of these things spreading out of _may_oom() again after I just
put effort into consolidating all the OOM clauses in there.
It should be possible to keep the original branch and then nest the
decaying retry logic in there.
> + /*
> + * Be optimistic and consider all pages on reclaimable LRUs as usable
> + * but make sure we converge to OOM if we cannot make any progress after
> + * multiple consecutive failed attempts.
> + */
> + if (did_some_progress)
> + stall_backoff = 0;
> + else
> + stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
The rest of the backoff logic would be nasty to shift out by another
tab, but it could easily live in its own function.
In fact, the longer I think about it, it would probably be better for
__alloc_pages_slowpath anyway as that zonelist walk looks a bit too
low-level and unwieldy for the high-level control flow function.
The outer control flow could look something like this:
/* Do not loop if specifically requested */
if (gfp_mask & __GFP_NORETRY)
goto noretry;
/* Keep reclaiming pages as long as there is reasonable progress */
if (did_some_progress) {
pages_reclaimed += did_some_progress;
no_progress_loops = 0;
} else {
no_progress_loops++;
}
if (should_retry_reclaim(gfp_mask, order, ac, did_some_progress,
no_progress_loops, pages_reclaimed)) {
/* Wait for some write requests to complete then retry */
wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
goto retry;
}
/* Reclaim has failed us, start killing things */
page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
if (page)
goto got_pg;
/* Retry as long as the OOM killer is making progress */
if (did_some_progress) {
no_progress_loops = 0;
goto retry;
}
noretry:
> + /*
> + * Keep reclaiming pages while there is a chance this will lead somewhere.
> + * If none of the target zones can satisfy our allocation request even
> + * if all reclaimable pages are considered then we are screwed and have
> + * to go OOM.
> + */
> + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> + unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> + unsigned long target;
> +
> + target = zone_reclaimable_pages(zone);
> + target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> + target += free;
target is also a little non-descript. Maybe available?
available += zone_reclaimable_pages(zone);
available -= DIV_ROUND_UP(discount_reclaimable * available,
MAX_DISCOUNT_RECLAIMABLE);
available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
But yeah, this is mostly bikeshed territory now.
* Re: [RFC 1/3] mm, oom: refactor oom detection
2015-12-11 16:16 ` Johannes Weiner
@ 2015-12-14 18:34 ` Michal Hocko
0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-12-14 18:34 UTC (permalink / raw)
To: Johannes Weiner
Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki
On Fri 11-12-15 11:16:15, Johannes Weiner wrote:
> On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
[...]
> This makes sense to me and the patch looks good. Just a few nitpicks.
Thanks for the review!
> Could you change the word "refactor" in the title? This is not a
> non-functional change.
Sure. I will go with rework.
> > @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
> > return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
> > }
> >
> > +/*
> > + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> > + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> > + * reclaimable memory.
> > + */
> > +#define MAX_STALL_BACKOFF 16
>
> "stall backoff" is a fairly non-descript and doesn't give a good clue
> at what exactly the variable is going to be doing.
The idea was to reflect that this is a step in which we do a backoff
rather than an absolute amount.
> How about MAX_DISCOUNT_RECLAIMABLE?
this would indicate an absolute value to me. What about MAX_RECLAIM_RETRIES?
> > @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > if (gfp_mask & __GFP_NORETRY)
> > goto noretry;
> >
> > - /* Keep reclaiming pages as long as there is reasonable progress */
> > + /*
> > + * Do not retry high order allocations unless they are __GFP_REPEAT
> > + * and even then do not retry endlessly unless explicitly told so
> > + */
> > pages_reclaimed += did_some_progress;
> > - if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > - ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > - /* Wait for some write requests to complete then retry */
> > - wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > - goto retry;
> > + if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > + if (!(gfp_mask & __GFP_NOFAIL) &&
> > + (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > + goto noretry;
> > +
> > + if (did_some_progress)
> > + goto retry;
> > + }
>
> I'm a bit bothered by this change as goto noretry is not the inverse
> of not doing goto retry: goto noretry jumps over _may_oom().
>
> Of course, _may_oom() would filter a higher-order allocation anyway,
> and we could say that it's such a fundamental concept that will never
> change in the kernel that it's not a problem to repeat this clause
> here. But you could probably say the same thing about not invoking OOM
> for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
> wary of these things spreading out of _may_oom() again after I just
> put effort into consolidating all the OOM clauses in there.
You are right. Our OOM rules are complex already and any partial rules
outside of _may_oom are adding to the confusion even more.
> It should be possible to keep the original branch and then nest the
> decaying retry logic in there.
>
> > + /*
> > + * Be optimistic and consider all pages on reclaimable LRUs as usable
> > + * but make sure we converge to OOM if we cannot make any progress after
> > + * multiple consecutive failed attempts.
> > + */
> > + if (did_some_progress)
> > + stall_backoff = 0;
> > + else
> > + stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
>
> The rest of the backoff logic would be nasty to shift out by another
> tab, but it could easily live in its own function.
Yeah, I'll go that way. It looks much better!
[...]
> > + /*
> > + * Keep reclaiming pages while there is a chance this will lead somewhere.
> > + * If none of the target zones can satisfy our allocation request even
> > + * if all reclaimable pages are considered then we are screwed and have
> > + * to go OOM.
> > + */
> > + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > + unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> > + unsigned long target;
> > +
> > + target = zone_reclaimable_pages(zone);
> > + target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> > + target += free;
>
> target is also a little non-descript. Maybe available?
>
> available += zone_reclaimable_pages(zone);
> available -= DIV_ROUND_UP(discount_reclaimable * available,
> MAX_DISCOUNT_RECLAIMABLE);
> available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
>
> But yeah, this is mostly bikeshed territory now.
Thanks for the feedback. Here is the cumulative diff after all my current
changes.
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..f77e283fb8c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,75 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
}
+/*
+ * Maximum number of reclaim retries without any progress before OOM killer
+ * is consider as the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
+/*
+ * Checks whether it makes sense to retry the reclaim to make a forward progress
+ * for the given allocation request.
+ * The reclaim feedback represented by did_some_progress (any progress during
+ * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
+ * pages) and no_progress_loops (number of reclaim rounds without any progress
+ * in a row) is considered as well as the reclaimable pages on the applicable
+ * zone list (with a backoff mechanism which is a function of no_progress_loops).
+ *
+ * Returns true if a retry is viable or false to enter the oom path.
+ */
+static inline bool
+should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+ struct alloc_context *ac, int alloc_flags,
+ bool did_some_progress, unsigned long pages_reclaimed,
+ int no_progress_loops)
+{
+ struct zone *zone;
+ struct zoneref *z;
+
+ /*
+ * Make sure we converge to OOM if we cannot make any progress
+ * several times in the row.
+ */
+ if (no_progress_loops > MAX_RECLAIM_RETRIES)
+ return false;
+
+ /* Do not retry high order allocations unless they are __GFP_REPEAT */
+ if (order > PAGE_ALLOC_COSTLY_ORDER) {
+ if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+ return false;
+
+ if (did_some_progress)
+ return true;
+ }
+
+ /*
+ * Keep reclaiming pages while there is a chance this will lead somewhere.
+ * If none of the target zones can satisfy our allocation request even
+ * if all reclaimable pages are considered then we are screwed and have
+ * to go OOM.
+ */
+ for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+ unsigned long available;
+
+ available = zone_reclaimable_pages(zone);
+ available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);
+ available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
+
+ /*
+ * Would the allocation succeed if we reclaimed the whole available?
+ */
+ if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+ ac->high_zoneidx, alloc_flags, available)) {
+ /* Wait for some write requests to complete then retry */
+ wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+ return true;
+ }
+ }
+
+ return false;
+}
+
static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
struct alloc_context *ac)
@@ -2996,6 +3065,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
enum migrate_mode migration_mode = MIGRATE_ASYNC;
bool deferred_compaction = false;
int contended_compaction = COMPACT_CONTENDED_NONE;
+ int no_progress_loops = 0;
/*
* In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,23 +3225,28 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (gfp_mask & __GFP_NORETRY)
goto noretry;
- /* Keep reclaiming pages as long as there is reasonable progress */
- pages_reclaimed += did_some_progress;
- if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
- ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
- /* Wait for some write requests to complete then retry */
- wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
- goto retry;
+ if (did_some_progress) {
+ no_progress_loops = 0;
+ pages_reclaimed += did_some_progress;
+ } else {
+ no_progress_loops++;
}
+ if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+ did_some_progress > 0, pages_reclaimed,
+ no_progress_loops))
+ goto retry;
+
/* Reclaim has failed us, start killing things */
page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
if (page)
goto got_pg;
/* Retry as long as the OOM killer is making progress */
- if (did_some_progress)
+ if (did_some_progress) {
+ no_progress_loops = 0;
goto retry;
+ }
noretry:
/*
--
Michal Hocko
SUSE Labs
* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
2015-12-02 7:09 ` Hillf Danton
2015-12-11 16:25 ` Johannes Weiner
2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
2015-12-11 8:42 ` [RFC 0/3] OOM detection rework v3 Michal Hocko
3 siblings, 2 replies; 11+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
To: linux-mm
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
Michal Hocko
From: Michal Hocko <mhocko@suse.com>
wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim, to allow the writeback to make
some progress and to prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not
congested, so this ends up being just a cond_resched and can trigger a
premature OOM.
This patch replaces the unconditional wait_iff_congested with
congestion_wait, which is executed only if we _know_ that the last
round of direct reclaim didn't make any progress and dirty+writeback
pages make up more than half of the reclaimable pages on a zone which
might be usable for our target allocation. This shouldn't reintroduce
the stalls fixed by 0e093d99763e because congestion_wait is called only
when we are getting hopeless and sleeping is a better choice than
OOMing with many pages under IO.
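Expressed with the names used in the hunk below (a condensed
restatement of the added check, not new logic), the allocator now
sleeps in congestion_wait only when
	!did_some_progress && 2 * (writeback + dirty) > reclaimable
for a zone that still passes the watermark check, i.e. only when
reclaim is demonstrably stuck behind pages under IO; otherwise it just
yields the CPU (or does a short sleep when running from a WQ worker)
and retries.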
We have to preserve the logic introduced by "mm, vmstat: allow WQ
concurrency to discover memory reclaim doesn't make any progress" in
__alloc_pages_slowpath now that wait_iff_congested is not used anymore.
As the only remaining user of wait_iff_congested is
shrink_inactive_list, we can remove the WQ specific short sleep from
wait_iff_congested because the sleep needs to be done only once per
allocation retry cycle.
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
mm/backing-dev.c | 19 +++----------------
mm/page_alloc.c | 33 ++++++++++++++++++++++++++++++---
2 files changed, 33 insertions(+), 19 deletions(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 7340353f8aea..d2473ce9cc57 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -957,9 +957,8 @@ EXPORT_SYMBOL(congestion_wait);
* jiffies for either a BDI to exit congestion of the given @sync queue
* or a write to complete.
*
- * In the absence of zone congestion, a short sleep or a cond_resched is
- * performed to yield the processor and to allow other subsystems to make
- * a forward progress.
+ * In the absence of zone congestion, cond_resched() is called to yield
+ * the processor if necessary but otherwise does not sleep.
*
* The return value is 0 if the sleep is for the full timeout. Otherwise,
* it is the number of jiffies that were still remaining when the function
@@ -980,19 +979,7 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout)
if (atomic_read(&nr_wb_congested[sync]) == 0 ||
!test_bit(ZONE_CONGESTED, &zone->flags)) {
- /*
- * Memory allocation/reclaim might be called from a WQ
- * context and the current implementation of the WQ
- * concurrency control doesn't recognize that a particular
- * WQ is congested if the worker thread is looping without
- * ever sleeping. Therefore we have to do a short sleep
- * here rather than calling cond_resched().
- */
- if (current->flags & PF_WQ_WORKER)
- schedule_timeout(1);
- else
- cond_resched();
-
+ cond_resched();
/* In case we scheduled, work out time remaining */
ret = timeout - (jiffies - start);
if (ret < 0)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index af221067de6a..168a675e9116 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3198,8 +3198,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
unsigned long target;
+ unsigned long reclaimable;
- target = zone_reclaimable_pages(zone);
+ reclaimable = target = zone_reclaimable_pages(zone);
target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
target += free;
@@ -3208,8 +3209,34 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
*/
if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
ac->high_zoneidx, alloc_flags, target)) {
- /* Wait for some write requests to complete then retry */
- wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+ unsigned long writeback = zone_page_state_snapshot(zone, NR_WRITEBACK),
+ dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+
+ /*
+ * If we didn't make any progress and have a lot of
+ * dirty + writeback pages then we should wait for
+ * an IO to complete to slow down the reclaim and
+ * prevent from pre mature OOM
+ */
+ if (!did_some_progress && 2*(writeback + dirty) > reclaimable) {
+ congestion_wait(BLK_RW_ASYNC, HZ/10);
+ goto retry;
+ }
+
+ /*
+ * Memory allocation/reclaim might be called from a WQ
+ * context and the current implementation of the WQ
+ * concurrency control doesn't recognize that
+ * a particular WQ is congested if the worker thread is
+ * looping without ever sleeping. Therefore we have to
+ * do a short sleep here rather than calling
+ * cond_resched().
+ */
+ if (current->flags & PF_WQ_WORKER)
+ schedule_timeout(1);
+ else
+ cond_resched();
+
goto retry;
}
}
--
2.6.2
* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-12-02 7:09 ` Hillf Danton
2015-12-11 16:25 ` Johannes Weiner
1 sibling, 0 replies; 11+ messages in thread
From: Hillf Danton @ 2015-12-02 7:09 UTC (permalink / raw)
To: 'Michal Hocko', linux-mm
Cc: 'Andrew Morton', 'Linus Torvalds',
'Mel Gorman', 'Johannes Weiner',
'David Rientjes', 'Tetsuo Handa',
'KAMEZAWA Hiroyuki', 'Michal Hocko'
> From: Michal Hocko <mhocko@suse.com>
>
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
>
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless when sleeping is a better choice than OOM with many
> pages under IO.
>
> We have to preserve logic introduced by "mm, vmstat: allow WQ concurrency
> to discover memory reclaim doesn't make any progress" into the
> __alloc_pages_slowpath now that wait_iff_congested is not used anymore.
> As the only remaining user of wait_iff_congested is shrink_inactive_list
> we can remove the WQ specific short sleep from wait_iff_congested
> because the sleep is needed to be done only once in the allocation retry
> cycle.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> mm/backing-dev.c | 19 +++----------------
> mm/page_alloc.c | 33 ++++++++++++++++++++++++++++++---
> 2 files changed, 33 insertions(+), 19 deletions(-)
>
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 7340353f8aea..d2473ce9cc57 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -957,9 +957,8 @@ EXPORT_SYMBOL(congestion_wait);
> * jiffies for either a BDI to exit congestion of the given @sync queue
> * or a write to complete.
> *
> - * In the absence of zone congestion, a short sleep or a cond_resched is
> - * performed to yield the processor and to allow other subsystems to make
> - * a forward progress.
> + * In the absence of zone congestion, cond_resched() is called to yield
> + * the processor if necessary but otherwise does not sleep.
> *
> * The return value is 0 if the sleep is for the full timeout. Otherwise,
> * it is the number of jiffies that were still remaining when the function
> @@ -980,19 +979,7 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout)
> if (atomic_read(&nr_wb_congested[sync]) == 0 ||
> !test_bit(ZONE_CONGESTED, &zone->flags)) {
>
> - /*
> - * Memory allocation/reclaim might be called from a WQ
> - * context and the current implementation of the WQ
> - * concurrency control doesn't recognize that a particular
> - * WQ is congested if the worker thread is looping without
> - * ever sleeping. Therefore we have to do a short sleep
> - * here rather than calling cond_resched().
> - */
> - if (current->flags & PF_WQ_WORKER)
> - schedule_timeout(1);
> - else
> - cond_resched();
> -
> + cond_resched();
> /* In case we scheduled, work out time remaining */
> ret = timeout - (jiffies - start);
> if (ret < 0)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index af221067de6a..168a675e9116 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3198,8 +3198,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> unsigned long target;
> + unsigned long reclaimable;
>
> - target = zone_reclaimable_pages(zone);
> + reclaimable = target = zone_reclaimable_pages(zone);
> target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> target += free;
>
> @@ -3208,8 +3209,34 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> */
> if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> ac->high_zoneidx, alloc_flags, target)) {
> - /* Wait for some write requests to complete then retry */
> - wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> + unsigned long writeback = zone_page_state_snapshot(zone, NR_WRITEBACK),
> + dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
> +
> + /*
> + * If we didn't make any progress and have a lot of
> + * dirty + writeback pages then we should wait for
> + * an IO to complete to slow down the reclaim and
> + * prevent from pre mature OOM
> + */
> + if (!did_some_progress && 2*(writeback + dirty) > reclaimable) {
> + congestion_wait(BLK_RW_ASYNC, HZ/10);
> + goto retry;
> + }
> +
> + /*
> + * Memory allocation/reclaim might be called from a WQ
> + * context and the current implementation of the WQ
> + * concurrency control doesn't recognize that
> + * a particular WQ is congested if the worker thread is
> + * looping without ever sleeping. Therefore we have to
> + * do a short sleep here rather than calling
> + * cond_resched().
> + */
> + if (current->flags & PF_WQ_WORKER)
> + schedule_timeout(1);
> + else
> + cond_resched();
> +
> goto retry;
> }
> }
> --
> 2.6.2
* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
2015-12-02 7:09 ` Hillf Danton
@ 2015-12-11 16:25 ` Johannes Weiner
1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2015-12-11 16:25 UTC (permalink / raw)
To: Michal Hocko
Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
Michal Hocko
On Tue, Dec 01, 2015 at 01:56:46PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
>
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless when sleeping is a better choice than OOM with many
> pages under IO.
>
> We have to preserve logic introduced by "mm, vmstat: allow WQ concurrency
> to discover memory reclaim doesn't make any progress" into the
> __alloc_pages_slowpath now that wait_iff_congested is not used anymore.
> As the only remaining user of wait_iff_congested is shrink_inactive_list
> we can remove the WQ specific short sleep from wait_iff_congested
> because the sleep is needed to be done only once in the allocation retry
> cycle.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
Yep, this looks like the right thing to do. However, the code it adds
to __alloc_pages_slowpath() is putting even more weight behind the
argument that the reclaim retry logic should be in its own function.
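For illustration, one possible shape of such a helper (an editorial
sketch only, combining the should_reclaim_retry() idea from the patch 1
discussion with the throttling added here; this is not code posted in
this thread and it leaves out the costly-order handling):

static bool should_reclaim_retry(gfp_t gfp_mask, unsigned int order,
				 struct alloc_context *ac, int alloc_flags,
				 bool did_some_progress, int no_progress_loops)
{
	struct zoneref *z;
	struct zone *zone;

	if (no_progress_loops > MAX_RECLAIM_RETRIES)
		return false;

	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->high_zoneidx, ac->nodemask) {
		unsigned long reclaimable = zone_reclaimable_pages(zone);
		unsigned long available = reclaimable;

		/* back off the optimistic estimate after each failed round */
		available -= DIV_ROUND_UP(no_progress_loops * available,
					  MAX_RECLAIM_RETRIES);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

		if (!__zone_watermark_ok(zone, order, min_wmark_pages(zone),
					 ac->high_zoneidx, alloc_flags,
					 available))
			continue;

		/* throttle on IO only when stuck behind dirty/writeback pages */
		if (!did_some_progress &&
		    2 * (zone_page_state_snapshot(zone, NR_WRITEBACK) +
			 zone_page_state_snapshot(zone, NR_FILE_DIRTY)) > reclaimable)
			congestion_wait(BLK_RW_ASYNC, HZ/10);
		else if (current->flags & PF_WQ_WORKER)
			schedule_timeout(1);	/* WQ concurrency workaround */
		else
			cond_resched();

		return true;
	}

	return false;
}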
* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
2015-12-02 7:07 ` Hillf Danton
2015-12-11 8:42 ` [RFC 0/3] OOM detection rework v3 Michal Hocko
3 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
To: linux-mm
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
Michal Hocko
From: Michal Hocko <mhocko@suse.com>
__alloc_pages_slowpath retries costly allocations until at least an
order worth of pages has been reclaimed, or until the watermark check
for at least one zone would succeed after reclaiming all reclaimable
pages, if the reclaim hasn't made any progress.
The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of the sufficient order. Lumpy reclaim has
been removed quite some time ago so the assumption doesn't hold
anymore. It would be more appropriate to check the compaction progress
instead but this patch simply removes the check and relies solely on
the watermark check.
To prevent too many retries, the stall_backoff is not reset after a
reclaim which made progress because we cannot assume it helped the high
order situation.
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 168a675e9116..45de14cd62f4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
struct page *page = NULL;
int alloc_flags;
- unsigned long pages_reclaimed = 0;
unsigned long did_some_progress;
enum migrate_mode migration_mode = MIGRATE_ASYNC;
bool deferred_compaction = false;
@@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
/*
* Do not retry high order allocations unless they are __GFP_REPEAT
- * and even then do not retry endlessly unless explicitly told so
+ * unless explicitly told so.
*/
- pages_reclaimed += did_some_progress;
- if (order > PAGE_ALLOC_COSTLY_ORDER) {
- if (!(gfp_mask & __GFP_NOFAIL) &&
- (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
- goto noretry;
-
- if (did_some_progress)
- goto retry;
- }
+ if (order > PAGE_ALLOC_COSTLY_ORDER &&
+ !(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
+ goto noretry;
/*
* Be optimistic and consider all pages on reclaimable LRUs as usable
* but make sure we converge to OOM if we cannot make any progress after
* multiple consecutive failed attempts.
+ * Costly __GFP_REPEAT allocations might have made a progress but this
+ * doesn't mean their order will become available due to high fragmentation
+ * so do not reset the backoff for them
*/
- if (did_some_progress)
+ if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
stall_backoff = 0;
else
stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
--
2.6.2
* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-12-02 7:07 ` Hillf Danton
2015-12-02 8:52 ` Michal Hocko
0 siblings, 1 reply; 11+ messages in thread
From: Hillf Danton @ 2015-12-02 7:07 UTC (permalink / raw)
To: 'Michal Hocko', linux-mm
Cc: 'Andrew Morton', 'Linus Torvalds',
'Mel Gorman', 'Johannes Weiner',
'David Rientjes', 'Tetsuo Handa',
'KAMEZAWA Hiroyuki', 'Michal Hocko'
> From: Michal Hocko <mhocko@suse.com>
>
> __alloc_pages_slowpath retries costly allocations until at least
> order worth of pages were reclaimed or the watermark check for at least
> one zone would succeed after all reclaiming all pages if the reclaim
> hasn't made any progress.
>
> The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> retry of costly-order allocations) and it assumed that lumpy reclaim
> could have created a page of the sufficient order. Lumpy reclaim,
> has been removed quite some time ago so the assumption doesn't hold
> anymore. It would be more appropriate to check the compaction progress
> instead but this patch simply removes the check and relies solely
> on the watermark check.
>
> To prevent from too many retries the stall_backoff is not reseted after
> a reclaim which made progress because we cannot assume it helped high
> order situation.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/page_alloc.c | 20 ++++++++------------
> 1 file changed, 8 insertions(+), 12 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 168a675e9116..45de14cd62f4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> struct page *page = NULL;
> int alloc_flags;
> - unsigned long pages_reclaimed = 0;
> unsigned long did_some_progress;
> enum migrate_mode migration_mode = MIGRATE_ASYNC;
> bool deferred_compaction = false;
> @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>
> /*
> * Do not retry high order allocations unless they are __GFP_REPEAT
> - * and even then do not retry endlessly unless explicitly told so
> + * unless explicitly told so.
s/unless/or/
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> */
> - pages_reclaimed += did_some_progress;
> - if (order > PAGE_ALLOC_COSTLY_ORDER) {
> - if (!(gfp_mask & __GFP_NOFAIL) &&
> - (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> - goto noretry;
> -
> - if (did_some_progress)
> - goto retry;
> - }
> + if (order > PAGE_ALLOC_COSTLY_ORDER &&
> + !(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> + goto noretry;
>
> /*
> * Be optimistic and consider all pages on reclaimable LRUs as usable
> * but make sure we converge to OOM if we cannot make any progress after
> * multiple consecutive failed attempts.
> + * Costly __GFP_REPEAT allocations might have made a progress but this
> + * doesn't mean their order will become available due to high fragmentation
> + * so do not reset the backoff for them
> */
> - if (did_some_progress)
> + if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> stall_backoff = 0;
> else
> stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> --
> 2.6.2
* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
2015-12-02 7:07 ` Hillf Danton
@ 2015-12-02 8:52 ` Michal Hocko
0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-12-02 8:52 UTC (permalink / raw)
To: Hillf Danton
Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
'Mel Gorman', 'Johannes Weiner',
'David Rientjes', 'Tetsuo Handa',
'KAMEZAWA Hiroyuki'
On Wed 02-12-15 15:07:26, Hillf Danton wrote:
> > From: Michal Hocko <mhocko@suse.com>
> >
> > __alloc_pages_slowpath retries costly allocations until at least
> > order worth of pages were reclaimed or the watermark check for at least
> > one zone would succeed after all reclaiming all pages if the reclaim
> > hasn't made any progress.
> >
> > The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> > retry of costly-order allocations) and it assumed that lumpy reclaim
> > could have created a page of the sufficient order. Lumpy reclaim,
> > has been removed quite some time ago so the assumption doesn't hold
> > anymore. It would be more appropriate to check the compaction progress
> > instead but this patch simply removes the check and relies solely
> > on the watermark check.
> >
> > To prevent from too many retries the stall_backoff is not reseted after
> > a reclaim which made progress because we cannot assume it helped high
> > order situation.
> >
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> > mm/page_alloc.c | 20 ++++++++------------
> > 1 file changed, 8 insertions(+), 12 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 168a675e9116..45de14cd62f4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> > struct page *page = NULL;
> > int alloc_flags;
> > - unsigned long pages_reclaimed = 0;
> > unsigned long did_some_progress;
> > enum migrate_mode migration_mode = MIGRATE_ASYNC;
> > bool deferred_compaction = false;
> > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >
> > /*
> > * Do not retry high order allocations unless they are __GFP_REPEAT
> > - * and even then do not retry endlessly unless explicitly told so
> > + * unless explicitly told so.
>
> s/unless/or/
Fixed
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Thanks!
>
> > */
> > - pages_reclaimed += did_some_progress;
> > - if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > - if (!(gfp_mask & __GFP_NOFAIL) &&
> > - (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > - goto noretry;
> > -
> > - if (did_some_progress)
> > - goto retry;
> > - }
> > + if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > + !(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > + goto noretry;
> >
> > /*
> > * Be optimistic and consider all pages on reclaimable LRUs as usable
> > * but make sure we converge to OOM if we cannot make any progress after
> > * multiple consecutive failed attempts.
> > + * Costly __GFP_REPEAT allocations might have made a progress but this
> > + * doesn't mean their order will become available due to high fragmentation
> > + * so do not reset the backoff for them
> > */
> > - if (did_some_progress)
> > + if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> > stall_backoff = 0;
> > else
> > stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > --
> > 2.6.2
>
--
Michal Hocko
SUSE Labs
* Re: [RFC 0/3] OOM detection rework v3
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
` (2 preceding siblings ...)
2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-12-11 8:42 ` Michal Hocko
3 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2015-12-11 8:42 UTC (permalink / raw)
To: linux-mm
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki
Hi,
are there any fundamental objections to the new approach? I am very
well aware that this is not a small change and it will take some time
to settle, but can we move on and get this into the mmotm tree (and
linux-next) so that it gets wider test coverage? I do not think this is
material for the next merge window. Maybe 4.6?
What do you think?
--
Michal Hocko
SUSE Labs