From: Andrew Morton <akpm@linux-foundation.org>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Linux Kernel List <linux-kernel@vger.kernel.org>,
linux-mm@kvack.org, Rik van Riel <riel@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Minchan Kim <minchan.kim@gmail.com>,
Christoph Lameter <cl@linux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Dave Chinner <david@fromorbit.com>,
Wu Fengguang <fengguang.wu@intel.com>,
David Rientjes <rientjes@google.com>
Subject: Re: [PATCH 3/3] mm: page allocator: Drain per-cpu lists after direct reclaim allocation fails
Date: Fri, 3 Sep 2010 16:00:26 -0700
Message-ID: <20100903160026.564fdcc9.akpm@linux-foundation.org>
In-Reply-To: <1283504926-2120-4-git-send-email-mel@csn.ul.ie>
On Fri, 3 Sep 2010 10:08:46 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> When under significant memory pressure, a process enters direct reclaim
> and immediately afterwards tries to allocate a page. If it fails and no
> further progress is made, it's possible the system will go OOM. However,
> on systems with large amounts of memory, it's possible that a significant
> number of pages are on per-cpu lists and inaccessible to the calling
> process. This leads to a process entering direct reclaim more often than
> it should, increasing the pressure on the system and compounding the problem.
>
> This patch notes that if direct reclaim is making progress but
> allocations are still failing, then the system is already under heavy
> pressure. In this case, it drains the per-cpu lists and tries the
> allocation a second time before continuing.
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
> Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Reviewed-by: Christoph Lameter <cl@linux.com>
> ---
> mm/page_alloc.c | 20 ++++++++++++++++----
> 1 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bbaa959..750e1dc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1847,6 +1847,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> struct page *page = NULL;
> struct reclaim_state reclaim_state;
> struct task_struct *p = current;
> + bool drained = false;
>
> cond_resched();
>
> @@ -1865,14 +1866,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>
> cond_resched();
>
> - if (order != 0)
> - drain_all_pages();
> + if (unlikely(!(*did_some_progress)))
> + return NULL;
>
> - if (likely(*did_some_progress))
> - page = get_page_from_freelist(gfp_mask, nodemask, order,
> +retry:
> + page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> alloc_flags, preferred_zone,
> migratetype);
> +
> + /*
> + * If an allocation failed after direct reclaim, it could be because
> + * pages are pinned on the per-cpu lists. Drain them and try again
> + */
> + if (!page && !drained) {
> + drain_all_pages();
> + drained = true;
> + goto retry;
> + }
> +
> return page;
> }
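
[For context: the per-cpu lists the changelog refers to are small per-CPU
caches of free pages (struct per_cpu_pages) that let order-0 allocations
avoid taking the zone lock. A sketch of the layout in kernels of this
vintage, from memory rather than the exact 2.6.36 source:

	struct per_cpu_pages {
		int count;		/* number of pages on the lists */
		int high;		/* drain back to buddy above this */
		int batch;		/* chunk size for buddy add/remove */

		/* one list per migrate type kept on the pcp lists */
		struct list_head lists[MIGRATE_PCPTYPES];
	};

A task allocating on one CPU cannot take pages cached on another CPU's
lists, so on a machine with many CPUs a significant number of free pages
can be stranded where the allocator cannot reach them.]
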
The patch looks reasonable.
But please take a look at the recent thread "mm: minute-long livelocks
in memory reclaim". There, people are pointing fingers at that
drain_all_pages() call, suspecting that it's causing huge IPI storms.
Dave was going to test this theory but afaik hasn't yet done so. It
would be nice to tie these threads together if poss?
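
[For context on the IPI concern: drain_all_pages() returns every CPU's
cached pages to the buddy allocator by running the drain function on each
CPU. From memory, the implementation at the time was approximately:

	/* spill one CPU's per-cpu pages back into the buddy allocator */
	void drain_local_pages(void *arg)
	{
		drain_pages(smp_processor_id());
	}

	void drain_all_pages(void)
	{
		/* on_each_cpu() sends an IPI to every other online CPU
		 * and waits for the handlers to complete */
		on_each_cpu(drain_local_pages, NULL, 1);
	}

Each call therefore costs one IPI per online CPU plus the wait for
completion, so many tasks hitting the post-reclaim retry path at once on
a large machine could plausibly produce the storms described in that
thread.]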