linux-mm.kvack.org archive mirror
From: Wu Fengguang <fengguang.wu@intel.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Minchan Kim <minchan.kim@gmail.com>,
	Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH 4/7] vmscan: narrowing synchrounous lumply reclaim condition
Date: Tue, 2 Nov 2010 10:04:32 +0800	[thread overview]
Message-ID: <20101102020432.GA4829@localhost> (raw)
In-Reply-To: <20101028102048.GD4896@csn.ul.ie>

> To make compaction a full replacement for lumpy, reclaim would have to
> know how to reclaim order-9 worth of pages and then compact properly.
> It's not setup for this and a naive algorithm would spend a lot of time
> in the compaction scanning code (which is pretty inefficient). A possible
> alternative would be to lumpy-compact i.e. select a page from the LRU and
> move all pages around it elsewhere. Again, this is not what we are currently
> doing but it's a direction that could be taken.

Agreed. The more lumpy reclaim we do, the more young pages are wrongly
evicted. THP could trigger lumpy reclaim heavily, which is why Andrea
needs to disable it. Lumpy migration looks much better. Compaction
looks like a pre-emptive/batched form of lumpy migration. We may also
do on-demand lumpy migration in the future.
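
To illustrate, here is a minimal sketch (not from any posted patch; the
function name is made up) of the neighbor-block walk that lumpy reclaim
already does and that a lumpy migration would reuse: take the
order-aligned pfn block around the target page and visit its neighbors.

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Illustrative only: walk the order-aligned block around @page.
 * Lumpy reclaim evicts these neighbors; lumpy migration would
 * move them elsewhere instead.
 */
static void walk_block_neighbors(struct page *page, int order)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned long start_pfn = pfn & ~((1UL << order) - 1);
	unsigned long end_pfn = start_pfn + (1UL << order);
	unsigned long cur;

	for (cur = start_pfn; cur < end_pfn; cur++) {
		if (!pfn_valid(cur))
			continue;
		/* isolate and migrate pfn_to_page(cur) here */
	}
}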

> +static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc,
> +                                bool sync)
> +{
> +     enum lumpy_mode mode = sync ? LUMPY_MODE_SYNC : LUMPY_MODE_ASYNC;
> +
> +     /*
> +      * Some reclaim attempts have already failed. It is not worth trying
> +      * synchronous lumpy reclaim.
> +      */
> +     if (sync && sc->lumpy_reclaim_mode == LUMPY_MODE_NONE)
> +             return;
> +
> +     /*
> +      * If we need a large contiguous chunk of memory, or have
> +      * trouble getting a small set of contiguous pages, we
> +      * will reclaim both active and inactive pages.
> +      */
> +     if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
> +             sc->lumpy_reclaim_mode = mode;
> +     else if (sc->order && priority < DEF_PRIORITY - 2)
> +             sc->lumpy_reclaim_mode = mode;
> +     else
> +             sc->lumpy_reclaim_mode = LUMPY_MODE_NONE;
> +}

Andrea, I don't see a conflict in doing lumpy reclaim improvements in
parallel with compaction and THP. If lumpy reclaim hurts THP, can't it
be trivially disabled in your tree for huge-page-order allocations?

+       if (sc->order >= 9)
+               sc->lumpy_reclaim_mode = LUMPY_MODE_NONE;
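
For instance (a sketch only, assuming the tree defines HPAGE_PMD_ORDER
as in the THP patches; not taken from any posted patch), the check
could sit at the top of set_lumpy_reclaim_mode():

static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc,
				   bool sync)
{
	/* sketch: never use lumpy reclaim for huge page order allocations */
	if (sc->order >= HPAGE_PMD_ORDER) {
		sc->lumpy_reclaim_mode = LUMPY_MODE_NONE;
		return;
	}
	/* ... mode selection as quoted above ... */
}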

Thanks,
Fengguang



Thread overview: 33+ messages
2010-08-05  6:11 [RFC][PATCH 0/7] low latency synchrounous lumpy reclaim KOSAKI Motohiro
2010-08-05  6:12 ` [PATCH 1/7] vmscan: raise the bar to PAGEOUT_IO_SYNC stalls KOSAKI Motohiro
2010-08-05 15:02   ` Mel Gorman
2010-08-08  6:42     ` KOSAKI Motohiro
2010-08-05 15:19   ` Rik van Riel
2010-08-05  6:13 ` [PATCH 2/7] vmscan: synchronous lumpy reclaim don't call congestion_wait() KOSAKI Motohiro
2010-08-05 13:55   ` Minchan Kim
2010-08-05 15:05   ` Rik van Riel
2010-08-05 15:06   ` Mel Gorman
2010-08-05  6:13 ` [PATCH 3/7] vmscan: synchrounous lumpy reclaim use lock_page() instead trylock_page() KOSAKI Motohiro
2010-08-05 14:17   ` Minchan Kim
2010-08-06  0:52     ` Minchan Kim
2010-08-05 15:12   ` Mel Gorman
2010-08-05 15:26   ` Rik van Riel
2010-08-05  6:14 ` [PATCH 4/7] vmscan: narrowing synchrounous lumply reclaim condition KOSAKI Motohiro
2010-08-05 14:59   ` Minchan Kim
2010-10-27 16:41   ` Andrea Arcangeli
2010-10-27 17:16     ` Mel Gorman
2010-10-27 18:03       ` Andrea Arcangeli
2010-10-28  8:00         ` KOSAKI Motohiro
2010-10-28 15:12           ` Andrea Arcangeli
2010-10-29  2:23             ` KOSAKI Motohiro
2010-10-28 10:20         ` Mel Gorman
2010-11-02  2:04           ` Wu Fengguang [this message]
2010-10-28  2:31     ` Ed Tomlinson
2010-10-28 15:22       ` Andrea Arcangeli
2010-08-05  6:14 ` [PATCH 5/7] vmscan: kill dead code in shrink_inactive_list() KOSAKI Motohiro
2010-08-05 15:08   ` Minchan Kim
2010-08-05 15:14   ` Mel Gorman
2010-08-05  6:15 ` [PATCH 6/7] vmscan: remove PF_SWAPWRITE from __zone_reclaim() KOSAKI Motohiro
2010-08-05  6:16 ` [PATCH 7/7] vmscan: isolated_lru_pages() stop neighbor search if neighbor can't be isolated KOSAKI Motohiro
2010-08-05 15:25   ` Mel Gorman
2010-08-05 15:40   ` Minchan Kim
