From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg KH <gregkh@linuxfoundation.org>,
torvalds@linux-foundation.org, akpm@linux-foundation.org,
alan@lxorguk.ukuu.org.uk, Mel Gorman <mgorman@suse.de>,
Rik van Riel <riel@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Minchan Kim <minchan@kernel.org>, Dave Jones <davej@redhat.com>,
Jan Kara <jack@suse.cz>, Andy Isaacson <adi@hexapodia.org>,
Nai Xia <nai.xia@gmail.com>, Johannes Weiner <jweiner@redhat.com>
Subject: [ 25/41] mm: compaction: make isolate_lru_page() filter-aware again
Date: Mon, 30 Jul 2012 10:31:24 -0700
Message-ID: <20120730172903.282607750@linuxfoundation.org>
In-Reply-To: <20120730172901.306897424@linuxfoundation.org>
From: Greg KH <gregkh@linuxfoundation.org>
3.0-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman <mgorman@suse.de>
commit c82449352854ff09e43062246af86bdeb628f0c3 upstream.
Stable note: Not tracked in Bugzilla. A fix aimed at preserving page aging
information by reducing LRU list churning had the side-effect of
reducing THP allocation success rates. This was part of a series
to restore the success rates while preserving the reclaim fix.
Commit 39deaf85 ("mm: compaction: make isolate_lru_page() filter-aware")
noted that compaction does not migrate dirty or writeback pages and that
it was meaningless to pick the page and re-add it to the LRU list. This
had to be partially reverted because some dirty pages can be migrated by
compaction without blocking.
This patch updates "mm: compaction: make isolate_lru_page() filter-aware" by skipping
over pages that migration has no possibility of migrating to minimise LRU
disruption.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: Dave Jones <davej@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andy Isaacson <adi@hexapodia.org>
Cc: Nai Xia <nai.xia@gmail.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
include/linux/mmzone.h | 2 ++
mm/compaction.c | 3 +++
mm/vmscan.c | 35 +++++++++++++++++++++++++++++++++--
3 files changed, 38 insertions(+), 2 deletions(-)
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -166,6 +166,8 @@ static inline int is_unevictable_lru(enu
#define ISOLATE_CLEAN ((__force isolate_mode_t)0x4)
/* Isolate unmapped file */
#define ISOLATE_UNMAPPED ((__force isolate_mode_t)0x8)
+/* Isolate for asynchronous migration */
+#define ISOLATE_ASYNC_MIGRATE ((__force isolate_mode_t)0x10)
/* LRU Isolation modes. */
typedef unsigned __bitwise__ isolate_mode_t;
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -371,6 +371,9 @@ static isolate_migrate_t isolate_migrate
continue;
}
+ if (!cc->sync)
+ mode |= ISOLATE_ASYNC_MIGRATE;
+
/* Try isolate the page */
if (__isolate_lru_page(page, mode, 0) != 0)
continue;
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1045,8 +1045,39 @@ int __isolate_lru_page(struct page *page
ret = -EBUSY;
- if ((mode & ISOLATE_CLEAN) && (PageDirty(page) || PageWriteback(page)))
- return ret;
+ /*
+ * To minimise LRU disruption, the caller can indicate that it only
+ * wants to isolate pages it will be able to operate on without
+ * blocking - clean pages for the most part.
+ *
+ * ISOLATE_CLEAN means that only clean pages should be isolated. This
+ * is used by reclaim when it cannot write to backing storage
+ *
+ * ISOLATE_ASYNC_MIGRATE is used to indicate that it only wants pages
+ * that it is possible to migrate without blocking
+ */
+ if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
+ /* All the caller can do on PageWriteback is block */
+ if (PageWriteback(page))
+ return ret;
+
+ if (PageDirty(page)) {
+ struct address_space *mapping;
+
+ /* ISOLATE_CLEAN means only clean pages */
+ if (mode & ISOLATE_CLEAN)
+ return ret;
+
+ /*
+ * Only pages without mappings or that have a
+ * ->migratepage callback are possible to migrate
+ * without blocking
+ */
+ mapping = page_mapping(page);
+ if (mapping && !mapping->a_ops->migratepage)
+ return ret;
+ }
+ }
if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
return ret;