From: Mel Gorman <mgorman@suse.de>
To: Stable <stable@vger.kernel.org>
Cc: "Linux-MM <linux-mm"@kvack.org,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 07/34] vmscan: shrinker->nr updates race and go wrong
Date: Thu, 19 Jul 2012 15:36:17 +0100
Message-ID: <1342708604-26540-8-git-send-email-mgorman@suse.de>
In-Reply-To: <1342708604-26540-1-git-send-email-mgorman@suse.de>
From: Dave Chinner <dchinner@redhat.com>
commit acf92b485cccf028177f46918e045c0c4e80ee10 upstream.
Stable note: Not tracked in Bugzilla. This patch reduces excessive
reclaim of slab objects, reducing the amount of information
that has to be brought back in from disk.
shrink_slab() allows shrinkers to be called in parallel so the
struct shrinker can be updated concurrently. It does not provide any
exclusion for such updates, so we can get the shrinker->nr value
increasing or decreasing incorrectly.
As a result, when a shrinker repeatedly returns a value of -1 (e.g.
a VFS shrinker called w/ GFP_NOFS), the shrinker->nr goes haywire,
sometimes updating with the scan count that wasn't used, sometimes
losing it altogether. Worse is when a shrinker does work and that
update is lost due to racy updates, which means the shrinker will do
the work again!
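To make the lost-update failure mode concrete, here is a stand-alone
user-space sketch (purely illustrative, not part of the patch). The
fake_nr variable, thread count and iteration count are invented for the
demo; a plain += stands in for the unsynchronised shrinker->nr updates:

	/* race_sketch.c -- hypothetical demo of lost updates, not kernel code */
	#include <pthread.h>
	#include <stdio.h>

	/* volatile keeps the compiler from collapsing the loop, so every
	 * iteration really is a separate load/add/store. */
	static volatile long fake_nr;	/* stands in for shrinker->nr */

	static void *bump(void *arg)
	{
		for (int i = 0; i < 1000000; i++)
			fake_nr += 1;	/* read-modify-write with no exclusion */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, bump, NULL);
		pthread_create(&b, NULL, bump, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* normally well short of the expected 2000000 */
		printf("fake_nr = %ld\n", fake_nr);
		return 0;
	}

Built with something like cc -O2 -pthread race_sketch.c, the final count
usually falls well short of 2000000 because the two threads keep
overwriting each other's read-modify-write, which is exactly how deferred
shrinker work gets lost or double-counted.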
Fix this by making the total_scan calculations independent of
shrinker->nr, and making the shrinker->nr updates atomic w.r.t.
other updates via cmpxchg loops.
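For comparison, a minimal user-space sketch of the fixed pattern (again
illustrative only, not part of the patch). GCC's
__sync_val_compare_and_swap() stands in for the kernel's cmpxchg(), and
fake_shrinker is a made-up stand-in for struct shrinker:

	/* cmpxchg_sketch.c -- hypothetical user-space analogue, not kernel code */
	#include <stdio.h>

	struct fake_shrinker {
		long nr;	/* deferred scan count, updated locklessly */
	};

	/*
	 * Atomically take the whole deferred count and leave zero behind so
	 * that concurrent callers do not pick up the same work.
	 */
	static long take_deferred(struct fake_shrinker *s)
	{
		long nr;

		do {
			nr = s->nr;
		} while (__sync_val_compare_and_swap(&s->nr, nr, 0) != nr);

		return nr;
	}

	/* Atomically hand back whatever part of the scan count went unused. */
	static void return_unused(struct fake_shrinker *s, long unused)
	{
		long nr, new_nr;

		if (unused <= 0)
			return;

		do {
			nr = s->nr;
			new_nr = nr + unused;
		} while (__sync_val_compare_and_swap(&s->nr, nr, new_nr) != nr);
	}

	int main(void)
	{
		struct fake_shrinker s = { .nr = 128 };
		long total_scan = take_deferred(&s);	/* s.nr is now 0 */

		/* pretend only 100 of the 128 deferred objects were scanned */
		return_unused(&s, total_scan - 100);
		printf("deferred count after partial scan: %ld\n", s.nr);
		return 0;
	}

Because each loop retries until the compare-and-swap observes the value
it read, a concurrent update is never silently overwritten; it only
forces another pass around the loop, which is the property the hunks
below rely on.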
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
mm/vmscan.c | 45 ++++++++++++++++++++++++++++++++-------------
1 file changed, 32 insertions(+), 13 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d875058..31b551e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -251,17 +251,29 @@ unsigned long shrink_slab(struct shrink_control *shrink,
unsigned long total_scan;
unsigned long max_pass;
int shrink_ret = 0;
+ long nr;
+ long new_nr;
+ /*
+ * copy the current shrinker scan count into a local variable
+ * and zero it so that other concurrent shrinker invocations
+ * don't also do this scanning work.
+ */
+ do {
+ nr = shrinker->nr;
+ } while (cmpxchg(&shrinker->nr, nr, 0) != nr);
+
+ total_scan = nr;
max_pass = do_shrinker_shrink(shrinker, shrink, 0);
delta = (4 * nr_pages_scanned) / shrinker->seeks;
delta *= max_pass;
do_div(delta, lru_pages + 1);
- shrinker->nr += delta;
- if (shrinker->nr < 0) {
+ total_scan += delta;
+ if (total_scan < 0) {
printk(KERN_ERR "shrink_slab: %pF negative objects to "
"delete nr=%ld\n",
- shrinker->shrink, shrinker->nr);
- shrinker->nr = max_pass;
+ shrinker->shrink, total_scan);
+ total_scan = max_pass;
}
/*
@@ -269,13 +281,10 @@ unsigned long shrink_slab(struct shrink_control *shrink,
* never try to free more than twice the estimate number of
* freeable entries.
*/
- if (shrinker->nr > max_pass * 2)
- shrinker->nr = max_pass * 2;
-
- total_scan = shrinker->nr;
- shrinker->nr = 0;
+ if (total_scan > max_pass * 2)
+ total_scan = max_pass * 2;
- trace_mm_shrink_slab_start(shrinker, shrink, total_scan,
+ trace_mm_shrink_slab_start(shrinker, shrink, nr,
nr_pages_scanned, lru_pages,
max_pass, delta, total_scan);
@@ -296,9 +305,19 @@ unsigned long shrink_slab(struct shrink_control *shrink,
cond_resched();
}
- shrinker->nr += total_scan;
- trace_mm_shrink_slab_end(shrinker, shrink_ret, total_scan,
- shrinker->nr);
+ /*
+ * move the unused scan count back into the shrinker in a
+ * manner that handles concurrent updates. If we exhausted the
+ * scan, there is no need to do an update.
+ */
+ do {
+ nr = shrinker->nr;
+ new_nr = total_scan + nr;
+ if (total_scan <= 0)
+ break;
+ } while (cmpxchg(&shrinker->nr, nr, new_nr) != nr);
+
+ trace_mm_shrink_slab_end(shrinker, shrink_ret, nr, new_nr);
}
up_read(&shrinker_rwsem);
out:
--
1.7.9.2