From: Mel Gorman <mgorman@suse.de>
To: Linux-MM <linux-mm@kvack.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>, Dave Hansen <dave@sr71.net>,
Christoph Lameter <cl@linux.com>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 19/22] mm: page allocator: Watch for magazine and zone lock contention
Date: Wed, 8 May 2013 17:03:04 +0100
Message-ID: <1368028987-8369-20-git-send-email-mgorman@suse.de>
In-Reply-To: <1368028987-8369-1-git-send-email-mgorman@suse.de>
When refilling or draining magazines it is possible that the locks are
contended. This patch refills or drains a minimum number of pages
unconditionally and attempts to continue up to a maximum. Between the
min and max, it checks for contention and releases the lock if it is
contended.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
mm/page_alloc.c | 38 ++++++++++++++++++++++++++++++--------
1 file changed, 30 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63952f6..727c8d3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1071,8 +1071,10 @@ void mark_free_pages(struct zone *zone)
#endif /* CONFIG_PM */
#define MAGAZINE_LIMIT (1024)
-#define MAGAZINE_ALLOC_BATCH (384)
-#define MAGAZINE_FREE_BATCH (64)
+#define MAGAZINE_MIN_ALLOC_BATCH (16)
+#define MAGAZINE_MIN_FREE_BATCH (16)
+#define MAGAZINE_MAX_ALLOC_BATCH (384)
+#define MAGAZINE_MAX_FREE_BATCH (64)
static inline struct free_magazine *lock_magazine(struct zone *zone)
{
@@ -1138,6 +1140,11 @@ static inline void unlock_magazine(struct free_magazine *mag)
spin_unlock(&mag->lock);
}
+static inline bool magazine_contended(struct free_magazine *mag)
+{
+ return spin_is_contended(&mag->lock);
+}
+
static
struct page *__rmqueue_magazine(struct free_magazine *mag,
int migratetype)
@@ -1163,8 +1170,8 @@ static void magazine_drain(struct zone *zone, struct free_magazine *mag,
struct list_head *list;
struct page *page;
unsigned int batch_free = 0;
- unsigned int to_free = MAGAZINE_FREE_BATCH;
- unsigned int nr_freed_cma = 0;
+ unsigned int to_free = MAGAZINE_MAX_FREE_BATCH;
+ unsigned int nr_freed_cma = 0, nr_freed = 0;
unsigned long flags;
struct free_area_magazine *area = &mag->area;
LIST_HEAD(free_list);
@@ -1190,9 +1197,13 @@ static void magazine_drain(struct zone *zone, struct free_magazine *mag,
list = &area->free_list[migratetype];;
} while (list_empty(list));
- /* This is the only non-empty list. Free them all. */
+ /*
+ * This is the only non-empty list. Free up the min-free
+ * batch so that spinlock contention is still checked
+ */
if (batch_free == MIGRATE_PCPTYPES)
- batch_free = to_free;
+ batch_free = min_t(unsigned int,
+ MAGAZINE_MIN_FREE_BATCH, to_free);
do {
page = list_entry(list->prev, struct page, lru);
@@ -1201,7 +1212,13 @@ static void magazine_drain(struct zone *zone, struct free_magazine *mag,
list_move(&page->lru, &free_list);
if (is_migrate_isolate_page(zone, page))
nr_freed_cma++;
+ nr_freed++;
} while (--to_free && --batch_free && !list_empty(list));
+
+ /* Watch for parallel contention */
+ if (nr_freed > MAGAZINE_MIN_FREE_BATCH &&
+ magazine_contended(mag))
+ break;
}
/* Free the list of pages to the buddy allocator */
@@ -1213,7 +1230,7 @@ static void magazine_drain(struct zone *zone, struct free_magazine *mag,
__free_one_page(page, zone, 0, get_freepage_migratetype(page));
}
__mod_zone_page_state(zone, NR_FREE_PAGES,
- MAGAZINE_FREE_BATCH - nr_freed_cma);
+ nr_freed - nr_freed_cma);
if (nr_freed_cma)
__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_freed_cma);
spin_unlock_irqrestore(&zone->lock, flags);
@@ -1388,12 +1405,17 @@ retry:
unsigned int nr_alloced = 0;
spin_lock_irqsave(&zone->lock, flags);
- for (i = 0; i < MAGAZINE_ALLOC_BATCH; i++) {
+ for (i = 0; i < MAGAZINE_MAX_ALLOC_BATCH; i++) {
page = __rmqueue(zone, 0, migratetype);
if (!page)
break;
list_add(&page->lru, &alloc_list);
nr_alloced++;
+
+ /* Watch for parallel contention */
+ if (nr_alloced > MAGAZINE_MIN_ALLOC_BATCH &&
+ spin_is_contended(&zone->lock))
+ break;
}
if (!is_migrate_cma(mt))
__mod_zone_page_state(zone, NR_FREE_PAGES, -nr_alloced);
--
1.8.1.4