From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 28/28] mm, page_alloc: Defer debugging checks of pages allocated from the PCP
Date: Fri, 15 Apr 2016 10:07:55 +0100
Message-ID: <1460711275-1130-16-git-send-email-mgorman@techsingularity.net>
In-Reply-To: <1460711275-1130-1-git-send-email-mgorman@techsingularity.net>

Every allocated page is checked for a number of valid page fields. This
catches corruption bugs caused by writes to pages that have already been
freed, but it is expensive. This patch weakens the debugging check by
validating PCP pages only when the PCP lists are being refilled. All
compound pages are still checked. This potentially avoids the debugging
checks entirely if the PCP lists are never emptied and refilled, so some
corruption issues may be missed. Full checking requires DEBUG_VM.
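
As an aside for illustration, the pattern in isolation looks like the
following minimal user-space sketch. It is not kernel code and every
name in it is hypothetical; it only shows the trade being made:
per-object validation moves out of the allocation fast path and into
the batched refill slow path, so a check runs once per refill instead
of once per allocation.

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct object {
	unsigned long flags;
	int refcount;
};

/* Analogue of check_new_page(): reject objects in an unexpected state. */
static bool object_check_bad(const struct object *obj)
{
	return obj->refcount != 0 || obj->flags != 0;
}

/* Slow path: allocate a batch and validate each object once, here. */
static size_t cache_refill(struct object **cache, size_t batch)
{
	size_t count = 0;

	while (count < batch) {
		struct object *obj = calloc(1, sizeof(*obj));

		if (!obj)
			break;
		if (object_check_bad(obj)) {
			/* Drop bad objects at refill time, the way the
			 * patch has rmqueue_bulk() skip pages that fail
			 * check_pcp_refill(). */
			free(obj);
			continue;
		}
		cache[count++] = obj;
	}
	return count;
}

/* Fast path: no per-object check; objects were validated at refill. */
static struct object *cache_alloc(struct object **cache, size_t *count)
{
	return *count ? cache[--(*count)] : NULL;
}

As with the patch, anything that corrupts an object while it sits in the
already-filled cache goes unnoticed until the next refill, which is
exactly the coverage being traded for speed.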

With the two deferred debugging patches applied, the impact on a page
allocator microbenchmark is:

                               4.6.0-rc3         4.6.0-rc3
                             inline-v3r6   deferalloc-v3r7
Min  alloc-odr0-1      344.00 (  0.00%)  317.00 (  7.85%)
Min  alloc-odr0-2      248.00 (  0.00%)  231.00 (  6.85%)
Min  alloc-odr0-4      209.00 (  0.00%)  192.00 (  8.13%)
Min  alloc-odr0-8      181.00 (  0.00%)  166.00 (  8.29%)
Min  alloc-odr0-16     168.00 (  0.00%)  154.00 (  8.33%)
Min  alloc-odr0-32     161.00 (  0.00%)  148.00 (  8.07%)
Min  alloc-odr0-64     158.00 (  0.00%)  145.00 (  8.23%)
Min  alloc-odr0-128    156.00 (  0.00%)  143.00 (  8.33%)
Min  alloc-odr0-256    168.00 (  0.00%)  154.00 (  8.33%)
Min  alloc-odr0-512    178.00 (  0.00%)  167.00 (  6.18%)
Min  alloc-odr0-1024   186.00 (  0.00%)  174.00 (  6.45%)
Min  alloc-odr0-2048   192.00 (  0.00%)  180.00 (  6.25%)
Min  alloc-odr0-4096   198.00 (  0.00%)  184.00 (  7.07%)
Min  alloc-odr0-8192   200.00 (  0.00%)  188.00 (  6.00%)
Min  alloc-odr0-16384  201.00 (  0.00%)  188.00 (  6.47%)
Min  free-odr0-1       189.00 (  0.00%)  180.00 (  4.76%)
Min  free-odr0-2       132.00 (  0.00%)  126.00 (  4.55%)
Min  free-odr0-4       104.00 (  0.00%)   99.00 (  4.81%)
Min  free-odr0-8        90.00 (  0.00%)   85.00 (  5.56%)
Min  free-odr0-16       84.00 (  0.00%)   80.00 (  4.76%)
Min  free-odr0-32       80.00 (  0.00%)   76.00 (  5.00%)
Min  free-odr0-64       78.00 (  0.00%)   74.00 (  5.13%)
Min  free-odr0-128      77.00 (  0.00%)   73.00 (  5.19%)
Min  free-odr0-256      94.00 (  0.00%)   91.00 (  3.19%)
Min  free-odr0-512     108.00 (  0.00%)  112.00 ( -3.70%)
Min  free-odr0-1024    115.00 (  0.00%)  118.00 ( -2.61%)
Min  free-odr0-2048    120.00 (  0.00%)  125.00 ( -4.17%)
Min  free-odr0-4096    123.00 (  0.00%)  129.00 ( -4.88%)
Min  free-odr0-8192    126.00 (  0.00%)  130.00 ( -3.17%)
Min  free-odr0-16384   126.00 (  0.00%)  131.00 ( -3.97%)

Note that the free paths for large numbers of pages are impacted, as the
debugging cost is shifted into that path when the page data is no longer
necessarily cache-hot.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 92 +++++++++++++++++++++++++++++++++++++++------------------
1 file changed, 64 insertions(+), 28 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b5722790c846..147c0d55ed32 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1704,7 +1704,41 @@ static inline bool free_pages_prezeroed(bool poisoned)
 		page_poisoning_enabled() && poisoned;
 }
 
-static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
+#ifdef CONFIG_DEBUG_VM
+static bool check_pcp_refill(struct page *page)
+{
+	return false;
+}
+
+static bool check_new_pcp(struct page *page)
+{
+	return check_new_page(page);
+}
+#else
+static bool check_pcp_refill(struct page *page)
+{
+	return check_new_page(page);
+}
+static bool check_new_pcp(struct page *page)
+{
+	return false;
+}
+#endif /* CONFIG_DEBUG_VM */
+
+static bool check_new_pages(struct page *page, unsigned int order)
+{
+	int i;
+	for (i = 0; i < (1 << order); i++) {
+		struct page *p = page + i;
+
+		if (unlikely(check_new_page(p)))
+			return true;
+	}
+
+	return false;
+}
+
+static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 							unsigned int alloc_flags)
 {
 	int i;
@@ -1712,8 +1746,6 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 
-		if (unlikely(check_new_page(p)))
-			return 1;
 		if (poisoned)
 			poisoned &= page_is_poisoned(p);
 	}
@@ -1745,8 +1777,6 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 		set_page_pfmemalloc(page);
 	else
 		clear_page_pfmemalloc(page);
-
-	return 0;
 }
 
 /*
@@ -2168,6 +2198,9 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		if (unlikely(page == NULL))
 			break;
 
+		if (unlikely(check_pcp_refill(page)))
+			continue;
+
 		/*
 		 * Split buddy pages returned by expand() are received here
 		 * in physical page order. The page is added to the callers and
@@ -2579,20 +2612,22 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 		struct list_head *list;
 
 		local_irq_save(flags);
-		pcp = &this_cpu_ptr(zone->pageset)->pcp;
-		list = &pcp->lists[migratetype];
-		if (list_empty(list)) {
-			pcp->count += rmqueue_bulk(zone, 0,
-					pcp->batch, list,
-					migratetype, cold);
-			if (unlikely(list_empty(list)))
-				goto failed;
-		}
+		do {
+			pcp = &this_cpu_ptr(zone->pageset)->pcp;
+			list = &pcp->lists[migratetype];
+			if (list_empty(list)) {
+				pcp->count += rmqueue_bulk(zone, 0,
+						pcp->batch, list,
+						migratetype, cold);
+				if (unlikely(list_empty(list)))
+					goto failed;
+			}
 
-		if (cold)
-			page = list_last_entry(list, struct page, lru);
-		else
-			page = list_first_entry(list, struct page, lru);
+			if (cold)
+				page = list_last_entry(list, struct page, lru);
+			else
+				page = list_first_entry(list, struct page, lru);
+		} while (page && check_new_pcp(page));
 
 		__dec_zone_state(zone, NR_ALLOC_BATCH);
 		list_del(&page->lru);
@@ -2605,14 +2640,16 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 
 		WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
 		spin_lock_irqsave(&zone->lock, flags);
-		page = NULL;
-		if (alloc_flags & ALLOC_HARDER) {
-			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-			if (page)
-				trace_mm_page_alloc_zone_locked(page, order, migratetype);
-		}
-		if (!page)
-			page = __rmqueue(zone, order, migratetype);
+		do {
+			page = NULL;
+			if (alloc_flags & ALLOC_HARDER) {
+				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+				if (page)
+					trace_mm_page_alloc_zone_locked(page, order, migratetype);
+			}
+			if (!page)
+				page = __rmqueue(zone, order, migratetype);
+		} while (page && check_new_pages(page, order));
 		spin_unlock(&zone->lock);
 		if (!page)
 			goto failed;
@@ -2979,8 +3016,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		page = buffered_rmqueue(ac->preferred_zoneref->zone, zone, order,
 				gfp_mask, alloc_flags, ac->migratetype);
 		if (page) {
-			if (prep_new_page(page, order, gfp_mask, alloc_flags))
-				goto try_this_zone;
+			prep_new_page(page, order, gfp_mask, alloc_flags);
 
 			/*
 			 * If this is a high-order atomic allocation then check
--
2.6.4