From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>, Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 07/24] mm, page_alloc: Avoid unnecessary zone lookups during pageblock operations
Date: Tue, 12 Apr 2016 11:12:08 +0100
Message-ID: <1460455945-29644-8-git-send-email-mgorman@techsingularity.net>
In-Reply-To: <1460455945-29644-1-git-send-email-mgorman@techsingularity.net>

Pageblocks have an associated bitmap that stores their migrate types and
whether the pageblock should be skipped during compaction. The bitmap may
be associated with a memory section or a zone, but the zone is currently
looked up unconditionally, even in configurations where it is never used.
Pass the struct page into the helpers instead and resolve the zone only on
the paths that actually need it. The compiler should optimise the dead
lookup away automatically, so in many configurations this is a cosmetic
patch only.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
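
(Note for reviewers, not for the changelog: below is a minimal user-space
sketch of the lookup shape the patch moves to. The struct layouts, the
page->zone pointer and the single-section pfn_to_section() stand-in are
simplifications for illustration only, not the kernel's definitions. Build
it plain to model the flat case, or with -DSPARSEMEM to model
CONFIG_SPARSEMEM, where the zone is never resolved at all.)

#include <stdio.h>

struct zone { unsigned long *pageblock_flags; };
struct page { struct zone *zone; };

/* Hypothetical stand-in for the kernel's page_zone(); the real one
 * decodes the zone from page->flags rather than storing a pointer. */
static struct zone *page_zone(struct page *page)
{
	return page->zone;
}

#ifdef SPARSEMEM
struct mem_section { unsigned long *pageblock_flags; };
static struct mem_section section;	/* one section is enough here */

static struct mem_section *pfn_to_section(unsigned long pfn)
{
	(void)pfn;
	return &section;
}
#endif

/* Post-patch shape: take the page and resolve the zone only on the
 * path that actually needs it. */
static unsigned long *get_pageblock_bitmap(struct page *page,
					unsigned long pfn)
{
#ifdef SPARSEMEM
	(void)page;		/* the zone is dead code on this path */
	return pfn_to_section(pfn)->pageblock_flags;
#else
	return page_zone(page)->pageblock_flags;
#endif
}

int main(void)
{
	static unsigned long flags[4];
	struct zone zone = { .pageblock_flags = flags };
	struct page page = { .zone = &zone };

#ifdef SPARSEMEM
	section.pageblock_flags = flags;
#endif
	printf("bitmap at %p\n", (void *)get_pageblock_bitmap(&page, 0));
	return 0;
}

The point is only that with SPARSEMEM the bitmap hangs off the
mem_section, so page_zone() never needs to run on that path.
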
 mm/page_alloc.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ab16560b76e6..d00847bb1612 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6759,23 +6759,23 @@ void *__init alloc_large_system_hash(const char *tablename,
 }
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
-static inline unsigned long *get_pageblock_bitmap(struct zone *zone,
+static inline unsigned long *get_pageblock_bitmap(struct page *page,
 					unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
 	return __pfn_to_section(pfn)->pageblock_flags;
 #else
-	return zone->pageblock_flags;
+	return page_zone(page)->pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
 }
 
-static inline int pfn_to_bitidx(struct zone *zone, unsigned long pfn)
+static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
 	pfn &= (PAGES_PER_SECTION-1);
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 #else
-	pfn = pfn - round_down(zone->zone_start_pfn, pageblock_nr_pages);
+	pfn = pfn - round_down(page_zone(page)->zone_start_pfn, pageblock_nr_pages);
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 #endif /* CONFIG_SPARSEMEM */
 }
@@ -6793,14 +6793,12 @@ unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 					unsigned long end_bitidx,
 					unsigned long mask)
 {
-	struct zone *zone;
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
 	unsigned long word;
 
-	zone = page_zone(page);
-	bitmap = get_pageblock_bitmap(zone, pfn);
-	bitidx = pfn_to_bitidx(zone, pfn);
+	bitmap = get_pageblock_bitmap(page, pfn);
+	bitidx = pfn_to_bitidx(page, pfn);
 	word_bitidx = bitidx / BITS_PER_LONG;
 	bitidx &= (BITS_PER_LONG-1);
 
@@ -6822,20 +6820,18 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 					unsigned long end_bitidx,
 					unsigned long mask)
 {
-	struct zone *zone;
 	unsigned long *bitmap;
 	unsigned long bitidx, word_bitidx;
 	unsigned long old_word, word;
 
 	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
 
-	zone = page_zone(page);
-	bitmap = get_pageblock_bitmap(zone, pfn);
-	bitidx = pfn_to_bitidx(zone, pfn);
+	bitmap = get_pageblock_bitmap(page, pfn);
+	bitidx = pfn_to_bitidx(page, pfn);
 	word_bitidx = bitidx / BITS_PER_LONG;
 	bitidx &= (BITS_PER_LONG-1);
 
-	VM_BUG_ON_PAGE(!zone_spans_pfn(zone, pfn), page);
+	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
 
 	bitidx += end_bitidx;
 	mask <<= (BITS_PER_LONG - bitidx - 1);
--
2.6.4