From: Mel Gorman <mel@csn.ul.ie>
To: Mel Gorman <mel@csn.ul.ie>,
Linux Memory Management List <linux-mm@kvack.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
Rik van Riel <riel@redhat.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Christoph Lameter <cl@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Nick Piggin <npiggin@suse.de>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Lin Ming <ming.m.lin@intel.com>,
Zhang Yanmin <yanmin_zhang@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH 23/27] Update NR_FREE_PAGES only as necessary
Date: Mon, 16 Mar 2009 17:53:37 +0000
Message-ID: <1237226020-14057-24-git-send-email-mel@csn.ul.ie>
In-Reply-To: <1237226020-14057-1-git-send-email-mel@csn.ul.ie>

When pages are freed to the buddy allocator, the zone NR_FREE_PAGES
counter must be updated. In the case of bulk per-cpu page freeing, it is
currently updated once per page, which touches the zone's cache lines
more often than necessary. Update the counter once per per-cpu bulk free
instead.
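
To make the cache-line argument concrete, here is a minimal,
self-contained sketch of the same batching pattern in plain C
(illustration only, not kernel code; the names "counter", "free_one()"
and "free_bulk()" are made up for this example). The shared counter is
adjusted once for the whole batch, so the per-item loop no longer
dirties the counter's cache line:

	/*
	 * Illustration only (not kernel code): batch a shared counter
	 * update instead of adjusting it once per item.  In the patch,
	 * "counter" plays the role of the zone's NR_FREE_PAGES vmstat
	 * and each "item" a page being returned to the buddy lists.
	 */
	#include <stdio.h>

	static long counter;		/* stands in for NR_FREE_PAGES */

	/* per-item work; note it no longer touches the shared counter */
	static void free_one(int item)
	{
		(void)item;		/* return the item to the free structure here */
	}

	/* bulk free: one counter update for the whole batch */
	static void free_bulk(const int *items, int count)
	{
		counter += count;	/* single update, one cache-line touch */
		while (count--)
			free_one(*items++);
	}

	int main(void)
	{
		int batch[4] = { 1, 2, 3, 4 };

		free_bulk(batch, 4);
		printf("counter = %ld\n", counter);	/* prints 4 */
		return 0;
	}

In the patch itself the single update is done with
__mod_zone_page_state() while zone->lock is held, so the batched
adjustment stays consistent with the freelists it describes.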
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
---
mm/page_alloc.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 21affd4..98ce091 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -460,7 +460,6 @@ static inline void __free_one_page(struct page *page,
int migratetype)
{
unsigned long page_idx;
- int order_size = 1 << order;
if (unlikely(PageCompound(page)))
if (unlikely(destroy_compound_page(page, order)))
@@ -470,10 +469,9 @@ static inline void __free_one_page(struct page *page,
page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
- VM_BUG_ON(page_idx & (order_size - 1));
+ VM_BUG_ON(page_idx & ((1 << order) - 1));
VM_BUG_ON(bad_range(zone, page));
- __mod_zone_page_state(zone, NR_FREE_PAGES, order_size);
while (order < MAX_ORDER-1) {
unsigned long combined_idx;
struct page *buddy;
@@ -528,6 +526,8 @@ static void free_pages_bulk(struct zone *zone, int count,
spin_lock(&zone->lock);
zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
zone->pages_scanned = 0;
+
+ __mod_zone_page_state(zone, NR_FREE_PAGES, count);
while (count--) {
struct page *page;
@@ -546,6 +546,8 @@ static void free_one_page(struct zone *zone, struct page *page, int order,
spin_lock(&zone->lock);
zone_clear_flag(zone, ZONE_ALL_UNRECLAIMABLE);
zone->pages_scanned = 0;
+
+ __mod_zone_page_state(zone, NR_FREE_PAGES, 1 << order);
__free_one_page(page, zone, order, migratetype);
spin_unlock(&zone->lock);
}
@@ -690,7 +692,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
list_del(&page->lru);
rmv_page_order(page);
area->nr_free--;
- __mod_zone_page_state(zone, NR_FREE_PAGES, - (1UL << order));
expand(zone, page, order, current_order, area, migratetype);
return page;
}
@@ -830,8 +831,6 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
/* Remove the page from the freelists */
list_del(&page->lru);
rmv_page_order(page);
- __mod_zone_page_state(zone, NR_FREE_PAGES,
- -(1UL << order));
if (current_order == pageblock_order)
set_pageblock_migratetype(page,
@@ -905,6 +904,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
set_page_private(page, migratetype);
list = &page->lru;
}
+ __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
spin_unlock(&zone->lock);
return i;
}
--
1.5.6.5
--