From: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
To: linux-mm <linux-mm@kvack.org>,
LHMS <lhms-devel@lists.sourceforge.net>,
William Lee Irwin III <wli@holomorphy.com>
Cc: Dave Hansen <haveblue@us.ibm.com>,
Hirokazu Takahashi <taka@valinux.co.jp>,
ncunningham@linuxmail.org
Subject: [RFC/PATCH] free_area[] bitmap elimination[1/3]
Date: Tue, 24 Aug 2004 21:28:05 +0900 [thread overview]
Message-ID: <412B3455.1000604@jp.fujitsu.com> (raw)
[-- Attachment #1: Type: text/plain, Size: 218 bytes --]
This is the 2nd part: code for initialization.
Calculation of zone->aligned_order is newly added.
-- Kame
==
--
--the clue is these footmarks leading to the door.--
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
[-- Attachment #2: eliminate-bitmap-init.patch --]
[-- Type: text/x-patch, Size: 4871 bytes --]
This patch removes the bitmap allocation in zone_init_free_lists() and
removes pages_to_bitmap_size().
The newly added member zone->aligned_order is also initialized.
zone->aligned_order guarantees that the zone is aligned to
(1 << zone->aligned_order) contiguous pages.
If zone->aligned_order == MAX_ORDER, the zone is completely aligned, and
every page is guaranteed to have its buddy page at any order.
zone->aligned_order is used in free_pages_bulk() to skip range checking.
With this, if order < zone->aligned_order, we do not have to worry about
whether a page has its buddy at that order.
This works well on several architectures, but my ia64 box shows
zone->aligned_order == 0, so aligned_order is not helpful in some
environments.
-- Kame
---
linux-2.6.8.1-mm4-kame-kamezawa/mm/page_alloc.c | 72 +++++++++---------------
1 files changed, 28 insertions(+), 44 deletions(-)
diff -puN mm/page_alloc.c~eliminate-bitmap-init mm/page_alloc.c
--- linux-2.6.8.1-mm4-kame/mm/page_alloc.c~eliminate-bitmap-init 2004-08-24 18:25:14.000000000 +0900
+++ linux-2.6.8.1-mm4-kame-kamezawa/mm/page_alloc.c 2004-08-24 20:32:14.640312608 +0900
@@ -301,7 +301,7 @@ void __free_pages_ok(struct page *page,
* subsystem according to empirical testing, and this is also justified
* by considering the behavior of a buddy system containing a single
* large block of memory acted on by a series of small allocations.
- * This behavior is a critical factor in sglist merging's success.
+ * This behavior is a critical factor in s merging's success.
*
* -- wli
*/
@@ -1499,6 +1499,25 @@ static void __init calculate_zone_totalp
printk(KERN_DEBUG "On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
}
+/*
+ * calculate_aligned_order()
+ * This function calculates an upper bound on the alignment order of buddy pages.
+ * If order < zone->aligned_order, every page is guaranteed to have its buddy.
+ */
+void __init calculate_aligned_order(int nid, int zone, unsigned long start_pfn,
+ unsigned long size)
+{
+ int order;
+ unsigned long mask;
+ struct zone *zonep = zone_table[NODEZONE(nid, zone)];
+ for (order = 0 ; order < MAX_ORDER; order++) {
+ mask = (unsigned long)1 << order;
+ if ((start_pfn & mask) || (size & mask))
+ break;
+ }
+ if (order < zonep->aligned_order)
+ zonep->aligned_order = order;
+}
/*
* Initially all pages are reserved - free ones are freed
@@ -1510,7 +1529,7 @@ void __init memmap_init_zone(unsigned lo
{
struct page *start = pfn_to_page(start_pfn);
struct page *page;
-
+ unsigned long saved_start_pfn = start_pfn;
for (page = start; page < (start + size); page++) {
set_page_zone(page, NODEZONE(nid, zone));
set_page_count(page, 0);
@@ -1524,51 +1543,18 @@ void __init memmap_init_zone(unsigned lo
#endif
start_pfn++;
}
-}
-
-/*
- * Page buddy system uses "index >> (i+1)", where "index" is
- * at most "size-1".
- *
- * The extra "+3" is to round down to byte size (8 bits per byte
- * assumption). Thus we get "(size-1) >> (i+4)" as the last byte
- * we can access.
- *
- * The "+1" is because we want to round the byte allocation up
- * rather than down. So we should have had a "+7" before we shifted
- * down by three. Also, we have to add one as we actually _use_ the
- * last bit (it's [0,n] inclusive, not [0,n[).
- *
- * So we actually had +7+1 before we shift down by 3. But
- * (n+8) >> 3 == (n >> 3) + 1 (modulo overflows, which we do not have).
- *
- * Finally, we LONG_ALIGN because all bitmap operations are on longs.
- */
-unsigned long pages_to_bitmap_size(unsigned long order, unsigned long nr_pages)
-{
- unsigned long bitmap_size;
-
- bitmap_size = (nr_pages-1) >> (order+4);
- bitmap_size = LONG_ALIGN(bitmap_size+1);
-
- return bitmap_size;
+	/* Because memmap_init_zone() is called in a suitable way
+	 * even if the zone has memory holes,
+	 * calling calculate_aligned_order() here is reasonable.
+	 */
+ calculate_aligned_order(nid, zone, saved_start_pfn, size);
}
void zone_init_free_lists(struct pglist_data *pgdat, struct zone *zone, unsigned long size)
{
int order;
- for (order = 0; ; order++) {
- unsigned long bitmap_size;
-
+ for (order = 0 ; order < MAX_ORDER ; order++) {
INIT_LIST_HEAD(&zone->free_area[order].free_list);
- if (order == MAX_ORDER-1) {
- zone->free_area[order].map = NULL;
- break;
- }
-
- bitmap_size = pages_to_bitmap_size(order, size);
- zone->free_area[order].map =
- (unsigned long *) alloc_bootmem_node(pgdat, bitmap_size);
}
}
@@ -1681,11 +1667,9 @@ static void __init free_area_init_core(s
if ((zone_start_pfn) & (zone_required_alignment-1))
printk("BUG: wrong zone alignment, it will crash\n");
-
+ zone->aligned_order = MAX_ORDER;
memmap_init(size, nid, j, zone_start_pfn);
-
zone_start_pfn += size;
-
zone_init_free_lists(pgdat, zone, zone->spanned_pages);
}
}
_