* [PATCH v2] Fix handling of memreserve if the range lands in highmem
From: Kumar Gala @ 2008-01-10 7:21 UTC
To: linuxppc-dev
There were several issues if a memreserve range existed and happened
to be in highmem:

* The bootmem allocator is only aware of lowmem, so calling
  reserve_bootmem() with a highmem address would cause a BUG_ON
  (see the sketch below)
* All highmem pages were provided to the buddy allocator
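
For illustration only (hypothetical addresses, assuming a 32-bit board
with 1 GiB of RAM and 768 MiB of lowmem), the failing pattern was a
reservation that sits above total_lowmem being passed straight through:

/*
 * A /memreserve/ that lies entirely in highmem (made-up addresses).
 * bootmem only tracks lowmem, so handing this range to it unchanged
 * hits the allocator's internal BUG_ON range checks.
 */
reserve_bootmem(0x3f000000, 0x01000000);	/* base >= total_lowmem */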
Added an lmb_is_reserved() API that we now use to determine whether a
highmem page should continue to be PageReserved or be provided to the
buddy allocator.
Also, we incorrectly reported the number of pages reserved, since all
highmem pages are initially marked reserved and we clear the
PageReserved flag as we "free" up the highmem pages.
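
A minimal userspace rendering of the range check the new helper
performs (purely illustrative, not kernel code; the region table and
addresses below are made up):

#include <stdio.h>

struct region { unsigned long base, size; };

/* Made-up stand-in for lmb.reserved: kernel image plus a highmem
 * /memreserve/ region. */
static const struct region reserved[] = {
	{ 0x00000000, 0x00400000 },
	{ 0x3f000000, 0x01000000 },
};

static int is_reserved(unsigned long addr)
{
	unsigned long i;

	for (i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++) {
		unsigned long upper = reserved[i].base + reserved[i].size - 1;

		if (addr >= reserved[i].base && addr <= upper)
			return 1;	/* inclusive bounds, as in the patch */
	}
	return 0;
}

int main(void)
{
	/* An address inside the highmem reservation stays reserved;
	 * one outside it can go to the buddy allocator. */
	printf("%d %d\n", is_reserved(0x3f800000UL), is_reserved(0x30000000UL));
	return 0;	/* prints "1 0" */
}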
---
Handle the case pointed out by Scott Wood where a memreserve crosses
the lowmem boundary (sketched just below).
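
In short, only the lowmem portion of a straddling region is handed to
bootmem.  A simplified, kernel-style restatement of the
do_init_bootmem() hunk below (the base/size locals are just shorthand,
not part of the patch):

	base = lmb.reserved.region[i].base;
	size = lmb_size_bytes(&lmb.reserved, i);

	if (base + size - 1 < total_lowmem)
		reserve_bootmem(base, size);			/* all lowmem */
	else if (base < total_lowmem)
		reserve_bootmem(base, total_lowmem - base);	/* clip at boundary */
	/* else: entirely in highmem, left to lmb_is_reserved() in mem_init() */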
 arch/powerpc/mm/lmb.c     |   13 +++++++++++++
 arch/powerpc/mm/mem.c     |   20 ++++++++++++++++----
 include/asm-powerpc/lmb.h |    1 +
 3 files changed, 30 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/lmb.c b/arch/powerpc/mm/lmb.c
index 8f4d2dc..4ce23bc 100644
--- a/arch/powerpc/mm/lmb.c
+++ b/arch/powerpc/mm/lmb.c
@@ -342,3 +342,16 @@ void __init lmb_enforce_memory_limit(unsigned long memory_limit)
 		}
 	}
 }
+
+int __init lmb_is_reserved(unsigned long addr)
+{
+	int i;
+
+	for (i = 0; i < lmb.reserved.cnt; i++) {
+		unsigned long upper = lmb.reserved.region[i].base +
+			lmb.reserved.region[i].size - 1;
+		if ((addr >= lmb.reserved.region[i].base) && (addr <= upper))
+			return 1;
+	}
+	return 0;
+}
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5402fb6..0b29da3 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -218,9 +218,19 @@ void __init do_init_bootmem(void)
 #endif
 
 	/* reserve the sections we're already using */
-	for (i = 0; i < lmb.reserved.cnt; i++)
-		reserve_bootmem(lmb.reserved.region[i].base,
-				lmb_size_bytes(&lmb.reserved, i));
+	for (i = 0; i < lmb.reserved.cnt; i++) {
+		unsigned long addr = lmb.reserved.region[i].base +
+				     lmb_size_bytes(&lmb.reserved, i) - 1;
+		if (addr < total_lowmem)
+			reserve_bootmem(lmb.reserved.region[i].base,
+					lmb_size_bytes(&lmb.reserved, i));
+		else if (lmb.reserved.region[i].base < total_lowmem) {
+			unsigned long adjusted_size = total_lowmem -
+				      lmb.reserved.region[i].base;
+			reserve_bootmem(lmb.reserved.region[i].base,
+					adjusted_size);
+		}
+	}
 
 	/* XXX need to clip this if using highmem? */
 	sparse_memory_present_with_active_regions(0);
@@ -334,11 +344,13 @@ void __init mem_init(void)
 		highmem_mapnr = total_lowmem >> PAGE_SHIFT;
 		for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
 			struct page *page = pfn_to_page(pfn);
-
+			if (lmb_is_reserved(pfn << PAGE_SHIFT))
+				continue;
 			ClearPageReserved(page);
 			init_page_count(page);
 			__free_page(page);
 			totalhigh_pages++;
+			reservedpages--;
 		}
 		totalram_pages += totalhigh_pages;
 		printk(KERN_DEBUG "High memory: %luk\n",
diff --git a/include/asm-powerpc/lmb.h b/include/asm-powerpc/lmb.h
index b5f9f4c..5d1dc48 100644
--- a/include/asm-powerpc/lmb.h
+++ b/include/asm-powerpc/lmb.h
@@ -51,6 +51,7 @@ extern unsigned long __init __lmb_alloc_base(unsigned long size,
 extern unsigned long __init lmb_phys_mem_size(void);
 extern unsigned long __init lmb_end_of_DRAM(void);
 extern void __init lmb_enforce_memory_limit(unsigned long memory_limit);
+extern int __init lmb_is_reserved(unsigned long addr);
 extern void lmb_dump_all(void);
--
1.5.3.7
* Re: [PATCH v2] Fix handling of memreserve if the range lands in highmem
From: Paul Mackerras @ 2008-01-23 23:04 UTC
To: Kumar Gala; +Cc: linuxppc-dev
Kumar Gala writes:
> There were several issues if a memreserve range existed and happened
> to be in highmem:
>
> * The bootmem allocator is only aware of lowmem so calling
> reserve_bootmem with a highmem address would cause a BUG_ON
> * All highmem pages were provided to the buddy allocator
>
> Added an lmb_is_reserved() API that we now use to determine whether a
> highmem page should continue to be PageReserved or be provided to the
> buddy allocator.
>
> Also, we incorrectly reported the number of pages reserved, since all
> highmem pages are initially marked reserved and we clear the
> PageReserved flag as we "free" up the highmem pages.
This patch breaks all the 64-bit configs since 64-bit doesn't have
total_lowmem. The extra complexity is only needed in the
CONFIG_HIGHMEM case, so an ifdef would be one solution, though an ugly
one.  We do already have an ifdef just above the place you changed in
arch/powerpc/mm/mem.c, which you could just enlarge rather than adding
a new ifdef.
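
For illustration, one shape the enlarged ifdef could take around the
reserve loop in do_init_bootmem() (a rough, untested sketch only, keyed
off the CONFIG_HIGHMEM block that already sits just above it):

#ifdef CONFIG_HIGHMEM
	/* reserve the sections we're already using, clipping anything
	 * that extends past lowmem since bootmem only covers lowmem */
	for (i = 0; i < lmb.reserved.cnt; i++) {
		unsigned long base = lmb.reserved.region[i].base;
		unsigned long size = lmb_size_bytes(&lmb.reserved, i);

		if (base >= total_lowmem)
			continue;
		if (base + size > total_lowmem)
			size = total_lowmem - base;
		reserve_bootmem(base, size);
	}
#else
	/* reserve the sections we're already using */
	for (i = 0; i < lmb.reserved.cnt; i++)
		reserve_bootmem(lmb.reserved.region[i].base,
				lmb_size_bytes(&lmb.reserved, i));
#endif

That keeps total_lowmem out of the 64-bit build entirely.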
And clearly I can't pull your tree until this is sorted somehow...
Paul.