From: Yinghai Lu <yinghai@kernel.org>
To: Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Miller <davem@davemloft.net>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, Yinghai Lu <yinghai@kernel.org>
Subject: [PATCH 07/24] lmb: Add reserve_lmb/free_lmb
Date: Fri, 26 Mar 2010 15:21:37 -0700
Message-ID: <1269642114-16588-8-git-send-email-yinghai@kernel.org> (raw)
In-Reply-To: <1269642114-16588-1-git-send-email-yinghai@kernel.org>

reserve_lmb()/free_lmb() check whether the region array is big enough
before adding or removing a region.  __check_and_double_region_array()
doubles the array when too few spare slots are left.  find_lmb_area()
is used to find a good position for the new array, and the old array
is copied into the new one.

Arch code should provide get_max_mapped() so that the new array lands
at an accessible (already-mapped) address.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 include/linux/lmb.h |    4 ++
 mm/lmb.c            |   89 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 93 insertions(+), 0 deletions(-)

diff --git a/include/linux/lmb.h b/include/linux/lmb.h
index 05234bd..95ae3f4 100644
--- a/include/linux/lmb.h
+++ b/include/linux/lmb.h
@@ -83,9 +83,13 @@ lmb_end_pfn(struct lmb_region *type, unsigned long region_nr)
 		lmb_size_pages(type, region_nr);
 }
 
+void reserve_lmb(u64 start, u64 end, char *name);
+void free_lmb(u64 start, u64 end);
+void add_lmb_memory(u64 start, u64 end);
 u64 __find_lmb_area(u64 ei_start, u64 ei_last, u64 start, u64 end,
 			 u64 size, u64 align);
 u64 find_lmb_area(u64 start, u64 end, u64 size, u64 align);
+u64 get_max_mapped(void);
 
 #include <asm/lmb.h>
 
diff --git a/mm/lmb.c b/mm/lmb.c
index d5d5dc4..9798458 100644
--- a/mm/lmb.c
+++ b/mm/lmb.c
@@ -551,6 +551,95 @@ int lmb_find(struct lmb_property *res)
 	return -1;
 }
 
+u64 __weak __init get_max_mapped(void)
+{
+	u64 end = max_low_pfn;
+
+	end <<= PAGE_SHIFT;
+
+	return end;
+}
+
+static void __init __check_and_double_region_array(struct lmb_region *type,
+			 struct lmb_property *static_region,
+			 u64 ex_start, u64 ex_end)
+{
+	u64 start, end, size, mem;
+	struct lmb_property *new, *old;
+	unsigned long rgnsz = type->nr_regions;
+
+	/* Do we have enough slots left ? */
+	if ((rgnsz - type->cnt) > max_t(unsigned long, rgnsz/8, 2))
+		return;
+
+	old = type->region;
+	/* Double the array size */
+	size = sizeof(struct lmb_property) * rgnsz * 2;
+	if (old == static_region)
+		start = 0;
+	else
+		start = __pa(old) + sizeof(struct lmb_property) * rgnsz;
+	end = ex_start;
+	mem = -1ULL;
+	if (start + size < end)
+		mem = find_lmb_area(start, end, size, sizeof(struct lmb_property));
+	if (mem == -1ULL) {
+		start = ex_end;
+		end = get_max_mapped();
+		if (start + size < end)
+			mem = find_lmb_area(start, end, size, sizeof(struct lmb_property));
+	}
+	if (mem == -1ULL)
+		panic("can not find more space for lmb.reserved.region array");
+
+	new = __va(mem);
+	/* Copy old to new */
+	memcpy(&new[0], &old[0], sizeof(struct lmb_property) * rgnsz);
+	memset(&new[rgnsz], 0, sizeof(struct lmb_property) * rgnsz);
+
+	memset(&old[0], 0, sizeof(struct lmb_property) * rgnsz);
+	type->region = new;
+	type->nr_regions = rgnsz * 2;
+	printk(KERN_DEBUG "lmb.reserved.region array is doubled to %ld at [%llx - %llx]\n",
+		type->nr_regions, mem, mem + size - 1);
+
+	/* Reserve new array and free old one */
+	lmb_reserve(mem, sizeof(struct lmb_property) * rgnsz * 2);
+	if (old != static_region)
+		lmb_free(__pa(old), sizeof(struct lmb_property) * rgnsz);
+}
+
+void __init add_lmb_memory(u64 start, u64 end)
+{
+	__check_and_double_region_array(&lmb.memory, &lmb_memory_region[0], start, end);
+	lmb_add(start, end - start);
+}
+
+void __init reserve_lmb(u64 start, u64 end, char *name)
+{
+	if (start == end)
+		return;
+
+	if (WARN_ONCE(start > end, "reserve_lmb: wrong range [%#llx, %#llx]\n", start, end))
+		return;
+
+	__check_and_double_region_array(&lmb.reserved, &lmb_reserved_region[0], start, end);
+	lmb_reserve(start, end - start);
+}
+
+void __init free_lmb(u64 start, u64 end)
+{
+	if (start == end)
+		return;
+
+	if (WARN_ONCE(start > end, "free_lmb: wrong range [%#llx, %#llx]\n", start, end))
+		return;
+
+	/* keep punching hole, could run out of slots too */
+	__check_and_double_region_array(&lmb.reserved, &lmb_reserved_region[0], start, end);
+	lmb_free(start, end - start);
+}
+
 static int __init find_overlapped_early(u64 start, u64 end)
 {
 	int i;
-- 
1.6.4.2
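As an aside, the growth policy and the two-window placement search in
__check_and_double_region_array() can be sketched in isolation.  The
following is an illustrative userspace model, not the patch's code:
needs_doubling() and place_array() are hypothetical names, and
place_array() treats both search windows as entirely free instead of
calling find_lmb_area(), so it only shows the ordering of the search.

```c
#include <assert.h>

/* Growth policy: double the array once fewer than
 * max(nr_regions / 8, 2) spare slots remain, mirroring the
 * (rgnsz - type->cnt) > max_t(unsigned long, rgnsz/8, 2) check. */
int needs_doubling(unsigned long nr_regions, unsigned long cnt)
{
	unsigned long spare = nr_regions - cnt;
	unsigned long min_spare = nr_regions / 8 > 2 ? nr_regions / 8 : 2;

	return spare <= min_spare;
}

/* Placement: the new array is first sought below the range being
 * added/excluded [ex_start, ex_end), then above it, but only up to
 * the highest already-mapped address (get_max_mapped() in the patch).
 * Returns -1ULL when neither window fits; the kernel code panics. */
unsigned long long place_array(unsigned long long lo,
			       unsigned long long size,
			       unsigned long long ex_start,
			       unsigned long long ex_end,
			       unsigned long long max_mapped)
{
	if (lo + size < ex_start)
		return lo;		/* fits below the excluded range */
	if (ex_end + size < max_mapped)
		return ex_end;		/* fits above it, still mapped */
	return -1ULL;			/* no room: caller would panic */
}
```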