From mboxrd@z Thu Jan 1 00:00:00 1970
From: 
Subject: [PATCH 3/3] x86: move memblock_x86_reserve_range PGTABLE to find_early_table_space
Date: Tue, 7 Jun 2011 19:13:29 +0100
Message-ID: <1307470409-7654-3-git-send-email-stefano.stabellini@eu.citrix.com>
References: 
Mime-Version: 1.0
Content-Type: text/plain
Return-path: 
In-Reply-To: 
Sender: linux-kernel-owner@vger.kernel.org
To: hpa@zytor.com
Cc: hpa@linux.intel.com, konrad.wilk@oracle.com, mingo@elte.hu,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	Stefano.Stabellini@eu.citrix.com, yinghai@kernel.org,
	Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

From: Stefano Stabellini

Now that find_early_table_space knows how to calculate the exact amount
of memory needed by the kernel pagetable, we can reserve the range
directly in find_early_table_space.

Signed-off-by: Stefano Stabellini
Reviewed-by: Konrad Rzeszutek Wilk
---
 arch/x86/mm/init.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 15590fd..36bacfe 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -104,6 +104,10 @@ static void __init find_early_table_space(unsigned long start,
 
 	printk(KERN_DEBUG "kernel direct mapping tables up to %lx @ %lx-%lx\n",
 		end, pgt_buf_start << PAGE_SHIFT, pgt_buf_top << PAGE_SHIFT);
+
+	if (pgt_buf_top > pgt_buf_start)
+		memblock_x86_reserve_range(pgt_buf_start << PAGE_SHIFT,
+				pgt_buf_top << PAGE_SHIFT, "PGTABLE");
 }
 
 struct map_range {
@@ -301,10 +305,6 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	printk(KERN_DEBUG "initial kernel pagetable allocation wasted %lx"
 			" pages\n", pgt_buf_top - pgt_buf_end);
 
-	if (!after_bootmem && pgt_buf_end > pgt_buf_start)
-		memblock_x86_reserve_range(pgt_buf_start << PAGE_SHIFT,
-				pgt_buf_end << PAGE_SHIFT, "PGTABLE");
-
 	if (!after_bootmem)
 		early_memtest(start, end);
 
-- 
1.7.2.3
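
For reference (not part of the patch): a minimal userspace sketch of the pfn-to-byte arithmetic the hunks above rely on, i.e. why the reserved range is expressed as pgt_buf_start << PAGE_SHIFT to pgt_buf_top << PAGE_SHIFT. The pfn values and the 4 KiB page size below are assumptions chosen only for illustration.

/* Standalone sketch, not kernel code: shows how page frame numbers
 * (pfns) such as pgt_buf_start/pgt_buf_top convert to the byte
 * addresses passed to memblock_x86_reserve_range().  PAGE_SHIFT = 12
 * assumes 4 KiB pages; the pfn values are made up. */
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	unsigned long pgt_buf_start = 0x100;	/* hypothetical first pfn of the pagetable buffer */
	unsigned long pgt_buf_top   = 0x104;	/* hypothetical end pfn (exclusive) */

	/* The reservation covers [start, top) in bytes. */
	printf("reserve %#lx-%#lx (%lu pages)\n",
	       pgt_buf_start << PAGE_SHIFT,
	       pgt_buf_top << PAGE_SHIFT,
	       pgt_buf_top - pgt_buf_start);
	return 0;
}

Compiled and run as-is, this prints "reserve 0x100000-0x104000 (4 pages)", mirroring how the patch now reserves the whole pgt_buf_start..pgt_buf_top window up front in find_early_table_space instead of only pgt_buf_start..pgt_buf_end at the end of init_memory_mapping.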