public inbox for linux-kernel@vger.kernel.org
* [PATCH 3/3] x86/mm: Separate paging setup from memory mapping
@ 2012-07-14 13:41 Pekka Enberg
  2012-07-14 20:47 ` Yinghai Lu
From: Pekka Enberg @ 2012-07-14 13:41 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, x86, Pekka Enberg, Tejun Heo, Yinghai Lu

Move the PSE and PGE bit twiddling from init_memory_mapping() to a new
setup_paging() function to simplify the former. The init_memory_mapping()
function is called later in the boot process by gart_iommu_init(),
efi_ioremap(), and arch_add_memory(), which have no business whatsoever
updating the CR4 register.

Cc: Tejun Heo <tj@kernel.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
---
 arch/x86/include/asm/page_types.h |    2 ++
 arch/x86/kernel/setup.c           |    2 ++
 arch/x86/mm/init.c                |   23 +++++++++++++----------
 3 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index e21fdd1..529905e 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -51,6 +51,8 @@ static inline phys_addr_t get_max_mapped(void)
 	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
 }
 
+extern void setup_paging(void);
+
 extern unsigned long init_memory_mapping(unsigned long start,
 					 unsigned long end);
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 16be6dc..a883978 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -913,6 +913,8 @@ void __init setup_arch(char **cmdline_p)
 
 	init_gbpages();
 
+	setup_paging();
+
 	/* max_pfn_mapped is updated here */
 	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
 	max_pfn_mapped = max_low_pfn_mapped;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e270f94..79b4b89 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -127,6 +127,19 @@ static unsigned long addr_to_pud_pfn(unsigned long addr)
 	return (addr >> PUD_SHIFT) << (PUD_SHIFT - PAGE_SHIFT);
 }
 
+void __init setup_paging(void)
+{
+	/* Enable PSE if available */
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	/* Enable PGE if available */
+	if (cpu_has_pge) {
+		set_in_cr4(X86_CR4_PGE);
+		__supported_pte_mask |= _PAGE_GLOBAL;
+	}
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
@@ -159,16 +172,6 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	use_gbpages = direct_gbpages;
 #endif
 
-	/* Enable PSE if available */
-	if (cpu_has_pse)
-		set_in_cr4(X86_CR4_PSE);
-
-	/* Enable PGE if available */
-	if (cpu_has_pge) {
-		set_in_cr4(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	}
-
 	if (use_gbpages)
 		page_size_mask |= 1 << PG_LEVEL_1G;
 	if (use_pse)
-- 
1.7.7.6



* Re: [PATCH 3/3] x86/mm: Separate paging setup from memory mapping
  2012-07-14 13:41 [PATCH 3/3] x86/mm: Separate paging setup from memory mapping Pekka Enberg
@ 2012-07-14 20:47 ` Yinghai Lu
  2012-07-15 11:02   ` Pekka Enberg
From: Yinghai Lu @ 2012-07-14 20:47 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: mingo, linux-kernel, x86, Tejun Heo

[-- Attachment #1: Type: text/plain, Size: 545 bytes --]

On Sat, Jul 14, 2012 at 6:41 AM, Pekka Enberg <penberg@kernel.org> wrote:
> Move the PSE and PGE bit twiddling from init_memory_mapping() to a new
> setup_paging() function to simplify the former. The init_memory_mapping()
> function is called later in the boot process by gart_iommu_init(),
> efi_ioremap(), and arch_add_memory(), which have no business whatsoever
> updating the CR4 register.

I have a local patch that sets those bits only once too, and it also
does the page size probing only once.

Please check the attached one.

Thanks

Yinghai

[-- Attachment #2: get_page_size_mask.patch --]
[-- Type: application/octet-stream, Size: 4696 bytes --]

Subject: [PATCH 1/3] x86, mm: Introduce global page_size_mask

Add probe_page_size_mask() to detect whether 1G or 2M pages can be used,
and store the result in page_size_mask.

Probing happens only on the first init_memory_mapping() call; second and
later calls do not need to probe again. This also means use_pse and
use_gbpages no longer need to be passed around.

Suggested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>

---
 arch/x86/include/asm/pgtable.h |    1 +
 arch/x86/mm/init.c             |   70 +++++++++++++++++++++--------------------
 2 files changed, 37 insertions(+), 34 deletions(-)

Index: linux-2.6/arch/x86/include/asm/pgtable.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/pgtable.h
+++ linux-2.6/arch/x86/include/asm/pgtable.h
@@ -597,6 +597,7 @@ static inline int pgd_none(pgd_t pgd)
 #ifndef __ASSEMBLY__
 
 extern int direct_gbpages;
+extern int page_size_mask;
 
 /* local pte updates need not use xchg for locking */
 static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
Index: linux-2.6/arch/x86/mm/init.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/init.c
+++ linux-2.6/arch/x86/mm/init.c
@@ -35,8 +35,10 @@ struct map_range {
 	unsigned page_size_mask;
 };
 
-static void __init find_early_table_space(struct map_range *mr, unsigned long end,
-					  int use_pse, int use_gbpages)
+int page_size_mask = -1;
+
+static void __init find_early_table_space(struct map_range *mr,
+					  unsigned long end)
 {
 	unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
 	phys_addr_t base;
@@ -44,7 +46,7 @@ static void __init find_early_table_spac
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
 	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
 
-	if (use_gbpages) {
+	if (page_size_mask & (1 << PG_LEVEL_1G)) {
 		unsigned long extra;
 
 		extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
@@ -54,7 +56,7 @@ static void __init find_early_table_spac
 
 	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
 
-	if (use_pse) {
+	if (page_size_mask & (1 << PG_LEVEL_2M)) {
 		unsigned long extra;
 
 		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
@@ -90,6 +92,34 @@ static void __init find_early_table_spac
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 }
 
+static void __init probe_page_size_mask(void)
+{
+	if (page_size_mask != -1)
+		return;
+
+	page_size_mask = 0;
+#if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
+	/*
+	 * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
+	 * This will simplify cpa(), which otherwise needs to support splitting
+	 * large pages into small in interrupt context, etc.
+	 */
+	if (direct_gbpages)
+		page_size_mask |= 1 << PG_LEVEL_1G;
+	if (cpu_has_pse)
+		page_size_mask |= 1 << PG_LEVEL_2M;
+#endif
+
+	/* Enable PSE if available */
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	/* Enable PGE if available */
+	if (cpu_has_pge) {
+		set_in_cr4(X86_CR4_PGE);
+		__supported_pte_mask |= _PAGE_GLOBAL;
+	}
+}
 void __init native_pagetable_reserve(u64 start, u64 end)
 {
 	memblock_reserve(start, end - start);
@@ -125,44 +155,16 @@ static int __meminit save_mr(struct map_
 unsigned long __init_refok init_memory_mapping(unsigned long start,
 					       unsigned long end)
 {
-	unsigned long page_size_mask = 0;
 	unsigned long start_pfn, end_pfn;
 	unsigned long ret = 0;
 	unsigned long pos;
-
 	struct map_range mr[NR_RANGE_MR];
 	int nr_range, i;
-	int use_pse, use_gbpages;
 
 	printk(KERN_INFO "init_memory_mapping: [mem %#010lx-%#010lx]\n",
 	       start, end - 1);
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
-	/*
-	 * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
-	 * This will simplify cpa(), which otherwise needs to support splitting
-	 * large pages into small in interrupt context, etc.
-	 */
-	use_pse = use_gbpages = 0;
-#else
-	use_pse = cpu_has_pse;
-	use_gbpages = direct_gbpages;
-#endif
-
-	/* Enable PSE if available */
-	if (cpu_has_pse)
-		set_in_cr4(X86_CR4_PSE);
-
-	/* Enable PGE if available */
-	if (cpu_has_pge) {
-		set_in_cr4(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	}
-
-	if (use_gbpages)
-		page_size_mask |= 1 << PG_LEVEL_1G;
-	if (use_pse)
-		page_size_mask |= 1 << PG_LEVEL_2M;
+	probe_page_size_mask();
 
 	memset(mr, 0, sizeof(mr));
 	nr_range = 0;
@@ -267,7 +269,7 @@ unsigned long __init_refok init_memory_m
 	 * nodes are discovered.
 	 */
 	if (!after_bootmem)
-		find_early_table_space(&mr[0], end, use_pse, use_gbpages);
+		find_early_table_space(&mr[0], end);
 
 	for (i = 0; i < nr_range; i++)
 		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,


* Re: [PATCH 3/3] x86/mm: Separate paging setup from memory mapping
  2012-07-14 20:47 ` Yinghai Lu
@ 2012-07-15 11:02   ` Pekka Enberg
From: Pekka Enberg @ 2012-07-15 11:02 UTC (permalink / raw)
  To: Yinghai Lu; +Cc: mingo, linux-kernel, x86, Tejun Heo

On Sat, Jul 14, 2012 at 6:41 AM, Pekka Enberg <penberg@kernel.org> wrote:
>> Move the PSE and PGE bit twiddling from init_memory_mapping() to a new
>> setup_paging() function to simplify the former. The init_memory_mapping()
>> function is called later in the boot process by gart_iommu_init(),
>> efi_ioremap(), and arch_add_memory(), which have no business whatsoever
>> updating the CR4 register.

On Sat, Jul 14, 2012 at 11:47 PM, Yinghai Lu <yinghai@kernel.org> wrote:
> I have a local patch that sets those bits only once too, and it also
> does the page size probing only once.
>
> Please check the attached one.

Your patch still keeps the same sort of code flow: init_memory_mapping()
does all sorts of things on its first call. I personally prefer the more
explicit approach of this patch. In fact, you could do the page_size_mask
setup in the setup_paging() function, as sketched below.
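
For illustration, an untested sketch (my combination of the two patches,
not code from either one) that folds your probe logic into setup_paging().
The -1 sentinel would no longer be needed, because setup_arch() calls this
exactly once:

void __init setup_paging(void)
{
	/* Enable PSE if available */
	if (cpu_has_pse)
		set_in_cr4(X86_CR4_PSE);

	/* Enable PGE if available */
	if (cpu_has_pge) {
		set_in_cr4(X86_CR4_PGE);
		__supported_pte_mask |= _PAGE_GLOBAL;
	}

#if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
	/*
	 * With CONFIG_DEBUG_PAGEALLOC or CONFIG_KMEMCHECK the identity
	 * mapping must use small pages, so keep the large-page bits out
	 * of page_size_mask in those configurations.
	 */
	if (direct_gbpages)
		page_size_mask |= 1 << PG_LEVEL_1G;
	if (cpu_has_pse)
		page_size_mask |= 1 << PG_LEVEL_2M;
#endif
}

init_memory_mapping() would then only read page_size_mask and never
touch CR4.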

                        Pekka
