From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> To: Linus Torvalds <torvalds@linux-foundation.org>, Andrew Morton <akpm@linux-foundation.org>, x86@kernel.org, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <ak@linux.intel.com>, Dave Hansen <dave.hansen@intel.com>, Andy Lutomirski <luto@amacapital.net>, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: [PATCHv1, RFC 2/8] x86/mm: Make virtual memory layout movable for CONFIG_X86_5LEVEL Date: Thu, 25 May 2017 23:33:28 +0300 [thread overview] Message-ID: <20170525203334.867-3-kirill.shutemov@linux.intel.com> (raw) In-Reply-To: <20170525203334.867-1-kirill.shutemov@linux.intel.com> We need to be able to adjust virtual memory layout at runtime to be able to switch between 4- and 5-level paging at boot-time. KASLR already has movable __VMALLOC_BASE, __VMEMMAP_BASE and __PAGE_OFFSET. Let's re-use it. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> --- arch/x86/include/asm/kaslr.h | 4 ---- arch/x86/include/asm/page_64.h | 4 ++++ arch/x86/include/asm/page_64_types.h | 2 +- arch/x86/include/asm/pgtable_64_types.h | 2 +- arch/x86/kernel/head64.c | 9 +++++++++ arch/x86/mm/kaslr.c | 8 -------- 6 files changed, 15 insertions(+), 14 deletions(-) diff --git a/arch/x86/include/asm/kaslr.h b/arch/x86/include/asm/kaslr.h index 1052a797d71d..683c9d736314 100644 --- a/arch/x86/include/asm/kaslr.h +++ b/arch/x86/include/asm/kaslr.h @@ -4,10 +4,6 @@ unsigned long kaslr_get_random_long(const char *purpose); #ifdef CONFIG_RANDOMIZE_MEMORY -extern unsigned long page_offset_base; -extern unsigned long vmalloc_base; -extern unsigned long vmemmap_base; - void kernel_randomize_memory(void); #else static inline void kernel_randomize_memory(void) { } diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index b4a0d43248cf..a12fb4dcdd15 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -10,6 +10,10 @@ extern unsigned long max_pfn; extern unsigned long phys_base; +extern unsigned long page_offset_base; +extern unsigned long vmalloc_base; +extern unsigned long vmemmap_base; + static inline unsigned long __phys_addr_nodebug(unsigned long x) { unsigned long y = x - __START_KERNEL_map; diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index 3f5f08b010d0..0126d6bc2eb1 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -42,7 +42,7 @@ #define __PAGE_OFFSET_BASE _AC(0xffff880000000000, UL) #endif -#ifdef CONFIG_RANDOMIZE_MEMORY +#if defined(CONFIG_RANDOMIZE_MEMORY) || defined(CONFIG_X86_5LEVEL) #define __PAGE_OFFSET page_offset_base #else #define __PAGE_OFFSET __PAGE_OFFSET_BASE diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 06470da156ba..a9f77ead7088 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -85,7 +85,7 @@ typedef struct { pteval_t pte; } pte_t; #define __VMALLOC_BASE _AC(0xffffc90000000000, UL) #define __VMEMMAP_BASE _AC(0xffffea0000000000, UL) #endif -#ifdef CONFIG_RANDOMIZE_MEMORY +#if defined(CONFIG_RANDOMIZE_MEMORY) || defined(CONFIG_X86_5LEVEL) #define VMALLOC_START vmalloc_base #define VMEMMAP_START vmemmap_base #else diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index 9403633f4c7c..408ed402db1a 100644 --- 
a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -38,6 +38,15 @@ extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD]; static unsigned int __initdata next_early_pgt; pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX); +#if defined(CONFIG_RANDOMIZE_MEMORY) || defined(CONFIG_X86_5LEVEL) +unsigned long page_offset_base = __PAGE_OFFSET_BASE; +EXPORT_SYMBOL(page_offset_base); +unsigned long vmalloc_base = __VMALLOC_BASE; +EXPORT_SYMBOL(vmalloc_base); +unsigned long vmemmap_base = __VMEMMAP_BASE; +EXPORT_SYMBOL(vmemmap_base); +#endif + static void __init *fixup_pointer(void *ptr, unsigned long physaddr) { return ptr - (void *)_text + (void *)physaddr; diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c index af599167fe3c..e6420b18f6e0 100644 --- a/arch/x86/mm/kaslr.c +++ b/arch/x86/mm/kaslr.c @@ -53,14 +53,6 @@ static const unsigned long vaddr_end = EFI_VA_END; static const unsigned long vaddr_end = __START_KERNEL_map; #endif -/* Default values */ -unsigned long page_offset_base = __PAGE_OFFSET_BASE; -EXPORT_SYMBOL(page_offset_base); -unsigned long vmalloc_base = __VMALLOC_BASE; -EXPORT_SYMBOL(vmalloc_base); -unsigned long vmemmap_base = __VMEMMAP_BASE; -EXPORT_SYMBOL(vmemmap_base); - /* * Memory regions randomized by KASLR (except modules that use a separate logic * earlier during boot). The list is ordered based on virtual addresses. This -- 2.11.0 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
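
The mechanism the patch relies on is simple: once __PAGE_OFFSET expands to
the variable page_offset_base rather than a constant, every __va()/__pa()-style
conversion picks up whatever value early boot code stored there. Below is a
minimal, self-contained userspace C sketch of that pattern, for illustration
only: the names mirror the kernel's, and the 5-level base address is used
purely as an example value; the real switch happens in early boot code later
in this series.

#include <stdio.h>

/* Compile-time default, as with __PAGE_OFFSET_BASE for 4-level paging. */
#define PAGE_OFFSET_BASE_4LVL	0xffff880000000000UL
/* Example 5-level base; treat the exact number as illustrative here. */
#define PAGE_OFFSET_BASE_5LVL	0xff10000000000000UL

/* With the patch applied, the "constant" becomes a variable... */
unsigned long page_offset_base = PAGE_OFFSET_BASE_4LVL;
/* ...and the macro forwards to it, so existing users need no changes. */
#define PAGE_OFFSET	page_offset_base

/* A __va()-style conversion now reads the runtime value. */
static void *phys_to_virt_sketch(unsigned long paddr)
{
	return (void *)(paddr + PAGE_OFFSET);
}

int main(void)
{
	printf("4-level mapping of 0x1000: %p\n", phys_to_virt_sketch(0x1000));

	/* Boot code would flip the layout before any such mapping is used. */
	page_offset_base = PAGE_OFFSET_BASE_5LVL;
	printf("5-level mapping of 0x1000: %p\n", phys_to_virt_sketch(0x1000));

	return 0;
}

In the kernel the assignment obviously cannot happen this late: the layout
has to be chosen in early boot, before the first use of the direct mapping,
which is what the later patches in this series arrange.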