From mboxrd@z Thu Jan  1 00:00:00 1970
From: Joerg Roedel
Subject: [PATCH v3 4/7] x86/mm/64: Implement arch_sync_kernel_mappings()
Date: Fri, 15 May 2020 16:00:20 +0200
Message-ID: <20200515140023.25469-5-joro@8bytes.org>
References: <20200515140023.25469-1-joro@8bytes.org>
In-Reply-To: <20200515140023.25469-1-joro@8bytes.org>
Sender: linux-acpi-owner@vger.kernel.org
To: x86@kernel.org
Cc: hpa@zytor.com, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	rjw@rjwysocki.net, Arnd Bergmann, Andrew Morton, Steven Rostedt,
	Vlastimil Babka, Michal Hocko, Matthew Wilcox, Joerg Roedel,
	joro@8bytes.org, linux-kernel@vger.kernel.org,
	linux-acpi@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org
List-Id: linux-arch.vger.kernel.org

From: Joerg Roedel

Implement the function to sync changes in vmalloc and ioremap ranges
to all page-tables.

Signed-off-by: Joerg Roedel
Acked-by: Andy Lutomirski
---
 arch/x86/include/asm/pgtable_64_types.h | 2 ++
 arch/x86/mm/init_64.c                   | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..8f63efb2a2cc 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -159,4 +159,6 @@ extern unsigned int ptrs_per_p4d;
 
 #define PGD_KERNEL_START	((PAGE_SIZE / 2) / sizeof(pgd_t))
 
+#define ARCH_PAGE_TABLE_SYNC_MASK	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
+
 #endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3b289c2f75cd..541af8e5bcd4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -217,6 +217,11 @@ void sync_global_pgds(unsigned long start, unsigned long end)
 		sync_global_pgds_l4(start, end);
 }
 
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	sync_global_pgds(start, end);
+}
+
 /*
  * NOTE: This function is marked __ref because it calls __init function
  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
-- 
2.17.1
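
[Editorial note, not part of the patch] For context on how the hook above is
meant to be used: earlier patches in this series have the generic vmalloc and
ioremap page-table population paths collect a pgtbl_mod_mask of the levels
they modified and call arch_sync_kernel_mappings() when that mask intersects
ARCH_PAGE_TABLE_SYNC_MASK. The snippet below is a minimal sketch of that
caller side under those assumptions; populate_range_track_mods() is a made-up
stand-in for the real population helpers, not an actual kernel function.

/*
 * Illustrative sketch only. Shows how a generic caller is expected to
 * consume ARCH_PAGE_TABLE_SYNC_MASK and the arch_sync_kernel_mappings()
 * hook defined in this patch. populate_range_track_mods() is hypothetical;
 * pgtbl_mod_mask and the PGTBL_*_MODIFIED bits come from the generic side
 * of this series.
 */
static int populate_kernel_range(unsigned long start, unsigned long end)
{
	pgtbl_mod_mask mask = 0;
	int ret;

	/* Populate the page tables, recording which levels were modified. */
	ret = populate_range_track_mods(start, end, &mask);

	/*
	 * On x86-64 only top-level changes need to be replicated into every
	 * page table: P4D entries with 4-level paging, PGD entries with
	 * 5-level paging, which is exactly what ARCH_PAGE_TABLE_SYNC_MASK
	 * encodes.
	 */
	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
		arch_sync_kernel_mappings(start, end);

	return ret;
}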