From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
	Sebastian Andrzej Siewior,
	Thomas Gleixner,
	Andy Lutomirski,
	Dave Hansen,
	Peter Zijlstra,
	Borislav Petkov
Subject: [PATCH 4.18 039/350] x86/mm/pat: Disable preemption around __flush_tlb_all()
Date: Sun, 11 Nov 2018 14:18:23 -0800
Message-Id: <20181111221708.813843679@linuxfoundation.org>
In-Reply-To: <20181111221707.043394111@linuxfoundation.org>
References: <20181111221707.043394111@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

4.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

commit f77084d96355f5fba8e2c1fb3a51a393b1570de7 upstream.

The WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off()
triggers every once in a while during a snapshotted system upgrade.

The warning triggers since commit decab0888e6e ("x86/mm: Remove
preempt_disable/enable() from __native_flush_tlb()"). The callchain is:

  get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages()

with CONFIG_DEBUG_PAGEALLOC enabled.

Disable preemption during CR3 reset / __flush_tlb_all() and add a comment
explaining why preemption has to be disabled so it won't be removed
accidentally.

Add another preemptible() check in __flush_tlb_all() to catch callers with
enabled preemption when PGE is enabled, because PGE being enabled does not
trigger the warning in __native_flush_tlb(). Suggested by Andy Lutomirski.

Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()")
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20181017103432.zgv46nlu3hc7k4rq@linutronix.de
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/tlbflush.h |    6 ++++++
 arch/x86/mm/pageattr.c          |    6 +++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_on
  */
 static inline void __flush_tlb_all(void)
 {
+	/*
+	 * This is to catch users with enabled preemption and the PGE feature
+	 * and don't trigger the warning in __native_flush_tlb().
+	 */
+	VM_WARN_ON_ONCE(preemptible());
+
 	if (boot_cpu_has(X86_FEATURE_PGE)) {
 		__flush_tlb_global();
 	} else {
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2063,9 +2063,13 @@ void __kernel_map_pages(struct page *pag
 
 	/*
 	 * We should perform an IPI and flush all tlbs,
-	 * but that can deadlock->flush only current cpu:
+	 * but that can deadlock->flush only current cpu.
+	 * Preemption needs to be disabled around __flush_tlb_all() due to
+	 * CR3 reload in __native_flush_tlb().
 	 */
+	preempt_disable();
 	__flush_tlb_all();
+	preempt_enable();
 
 	arch_flush_lazy_mmu_mode();
 }