Date: Sat, 30 Dec 2017 22:31:45 +0100
From: Ingo Molnar
To: Thomas Gleixner
Cc: LKML, Linus Torvalds, x86@kernel.org, Andy Lutomirski, Dave Hansen,
    Peter Zijlstra, Borislav Petkov, Dominik Brodowski
Subject: Re: [patch 3/3] x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()
Message-ID: <20171230213145.rh26gyvfopujzumb@gmail.com>
In-Reply-To: <20171230211829.679325424@linutronix.de>
References: <20171230211351.980176980@linutronix.de>
 <20171230211829.679325424@linutronix.de>
List-ID: <linux-kernel.vger.kernel.org>

The cleanup looks good to me, just a few spelling nits:

* Thomas Gleixner wrote:

> The preempt_disable/enable() pair in __native_flush_tlb() was added in
> commit 5cf0791da5c1 ("x86/mm: Disable preemption during CR3 read+write") to
> protect the UP variant of flush_tlb_mm_range().
>
> That preempt_disable/enable() pair should have been added to the UP variant
> of flush_tlb_mm_range() instead.
>
> The UP variant was removed with commit ce4a4e565f52 ("x86/mm: Remove the UP
> asm/tlbflush.h code, always use the (formerly) SMP code"), but the
> preempt_disable/enable() pair stayed around.
> The latest change to __native_flush_tlb() in commit 6fd166aae78c ("x86/mm:
> Use/Fix PCID to optimize user/kernel switches") added an access to a per
> cpu variable outside the preempt disabled regions which makes no sense at
> all. __native_flush_tlb() must always be called with at least preemption
> disabled.

s/cpu/CPU

> Remove the preempt_disable/enable() pair and add a WARN_ON_ONCE() to catch
> bad callers independent of the smp_processor_id() debugging.
>
> Signed-off-by: Thomas Gleixner
> ---
>  arch/x86/include/asm/tlbflush.h | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -345,15 +345,17 @@ static inline void invalidate_user_asid(
>   */
>  static inline void __native_flush_tlb(void)
>  {
> -	invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
>  	/*
> -	 * If current->mm == NULL then we borrow a mm which may change
> -	 * during a task switch and therefore we must not be preempted
> -	 * while we write CR3 back:
> +	 * Preemption or interrupts must be disabled to protect the access
> +	 * to the per cpu variable and to prevent being preempted between
> +	 * read_cr3() and write_cr3().
>  	 */
> -	preempt_disable();
> +	WARN_ON_ONCE(preemptible());
> +
> +	invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
> +
> +	/* If current->mm == NULL then the read_cr3() "borrows" a mm */
>  	native_write_cr3(__native_read_cr3());
> -	preempt_enable();

s/a mm/an mm

Reviewed-by: Ingo Molnar

Thanks,

	Ingo