From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:33150 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1750980AbdL3Pit (ORCPT );
        Sat, 30 Dec 2017 10:38:49 -0500
Subject: Patch "x86/mm: Remove flush_tlb() and flush_tlb_current_task()" has been added to the 4.9-stable tree
To: luto@kernel.org, akpm@linux-foundation.org, bp@alien8.de,
        brgerst@gmail.com, dave.hansen@intel.com, dvlasenk@redhat.com,
        gregkh@linuxfoundation.org, hpa@zytor.com, hughd@google.com,
        jpoimboe@redhat.com, mhocko@suse.com, mingo@kernel.org,
        namit@vmware.com, peterz@infradead.org, riel@redhat.com,
        tglx@linutronix.de, torvalds@linux-foundation.org
Cc: ,
From:
Date: Sat, 30 Dec 2017 16:38:38 +0100
Message-ID: <151464831871208@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

This is a note to let you know that I've just added the patch titled

    x86/mm: Remove flush_tlb() and flush_tlb_current_task()

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-remove-flush_tlb-and-flush_tlb_current_task.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>>From 29961b59a51f8c6838a26a45e871a7ed6771809b Mon Sep 17 00:00:00 2001
From: Andy Lutomirski
Date: Sat, 22 Apr 2017 00:01:20 -0700
Subject: x86/mm: Remove flush_tlb() and flush_tlb_current_task()

From: Andy Lutomirski

commit 29961b59a51f8c6838a26a45e871a7ed6771809b upstream.

I was trying to figure out how flush_tlb_current_task() would possibly
work correctly if current->mm != current->active_mm, but I realized I
could spare myself the effort: it has no callers except the unused
flush_tlb() macro.

Signed-off-by: Andy Lutomirski
Cc: Andrew Morton
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Michal Hocko
Cc: Nadav Amit
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/e52d64c11690f85e9f1d69d7b48cc2269cd2e94b.1492844372.git.luto@kernel.org
Signed-off-by: Ingo Molnar
Cc: Hugh Dickins
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/include/asm/tlbflush.h |    9 ---------
 arch/x86/mm/tlb.c               |   17 -----------------
 2 files changed, 26 deletions(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -205,7 +205,6 @@ static inline void __flush_tlb_one(unsig
 /*
  * TLB flushing:
  *
- *  - flush_tlb() flushes the current mm struct TLBs
  *  - flush_tlb_all() flushes all processes TLBs
  *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
  *  - flush_tlb_page(vma, vmaddr) flushes one page
@@ -237,11 +236,6 @@ static inline void flush_tlb_all(void)
 	__flush_tlb_all();
 }
 
-static inline void flush_tlb(void)
-{
-	__flush_tlb_up();
-}
-
 static inline void local_flush_tlb(void)
 {
 	__flush_tlb_up();
@@ -303,14 +297,11 @@ static inline void flush_tlb_kernel_rang
 		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
 
 extern void flush_tlb_all(void);
-extern void flush_tlb_current_task(void);
 extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned long vmflag);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
-#define flush_tlb() flush_tlb_current_task()
-
 void native_flush_tlb_others(const struct cpumask *cpumask,
 				struct mm_struct *mm,
 				unsigned long start, unsigned long end);
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -287,23 +287,6 @@ void native_flush_tlb_others(const struc
 	smp_call_function_many(cpumask, flush_tlb_func, &info, 1);
 }
 
-void flush_tlb_current_task(void)
-{
-	struct mm_struct *mm = current->mm;
-
-	preempt_disable();
-
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-
-	/* This is an implicit full barrier that synchronizes with switch_mm. */
-	local_flush_tlb();
-
-	trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
-	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
-	preempt_enable();
-}
-
 /*
  * See Documentation/x86/tlb.txt for details.  We choose 33
  * because it is large enough to cover the vast majority (at


Patches currently in stable-queue which might be from luto@kernel.org are

queue-4.9/x86-vm86-32-switch-to-flush_tlb_mm_range-in-mark_screen_rdonly.patch
queue-4.9/x86-mm-make-flush_tlb_mm_range-more-predictable.patch
queue-4.9/x86-mm-remove-flush_tlb-and-flush_tlb_current_task.patch
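
For context only, and not part of the patch above: with the flush_tlb() macro
gone, code that wants the effect it used to provide (flush all TLB entries for
current->mm on every CPU using it) would go through flush_tlb_mm_range(), whose
prototype is visible in the tlbflush.h hunk. Below is a minimal sketch assuming
a kernel-internal call site; the helper name example_flush_current_mm() is
hypothetical, and the 0UL/TLB_FLUSH_ALL arguments follow the full-mm flush
convention seen in the removed flush_tlb_current_task() code.

#include <linux/sched.h>	/* current */
#include <linux/mm_types.h>	/* struct mm_struct */
#include <asm/tlbflush.h>	/* flush_tlb_mm_range(), TLB_FLUSH_ALL */

/*
 * Hypothetical illustration (not from the patch): flush the whole TLB for
 * current->mm, locally and on any other CPU that has this mm loaded, which
 * is what the removed flush_tlb() did via flush_tlb_current_task().
 */
static void example_flush_current_mm(void)
{
	flush_tlb_mm_range(current->mm, 0UL, TLB_FLUSH_ALL, 0UL);
}

The companion queue-4.9 patch
x86-vm86-32-switch-to-flush_tlb_mm_range-in-mark_screen_rdonly.patch appears to
make the same kind of conversion for a former flush_tlb()-style user, except
that it flushes a specific address range instead of TLB_FLUSH_ALL.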