From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <stable-owner@vger.kernel.org>
Received: from mail.linuxfoundation.org ([140.211.169.12]:37362 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750961AbdL3Qit (ORCPT );
	Sat, 30 Dec 2017 11:38:49 -0500
Subject: Patch "x86/mm: Reimplement flush_tlb_page() using
 flush_tlb_mm_range()" has been added to the 4.9-stable tree
To: luto@kernel.org, akpm@linux-foundation.org, bpetkov@suse.de,
	dave.hansen@intel.com, gregkh@linuxfoundation.org, hughd@google.com,
	keescook@chromium.org, mgorman@suse.de, mhocko@suse.com,
	mingo@kernel.org, nadav.amit@gmail.com, namit@vmware.com,
	peterz@infradead.org, riel@redhat.com, tglx@linutronix.de,
	torvalds@linux-foundation.org
Cc: <stable-commits@vger.kernel.org>
From: <gregkh@linuxfoundation.org>
Date: Sat, 30 Dec 2017 17:38:45 +0100
Message-ID: <151465192515664@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID: <stable.vger.kernel.org>

This is a note to let you know that I've just added the patch titled

    x86/mm: Reimplement flush_tlb_page() using flush_tlb_mm_range()

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-reimplement-flush_tlb_page-using-flush_tlb_mm_range.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From ca6c99c0794875c6d1db6e22f246699691ab7e6b Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 22 May 2017 15:30:01 -0700
Subject: x86/mm: Reimplement flush_tlb_page() using flush_tlb_mm_range()

From: Andy Lutomirski <luto@kernel.org>

commit ca6c99c0794875c6d1db6e22f246699691ab7e6b upstream.

flush_tlb_page() was very similar to flush_tlb_mm_range() except that
it had a couple of issues:

 - It was missing an smp_mb() in the case where
   current->active_mm != mm.  (This is a longstanding bug reported by
   Nadav Amit.)

 - It was missing tracepoints and vm counter updates.

The only reason that I can see for keeping it as a separate function
is that it could avoid a few of the branches that flush_tlb_mm_range()
needs in order to decide that it is flushing just one page.  This
hardly seems worthwhile.  If we decide we want to get rid of those
branches again, a better way would be to introduce an
__flush_tlb_mm_range() helper and make both flush_tlb_page() and
flush_tlb_mm_range() use it.
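
(For illustration only: a rough sketch of the refactor suggested in the
last paragraph.  __flush_tlb_mm_range() is named in the commit message
but does not exist in this tree; its signature and the "single_page"
hint below are invented here to show how both entry points could share
one helper while keeping a one-page fast path.)

/* Hypothetical common helper; not part of this patch. */
static void __flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
				 unsigned long end, unsigned long vmflag,
				 bool single_page);

void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
			unsigned long end, unsigned long vmflag)
{
	/* The general range path keeps its range-sizing branches. */
	__flush_tlb_mm_range(mm, start, end, vmflag, false);
}

static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
{
	/* One page: [a, a + PAGE_SIZE), no sizing branches needed. */
	__flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE, true);
}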
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/3cc3847cf888d8907577569b8bac3f01992ef8f9.1495492063.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/include/asm/tlbflush.h |    6 +++++-
 arch/x86/mm/tlb.c               |   27 ---------------------------
 2 files changed, 5 insertions(+), 28 deletions(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -297,11 +297,15 @@ static inline void flush_tlb_kernel_rang
 		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
 
 extern void flush_tlb_all(void);
-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned long vmflag);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
+static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
+{
+	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE);
+}
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 				struct mm_struct *mm,
 				unsigned long start, unsigned long end);
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -354,33 +354,6 @@ out:
 	preempt_enable();
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
-{
-	struct mm_struct *mm = vma->vm_mm;
-
-	preempt_disable();
-
-	if (current->active_mm == mm) {
-		if (current->mm) {
-			/*
-			 * Implicit full barrier (INVLPG) that synchronizes
-			 * with switch_mm.
-			 */
-			__flush_tlb_one(start);
-		} else {
-			leave_mm(smp_processor_id());
-
-			/* Synchronize with switch_mm. */
-			smp_mb();
-		}
-	}
-
-	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), mm, start, start + PAGE_SIZE);
-
-	preempt_enable();
-}
-
 static void do_flush_tlb_all(void *info)
 {
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);


Patches currently in stable-queue which might be from luto@kernel.org are

queue-4.9/x86-vm86-32-switch-to-flush_tlb_mm_range-in-mark_screen_rdonly.patch
queue-4.9/x86-mm-remove-the-up-asm-tlbflush.h-code-always-use-the-formerly-smp-code.patch
queue-4.9/x86-mm-reimplement-flush_tlb_page-using-flush_tlb_mm_range.patch
queue-4.9/x86-mm-make-flush_tlb_mm_range-more-predictable.patch
queue-4.9/x86-mm-remove-flush_tlb-and-flush_tlb_current_task.patch
queue-4.9/x86-mm-disable-pcid-on-32-bit-kernels.patch
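
(For readers auditing call sites: the change is invisible to callers.
The two lines below restate the new inline helper from the tlbflush.h
hunk above; "vma" and "addr" stand for whatever the caller already has
in hand, and no new API is introduced.)

	/* An unchanged call site ... */
	flush_tlb_page(vma, addr);

	/* ... now simply expands to a one-page range flush: */
	flush_tlb_mm_range(vma->vm_mm, addr, addr + PAGE_SIZE, VM_NONE);

With the old dedicated path gone, single-page flushes pick up the
smp_mb() ordering, tracepoints, and vm counter updates of
flush_tlb_mm_range() that the commit message describes.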