From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:37738 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753986AbcAXXRL (ORCPT );
	Sun, 24 Jan 2016 18:17:11 -0500
Subject: Patch "x86/mm: Improve switch_mm() barrier comments" has been added to the 4.1-stable tree
To: luto@kernel.org, bp@alien8.de, brgerst@gmail.com,
	dave.hansen@linux.intel.com, dvlasenk@redhat.com,
	gregkh@linuxfoundation.org, hpa@zytor.com, luto@amacapital.net,
	mingo@kernel.org, peterz@infradead.org, riel@redhat.com,
	tglx@linutronix.de, torvalds@linux-foundation.org
Cc: ,
From: 
Date: Sun, 24 Jan 2016 15:17:10 -0800
Message-ID: <145367743016067@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID: 

This is a note to let you know that I've just added the patch titled

    x86/mm: Improve switch_mm() barrier comments

to the 4.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-improve-switch_mm-barrier-comments.patch
and it can be found in the queue-4.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b Mon Sep 17 00:00:00 2001
From: Andy Lutomirski 
Date: Tue, 12 Jan 2016 12:47:40 -0800
Subject: x86/mm: Improve switch_mm() barrier comments

From: Andy Lutomirski 

commit 4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b upstream.

My previous comments were still a bit confusing and there was a typo.
Fix it up.

Reported-by: Peter Zijlstra 
Signed-off-by: Andy Lutomirski 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: Denys Vlasenko 
Cc: H. Peter Anvin 
Cc: Linus Torvalds 
Cc: Rik van Riel 
Cc: Thomas Gleixner 
Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
Link: http://lkml.kernel.org/r/0a0b43cdcdd241c5faaaecfbcc91a155ddedc9a1.1452631609.git.luto@kernel.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/mmu_context.h |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -120,14 +120,16 @@ static inline void switch_mm(struct mm_s
 	 * be sent, and CPU 0's TLB will contain a stale entry.)
 	 *
 	 * The bad outcome can occur if either CPU's load is
-	 * reordered before that CPU's store, so both CPUs much
+	 * reordered before that CPU's store, so both CPUs must
	 * execute full barriers to prevent this from happening.
 	 *
 	 * Thus, switch_mm needs a full barrier between the
 	 * store to mm_cpumask and any operation that could load
-	 * from next->pgd.  This barrier synchronizes with
-	 * remote TLB flushers.  Fortunately, load_cr3 is
-	 * serializing and thus acts as a full barrier.
+	 * from next->pgd.  TLB fills are special and can happen
+	 * due to instruction fetches or for no reason at all,
+	 * and neither LOCK nor MFENCE orders them.
+	 * Fortunately, load_cr3() is serializing and gives the
+	 * ordering guarantee we need.
 	 *
 	 */
 	load_cr3(next->pgd);
@@ -174,9 +176,8 @@ static inline void switch_mm(struct mm_s
 	 * tlb flush IPI delivery. We must reload CR3
 	 * to make sure to use no freed page tables.
 	 *
-	 * As above, this is a barrier that forces
-	 * TLB repopulation to be ordered after the
-	 * store to mm_cpumask.
+	 * As above, load_cr3() is serializing and orders TLB
+	 * fills with respect to the mm_cpumask write.
 	 */
 	load_cr3(next->pgd);
 	trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);


Patches currently in stable-queue which might be from luto@kernel.org are

queue-4.1/x86-mm-add-barriers-and-document-switch_mm-vs-flush-synchronization.patch
queue-4.1/x86-mm-improve-switch_mm-barrier-comments.patch
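
For readers following the barrier reasoning in the comment above: the hazard it
describes is the classic "store buffering" pattern, where each CPU stores to one
location and then loads from another, and both loads can be satisfied before
either store becomes globally visible.  The standalone C11 sketch below is not
kernel code and every name in it is invented for the demo; it models the
switch_mm() side as a store to x (the mm_cpumask write) followed by a load of y
(the page-table read), and the flusher side as a store to y followed by a load
of x (reading mm_cpumask to pick IPI targets).  With relaxed ordering, runs on a
multi-core x86 machine can observe the "bad outcome" r0 == r1 == 0; upgrading
the four x/y accesses to memory_order_seq_cst inserts the full barriers the
comment calls for.  Note that the kernel's real problem is stricter still: the
"load" on the switch_mm() side is a hardware TLB fill, which LOCK and MFENCE do
not order, hence the reliance on the serializing load_cr3().

/*
 * sb_demo.c - store-buffering demo (hypothetical, not kernel code).
 * Build with:  cc -O2 -pthread sb_demo.c -o sb_demo
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ROUNDS 200000

static atomic_int x, y;            /* the two shared locations        */
static atomic_int go0, go1;        /* per-round start signals         */
static atomic_int done0, done1;    /* per-round completion signals    */
static int r0, r1;                 /* what each "CPU" observed        */

static void *cpu0(void *arg)       /* models the switch_mm() side     */
{
	(void)arg;
	for (int i = 1; i <= ROUNDS; i++) {
		while (atomic_load_explicit(&go0, memory_order_acquire) != i)
			;                                      /* wait for round i  */
		atomic_store_explicit(&x, 1, memory_order_relaxed);  /* store...   */
		r0 = atomic_load_explicit(&y, memory_order_relaxed); /* ...then load */
		atomic_store_explicit(&done0, i, memory_order_release);
	}
	return NULL;
}

static void *cpu1(void *arg)       /* models the TLB-flusher side     */
{
	(void)arg;
	for (int i = 1; i <= ROUNDS; i++) {
		while (atomic_load_explicit(&go1, memory_order_acquire) != i)
			;
		atomic_store_explicit(&y, 1, memory_order_relaxed);
		r1 = atomic_load_explicit(&x, memory_order_relaxed);
		atomic_store_explicit(&done1, i, memory_order_release);
	}
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;
	long bad = 0;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);

	for (int i = 1; i <= ROUNDS; i++) {
		atomic_store_explicit(&x, 0, memory_order_relaxed);
		atomic_store_explicit(&y, 0, memory_order_relaxed);
		atomic_store_explicit(&go0, i, memory_order_release);
		atomic_store_explicit(&go1, i, memory_order_release);
		while (atomic_load_explicit(&done0, memory_order_acquire) != i ||
		       atomic_load_explicit(&done1, memory_order_acquire) != i)
			;                          /* wait for both threads       */
		if (r0 == 0 && r1 == 0)            /* neither load saw the other's store */
			bad++;
	}

	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	printf("bad (r0 == 0 && r1 == 0) outcomes: %ld of %d rounds\n", bad, ROUNDS);
	return 0;
}

On typical x86 hardware the relaxed version reports a nonzero bad count within a
few hundred thousand rounds, because a store-then-load pair to different
locations is exactly the one reordering x86 permits (the store sits in the store
buffer while the later load completes).  The seq_cst variant never does.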