From mboxrd@z Thu Jan  1 00:00:00 1970
From: Martin Schwidefsky
Subject: Re: [PATCHv2 1/3] x86/mm: Provide pmdp_establish() helper
Date: Mon, 19 Jun 2017 07:48:01 +0200
Message-ID: <20170619074801.18fa2a16@mschwideX1>
References: <20170615145224.66200-1-kirill.shutemov@linux.intel.com>
	<20170615145224.66200-2-kirill.shutemov@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8BIT
Return-path:
Received: from mx0b-001b2d01.pphosted.com ([148.163.158.5]:59692 "EHLO
	mx0a-001b2d01.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org
	with ESMTP id S1751013AbdFSFsO (ORCPT );
	Mon, 19 Jun 2017 01:48:14 -0400
Received: from pps.filterd (m0098420.ppops.net [127.0.0.1]) by
	mx0b-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id
	v5J5hwnZ083868 for ; Mon, 19 Jun 2017 01:48:13 -0400
Received: from e06smtp10.uk.ibm.com (e06smtp10.uk.ibm.com [195.75.94.106]) by
	mx0b-001b2d01.pphosted.com with ESMTP id 2b64r205k2-1 (version=TLSv1.2
	cipher=AES256-SHA bits=256 verify=NOT) for ;
	Mon, 19 Jun 2017 01:48:13 -0400
Received: from localhost by e06smtp10.uk.ibm.com with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted for from ;
	Mon, 19 Jun 2017 06:48:11 +0100
In-Reply-To: <20170615145224.66200-2-kirill.shutemov@linux.intel.com>
Sender: linux-arch-owner@vger.kernel.org
List-ID:
To: "Kirill A. Shutemov"
Cc: Andrew Morton, Vlastimil Babka, Vineet Gupta, Russell King,
	Will Deacon, Catalin Marinas, Ralf Baechle, "David S. Miller",
	"Aneesh Kumar K . V", Heiko Carstens, Andrea Arcangeli,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Ingo Molnar, "H . Peter Anvin",
	Thomas Gleixner

On Thu, 15 Jun 2017 17:52:22 +0300
"Kirill A. Shutemov" wrote:

> We need an atomic way to set up a pmd page table entry, avoiding races
> with the CPU setting dirty/accessed bits. This is required to implement
> pmdp_invalidate() so that it doesn't lose these bits.
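As a side note, the atomicity requirement quoted above can be sketched in
plain user space: swapping the whole 64-bit entry in one atomic operation
returns the old value, so accessed/dirty bits set concurrently by another
CPU are observed rather than lost. This is only a minimal illustration,
not kernel code; `fake_pmd_t` and `fake_pmdp_establish` are made-up names,
and GCC's `__atomic` builtins stand in for the kernel's xchg().

```c
#include <stdint.h>

/* Illustrative user-space analogue (made-up names, not kernel API):
 * replace a 64-bit "page-table entry" atomically and return the old
 * value, so bits set by another thread/CPU between our read and our
 * write cannot be lost. */
typedef uint64_t fake_pmd_t;

static fake_pmd_t fake_pmdp_establish(fake_pmd_t *pmdp, fake_pmd_t pmd)
{
	/* A single atomic exchange; on x86 GCC emits XCHG here, which
	 * is what the quoted patch uses for the CONFIG_SMP case. */
	return __atomic_exchange_n(pmdp, pmd, __ATOMIC_SEQ_CST);
}
```

A read-modify-write done as two plain accesses would have a window in
which a hardware-set dirty bit could appear in the entry and then be
overwritten; the exchange closes that window.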
>
> On PAE we have to use cmpxchg8b as we cannot assume what the value of
> the new pmd will be, and setting it up half-by-half can expose a broken,
> corrupted entry to the CPU.
>
> Signed-off-by: Kirill A. Shutemov
> Cc: Ingo Molnar
> Cc: H. Peter Anvin
> Cc: Thomas Gleixner
> ---
>  arch/x86/include/asm/pgtable-3level.h | 18 ++++++++++++++++++
>  arch/x86/include/asm/pgtable.h        | 14 ++++++++++++++
>  2 files changed, 32 insertions(+)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index f5af95a0c6b8..a924fc6a96b9 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1092,6 +1092,20 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
>  }
>  
> +#ifndef pmdp_establish
> +#define pmdp_establish pmdp_establish
> +static inline pmd_t pmdp_establish(pmd_t *pmdp, pmd_t pmd)
> +{
> +	if (IS_ENABLED(CONFIG_SMP)) {
> +		return xchg(pmdp, pmd);
> +	} else {
> +		pmd_t old = *pmdp;
> +		*pmdp = pmd;
> +		return old;
> +	}
> +}
> +#endif
> +
>  /*
>   * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
>   *

For the s390 version of the pmdp_establish function we need the mm to be
able to do the TLB flush correctly. Can we please add a
"struct vm_area_struct *vma" argument to pmdp_establish, analogous to
pmdp_invalidate? The s390 patch would then look like this:
--
>From 4d4641249d5e826c21c522d149553e89d73fcd4f Mon Sep 17 00:00:00 2001
From: Martin Schwidefsky
Date: Mon, 19 Jun 2017 07:40:11 +0200
Subject: [PATCH] s390/mm: add pmdp_establish

Define the pmdp_establish function to replace a pmd entry with a new
one and return the old value.
Signed-off-by: Martin Schwidefsky
---
 arch/s390/include/asm/pgtable.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index bb59a0aa3249..dedeecd5455c 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1511,6 +1511,13 @@ static inline void pmdp_invalidate(struct vm_area_struct *vma,
 	pmdp_xchg_direct(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
 }
 
+static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp, pmd_t pmd)
+{
+	return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
+}
+#define pmdp_establish pmdp_establish
+
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
-- 
2.11.2

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.
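For completeness, the PAE case mentioned in the quoted commit message,
where the 8-byte entry must never be written as two 4-byte halves, can
also be illustrated in user space with a compare-and-swap loop. Again a
hedged sketch with made-up names; GCC's `__atomic_compare_exchange_n`
stands in for the kernel's cmpxchg8b-based cmpxchg64().

```c
#include <stdint.h>

/* Illustrative user-space analogue of the PAE-style update (made-up
 * names, not kernel API): the 8-byte entry is replaced in one shot,
 * never as two 4-byte stores, so no torn entry is ever visible. */
typedef uint64_t fake_pmd_t;

static fake_pmd_t fake_pmdp_establish_pae(fake_pmd_t *pmdp, fake_pmd_t pmd)
{
	fake_pmd_t old = *pmdp;

	/* On failure, __atomic_compare_exchange_n reloads the current
	 * value into 'old', so bits set concurrently (e.g. dirty or
	 * accessed) end up in the returned old value instead of being
	 * silently dropped. */
	while (!__atomic_compare_exchange_n(pmdp, &old, pmd, 0,
					    __ATOMIC_SEQ_CST,
					    __ATOMIC_SEQ_CST))
		;
	return old;
}
```

The loop is needed because, unlike the 64-bit xchg() path, the new value
can only be installed once the complete old value is known.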