From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin
Cc: Yu-cheng Yu
Subject: [PATCH v7 17/27] mm: Update can_follow_write_pte/pmd for shadow stack
Date: Thu, 6 Jun 2019 13:06:36 -0700
Message-Id: <20190606200646.3951-18-yu-cheng.yu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190606200646.3951-1-yu-cheng.yu@intel.com>
References: <20190606200646.3951-1-yu-cheng.yu@intel.com>

can_follow_write_pte/pmd() check for the (RO & DIRTY) PTE/PMD to
verify that an exclusive RO page still exists after the COW has been
broken.

A shadow stack PTE is RO & PAGE_DIRTY_SW when it is shared, and
RO & PAGE_DIRTY_HW otherwise.  Introduce pte_exclusive() and
pmd_exclusive() so that the exclusivity check also covers shadow
stack PTEs.

Also rename can_follow_write_pte/pmd() to can_follow_write() to make
their meaning clear, i.e. "Can we write to the page?", not "Is the
PTE writable?"
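For illustration only (a sketch of the resulting predicate, not part
of the diff below; pte_dirty_hw() is the helper introduced earlier in
this series), the PTE-level check after this patch expands to:

	can_follow_write(pte, flags, vma) ==
		pte_write(pte) ||
		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
		 ((vma->vm_flags & VM_SHSTK) ? pte_dirty_hw(pte)
					     : pte_dirty(pte)));

so a shared shadow stack PTE (RO & PAGE_DIRTY_SW) is never treated as
exclusively owned, even under FOLL_FORCE.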
Signed-off-by: Yu-cheng Yu
---
 arch/x86/mm/pgtable.c         | 18 ++++++++++++++++++
 include/asm-generic/pgtable.h |  4 ++++
 mm/gup.c                      |  8 +++++---
 mm/huge_memory.c              |  8 +++++---
 4 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 8ff54bd978f3..2a89c168df7b 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -913,4 +913,22 @@ inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return (vm_flags & VM_SHSTK);
 }
+
+inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pte_dirty_hw(pte);
+	else
+		return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_dirty_hw(pmd);
+	else
+		return pmd_dirty(pmd);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 4940411b8e1c..3324e30bb07f 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1192,10 +1192,14 @@ static inline bool arch_has_pfn_modify_check(void)
 #define pte_set_vma_features(pte, vma) pte
 #define pmd_set_vma_features(pmd, vma) pmd
 #define arch_copy_pte_mapping(vma_flags) false
+#define pte_exclusive(pte, vma) pte_dirty(pte)
+#define pmd_exclusive(pmd, vma) pmd_dirty(pmd)
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
 pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 bool arch_copy_pte_mapping(vm_flags_t vm_flags);
+bool pte_exclusive(pte_t pte, struct vm_area_struct *vma);
+bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma);
 #endif
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/gup.c b/mm/gup.c
index ddde097cf9e4..7d11fff1e8c3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -178,10 +178,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write(pte_t pte, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pte_exclusive(pte, vma));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -219,7 +221,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index eac1ee2f8985..d65970b9ece6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1441,10 +1441,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write(pmd_t pmd, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pmd_exclusive(pmd, vma));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1457,7 +1459,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.17.1