From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>,
	Andy Lutomirski <luto@amacapital.net>,
	Balbir Singh <bsingharora@gmail.com>,
	Cyrill Gorcunov <gorcunov@gmail.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Florian Weimer <fweimer@redhat.com>, "H.J. Lu" <hjl.tools@gmail.com>,
	Jann Horn <jannh@google.com>, Jonathan Corbet <corbet@lwn.net>,
	Kees Cook <keescook@chromium.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Nadav Amit <nadav.amit@gmail.com>, Oleg Nesterov <oleg@redhat.com>,
	Pavel Machek <pavel@ucw.cz>, Peter Zijlstra <peterz@infradead.org>,
	"Ravi V. Shankar" <ravi.v.shankar@intel.com>,
	Vedvyas Shanbhogue <vedvyas.shanbhogue@intel.com>
Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
Subject: [RFC PATCH v3 16/24] mm: Update can_follow_write_pte/pmd for shadow stack
Date: Thu, 30 Aug 2018 07:38:56 -0700
Message-ID: <20180830143904.3168-17-yu-cheng.yu@intel.com>
In-Reply-To: <20180830143904.3168-1-yu-cheng.yu@intel.com>

can_follow_write_pte/pmd look for the (RO & DIRTY) PTE/PMD to
verify an exclusive RO page still exists after a broken COW.

A shadow stack PTE is RO & PAGE_DIRTY_SW when it is shared,
otherwise RO & PAGE_DIRTY_HW.

Introduce pte_exclusive() and pmd_exclusive() to also verify a
shadow stack PTE is exclusive.

Also rename can_follow_write_pte/pmd() to can_follow_write() to
make their meaning clear; i.e. "Can we write to the page?", not
"Is the PTE writable?"

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/mm/pgtable.c         | 18 ++++++++++++++++++
 include/asm-generic/pgtable.h | 18 ++++++++++++++++++
 mm/gup.c                      |  8 +++++---
 mm/huge_memory.c              |  8 +++++---
 4 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 0ab38bfbedfc..13dd18ad6fd8 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -889,4 +889,22 @@ inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
 	else
 		return pmd;
 }
+
+inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pte_dirty_hw(pte);
+	else
+		return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_dirty_hw(pmd);
+	else
+		return pmd_dirty(pmd);
+}
+#endif
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 0f25186cd38d..2e8e7fa4ab71 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1156,9 +1156,27 @@ static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
 {
 	return pmd;
 }
+
+#ifdef CONFIG_MMU
+static inline bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
+{
+	return pte_dirty(pte);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd_dirty(pmd);
+}
+#endif
+#endif /* CONFIG_MMU */
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
 pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
+bool pte_exclusive(pte_t pte, struct vm_area_struct *vma);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+bool pmd_exclusive(pmd_t pmd, struct vm_area_struct *vma);
+#endif
 #endif

 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/gup.c b/mm/gup.c
index 1abc8b4afff6..03cb2e331f80 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -64,10 +64,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write(pte_t pte, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pte_exclusive(pte, vma));
 }

 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -105,7 +107,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5b4c8f2fb85e..702650eec0b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1387,10 +1387,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write(pmd_t pmd, unsigned int flags,
+				    struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 pmd_exclusive(pmd, vma));
 }

 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1403,7 +1405,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,

 	assert_spin_locked(pmd_lockptr(mm, pmd));

-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write(*pmd, flags, vma))
 		goto out;

 	/* Avoid dumping huge zero page */
--
2.17.1
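For readers who want to see the check in isolation, below is a minimal
user-space C sketch of the same decision logic. It is illustrative only:
the pte_t and vm_area_struct stand-ins, the bit assignments, and main()
are hypothetical inventions for the example; only the predicate bodies
mirror the patch's pte_exclusive() and can_follow_write().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical bit assignments, for illustration only. */
#define PAGE_DIRTY_HW	(1u << 0)	/* hardware-set dirty bit */
#define PAGE_DIRTY_SW	(1u << 1)	/* software dirty: shared shadow stack */
#define PAGE_WRITE	(1u << 2)

#define VM_SHSTK	(1u << 0)
#define FOLL_FORCE	(1u << 0)
#define FOLL_COW	(1u << 1)

typedef struct { uint32_t flags; } pte_t;
struct vm_area_struct { uint32_t vm_flags; };

static bool pte_write(pte_t pte)	{ return pte.flags & PAGE_WRITE; }
static bool pte_dirty_hw(pte_t pte)	{ return pte.flags & PAGE_DIRTY_HW; }
static bool pte_dirty(pte_t pte)
{
	return pte.flags & (PAGE_DIRTY_HW | PAGE_DIRTY_SW);
}

/* A shadow stack PTE counts as exclusive only when hardware-dirty. */
static bool pte_exclusive(pte_t pte, struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_SHSTK)
		return pte_dirty_hw(pte);
	return pte_dirty(pte);
}

static bool can_follow_write(pte_t pte, unsigned int flags,
			     struct vm_area_struct *vma)
{
	return pte_write(pte) ||
		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
		 pte_exclusive(pte, vma));
}

int main(void)
{
	struct vm_area_struct shstk = { .vm_flags = VM_SHSTK };
	pte_t shared = { .flags = PAGE_DIRTY_SW };	/* RO, shared */
	pte_t cowed  = { .flags = PAGE_DIRTY_HW };	/* RO, exclusive */
	unsigned int f = FOLL_FORCE | FOLL_COW;

	/* Shared shadow stack page: write must not be followed (0). */
	printf("shared: %d\n", can_follow_write(shared, f, &shstk));
	/* COWed, exclusive shadow stack page: write is allowed (1). */
	printf("cowed:  %d\n", can_follow_write(cowed, f, &shstk));
	return 0;
}

Note how the software-dirty (shared) shadow stack page fails the
FOLL_FORCE/FOLL_COW test while the hardware-dirty page, the state a
broken COW leaves behind, passes it; that is exactly the distinction a
plain pte_dirty() check cannot make once shadow stacks reuse the dirty
bits this way.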