From: cdall@linaro.org (Christoffer Dall)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 6/8] KVM: arm/arm64: Preserve Exec permission across R/W permission faults
Date: Sat, 21 Oct 2017 17:17:48 +0200
Message-ID: <20171021151748.GD12618@cbox>
In-Reply-To: <20171020154904.31427-7-marc.zyngier@arm.com>
On Fri, Oct 20, 2017 at 04:49:02PM +0100, Marc Zyngier wrote:
> So far, we lose the Exec property whenever we take permission
> faults, as we always reconstruct the PTE/PMD from scratch. This
> can be counter-productive as we can end up with the following
> fault sequence:
>
> X -> RO -> ROX -> RW -> RWX
>
> Instead, we can lookup the existing PTE/PMD and clear the XN bit in the
> new entry if it was already cleared in the old one, leading to a much
> nicer fault sequence:
>
> X -> ROX -> RWX
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_mmu.h | 10 ++++++++++
> arch/arm64/include/asm/kvm_mmu.h | 10 ++++++++++
> virt/kvm/arm/mmu.c | 27 +++++++++++++++++++++++++++
> 3 files changed, 47 insertions(+)
>
> diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
> index 4d7a54cbb3ab..aab64fe52146 100644
> --- a/arch/arm/include/asm/kvm_mmu.h
> +++ b/arch/arm/include/asm/kvm_mmu.h
> @@ -107,6 +107,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
> return (pte_val(*pte) & L_PTE_S2_RDWR) == L_PTE_S2_RDONLY;
> }
>
> +static inline bool kvm_s2pte_exec(pte_t *pte)
> +{
> + return !(pte_val(*pte) & L_PTE_XN);
> +}
> +
> static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
> {
> pmd_val(*pmd) = (pmd_val(*pmd) & ~L_PMD_S2_RDWR) | L_PMD_S2_RDONLY;
> @@ -117,6 +122,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
> return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
> }
>
> +static inline bool kvm_s2pmd_exec(pmd_t *pmd)
> +{
> + return !(pmd_val(*pmd) & PMD_SECT_XN);
> +}
> +
> static inline bool kvm_page_empty(void *ptr)
> {
> struct page *ptr_page = virt_to_page(ptr);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 1e1b20cb348f..126abefffe7f 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -203,6 +203,11 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
> return (pte_val(*pte) & PTE_S2_RDWR) == PTE_S2_RDONLY;
> }
>
> +static inline bool kvm_s2pte_exec(pte_t *pte)
> +{
> + return !(pte_val(*pte) & PTE_S2_XN);
> +}
> +
> static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
> {
> kvm_set_s2pte_readonly((pte_t *)pmd);
> @@ -213,6 +218,11 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
> return kvm_s2pte_readonly((pte_t *)pmd);
> }
>
> +static inline bool kvm_s2pmd_exec(pmd_t *pmd)
> +{
> + return !(pmd_val(*pmd) & PMD_S2_XN);
> +}
> +
> static inline bool kvm_page_empty(void *ptr)
> {
> struct page *ptr_page = virt_to_page(ptr);
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index f956efbd933d..b83b5a8442bb 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -926,6 +926,25 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> return 0;
> }
>
> +static bool stage2_is_exec(struct kvm *kvm, phys_addr_t addr)
> +{
> + pmd_t *pmdp;
> + pte_t *ptep;
> +
> + pmdp = stage2_get_pmd(kvm, NULL, addr);
> + if (!pmdp || pmd_none(*pmdp) || !pmd_present(*pmdp))
> + return false;
> +
> + if (pmd_thp_or_huge(*pmdp))
> + return kvm_s2pmd_exec(pmdp);
> +
> + ptep = pte_offset_kernel(pmdp, addr);
> + if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
> + return false;
> +
> + return kvm_s2pte_exec(ptep);
> +}
> +
> static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> phys_addr_t addr, const pte_t *new_pte,
> unsigned long flags)
> @@ -1407,6 +1426,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (exec_fault) {
> new_pmd = kvm_s2pmd_mkexec(new_pmd);
> invalidate_icache_guest_page(vcpu, pfn, PMD_SIZE);
> + } else if (fault_status == FSC_PERM) {
> + /* Preserve execute if XN was already cleared */
> + if (stage2_is_exec(kvm, fault_ipa))
> + new_pmd = kvm_s2pmd_mkexec(new_pmd);
> }
>
> ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
> @@ -1425,6 +1448,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (exec_fault) {
> new_pte = kvm_s2pte_mkexec(new_pte);
> invalidate_icache_guest_page(vcpu, pfn, PAGE_SIZE);
> + } else if (fault_status == FSC_PERM) {
> + /* Preserve execute if XN was already cleared */
> + if (stage2_is_exec(kvm, fault_ipa))
> + new_pte = kvm_s2pte_mkexec(new_pte);
> }
>
> ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
> --
> 2.14.1
>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-10-20 15:48 [PATCH v2 0/8] arm/arm64: KVM: limit icache invalidation to prefetch aborts Marc Zyngier
2017-10-20 15:48 ` [PATCH v2 1/8] arm64: KVM: Add invalidate_icache_range helper Marc Zyngier
2017-10-23 12:05 ` Will Deacon
2017-10-23 12:37 ` Marc Zyngier
2017-10-23 13:07 ` Will Deacon
2017-10-20 15:48 ` [PATCH v2 2/8] arm: KVM: Add optimized PIPT icache flushing Marc Zyngier
2017-10-20 16:27 ` Mark Rutland
2017-10-20 16:53 ` Marc Zyngier
2017-10-20 16:54 ` Mark Rutland
2017-10-21 15:18 ` Christoffer Dall
2017-10-31 13:51 ` Mark Rutland
2017-10-20 15:48 ` [PATCH v2 3/8] arm64: KVM: PTE/PMD S2 XN bit definition Marc Zyngier
2017-10-20 15:49 ` [PATCH v2 4/8] KVM: arm/arm64: Limit icache invalidation to prefetch aborts Marc Zyngier
2017-10-20 15:49 ` [PATCH v2 5/8] KVM: arm/arm64: Only clean the dcache on translation fault Marc Zyngier
2017-10-20 15:49 ` [PATCH v2 6/8] KVM: arm/arm64: Preserve Exec permission across R/W permission faults Marc Zyngier
2017-10-21 15:17 ` Christoffer Dall [this message]
2017-10-20 15:49 ` [PATCH v2 7/8] KVM: arm/arm64: Drop vcpu parameter from guest cache maintenance operations Marc Zyngier
2017-10-20 15:49 ` [PATCH v2 8/8] KVM: arm/arm64: Detangle kvm_mmu.h from kvm_hyp.h Marc Zyngier
2017-10-21 15:24 ` [PATCH v2 0/8] arm/arm64: KVM: limit icache invalidation to prefetch aborts Christoffer Dall
2017-10-22 9:20 ` Marc Zyngier