public inbox for kvm@vger.kernel.org
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	Steve Capper <steve.capper@linaro.org>
Subject: Re: [PATCH 3/3] arm/arm64: KVM: Optimize handling of Access Flag faults
Date: Thu, 12 Mar 2015 12:40:24 +0100	[thread overview]
Message-ID: <20150312114024.GC28863@cbox> (raw)
In-Reply-To: <1421865733-20034-4-git-send-email-marc.zyngier@arm.com>

On Wed, Jan 21, 2015 at 06:42:13PM +0000, Marc Zyngier wrote:
> Now that we have page aging in Stage-2, it becomes obvious that
> we're doing way too much work handling the fault.
> 
> The page is not going anywhere (it is still mapped), the page
> tables are already allocated, and all we want is to flip a bit
> in the PMD or PTE. Also, we can avoid any form of TLB invalidation,
> since an entry with the AF bit off is not allowed to be cached
> in the TLB.
> 
> An obvious solution is to have a separate handler for FSC_ACCESS,
> where we pride ourselves on doing only the bare minimum amount of
> work.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/kvm/mmu.c   | 46 ++++++++++++++++++++++++++++++++++++++++++++++
>  arch/arm/kvm/trace.h | 15 +++++++++++++++
>  2 files changed, 61 insertions(+)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index ffe89a0..112bae1 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -1073,6 +1073,46 @@ out_unlock:
>  	return ret;
>  }
>  
> +/*
> + * Resolve the access fault by making the page young again.
> + * Note that because the faulting entry is guaranteed not to be
> + * cached in the TLB, we don't need to invalidate anything.
> + */
> +static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
> +{
> +	pmd_t *pmd;
> +	pte_t *pte;
> +	pfn_t pfn;
> +	bool pfn_valid = false;

I think you could just initialize pfn to KVM_PFN_ERR_FAULT and use
is_error_pfn() instead, though I'm not sure it's cleaner.
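To illustrate what I mean, here is a standalone sketch of the sentinel
pattern (not the actual kvm_host.h definitions -- PFN_ERR_FAULT and
resolve_access_fault() are made-up names, and the pmd/pte state is passed
in as plain booleans so this compiles outside the kernel):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pfn_t;

/* Hypothetical stand-in for KVM_PFN_ERR_FAULT: an error sentinel at the
 * top of the pfn range, so no valid pfn can collide with it. */
#define PFN_ERR_FAULT ((pfn_t)-1)

bool is_error_pfn(pfn_t pfn)
{
	return pfn == PFN_ERR_FAULT;
}

/* The pfn-extraction logic of handle_access_fault(), with the sentinel
 * replacing the separate pfn_valid flag. */
pfn_t resolve_access_fault(bool pmd_is_huge, bool pte_is_present,
			   pfn_t huge_pfn, pfn_t page_pfn)
{
	pfn_t pfn = PFN_ERR_FAULT;	/* instead of bool pfn_valid = false */

	if (pmd_is_huge)
		pfn = huge_pfn;		/* THP/HugeTLB path */
	else if (pte_is_present)
		pfn = page_pfn;		/* regular page path */

	/* Caller does: if (!is_error_pfn(pfn)) kvm_set_pfn_accessed(pfn); */
	return pfn;
}
```

The win is one fewer local variable; whether that reads better than the
explicit flag is a matter of taste.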

> +
> +	trace_kvm_access_fault(fault_ipa);
> +
> +	spin_lock(&vcpu->kvm->mmu_lock);
> +
> +	pmd = stage2_get_pmd(vcpu->kvm, NULL, fault_ipa);
> +	if (!pmd || pmd_none(*pmd))	/* Nothing there */
> +		goto out;
> +
> +	if (kvm_pmd_huge(*pmd)) {	/* THP, HugeTLB */
> +		*pmd = pmd_mkyoung(*pmd);
> +		pfn = pmd_pfn(*pmd);
> +		pfn_valid = true;
> +		goto out;
> +	}
> +
> +	pte = pte_offset_kernel(pmd, fault_ipa);
> +	if (pte_none(*pte))		/* Nothing there either */
> +		goto out;
> +
> +	*pte = pte_mkyoung(*pte);	/* Just a page... */
> +	pfn = pte_pfn(*pte);
> +	pfn_valid = true;
> +out:
> +	spin_unlock(&vcpu->kvm->mmu_lock);

Do you have a race here if the page is swapped out before you get to
mark the pfn accessed?  Does that cause any harm?

> +	if (pfn_valid)
> +		kvm_set_pfn_accessed(pfn);
> +}
> +
>  /**
>   * kvm_handle_guest_abort - handles all 2nd stage aborts
>   * @vcpu:	the VCPU pointer
> @@ -1140,6 +1180,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	/* Userspace should not be able to register out-of-bounds IPAs */
>  	VM_BUG_ON(fault_ipa >= KVM_PHYS_SIZE);
>  
> +	if (fault_status == FSC_ACCESS) {
> +		handle_access_fault(vcpu, fault_ipa);
> +		ret = 1;
> +		goto out_unlock;
> +	}
> +
>  	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
>  	if (ret == 0)
>  		ret = 1;
> diff --git a/arch/arm/kvm/trace.h b/arch/arm/kvm/trace.h
> index 364b5382..5665a16 100644
> --- a/arch/arm/kvm/trace.h
> +++ b/arch/arm/kvm/trace.h
> @@ -64,6 +64,21 @@ TRACE_EVENT(kvm_guest_fault,
>  		  __entry->hxfar, __entry->vcpu_pc)
>  );
>  
> +TRACE_EVENT(kvm_access_fault,
> +	TP_PROTO(unsigned long ipa),
> +	TP_ARGS(ipa),
> +
> +	TP_STRUCT__entry(
> +		__field(	unsigned long,	ipa		)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->ipa		= ipa;
> +	),
> +
> +	TP_printk("IPA: %lx", __entry->ipa)
> +);
> +
>  TRACE_EVENT(kvm_irq_line,
>  	TP_PROTO(unsigned int type, int vcpu_idx, int irq_num, int level),
>  	TP_ARGS(type, vcpu_idx, irq_num, level),
> -- 
> 2.1.4
> 
Thanks,
-Christoffer

Thread overview: 10+ messages
2015-01-21 18:42 [PATCH 0/3] arm/arm64: KVM: Add support for page aging Marc Zyngier
2015-01-21 18:42 ` [PATCH 1/3] arm/arm64: KVM: Allow handle_hva_to_gpa to return a value Marc Zyngier
2015-03-12 11:40   ` Christoffer Dall
2015-01-21 18:42 ` [PATCH 2/3] arm/arm64: KVM: Implement Stage-2 page aging Marc Zyngier
2015-03-12 11:40   ` Christoffer Dall
2015-03-12 14:50     ` Marc Zyngier
2015-01-21 18:42 ` [PATCH 3/3] arm/arm64: KVM: Optimize handling of Access Flag faults Marc Zyngier
2015-03-12 11:40   ` Christoffer Dall [this message]
2015-03-12 15:04     ` Marc Zyngier
2015-03-12 15:07     ` Marc Zyngier
