kvm.vger.kernel.org archive mirror
* [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes
@ 2013-08-07 10:03 Bharat Bhushan
  2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Bharat Bhushan @ 2013-08-07 10:03 UTC (permalink / raw)
  To: paulus, scottwood, agraf, kvm-ppc, kvm; +Cc: Bharat Bhushan

From: Bharat Bhushan <bharat.bhushan@freescale.com>

The first patch sets the missing _PAGE_ACCESSED state when a guest page is accessed.

The second patch checks for MMU notifier range invalidation in progress
when setting up a reference to a guest page. It is based on the
"KVM: PPC: Book3S PR: Use mmu_notifier_retry() in kvmppc_mmu_map_page()"
patch sent by Paul (still under review).

Bharat Bhushan (2):
  kvm: powerpc: mark page accessed when mapping a guest page
  kvm: ppc: booke: check range page invalidation progress on page setup

 arch/powerpc/kvm/e500_mmu_host.c |   22 ++++++++++++++++++++--
 1 files changed, 20 insertions(+), 2 deletions(-)


* [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page
  2013-08-07 10:03 [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bharat Bhushan
@ 2013-08-07 10:03 ` Bharat Bhushan
  2013-08-10  1:12   ` Scott Wood
  2013-10-04 13:35   ` Alexander Graf
  2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
  2013-08-30  1:06 ` [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bhushan Bharat-R65777
  2 siblings, 2 replies; 11+ messages in thread
From: Bharat Bhushan @ 2013-08-07 10:03 UTC (permalink / raw)
  To: paulus, scottwood, agraf, kvm-ppc, kvm; +Cc: Bharat Bhushan, Bharat Bhushan

Mark the guest page as accessed so that it is less likely to be
swapped out.
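
For context, kvm_set_pfn_accessed() ultimately just ages the backing
struct page, roughly like the sketch below (paraphrased from
virt/kvm/kvm_main.c of that era, slightly simplified; not part of this
patch), which is what keeps the page warm on the LRU:

	void kvm_set_pfn_accessed(pfn_t pfn)
	{
		/* MMIO pfns have no struct page to age */
		if (!kvm_is_mmio_pfn(pfn))
			mark_page_accessed(pfn_to_page(pfn));
	}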

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/kvm/e500_mmu_host.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 001a2b0..ff6dd66 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -246,6 +246,9 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 	/* Use guest supplied MAS2_G and MAS2_E */
 	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
 
+	/* Mark the page accessed */
+	kvm_set_pfn_accessed(pfn);
+
 	if (tlbe_is_writable(gtlbe))
 		kvm_set_pfn_dirty(pfn);
 }
-- 
1.7.0.4


* [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-08-07 10:03 [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bharat Bhushan
  2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
@ 2013-08-07 10:03 ` Bharat Bhushan
  2013-08-10  1:15   ` Scott Wood
  2013-10-04 13:38   ` Alexander Graf
  2013-08-30  1:06 ` [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bhushan Bharat-R65777
  2 siblings, 2 replies; 11+ messages in thread
From: Bharat Bhushan @ 2013-08-07 10:03 UTC (permalink / raw)
  To: paulus, scottwood, agraf, kvm-ppc, kvm; +Cc: Bharat Bhushan, Bharat Bhushan

When the MM code is invalidating a range of pages, it calls the KVM
kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
However, the Linux PTEs for the range being flushed are still valid at
that point.  We are not supposed to establish any new references to pages
in the range until the ...range_end() notifier gets called.
The PPC-specific KVM code doesn't get any explicit notification of that;
instead, we are supposed to use mmu_notifier_retry() to test whether we
are or have been inside a range flush notifier pair while we have been
referencing a page.

This patch calls mmu_notifier_retry() while mapping the guest page to
ensure that we are not referencing a page for which a range invalidation
is in progress.

The call is made inside a region protected by kvm->mmu_lock, which is
the same lock taken by the KVM MMU notifier functions, thus ensuring
that no new notification can proceed while we are in the locked region.
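
For context, the generic KVM code that this pairs with looks roughly
like the sketch below (simplified from virt/kvm/kvm_main.c and
include/linux/kvm_host.h of that era; srcu and tlbs_dirty handling
omitted; not part of this patch):

	static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
							    struct mm_struct *mm,
							    unsigned long start,
							    unsigned long end)
	{
		struct kvm *kvm = mmu_notifier_to_kvm(mn);

		spin_lock(&kvm->mmu_lock);
		/* From here on, mmu_notifier_retry() reports "in progress" */
		kvm->mmu_notifier_count++;
		if (kvm_unmap_hva_range(kvm, start, end))
			kvm_flush_remote_tlbs(kvm);
		spin_unlock(&kvm->mmu_lock);
	}

	static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
							  struct mm_struct *mm,
							  unsigned long start,
							  unsigned long end)
	{
		struct kvm *kvm = mmu_notifier_to_kvm(mn);

		spin_lock(&kvm->mmu_lock);
		/*
		 * Bump the sequence before dropping the count, so a mapper
		 * that sampled mmu_notifier_seq before this pair started
		 * sees either a non-zero count or a changed sequence.
		 */
		kvm->mmu_notifier_seq++;
		smp_wmb();
		kvm->mmu_notifier_count--;
		spin_unlock(&kvm->mmu_lock);
	}

	static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
	{
		if (unlikely(kvm->mmu_notifier_count))
			return 1;	/* an invalidation is in flight */
		smp_rmb();
		if (kvm->mmu_notifier_seq != mmu_seq)
			return 1;	/* one completed since mmu_seq was sampled */
		return 0;
	}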

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/kvm/e500_mmu_host.c |   19 +++++++++++++++++--
 1 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index ff6dd66..ae4eaf6 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -329,8 +329,14 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	int tsize = BOOK3E_PAGESZ_4K;
 	unsigned long tsize_pages = 0;
 	pte_t *ptep;
-	int wimg = 0;
+	int wimg = 0, ret = 0;
 	pgd_t *pgdir;
+	unsigned long mmu_seq;
+	struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+	/* used to check for invalidations in progress */
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();
 
 	/*
 	 * Translate guest physical to true physical, acquiring
@@ -458,6 +464,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 				(long)gfn, pfn);
 		return -EINVAL;
 	}
+
+	spin_lock(&kvm->mmu_lock);
+	if (mmu_notifier_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
@@ -466,10 +479,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	/* Clear i-cache for new pages */
 	kvmppc_mmu_flush_icache(pfn);
 
+out:
+	spin_unlock(&kvm->mmu_lock);
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);
 
-	return 0;
+	return ret;
 }
 
 /* XXX only map the one-one case, for now use TLB0 */
-- 
1.7.0.4




* Re: [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page
  2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
@ 2013-08-10  1:12   ` Scott Wood
  2013-10-04 13:35   ` Alexander Graf
  1 sibling, 0 replies; 11+ messages in thread
From: Scott Wood @ 2013-08-10  1:12 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: paulus, agraf, kvm-ppc, kvm, Bharat Bhushan

On Wed, 2013-08-07 at 15:33 +0530, Bharat Bhushan wrote:
> Mark the guest page as accessed so that there is likely
> less chances of this page getting swap-out.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
>  arch/powerpc/kvm/e500_mmu_host.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 001a2b0..ff6dd66 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -246,6 +246,9 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
>  	/* Use guest supplied MAS2_G and MAS2_E */
>  	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
>  
> +	/* Mark the page accessed */
> +	kvm_set_pfn_accessed(pfn);
> +
>  	if (tlbe_is_writable(gtlbe))
>  		kvm_set_pfn_dirty(pfn);
>  }

Acked-by: Scott Wood <scottwood@freescale.com>

...though it would be nice to be able to handle accessed/dirty at once,
without having to repeat kvm_is_mmio_pfn() and such.

-Scott
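
One shape such a combined helper could take (purely a sketch of the
idea, not existing KVM code; the name kvm_set_pfn_accessed_dirty() and
its placement are assumptions):

	void kvm_set_pfn_accessed_dirty(pfn_t pfn, bool dirty)
	{
		struct page *page;

		/* Do the kvm_is_mmio_pfn()/pfn_to_page() work only once */
		if (kvm_is_mmio_pfn(pfn))
			return;

		page = pfn_to_page(pfn);
		mark_page_accessed(page);
		if (dirty && !PageReserved(page))
			SetPageDirty(page);
	}

The call site in kvmppc_e500_ref_setup() would then collapse to a
single kvm_set_pfn_accessed_dirty(pfn, tlbe_is_writable(gtlbe)).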


* Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
@ 2013-08-10  1:15   ` Scott Wood
  2013-10-04 13:38   ` Alexander Graf
  1 sibling, 0 replies; 11+ messages in thread
From: Scott Wood @ 2013-08-10  1:15 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: paulus, agraf, kvm-ppc, kvm, Bharat Bhushan

On Wed, 2013-08-07 at 15:33 +0530, Bharat Bhushan wrote:
> When the MM code is invalidating a range of pages, it calls the KVM
> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
> However, the Linux PTEs for the range being flushed are still valid at
> that point.  We are not supposed to establish any new references to pages
> in the range until the ...range_end() notifier gets called.
> The PPC-specific KVM code doesn't get any explicit notification of that;
> instead, we are supposed to use mmu_notifier_retry() to test whether we
> are or have been inside a range flush notifier pair while we have been
> referencing a page.
> 
> This patch calls the mmu_notifier_retry() while mapping the guest
> page to ensure we are not referencing a page when in range invalidation.
> 
> This call is inside a region locked with kvm->mmu_lock, which is the
> same lock that is called by the KVM MMU notifier functions, thus
> ensuring that no new notification can proceed while we are in the
> locked region.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
>  arch/powerpc/kvm/e500_mmu_host.c |   19 +++++++++++++++++--
>  1 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index ff6dd66..ae4eaf6 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -329,8 +329,14 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>  	int tsize = BOOK3E_PAGESZ_4K;
>  	unsigned long tsize_pages = 0;
>  	pte_t *ptep;
> -	int wimg = 0;
> +	int wimg = 0, ret = 0;
>  	pgd_t *pgdir;
> +	unsigned long mmu_seq;
> +	struct kvm *kvm = vcpu_e500->vcpu.kvm;
> +
> +	/* used to check for invalidations in progress */
> +	mmu_seq = kvm->mmu_notifier_seq;
> +	smp_rmb();
>  
>  	/*
>  	 * Translate guest physical to true physical, acquiring
> @@ -458,6 +464,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>  				(long)gfn, pfn);
>  		return -EINVAL;
>  	}
> +
> +	spin_lock(&kvm->mmu_lock);
> +	if (mmu_notifier_retry(kvm, mmu_seq)) {
> +		ret = -EAGAIN;
> +		goto out;
> +	}
> +
>  	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
>  
>  	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
> @@ -466,10 +479,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>  	/* Clear i-cache for new pages */
>  	kvmppc_mmu_flush_icache(pfn);
>  
> +out:
> +	spin_unlock(&kvm->mmu_lock);
>  	/* Drop refcount on page, so that mmu notifiers can clear it */
>  	kvm_release_pfn_clean(pfn);
>  
> -	return 0;
> +	return ret;
>  }

Acked-by: Scott Wood <scottwood@freescale.com> since it's currently the
standard KVM approach, though I'm not happy about the busy-waiting
aspect of it.  What if we preempted the thread responsible for
decrementing mmu_notifier_count?  What if we did so while running as a
SCHED_FIFO task of higher priority than the decrementing thread?

-Scott


* RE: [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes
  2013-08-07 10:03 [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bharat Bhushan
  2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
  2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
@ 2013-08-30  1:06 ` Bhushan Bharat-R65777
  2 siblings, 0 replies; 11+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-08-30  1:06 UTC (permalink / raw)
  To: agraf@suse.de
  Cc: Bhushan Bharat-R65777, Wood Scott-B07421, paulus@samba.org,
	Yoder Stuart-B08248, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org

Hi Alex,

The second patch of this series (kvm: ppc: booke: check range page invalidation progress on page setup) fixes a critical issue, and we would like it to be part of 3.12.

The first patch is not that important, but it is pretty simple.

Thanks
-Bharat

> -----Original Message-----
> From: Bhushan Bharat-R65777
> Sent: Wednesday, August 07, 2013 3:34 PM
> To: paulus@samba.org; Wood Scott-B07421; agraf@suse.de; kvm-ppc@vger.kernel.org;
> kvm@vger.kernel.org
> Cc: Bhushan Bharat-R65777
> Subject: [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes
> 
> From: Bharat Bhushan <bharat.bhushan@freescale.com>
> 
> First Patch set missing _PAGE_ACCESSED when a guest page is accessed
> 
> Second Patch check for MMU notifier range invalidation progress when setting a
> reference for a guest page. This is based on
> "KVM: PPC: Book3S PR: Use mmu_notifier_retry() in kvmppc_mmu_map_page()"
> patch sent by Pauls (still in review).
> 
> Bharat Bhushan (2):
>   kvm: powerpc: mark page accessed when mapping a guest page
>   kvm: ppc: booke: check range page invalidation progress on page setup
> 	
>  arch/powerpc/kvm/e500_mmu_host.c |   22 ++++++++++++++++++++--
>  1 files changed, 20 insertions(+), 2 deletions(-)


* Re: [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page
  2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
  2013-08-10  1:12   ` Scott Wood
@ 2013-10-04 13:35   ` Alexander Graf
  1 sibling, 0 replies; 11+ messages in thread
From: Alexander Graf @ 2013-10-04 13:35 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: paulus, scottwood, kvm-ppc, kvm, Bharat Bhushan


On 07.08.2013, at 12:03, Bharat Bhushan wrote:

> Mark the guest page as accessed so that there is likely
> less chances of this page getting swap-out.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>

Thanks, applied to kvm-ppc-queue.


Alex

> ---
> arch/powerpc/kvm/e500_mmu_host.c |    3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 001a2b0..ff6dd66 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -246,6 +246,9 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
> 	/* Use guest supplied MAS2_G and MAS2_E */
> 	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
> 
> +	/* Mark the page accessed */
> +	kvm_set_pfn_accessed(pfn);
> +
> 	if (tlbe_is_writable(gtlbe))
> 		kvm_set_pfn_dirty(pfn);
> }
> -- 
> 1.7.0.4
> 
> 


* Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
  2013-08-10  1:15   ` Scott Wood
@ 2013-10-04 13:38   ` Alexander Graf
  2013-10-07 12:04     ` Paolo Bonzini
  1 sibling, 1 reply; 11+ messages in thread
From: Alexander Graf @ 2013-10-04 13:38 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: Paul Mackerras, Scott Wood, kvm-ppc,
	kvm@vger.kernel.org mailing list, Bharat Bhushan, Gleb Natapov,
	Paolo Bonzini


On 07.08.2013, at 12:03, Bharat Bhushan wrote:

> When the MM code is invalidating a range of pages, it calls the KVM
> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
> However, the Linux PTEs for the range being flushed are still valid at
> that point.  We are not supposed to establish any new references to pages
> in the range until the ...range_end() notifier gets called.
> The PPC-specific KVM code doesn't get any explicit notification of that;
> instead, we are supposed to use mmu_notifier_retry() to test whether we
> are or have been inside a range flush notifier pair while we have been
> referencing a page.
> 
> This patch calls the mmu_notifier_retry() while mapping the guest
> page to ensure we are not referencing a page when in range invalidation.
> 
> This call is inside a region locked with kvm->mmu_lock, which is the
> same lock that is called by the KVM MMU notifier functions, thus
> ensuring that no new notification can proceed while we are in the
> locked region.
> 
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>

Acked-by: Alexander Graf <agraf@suse.de>

Gleb, Paolo, please queue for 3.12 directly.


Alex

> ---
> arch/powerpc/kvm/e500_mmu_host.c |   19 +++++++++++++++++--
> 1 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index ff6dd66..ae4eaf6 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -329,8 +329,14 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> 	int tsize = BOOK3E_PAGESZ_4K;
> 	unsigned long tsize_pages = 0;
> 	pte_t *ptep;
> -	int wimg = 0;
> +	int wimg = 0, ret = 0;
> 	pgd_t *pgdir;
> +	unsigned long mmu_seq;
> +	struct kvm *kvm = vcpu_e500->vcpu.kvm;
> +
> +	/* used to check for invalidations in progress */
> +	mmu_seq = kvm->mmu_notifier_seq;
> +	smp_rmb();
> 
> 	/*
> 	 * Translate guest physical to true physical, acquiring
> @@ -458,6 +464,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> 				(long)gfn, pfn);
> 		return -EINVAL;
> 	}
> +
> +	spin_lock(&kvm->mmu_lock);
> +	if (mmu_notifier_retry(kvm, mmu_seq)) {
> +		ret = -EAGAIN;
> +		goto out;
> +	}
> +
> 	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
> 
> 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
> @@ -466,10 +479,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> 	/* Clear i-cache for new pages */
> 	kvmppc_mmu_flush_icache(pfn);
> 
> +out:
> +	spin_unlock(&kvm->mmu_lock);
> 	/* Drop refcount on page, so that mmu notifiers can clear it */
> 	kvm_release_pfn_clean(pfn);
> 
> -	return 0;
> +	return ret;
> }
> 
> /* XXX only map the one-one case, for now use TLB0 */
> -- 
> 1.7.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-10-04 13:38   ` Alexander Graf
@ 2013-10-07 12:04     ` Paolo Bonzini
  2013-10-10  8:32       ` Bhushan Bharat-R65777
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2013-10-07 12:04 UTC (permalink / raw)
  To: Alexander Graf
  Cc: Bharat Bhushan, Paul Mackerras, Scott Wood, kvm-ppc,
	kvm@vger.kernel.org mailing list, Bharat Bhushan, Gleb Natapov

On 04/10/2013 15:38, Alexander Graf wrote:
> 
> On 07.08.2013, at 12:03, Bharat Bhushan wrote:
> 
>> When the MM code is invalidating a range of pages, it calls the KVM
>> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
>> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
>> However, the Linux PTEs for the range being flushed are still valid at
>> that point.  We are not supposed to establish any new references to pages
>> in the range until the ...range_end() notifier gets called.
>> The PPC-specific KVM code doesn't get any explicit notification of that;
>> instead, we are supposed to use mmu_notifier_retry() to test whether we
>> are or have been inside a range flush notifier pair while we have been
>> referencing a page.
>>
>> This patch calls the mmu_notifier_retry() while mapping the guest
>> page to ensure we are not referencing a page when in range invalidation.
>>
>> This call is inside a region locked with kvm->mmu_lock, which is the
>> same lock that is called by the KVM MMU notifier functions, thus
>> ensuring that no new notification can proceed while we are in the
>> locked region.
>>
>> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> 
> Acked-by: Alexander Graf <agraf@suse.de>
> 
> Gleb, Paolo, please queue for 3.12 directly.

Here is the backport.  The second hunk has a nontrivial conflict, so
someone please give their {Tested,Reviewed,Compiled}-by.

Paolo

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..c65593a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
+	int ret = 0;
+	unsigned long mmu_seq;
+	struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+	/* used to check for invalidations in progress */
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();
 
 	/*
 	 * Translate guest physical to true physical, acquiring
@@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
 	}
 
+	spin_lock(&kvm->mmu_lock);
+	if (mmu_notifier_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
@@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	/* Clear i-cache for new pages */
 	kvmppc_mmu_flush_icache(pfn);
 
+out:
+	spin_unlock(&kvm->mmu_lock);
+
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);
 
-	return 0;
+	return ret;
 }
 
 /* XXX only map the one-one case, for now use TLB0 */




* RE: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-10-07 12:04     ` Paolo Bonzini
@ 2013-10-10  8:32       ` Bhushan Bharat-R65777
  2013-10-10  9:01         ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-10-10  8:32 UTC (permalink / raw)
  To: Paolo Bonzini, Alexander Graf
  Cc: Paul Mackerras, Wood Scott-B07421, kvm-ppc@vger.kernel.org,
	kvm@vger.kernel.org mailing list, Gleb Natapov



> -----Original Message-----
> From: Paolo Bonzini [mailto:paolo.bonzini@gmail.com] On Behalf Of Paolo Bonzini
> Sent: Monday, October 07, 2013 5:35 PM
> To: Alexander Graf
> Cc: Bhushan Bharat-R65777; Paul Mackerras; Wood Scott-B07421; kvm-
> ppc@vger.kernel.org; kvm@vger.kernel.org mailing list; Bhushan Bharat-R65777;
> Gleb Natapov
> Subject: Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress
> on page setup
> 
> On 04/10/2013 15:38, Alexander Graf wrote:
> >
> > On 07.08.2013, at 12:03, Bharat Bhushan wrote:
> >
> >> When the MM code is invalidating a range of pages, it calls the KVM
> >> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
> >> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
> >> However, the Linux PTEs for the range being flushed are still valid at
> >> that point.  We are not supposed to establish any new references to pages
> >> in the range until the ...range_end() notifier gets called.
> >> The PPC-specific KVM code doesn't get any explicit notification of that;
> >> instead, we are supposed to use mmu_notifier_retry() to test whether we
> >> are or have been inside a range flush notifier pair while we have been
> >> referencing a page.
> >>
> >> This patch calls the mmu_notifier_retry() while mapping the guest
> >> page to ensure we are not referencing a page when in range invalidation.
> >>
> >> This call is inside a region locked with kvm->mmu_lock, which is the
> >> same lock that is called by the KVM MMU notifier functions, thus
> >> ensuring that no new notification can proceed while we are in the
> >> locked region.
> >>
> >> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> >
> > Acked-by: Alexander Graf <agraf@suse.de>
> >
> > Gleb, Paolo, please queue for 3.12 directly.
> 
> Here is the backport.  The second hunk has a nontrivial conflict, so
> someone please give their {Tested,Reviewed,Compiled}-by.

{Compiled,Reviewed}-by: Bharat Bhushan <bharat.bhushan@freescale.com>

Thanks
-Bharat

> 
> Paolo
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 1c6a9d7..c65593a 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>  	unsigned long hva;
>  	int pfnmap = 0;
>  	int tsize = BOOK3E_PAGESZ_4K;
> +	int ret = 0;
> +	unsigned long mmu_seq;
> +	struct kvm *kvm = vcpu_e500->vcpu.kvm;
> +
> +	/* used to check for invalidations in progress */
> +	mmu_seq = kvm->mmu_notifier_seq;
> +	smp_rmb();
> 
>  	/*
>  	 * Translate guest physical to true physical, acquiring
> @@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>  		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
>  	}
> 
> +	spin_lock(&kvm->mmu_lock);
> +	if (mmu_notifier_retry(kvm, mmu_seq)) {
> +		ret = -EAGAIN;
> +		goto out;
> +	}
> +
>  	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
> 
>  	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
> @@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>  	/* Clear i-cache for new pages */
>  	kvmppc_mmu_flush_icache(pfn);
> 
> +out:
> +	spin_unlock(&kvm->mmu_lock);
> +
>  	/* Drop refcount on page, so that mmu notifiers can clear it */
>  	kvm_release_pfn_clean(pfn);
> 
> -	return 0;
> +	return ret;
>  }
> 
>  /* XXX only map the one-one case, for now use TLB0 */
> 
> 


* Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
  2013-10-10  8:32       ` Bhushan Bharat-R65777
@ 2013-10-10  9:01         ` Paolo Bonzini
  0 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2013-10-10  9:01 UTC (permalink / raw)
  To: Bhushan Bharat-R65777
  Cc: Alexander Graf, Paul Mackerras, Wood Scott-B07421,
	kvm-ppc@vger.kernel.org, kvm@vger.kernel.org mailing list,
	Gleb Natapov

On 10/10/2013 10:32, Bhushan Bharat-R65777 wrote:
> 
> 
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:paolo.bonzini@gmail.com] On Behalf Of Paolo Bonzini
>> Sent: Monday, October 07, 2013 5:35 PM
>> To: Alexander Graf
>> Cc: Bhushan Bharat-R65777; Paul Mackerras; Wood Scott-B07421; kvm-
>> ppc@vger.kernel.org; kvm@vger.kernel.org mailing list; Bhushan Bharat-R65777;
>> Gleb Natapov
>> Subject: Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress
>> on page setup
>>
>> On 04/10/2013 15:38, Alexander Graf wrote:
>>>
>>> On 07.08.2013, at 12:03, Bharat Bhushan wrote:
>>>
>>>> When the MM code is invalidating a range of pages, it calls the KVM
>>>> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
>>>> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
>>>> However, the Linux PTEs for the range being flushed are still valid at
>>>> that point.  We are not supposed to establish any new references to pages
>>>> in the range until the ...range_end() notifier gets called.
>>>> The PPC-specific KVM code doesn't get any explicit notification of that;
>>>> instead, we are supposed to use mmu_notifier_retry() to test whether we
>>>> are or have been inside a range flush notifier pair while we have been
>>>> referencing a page.
>>>>
>>>> This patch calls the mmu_notifier_retry() while mapping the guest
>>>> page to ensure we are not referencing a page when in range invalidation.
>>>>
>>>> This call is inside a region locked with kvm->mmu_lock, which is the
>>>> same lock that is called by the KVM MMU notifier functions, thus
>>>> ensuring that no new notification can proceed while we are in the
>>>> locked region.
>>>>
>>>> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
>>>
>>> Acked-by: Alexander Graf <agraf@suse.de>
>>>
>>> Gleb, Paolo, please queue for 3.12 directly.
>>
>> Here is the backport.  The second hunk has a nontrivial conflict, so
>> someone please give their {Tested,Reviewed,Compiled}-by.
> 
> {Compiled,Reviewed}-by: Bharat Bhushan <bharat.bhushan@freescale.com>

Thanks, patch on its way to Linus.

Paolo

> Thanks
> -Bharat
> 
>>
>> Paolo
>>
>> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
>> index 1c6a9d7..c65593a 100644
>> --- a/arch/powerpc/kvm/e500_mmu_host.c
>> +++ b/arch/powerpc/kvm/e500_mmu_host.c
>> @@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct
>> kvmppc_vcpu_e500 *vcpu_e500,
>>  	unsigned long hva;
>>  	int pfnmap = 0;
>>  	int tsize = BOOK3E_PAGESZ_4K;
>> +	int ret = 0;
>> +	unsigned long mmu_seq;
>> +	struct kvm *kvm = vcpu_e500->vcpu.kvm;
>> +
>> +	/* used to check for invalidations in progress */
>> +	mmu_seq = kvm->mmu_notifier_seq;
>> +	smp_rmb();
>>
>>  	/*
>>  	 * Translate guest physical to true physical, acquiring
>> @@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct
>> kvmppc_vcpu_e500 *vcpu_e500,
>>  		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
>>  	}
>>
>> +	spin_lock(&kvm->mmu_lock);
>> +	if (mmu_notifier_retry(kvm, mmu_seq)) {
>> +		ret = -EAGAIN;
>> +		goto out;
>> +	}
>> +
>>  	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
>>
>>  	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
>> @@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct
>> kvmppc_vcpu_e500 *vcpu_e500,
>>  	/* Clear i-cache for new pages */
>>  	kvmppc_mmu_flush_icache(pfn);
>>
>> +out:
>> +	spin_unlock(&kvm->mmu_lock);
>> +
>>  	/* Drop refcount on page, so that mmu notifiers can clear it */
>>  	kvm_release_pfn_clean(pfn);
>>
>> -	return 0;
>> +	return ret;
>>  }
>>
>>  /* XXX only map the one-one case, for now use TLB0 */
>>
>>
> 
> 


end of thread, newest message: 2013-10-10  9:01 UTC

Thread overview: 11+ messages
2013-08-07 10:03 [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bharat Bhushan
2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
2013-08-10  1:12   ` Scott Wood
2013-10-04 13:35   ` Alexander Graf
2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
2013-08-10  1:15   ` Scott Wood
2013-10-04 13:38   ` Alexander Graf
2013-10-07 12:04     ` Paolo Bonzini
2013-10-10  8:32       ` Bhushan Bharat-R65777
2013-10-10  9:01         ` Paolo Bonzini
2013-08-30  1:06 ` [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bhushan Bharat-R65777
