kvm.vger.kernel.org archive mirror
* [PATCH A or B] KVM: kvm->tlbs_dirty handling
@ 2014-02-18  8:21 Takuya Yoshikawa
  2014-02-18  8:22 ` [PATCH A] KVM: Simplify " Takuya Yoshikawa
  2014-02-18  8:23 ` [PATCH B] KVM: Explain tlbs_dirty trick in kvm_flush_remote_tlbs() Takuya Yoshikawa
  0 siblings, 2 replies; 8+ messages in thread
From: Takuya Yoshikawa @ 2014-02-18  8:21 UTC (permalink / raw)
  To: gleb, pbonzini; +Cc: kvm

Please take patch A or B.

	Takuya


* [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-18  8:21 [PATCH A or B] KVM: kvm->tlbs_dirty handling Takuya Yoshikawa
@ 2014-02-18  8:22 ` Takuya Yoshikawa
  2014-02-18  9:07   ` Paolo Bonzini
  2014-02-18  9:43   ` Xiao Guangrong
  2014-02-18  8:23 ` [PATCH B] KVM: Explain tlbs_dirty trick in kvm_flush_remote_tlbs() Takuya Yoshikawa
  1 sibling, 2 replies; 8+ messages in thread
From: Takuya Yoshikawa @ 2014-02-18  8:22 UTC (permalink / raw)
  To: gleb, pbonzini; +Cc: kvm

When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock.  It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.

There is no need to use smp_mb() and cmpxchg() any more.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/paging_tmpl.h |    7 ++++---
 include/linux/kvm_host.h   |    2 +-
 virt/kvm/kvm_main.c        |   11 +++++++----
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index cba218a..b1e6c1b 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -913,7 +913,8 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gva_t vaddr,
  *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
  *   used by guest then tlbs are not flushed, so guest is allowed to access the
  *   freed pages.
- *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
+ *   We set tlbs_dirty to let the notifier know this change and delay the flush
+ *   until such a case actually happens.
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -942,7 +943,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return -EINVAL;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			vcpu->kvm->tlbs_dirty++;
+			vcpu->kvm->tlbs_dirty = true;
 			continue;
 		}
 
@@ -957,7 +958,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
-			vcpu->kvm->tlbs_dirty++;
+			vcpu->kvm->tlbs_dirty = true;
 			continue;
 		}
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f5937b8..ed1cc89 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -401,7 +401,7 @@ struct kvm {
 	unsigned long mmu_notifier_seq;
 	long mmu_notifier_count;
 #endif
-	long tlbs_dirty;
+	bool tlbs_dirty;
 	struct list_head devices;
 };
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a9e999a..51744da 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -186,12 +186,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	long dirty_count = kvm->tlbs_dirty;
-
-	smp_mb();
 	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
-	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
+	/*
+	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
+	 * mmu notifiers in mind, see the note on sync_page().  Since it is
+	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
+	 * be called before releasing mmu_lock, this is safe.
+	 */
+	kvm->tlbs_dirty = false;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 
-- 
1.7.9.5


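[Editor's note] The simplification in patch A rests on a single invariant: tlbs_dirty is only ever read or written while mmu_lock is held. The following is a userspace sketch of that discipline, not kernel code; the struct, field, and function names are illustrative stand-ins for their kernel counterparts, and the lock is modeled as a flag so the rule can be asserted.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of patch A's scheme: tlbs_dirty is a
 * plain bool touched only under mmu_lock, so no barriers or atomics
 * are needed.  The "lock" is a flag so the discipline is checkable. */
struct kvm_model {
    bool mmu_lock_held;     /* stands in for spin_lock(&kvm->mmu_lock) */
    bool tlbs_dirty;
    int remote_tlb_flush;   /* stands in for kvm->stat.remote_tlb_flush */
};

/* FNAME(sync_page) marks TLBs dirty with mmu_lock already held. */
static void mark_tlbs_dirty(struct kvm_model *kvm)
{
    assert(kvm->mmu_lock_held);
    kvm->tlbs_dirty = true;
}

/* The patch's rule: this must be called before releasing mmu_lock,
 * so the plain write clearing tlbs_dirty cannot race with a marker. */
static void flush_remote_tlbs(struct kvm_model *kvm)
{
    assert(kvm->mmu_lock_held);
    kvm->remote_tlb_flush++;    /* pretend the remote flush was requested */
    kvm->tlbs_dirty = false;
}
```

Because clearing the flag happens under the same lock as setting it, a plain bool suffices and the smp_mb()/cmpxchg() pair becomes unnecessary.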

* [PATCH B] KVM: Explain tlbs_dirty trick in kvm_flush_remote_tlbs()
  2014-02-18  8:21 [PATCH A or B] KVM: kvm->tlbs_dirty handling Takuya Yoshikawa
  2014-02-18  8:22 ` [PATCH A] KVM: Simplify " Takuya Yoshikawa
@ 2014-02-18  8:23 ` Takuya Yoshikawa
  1 sibling, 0 replies; 8+ messages in thread
From: Takuya Yoshikawa @ 2014-02-18  8:23 UTC (permalink / raw)
  To: gleb, pbonzini; +Cc: kvm

When this was introduced, kvm_flush_remote_tlbs() could be called
without holding mmu_lock.  It is now acknowledged that the function
must be called before releasing mmu_lock, and all callers have already
been changed to do so.

This patch adds a comment explaining this.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 virt/kvm/kvm_main.c |    9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a9e999a..53521ea 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -184,6 +184,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 	return called;
 }
 
+/*
+ * tlbs_dirty is used only for optimizing x86's shadow paging code with mmu
+ * notifiers in mind, see the note on sync_page().  Since it is always protected
+ * with mmu_lock there, should kvm_flush_remote_tlbs() be called before
+ * releasing mmu_lock, the trick using smp_mb() and cmpxchg() is not necessary.
+ *
+ * Currently, the assumption about kvm_flush_remote_tlbs() callers is true, but
+ * the code is kept as is for someone who may change the rule in the future.
+ */
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	long dirty_count = kvm->tlbs_dirty;
-- 
1.7.9.5


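[Editor's note] The trick that patch B's comment documents can be modeled in userspace with C11 atomics. This is an illustrative sketch, not the kernel code: the flusher samples the dirty counter, issues the barrier, performs the flush, and then clears the counter only if it still holds the sampled value, so an increment racing in after the sample makes the cmpxchg fail and the dirty state survives for a later flush.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model of the original lock-free tlbs_dirty counter. */
static atomic_long tlbs_dirty;

/* A lock-free marker: increments instead of setting a bool, so each
 * dirtying event changes the value the flusher's cmpxchg compares. */
static void mark_tlbs_dirty(void)
{
    atomic_fetch_add(&tlbs_dirty, 1);
}

static void flush_remote_tlbs(void)
{
    long dirty_count = atomic_load(&tlbs_dirty);  /* sample first */
    atomic_thread_fence(memory_order_seq_cst);    /* models smp_mb() */
    /* ... the actual remote TLB flush request would go here ... */
    long expected = dirty_count;
    /* Clear only if nothing was dirtied since the sample; on a race
     * the exchange fails and the counter stays nonzero. */
    atomic_compare_exchange_strong(&tlbs_dirty, &expected, 0);
}
```

If every caller instead held mmu_lock across the whole sequence, the sample could never go stale, which is exactly why patch A can drop the barrier and the cmpxchg.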

* Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-18  8:22 ` [PATCH A] KVM: Simplify " Takuya Yoshikawa
@ 2014-02-18  9:07   ` Paolo Bonzini
  2014-02-19  0:44     ` Takuya Yoshikawa
  2014-02-18  9:43   ` Xiao Guangrong
  1 sibling, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-02-18  9:07 UTC (permalink / raw)
  To: Takuya Yoshikawa, gleb; +Cc: kvm

On 18/02/2014 09:22, Takuya Yoshikawa wrote:
> When this was introduced, kvm_flush_remote_tlbs() could be called
> without holding mmu_lock.  It is now acknowledged that the function
> must be called before releasing mmu_lock, and all callers have already
> been changed to do so.
> 
> There is no need to use smp_mb() and cmpxchg() any more.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>

I prefer this patch, and in fact we can make it even simpler:

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ed1cc89..9816b68 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -401,7 +401,9 @@ struct kvm {
 	unsigned long mmu_notifier_seq;
 	long mmu_notifier_count;
 #endif
+	/* Protected by mmu_lock */
 	bool tlbs_dirty;
+
 	struct list_head devices;
 };
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 51744da..f5668a4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -188,12 +188,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
-	/*
-	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
-	 * mmu notifiers in mind, see the note on sync_page().  Since it is
-	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
-	 * be called before releasing mmu_lock, this is safe.
-	 */
 	kvm->tlbs_dirty = false;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);


What do you think?

Paolo
> ---
>  arch/x86/kvm/paging_tmpl.h |    7 ++++---
>  include/linux/kvm_host.h   |    2 +-
>  virt/kvm/kvm_main.c        |   11 +++++++----
>  3 files changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index cba218a..b1e6c1b 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -913,7 +913,8 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gva_t vaddr,
>   *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
>   *   used by guest then tlbs are not flushed, so guest is allowed to access the
>   *   freed pages.
> - *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
> + *   We set tlbs_dirty to let the notifier know this change and delay the flush
> + *   until such a case actually happens.
>   */
>  static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  {
> @@ -942,7 +943,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  			return -EINVAL;
>  
>  		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
> -			vcpu->kvm->tlbs_dirty++;
> +			vcpu->kvm->tlbs_dirty = true;
>  			continue;
>  		}
>  
> @@ -957,7 +958,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  
>  		if (gfn != sp->gfns[i]) {
>  			drop_spte(vcpu->kvm, &sp->spt[i]);
> -			vcpu->kvm->tlbs_dirty++;
> +			vcpu->kvm->tlbs_dirty = true;
>  			continue;
>  		}
>  
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f5937b8..ed1cc89 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -401,7 +401,7 @@ struct kvm {
>  	unsigned long mmu_notifier_seq;
>  	long mmu_notifier_count;
>  #endif
> -	long tlbs_dirty;
> +	bool tlbs_dirty;
>  	struct list_head devices;
>  };
>  
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a9e999a..51744da 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -186,12 +186,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
>  
>  void kvm_flush_remote_tlbs(struct kvm *kvm)
>  {
> -	long dirty_count = kvm->tlbs_dirty;
> -
> -	smp_mb();
>  	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>  		++kvm->stat.remote_tlb_flush;
> -	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
> +	/*
> +	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
> +	 * mmu notifiers in mind, see the note on sync_page().  Since it is
> +	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
> +	 * be called before releasing mmu_lock, this is safe.
> +	 */
> +	kvm->tlbs_dirty = false;
>  }
>  EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>  
> 



* Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-18  8:22 ` [PATCH A] KVM: Simplify " Takuya Yoshikawa
  2014-02-18  9:07   ` Paolo Bonzini
@ 2014-02-18  9:43   ` Xiao Guangrong
  2014-02-19  0:40     ` Takuya Yoshikawa
  1 sibling, 1 reply; 8+ messages in thread
From: Xiao Guangrong @ 2014-02-18  9:43 UTC (permalink / raw)
  To: Takuya Yoshikawa, gleb, pbonzini; +Cc: kvm

On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
> When this was introduced, kvm_flush_remote_tlbs() could be called
> without holding mmu_lock.  It is now acknowledged that the function
> must be called before releasing mmu_lock, and all callers have already
> been changed to do so.
> 

I have already posted a patch that moves the TLB flush out of mmu_lock
when doing dirty tracking:
KVM: MMU: flush tlb out of mmu lock when write-protect the sptes

Actually, some patches (patches 1-4) in that patchset can be picked up
separately, and I should apologize that the patchset has not been
updated for a long time; I was busy with other development.

Moving the TLB flush out of mmu_lock should continue in the future, so
I think patch B is more acceptable.




* Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-18  9:43   ` Xiao Guangrong
@ 2014-02-19  0:40     ` Takuya Yoshikawa
  0 siblings, 0 replies; 8+ messages in thread
From: Takuya Yoshikawa @ 2014-02-19  0:40 UTC (permalink / raw)
  To: Xiao Guangrong, gleb, pbonzini; +Cc: kvm

(2014/02/18 18:43), Xiao Guangrong wrote:
> On 02/18/2014 04:22 PM, Takuya Yoshikawa wrote:
>> When this was introduced, kvm_flush_remote_tlbs() could be called
>> without holding mmu_lock.  It is now acknowledged that the function
>> must be called before releasing mmu_lock, and all callers have already
>> been changed to do so.
>>
>
> I have already posted a patch that moves the TLB flush out of mmu_lock
> when doing dirty tracking:
> KVM: MMU: flush tlb out of mmu lock when write-protect the sptes

Yes, I had your patch in mind, so I made patch B.

>
> Actually, some patches (patches 1-4) in that patchset can be picked up
> separately, and I should apologize that the patchset has not been
> updated for a long time; I was busy with other development.

Looking forward to seeing that work again!

>
> Moving the TLB flush out of mmu_lock should continue in the future, so
> I think patch B is more acceptable.
>

Maybe.

Some time ago, someone working on non-x86 code asked us why smp_mb()
and cmpxchg() were used there.  A brief comment pointing readers to
the note on sync_page() will be helpful.

	Takuya



* Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-18  9:07   ` Paolo Bonzini
@ 2014-02-19  0:44     ` Takuya Yoshikawa
  2014-02-19 12:04       ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Takuya Yoshikawa @ 2014-02-19  0:44 UTC (permalink / raw)
  To: Paolo Bonzini, gleb; +Cc: kvm

(2014/02/18 18:07), Paolo Bonzini wrote:
> On 18/02/2014 09:22, Takuya Yoshikawa wrote:
>> When this was introduced, kvm_flush_remote_tlbs() could be called
>> without holding mmu_lock.  It is now acknowledged that the function
>> must be called before releasing mmu_lock, and all callers have already
>> been changed to do so.
>>
>> There is no need to use smp_mb() and cmpxchg() any more.
>>
>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>
> I prefer this patch, and in fact we can make it even simpler:
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ed1cc89..9816b68 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -401,7 +401,9 @@ struct kvm {
>   	unsigned long mmu_notifier_seq;
>   	long mmu_notifier_count;
>   #endif
> +	/* Protected by mmu_lock */
>   	bool tlbs_dirty;
> +
>   	struct list_head devices;
>   };
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 51744da..f5668a4 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -188,12 +188,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>   {
>   	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>   		++kvm->stat.remote_tlb_flush;
> -	/*
> -	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
> -	 * mmu notifiers in mind, see the note on sync_page().  Since it is
> -	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
> -	 * be called before releasing mmu_lock, this is safe.
> -	 */
>   	kvm->tlbs_dirty = false;
>   }
>   EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>
>
> What do you think?

I agree.

But please check Xiao's comment and my reply to it.

	Takuya

>
> Paolo
>> ---
>>   arch/x86/kvm/paging_tmpl.h |    7 ++++---
>>   include/linux/kvm_host.h   |    2 +-
>>   virt/kvm/kvm_main.c        |   11 +++++++----
>>   3 files changed, 12 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>> index cba218a..b1e6c1b 100644
>> --- a/arch/x86/kvm/paging_tmpl.h
>> +++ b/arch/x86/kvm/paging_tmpl.h
>> @@ -913,7 +913,8 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gva_t vaddr,
>>    *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
>>    *   used by guest then tlbs are not flushed, so guest is allowed to access the
>>    *   freed pages.
>> - *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
>> + *   We set tlbs_dirty to let the notifier know this change and delay the flush
>> + *   until such a case actually happens.
>>    */
>>   static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>>   {
>> @@ -942,7 +943,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>>   			return -EINVAL;
>>
>>   		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
>> -			vcpu->kvm->tlbs_dirty++;
>> +			vcpu->kvm->tlbs_dirty = true;
>>   			continue;
>>   		}
>>
>> @@ -957,7 +958,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>>
>>   		if (gfn != sp->gfns[i]) {
>>   			drop_spte(vcpu->kvm, &sp->spt[i]);
>> -			vcpu->kvm->tlbs_dirty++;
>> +			vcpu->kvm->tlbs_dirty = true;
>>   			continue;
>>   		}
>>
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index f5937b8..ed1cc89 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -401,7 +401,7 @@ struct kvm {
>>   	unsigned long mmu_notifier_seq;
>>   	long mmu_notifier_count;
>>   #endif
>> -	long tlbs_dirty;
>> +	bool tlbs_dirty;
>>   	struct list_head devices;
>>   };
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index a9e999a..51744da 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -186,12 +186,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
>>
>>   void kvm_flush_remote_tlbs(struct kvm *kvm)
>>   {
>> -	long dirty_count = kvm->tlbs_dirty;
>> -
>> -	smp_mb();
>>   	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>>   		++kvm->stat.remote_tlb_flush;
>> -	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
>> +	/*
>> +	 * tlbs_dirty is used only for optimizing x86's shadow paging code with
>> +	 * mmu notifiers in mind, see the note on sync_page().  Since it is
>> +	 * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
>> +	 * be called before releasing mmu_lock, this is safe.
>> +	 */
>> +	kvm->tlbs_dirty = false;
>>   }
>>   EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>>
>>



* Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
  2014-02-19  0:44     ` Takuya Yoshikawa
@ 2014-02-19 12:04       ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2014-02-19 12:04 UTC (permalink / raw)
  To: Takuya Yoshikawa, gleb; +Cc: kvm

On 19/02/2014 01:44, Takuya Yoshikawa wrote:
> (2014/02/18 18:07), Paolo Bonzini wrote:
>> On 18/02/2014 09:22, Takuya Yoshikawa wrote:
>>> When this was introduced, kvm_flush_remote_tlbs() could be called
>>> without holding mmu_lock.  It is now acknowledged that the function
>>> must be called before releasing mmu_lock, and all callers have already
>>> been changed to do so.
>>>
>>> There is no need to use smp_mb() and cmpxchg() any more.
>>>
>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>>
>> I prefer this patch, and in fact we can make it even simpler:
>>
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index ed1cc89..9816b68 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -401,7 +401,9 @@ struct kvm {
>>       unsigned long mmu_notifier_seq;
>>       long mmu_notifier_count;
>>   #endif
>> +    /* Protected by mmu_lock */
>>       bool tlbs_dirty;
>> +
>>       struct list_head devices;
>>   };
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 51744da..f5668a4 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -188,12 +188,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>>   {
>>       if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>>           ++kvm->stat.remote_tlb_flush;
>> -    /*
>> -     * tlbs_dirty is used only for optimizing x86's shadow paging
>> code with
>> -     * mmu notifiers in mind, see the note on sync_page().  Since it is
>> -     * always protected with mmu_lock there, should
>> kvm_flush_remote_tlbs()
>> -     * be called before releasing mmu_lock, this is safe.
>> -     */
>>       kvm->tlbs_dirty = false;
>>   }
>>   EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>>
>>
>> What do you think?
>
> I agree.
>
> But please check Xiao's comment and my reply to it.

Yes, I'll wait some time in case Xiao can get back to his patch set, and 
then make up my mind on what patch to apply.

In any case, if Xiao's patches go in, a change will be needed: patch A 
would be reverted, while patch B's comment would be edited.  So the two 
patches aren't very different in this respect.

Paolo


