From: Paolo Bonzini <pbonzini@redhat.com>
To: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>, gleb@kernel.org
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH A] KVM: Simplify kvm->tlbs_dirty handling
Date: Tue, 18 Feb 2014 10:07:27 +0100 [thread overview]
Message-ID: <530322CF.1000402@redhat.com> (raw)
In-Reply-To: <20140218172247.ca61f503.yoshikawa_takuya_b1@lab.ntt.co.jp>
On 18/02/2014 09:22, Takuya Yoshikawa wrote:
> When this was introduced, kvm_flush_remote_tlbs() could be called
> without holding mmu_lock. It is now acknowledged that the function
> must be called before releasing mmu_lock, and all callers have already
> been changed to do so.
>
> There is no need to use smp_mb() and cmpxchg() any more.
>
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
I prefer this patch, and in fact we can make it even simpler:
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ed1cc89..9816b68 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -401,7 +401,9 @@ struct kvm {
         unsigned long mmu_notifier_seq;
         long mmu_notifier_count;
 #endif
+        /* Protected by mmu_lock */
         bool tlbs_dirty;
+
         struct list_head devices;
 };
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 51744da..f5668a4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -188,12 +188,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
         if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
                 ++kvm->stat.remote_tlb_flush;
-        /*
-         * tlbs_dirty is used only for optimizing x86's shadow paging code with
-         * mmu notifiers in mind, see the note on sync_page(). Since it is
-         * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
-         * be called before releasing mmu_lock, this is safe.
-         */
         kvm->tlbs_dirty = false;
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
What do you think?
Paolo
> ---
> arch/x86/kvm/paging_tmpl.h | 7 ++++---
> include/linux/kvm_host.h | 2 +-
> virt/kvm/kvm_main.c | 11 +++++++----
> 3 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index cba218a..b1e6c1b 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -913,7 +913,8 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gva_t vaddr,
>   * and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
>   * used by guest then tlbs are not flushed, so guest is allowed to access the
>   * freed pages.
> - * And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
> + * We set tlbs_dirty to let the notifier know this change and delay the flush
> + * until such a case actually happens.
>   */
>  static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  {
> @@ -942,7 +943,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>                          return -EINVAL;
> 
>                  if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
> -                        vcpu->kvm->tlbs_dirty++;
> +                        vcpu->kvm->tlbs_dirty = true;
>                          continue;
>                  }
>
> @@ -957,7 +958,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> 
>                  if (gfn != sp->gfns[i]) {
>                          drop_spte(vcpu->kvm, &sp->spt[i]);
> -                        vcpu->kvm->tlbs_dirty++;
> +                        vcpu->kvm->tlbs_dirty = true;
>                          continue;
>                  }
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f5937b8..ed1cc89 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -401,7 +401,7 @@ struct kvm {
>          unsigned long mmu_notifier_seq;
>          long mmu_notifier_count;
>  #endif
> -        long tlbs_dirty;
> +        bool tlbs_dirty;
>          struct list_head devices;
>  };
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a9e999a..51744da 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -186,12 +186,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
> 
>  void kvm_flush_remote_tlbs(struct kvm *kvm)
>  {
> -        long dirty_count = kvm->tlbs_dirty;
> -
> -        smp_mb();
>          if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
>                  ++kvm->stat.remote_tlb_flush;
> -        cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
> +        /*
> +         * tlbs_dirty is used only for optimizing x86's shadow paging code with
> +         * mmu notifiers in mind, see the note on sync_page(). Since it is
> +         * always protected with mmu_lock there, should kvm_flush_remote_tlbs()
> +         * be called before releasing mmu_lock, this is safe.
> +         */
> +        kvm->tlbs_dirty = false;
>  }
>  EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>
>
Thread overview:
2014-02-18 8:21 [PATCH A or B] KVM: kvm->tlbs_dirty handling Takuya Yoshikawa
2014-02-18 8:22 ` [PATCH A] KVM: Simplify " Takuya Yoshikawa
2014-02-18 9:07 ` Paolo Bonzini [this message]
2014-02-19 0:44 ` Takuya Yoshikawa
2014-02-19 12:04 ` Paolo Bonzini
2014-02-18 9:43 ` Xiao Guangrong
2014-02-19 0:40 ` Takuya Yoshikawa
2014-02-18 8:23 ` [PATCH B] KVM: Explain tlbs_dirty trick in kvm_flush_remote_tlbs() Takuya Yoshikawa