From: Gleb Natapov <gleb@redhat.com>
To: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: mtosatti@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/7 -v2] KVM: Alleviate mmu_lock hold time when we start dirty logging
Date: Mon, 14 Jan 2013 11:15:24 +0200
Message-ID: <20130114091524.GA4751@redhat.com>
In-Reply-To: <20130108194238.09ebc8bd.yoshikawa_takuya_b1@lab.ntt.co.jp>
On Tue, Jan 08, 2013 at 07:42:38PM +0900, Takuya Yoshikawa wrote:
> Changelog v1->v2:
> The condition in patch 1 was changed like this:
> npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
>
> This patch set makes kvm_mmu_slot_remove_write_access() rmap based and
> adds conditional rescheduling to it.
>
> The motivation for this change is, of course, to reduce the mmu_lock hold
> time when we start dirty logging for a large memory slot.  You may not
> see the problem if you give the guest 8GB or less of memory with THP
> enabled on the host -- this series is for the worst case.
>
Applied, thanks.
> Takuya Yoshikawa (7):
> KVM: Write protect the updated slot only when dirty logging is enabled
> KVM: MMU: Remove unused parameter level from __rmap_write_protect()
> KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
> KVM: Remove unused slot_bitmap from kvm_mmu_page
> KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
> KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
> KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time
>
> Documentation/virtual/kvm/mmu.txt | 7 ----
> arch/x86/include/asm/kvm_host.h | 5 ---
> arch/x86/kvm/mmu.c | 56 +++++++++++++++++++-----------------
> arch/x86/kvm/x86.c | 12 ++++---
> virt/kvm/kvm_main.c | 1 -
> 5 files changed, 37 insertions(+), 44 deletions(-)
>
> --
> 1.7.5.4
--
Gleb.
Thread overview: 10+ messages
2013-01-08 10:42 [PATCH 0/7 -v2] KVM: Alleviate mmu_lock hold time when we start dirty logging Takuya Yoshikawa
2013-01-08 10:43 ` [PATCH 1/7] KVM: Write protect the updated slot only when dirty logging is enabled Takuya Yoshikawa
2013-01-08 10:44 ` [PATCH 2/7] KVM: MMU: Remove unused parameter level from __rmap_write_protect() Takuya Yoshikawa
2013-01-08 10:44 ` [PATCH 3/7] KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based Takuya Yoshikawa
2013-01-08 10:45 ` [PATCH 4/7] KVM: Remove unused slot_bitmap from kvm_mmu_page Takuya Yoshikawa
2013-01-08 10:46 ` [PATCH 5/7] KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself Takuya Yoshikawa
2013-01-08 10:46 ` [PATCH 6/7] KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself Takuya Yoshikawa
2013-01-08 10:47 ` [PATCH 7/7] KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time Takuya Yoshikawa
2013-01-10 17:49 ` [PATCH 0/7 -v2] KVM: Alleviate mmu_lock hold time when we start dirty logging Marcelo Tosatti
2013-01-14 9:15 ` Gleb Natapov [this message]