From: Gleb Natapov <gleb@redhat.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Avi Kivity <avi@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH] KVM: MMU: optimize for set_spte
Date: Thu, 6 Dec 2012 09:12:39 +0200
Message-ID: <20121206071239.GZ19514@redhat.com>
In-Reply-To: <50BD32F7.5080601@linux.vnet.ibm.com>
On Tue, Dec 04, 2012 at 07:17:11AM +0800, Xiao Guangrong wrote:
> There are two cases where we need to adjust the page size in set_spte:
> 1): another vcpu creates a new sp in the window between mapping_level()
> and acquiring mmu-lock.
> 2): the new sp is created by the vcpu itself (page-fault path) when the
> guest uses the target gfn as its page table.
>
> In the current code, set_spte drops the spte and emulates the access in
> these cases, which does not work well:
> - for case 1, it may destroy the mapping established by the other vcpu and
> do expensive instruction emulation.
> - for case 2, it may emulate the access even if the guest is accessing a
> page that is not used as a page table. For example, if 0~2M is mapped as a
> huge page in the guest and only page 3 within it is used as a page table,
> then guest reads/writes to the other pages can still cause instruction
> emulation.
>
> Both cases can be fixed by allowing the guest to retry the access: it will
> refault, and we can then establish the mapping using a small page.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Applied to queue. Thanks.
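
For readers less familiar with this path, here is a minimal, self-contained
sketch in plain C (simplified stand-ins, not the real KVM code; the
set_spte_write() helper, the level constants and the toy
has_wrprotected_page() below are illustrative only) showing why refusing
the huge mapping and letting the guest refault ends up with a 4K mapping
instead of instruction emulation:

    /*
     * Toy model, NOT the actual kvm/mmu.c code: return 1 means "drop the
     * spte and emulate the access", return 0 means a writable spte was
     * (or can be on refault) installed.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum { PT_PAGE_TABLE_LEVEL = 1, PT_DIRECTORY_LEVEL = 2 };

    /* Pretend one gfn inside the 2M range is shadowed as a page table. */
    static bool has_wrprotected_page(unsigned long gfn, int level)
    {
        return level > PT_PAGE_TABLE_LEVEL;
    }

    static int set_spte_write(unsigned long gfn, int level, bool retry)
    {
        if (level > PT_PAGE_TABLE_LEVEL && has_wrprotected_page(gfn, level))
            return retry ? 0 : 1;   /* old code: emulate; new code: refuse */
        return 0;                   /* writable spte installed */
    }

    int main(void)
    {
        /* First write fault: mapping_level() picked a 2M mapping. */
        printf("old: %d (emulate)\n",
               set_spte_write(5, PT_DIRECTORY_LEVEL, false));
        printf("new: %d (no emulation, guest refaults)\n",
               set_spte_write(5, PT_DIRECTORY_LEVEL, true));
        /* On refault, mapping_level() picks 4K and the write succeeds. */
        printf("refault at 4K: %d (writable spte installed)\n",
               set_spte_write(5, PT_PAGE_TABLE_LEVEL, true));
        return 0;
    }

In the toy model the old path prints 1 (emulate) while the new path prints
0 twice, mirroring the "drop + emulate" versus "refault + small-page map"
behaviour described in the changelog.
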
> ---
> arch/x86/kvm/mmu.c | 16 ++++++++++++----
> 1 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index b875a9e..01d7c2a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2382,12 +2382,20 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> || (!vcpu->arch.mmu.direct_map && write_fault
> && !is_write_protection(vcpu) && !user_fault)) {
>
> + /*
> + * There are two cases:
> + * - another vcpu creates a new sp in the window between
> + * mapping_level() and acquiring mmu-lock.
> + * - the new sp is created by the vcpu itself (page-fault
> + * path) when the guest uses the target gfn as its page
> + * table.
> + * Both cases can be fixed by allowing the guest to retry
> + * the access: it will refault, and then we can establish
> + * the mapping using a small page.
> + */
> if (level > PT_PAGE_TABLE_LEVEL &&
> - has_wrprotected_page(vcpu->kvm, gfn, level)) {
> - ret = 1;
> - drop_spte(vcpu->kvm, sptep);
> + has_wrprotected_page(vcpu->kvm, gfn, level))
> goto done;
> - }
>
> spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
>
> --
> 1.7.7.6
--
Gleb.
Thread overview (4+ messages):
2012-12-03 23:17 [PATCH] KVM: MMU: optimize for set_spte Xiao Guangrong
2012-12-05 21:09 ` Marcelo Tosatti
2012-12-06 2:30 ` Xiao Guangrong
2012-12-06 7:12 ` Gleb Natapov [this message]