From: Avi Kivity <avi@redhat.com>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	KVM list <kvm@vger.kernel.org>
Subject: Re: [PATCH v4 5/6] KVM: MMU: combine guest pte read between walk and pte prefetch
Date: Sat, 03 Jul 2010 15:08:21 +0300	[thread overview]
Message-ID: <4C2F2835.5060508@redhat.com> (raw)
In-Reply-To: <4C2F117C.2000006@cn.fujitsu.com>

On 07/03/2010 01:31 PM, Xiao Guangrong wrote:
>
>> See how the pte is reread inside fetch with mmu_lock held.
>>
>>      
> It looks like something is broken in the 'fetch' functions; this patch will
> fix it.
>
> Subject: [PATCH] KVM: MMU: fix last level broken in FNAME(fetch)
>
> We read the guest page-table entries outside of 'mmu_lock', so the host
> mapping can sometimes become inconsistent with the guest's. Consider this case:
>
> VCPU0:                                              VCPU1
>
> Read the guest mapping; assume it is
> GLV3 -> GLV2 -> GLV1 -> GFNA,
> and the corresponding host mapping is
> HLV3 -> HLV2 -> HLV1 (P=0)
>
>                                                     Write GLV1, making the
>                                                     mapping point to GFNB
>                                                     (may occur in the pte_write
>                                                       or invlpg path)
>
> Map GLV1 to GFNA (now stale)
>
> This issue only occurs at the last level of the indirect mapping: if a
> middle-level mapping is changed, that mapping is zapped and the change is
> detected in the FNAME(fetch) path, but when the last level is mapped, the
> gpte is not re-checked.
>
> Fix it by also checking the last level.
>
>    

I don't really see what is fixed.  We already check the gpte.  What's 
special about the new scenario?

> @@ -322,6 +334,12 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>   		level = iterator.level;
>   		sptep = iterator.sptep;
>   		if (iterator.level == hlevel) {
> +			if (check && level == gw->level &&
> +			      !FNAME(check_level_mapping)(vcpu, gw, hlevel)) {
> +				kvm_release_pfn_clean(pfn);
> +				break;
> +			}
> +
>    

Now we check here...

>   			mmu_set_spte(vcpu, sptep, access,
>   				     gw->pte_access & access,
>   				     user_fault, write_fault,
> @@ -376,10 +394,10 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>   		sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
>   					       direct, access, sptep);
>   		if (!direct) {
> -			r = kvm_read_guest_atomic(vcpu->kvm,
> -						  gw->pte_gpa[level - 2],
> -						&curr_pte, sizeof(curr_pte));
> -			if (r || curr_pte != gw->ptes[level - 2]) {
> +			if (hlevel == level - 1)
> +				check = false;
> +
> +			if (!FNAME(check_level_mapping)(vcpu, gw, level - 1)) {
>    

... and here?  Why?
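
(For readers of the archive: the body of FNAME(check_level_mapping) is not
quoted in this message.  Judging from the second hunk above, which replaces
the open-coded kvm_read_guest_atomic() comparison with a call to it, the
helper is presumably along the lines of the sketch below; this is a
reconstruction for illustration, not the author's actual code.)

  static bool FNAME(check_level_mapping)(struct kvm_vcpu *vcpu,
                                         struct guest_walker *gw, int level)
  {
          pt_element_t curr_pte;
          int r;

          /* Re-read, under mmu_lock, the guest pte for 'level' that
           * walk_addr cached outside the lock, and report whether it
           * is still unchanged. */
          r = kvm_read_guest_atomic(vcpu->kvm, gw->pte_gpa[level - 1],
                                    &curr_pte, sizeof(curr_pte));

          return !r && curr_pte == gw->ptes[level - 1];
  }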


(Looking at the code, we have a call to kvm_host_page_size() on every 
page fault, which takes mmap_sem... that's got to impact scaling.)
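
(For reference, a from-memory sketch of what kvm_host_page_size() does at
this point in time; treat the details as approximate rather than a verbatim
copy of virt/kvm/kvm_main.c.  The mmap_sem round trip on every fault is the
part that hurts:)

  unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn)
  {
          struct vm_area_struct *vma;
          unsigned long addr, size = PAGE_SIZE;

          addr = gfn_to_hva(kvm, gfn);
          if (kvm_is_error_hva(addr))
                  return PAGE_SIZE;

          /* Taken on every guest page fault that reaches the host: */
          down_read(&current->mm->mmap_sem);
          vma = find_vma(current->mm, addr);
          if (vma)
                  size = vma_kernel_pagesize(vma);
          up_read(&current->mm->mmap_sem);

          return size;
  }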

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Thread overview: 32+ messages
2010-07-01 13:53 [PATCH v4 1/6] KVM: MMU: introduce gfn_to_pfn_atomic() function Xiao Guangrong
2010-07-01 13:53 ` [PATCH v4 2/6] KVM: MMU: introduce gfn_to_page_many_atomic() function Xiao Guangrong
2010-07-01 13:54 ` [PATCH v4 3/6] KVM: MMU: introduce pte_prefetch_topup_memory_cache() Xiao Guangrong
2010-07-01 13:55 ` [PATCH v4 4/6] KVM: MMU: prefetch ptes when intercepted guest #PF Xiao Guangrong
2010-07-02 16:54   ` Marcelo Tosatti
2010-07-03  8:08     ` Xiao Guangrong
2010-07-05 12:01       ` Marcelo Tosatti
2010-07-06  0:50         ` Xiao Guangrong
2010-07-01 13:55 ` [PATCH v4 5/6] KVM: MMU: combine guest pte read between walk and pte prefetch Xiao Guangrong
2010-07-02 17:03   ` Marcelo Tosatti
2010-07-03 10:31     ` Xiao Guangrong
2010-07-03 12:08       ` Avi Kivity [this message]
2010-07-03 12:16         ` Xiao Guangrong
2010-07-03 12:26           ` Avi Kivity
2010-07-03 12:31             ` Xiao Guangrong
2010-07-03 12:44               ` Avi Kivity
2010-07-03 12:49                 ` Avi Kivity
2010-07-03 13:03                   ` Xiao Guangrong
2010-07-04 14:30                     ` Avi Kivity
2010-07-05  2:52                       ` Xiao Guangrong
2010-07-05  8:23                         ` Avi Kivity
2010-07-05  8:45                           ` Xiao Guangrong
2010-07-05  9:05                             ` Avi Kivity
2010-07-05  9:09                               ` Xiao Guangrong
2010-07-05  9:20                                 ` Avi Kivity
2010-07-05  9:31                                   ` Xiao Guangrong
2010-07-03 12:57                 ` Xiao Guangrong
2010-07-04 14:32                   ` Avi Kivity
2010-07-03 11:48     ` Avi Kivity
2010-07-01 13:56 ` [PATCH v4 6/6] KVM: MMU: trace " Xiao Guangrong
2010-07-02 16:47 ` [PATCH v4 1/6] KVM: MMU: introduce gfn_to_pfn_atomic() function Marcelo Tosatti
2010-07-03  3:13   ` Nick Piggin
