From: Paolo Bonzini <pbonzini@redhat.com>
To: Bhushan Bharat-R65777 <R65777@freescale.com>
Cc: Alexander Graf <agraf@suse.de>, Paul Mackerras <paulus@samba.org>,
Wood Scott-B07421 <B07421@freescale.com>,
"kvm-ppc@vger.kernel.org" <kvm-ppc@vger.kernel.org>,
"kvm@vger.kernel.org mailing list" <kvm@vger.kernel.org>,
Gleb Natapov <gleb@redhat.com>
Subject: Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup
Date: Thu, 10 Oct 2013 11:01:39 +0200 [thread overview]
Message-ID: <52566CF3.2050906@redhat.com> (raw)
In-Reply-To: <6A3DF150A5B70D4F9B66A25E3F7C888D071AF004@039-SN2MPN1-012.039d.mgd.msft.net>
On 10/10/2013 10:32, Bhushan Bharat-R65777 wrote:
>
>
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:paolo.bonzini@gmail.com] On Behalf Of Paolo Bonzini
>> Sent: Monday, October 07, 2013 5:35 PM
>> To: Alexander Graf
>> Cc: Bhushan Bharat-R65777; Paul Mackerras; Wood Scott-B07421; kvm-
>> ppc@vger.kernel.org; kvm@vger.kernel.org mailing list; Bhushan Bharat-R65777;
>> Gleb Natapov
>> Subject: Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress
>> on page setup
>>
>> On 04/10/2013 15:38, Alexander Graf wrote:
>>>
>>> On 07.08.2013, at 12:03, Bharat Bhushan wrote:
>>>
>>>> When the MM code is invalidating a range of pages, it calls the KVM
>>>> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
>>>> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
>>>> However, the Linux PTEs for the range being flushed are still valid at
>>>> that point. We are not supposed to establish any new references to pages
>>>> in the range until the ...range_end() notifier gets called.
>>>> The PPC-specific KVM code doesn't get any explicit notification of that;
>>>> instead, we are supposed to use mmu_notifier_retry() to test whether we
>>>> are or have been inside a range flush notifier pair while we have been
>>>> referencing a page.
>>>>
>>>> This patch calls mmu_notifier_retry() while mapping the guest
>>>> page, to ensure that we are not referencing a page while a range
>>>> invalidation is in progress.
>>>>
>>>> The call is made inside a region protected by kvm->mmu_lock, which
>>>> is the same lock that is taken by the KVM MMU notifier functions,
>>>> thus ensuring that no new notification can proceed while we are in
>>>> the locked region.
>>>>
>>>> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
>>>
>>> Acked-by: Alexander Graf <agraf@suse.de>
>>>
>>> Gleb, Paolo, please queue for 3.12 directly.
>>
>> Here is the backport. The second hunk has a nontrivial conflict, so
>> someone please give their {Tested,Reviewed,Compiled}-by.
>
> {Compiled,Reviewed}-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Thanks, patch on its way to Linus.
Paolo
> Thanks
> -Bharat
>
>>
>> Paolo
>>
>> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
>> index 1c6a9d7..c65593a 100644
>> --- a/arch/powerpc/kvm/e500_mmu_host.c
>> +++ b/arch/powerpc/kvm/e500_mmu_host.c
>> @@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>> unsigned long hva;
>> int pfnmap = 0;
>> int tsize = BOOK3E_PAGESZ_4K;
>> + int ret = 0;
>> + unsigned long mmu_seq;
>> + struct kvm *kvm = vcpu_e500->vcpu.kvm;
>> +
>> + /* used to check for invalidations in progress */
>> + mmu_seq = kvm->mmu_notifier_seq;
>> + smp_rmb();
>>
>> /*
>> * Translate guest physical to true physical, acquiring
>> @@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>> gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
>> }
>>
>> + spin_lock(&kvm->mmu_lock);
>> + if (mmu_notifier_retry(kvm, mmu_seq)) {
>> + ret = -EAGAIN;
>> + goto out;
>> + }
>> +
>> kvmppc_e500_ref_setup(ref, gtlbe, pfn);
>>
>> kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
>> @@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>> /* Clear i-cache for new pages */
>> kvmppc_mmu_flush_icache(pfn);
>>
>> +out:
>> + spin_unlock(&kvm->mmu_lock);
>> +
>> /* Drop refcount on page, so that mmu notifiers can clear it */
>> kvm_release_pfn_clean(pfn);
>>
>> - return 0;
>> + return ret;
>> }
>>
>> /* XXX only map the one-one case, for now use TLB0 */
>>
>>
>
>
Thread overview: 11+ messages (2013-08-07 – 2013-10-10)
2013-08-07 10:03 [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bharat Bhushan
2013-08-07 10:03 ` [PATCH 1/2] kvm: powerpc: mark page accessed when mapping a guest page Bharat Bhushan
2013-08-10 1:12 ` Scott Wood
2013-10-04 13:35 ` Alexander Graf
2013-08-07 10:03 ` [PATCH 2/2] kvm: ppc: booke: check range page invalidation progress on page setup Bharat Bhushan
2013-08-10 1:15 ` Scott Wood
2013-10-04 13:38 ` Alexander Graf
2013-10-07 12:04 ` Paolo Bonzini
2013-10-10 8:32 ` Bhushan Bharat-R65777
2013-10-10 9:01 ` Paolo Bonzini [this message]
2013-08-30 1:06 ` [PATCH 0/2] KVM: PPC: BOOKE: MMU Fixes Bhushan Bharat-R65777