From: Izik Eidus <ieidus@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: avi@redhat.com, kvm@vger.kernel.org, aarcange@redhat.com
Subject: Re: [PATCH 3/3] add support for change_pte mmu notifiers
Date: Sat, 12 Sep 2009 09:41:10 +0300
Message-ID: <4AAB4286.9070303@redhat.com>
In-Reply-To: <20090911212418.GC6244@amt.cnet>
Marcelo Tosatti wrote:
> On Thu, Sep 10, 2009 at 07:38:58PM +0300, Izik Eidus wrote:
>
>> this is needed for kvm if it wants ksm to directly map pages into its
>> shadow page tables.
>>
>> Signed-off-by: Izik Eidus <ieidus@redhat.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |    1 +
>>  arch/x86/kvm/mmu.c              |   70 ++++++++++++++++++++++++++++++++++----
>>  virt/kvm/kvm_main.c             |   14 ++++++++
>>  3 files changed, 77 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 6046e6f..594d131 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -797,6 +797,7 @@ asmlinkage void kvm_handle_fault_on_reboot(void);
>> #define KVM_ARCH_WANT_MMU_NOTIFIER
>> int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
>> int kvm_age_hva(struct kvm *kvm, unsigned long hva);
>> +void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
>> int cpuid_maxphyaddr(struct kvm_vcpu *vcpu);
>> int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
>> int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index a7151b8..3fd19f2 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -282,6 +282,11 @@ static pfn_t spte_to_pfn(u64 pte)
>>         return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
>> }
>>
>> +static pte_t ptep_val(pte_t *ptep)
>> +{
>> +        return *ptep;
>> +}
>> +
>> static gfn_t pse36_gfn_delta(u32 gpte)
>> {
>>         int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
>> @@ -748,7 +753,8 @@ static int rmap_write_protect(struct kvm *kvm, u64 gfn)
>>         return write_protected;
>> }
>>
>> -static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp)
>> +static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
>> +                           unsigned long data)
>> {
>>         u64 *spte;
>>         int need_tlb_flush = 0;
>> @@ -763,8 +769,48 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp)
>>         return need_tlb_flush;
>> }
>>
>> +static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
>> +                             unsigned long data)
>> +{
>> +        int need_flush = 0;
>> +        u64 *spte, new_spte;
>> +        pte_t *ptep = (pte_t *)data;
>> +        pfn_t new_pfn;
>> +
>> +        new_pfn = pte_pfn(ptep_val(ptep));
>> +        spte = rmap_next(kvm, rmapp, NULL);
>> +        while (spte) {
>> +                BUG_ON(!is_shadow_present_pte(*spte));
>> +                rmap_printk("kvm_set_pte_rmapp: spte %p %llx\n", spte, *spte);
>> +                need_flush = 1;
>> +                if (pte_write(ptep_val(ptep))) {
>> +                        rmap_remove(kvm, spte);
>> +                        __set_spte(spte, shadow_trap_nonpresent_pte);
>> +                        spte = rmap_next(kvm, rmapp, NULL);
>> +                } else {
>> +                        new_spte = *spte &~ (PT64_BASE_ADDR_MASK);
>> +                        new_spte |= new_pfn << PAGE_SHIFT;
>> +
>> +                        if (!pte_write(ptep_val(ptep))) {
>> +                                new_spte &= ~PT_WRITABLE_MASK;
>> +                                new_spte &= ~SPTE_HOST_WRITEABLE;
>> +                                if (is_writeble_pte(*spte))
>> +                                        kvm_set_pfn_dirty(spte_to_pfn(*spte));
>> +                        }
>> +                        __set_spte(spte, new_spte);
>> +                        spte = rmap_next(kvm, rmapp, spte);
>> +                }
>> +        }
>> +        if (need_flush)
>> +                kvm_flush_remote_tlbs(kvm);
>> +
>> +        return 0;
>> +}
>> +
>> static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
>> -                          int (*handler)(struct kvm *kvm, unsigned long *rmapp))
>> +                          unsigned long data,
>> +                          int (*handler)(struct kvm *kvm, unsigned long *rmapp,
>> +                                         unsigned long data))
>> {
>>         int i, j;
>>         int retval = 0;
>> @@ -786,13 +832,15 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
>>                 if (hva >= start && hva < end) {
>>                         gfn_t gfn_offset = (hva - start) >> PAGE_SHIFT;
>>
>> -                        retval |= handler(kvm, &memslot->rmap[gfn_offset]);
>> +                        retval |= handler(kvm, &memslot->rmap[gfn_offset],
>> +                                          data);
>>
>>                         for (j = 0; j < KVM_NR_PAGE_SIZES - 1; ++j) {
>>                                 int idx = gfn_offset;
>>                                 idx /= KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL + j);
>>                                 retval |= handler(kvm,
>> -                                        &memslot->lpage_info[j][idx].rmap_pde);
>> +                                        &memslot->lpage_info[j][idx].rmap_pde,
>> +                                        data);
>>
>
> If change_pte is called to modify a largepage pte, and the shadow has
> that largepage mapped with 4k sptes, you'll set the wrong pfn. That is,
> the patch does not attempt to handle different page sizes properly.
>
> So either disable change_pte updates to non-4k host vmas (making it
> explicit that it does not handle different page sizes), or handle
> largepages properly, although I don't see any use for change_pte on
> largepage mappings?
>
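(To make the concern concrete, as a purely hypothetical sketch rather than
anything in the patch: if the changed host pte covered a 2M region while the
shadow mapped it with 4k sptes, each spte would need its own offset into the
new page, along the lines of

        /* gfn here stands for the guest frame this particular spte maps */
        offset   = gfn & (KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL) - 1);
        new_spte = (*spte & ~PT64_BASE_ADDR_MASK) |
                   ((new_pfn + offset) << PAGE_SHIFT);

whereas kvm_set_pte_rmapp() above writes the same new_pfn into every spte on
the rmap chain.)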
change_pte doesn't get called for anything other than 4k pages (KSM only
merges individual 4k pages)...
So would adding a comment to this function, saying it only works on 4k
pages, be enough?
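For illustration only (the exact wording is open, and treating pte_huge() as
the right host-side sanity check here is my assumption, not part of the
patch), the comment plus a cheap check could look roughly like:

        /*
         * kvm_set_pte_rmapp() only handles 4k host ptes: change_pte is
         * driven by KSM, which merges individual 4k pages, so a huge
         * host pte should never show up here.
         */
        WARN_ON(pte_huge(ptep_val(ptep)));

In practice such a check should never trigger, so it mostly serves as
documentation of the assumption.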