* [PATCH v1 0/2] KVM: nVMX: remove nested_get_page()
From: David Hildenbrand @ 2017-08-03 14:09 UTC (permalink / raw)
To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Let's just use the ordinary functions directly. The "nested" at that point
is just confusing. All we want is a page from G1.
David Hildenbrand (2):
  KVM: nVMX: get rid of nested_get_page()
  KVM: nVMX: get rid of nested_release_page*

 arch/x86/kvm/vmx.c       | 89 ++++++++++++++++++------------------------
 include/linux/kvm_host.h |  6 ++++
 2 files changed, 40 insertions(+), 55 deletions(-)
--
2.9.4
* [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: David Hildenbrand @ 2017-08-03 14:09 UTC (permalink / raw)
To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

nested_get_page() just sounds confusing. All we want is a page from G1.
This is even unrelated to nested.

Let's introduce kvm_vcpu_gpa_to_page() so we don't get too lengthy
lines.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c       | 48 +++++++++++++++++++-----------------------------
 include/linux/kvm_host.h |  6 ++++++
 2 files changed, 25 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 39a6222..54484cf 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -894,14 +894,6 @@ static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
 	return to_vmx(vcpu)->nested.cached_vmcs12;
 }

-static struct page *nested_get_page(struct kvm_vcpu *vcpu, gpa_t addr)
-{
-	struct page *page = kvm_vcpu_gfn_to_page(vcpu, addr >> PAGE_SHIFT);
-	if (is_error_page(page))
-		return NULL;
-
-	return page;
-}

 static void nested_release_page(struct page *page)
 {
@@ -7095,8 +7087,8 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 		return kvm_skip_emulated_instruction(vcpu);
 	}

-	page = nested_get_page(vcpu, vmptr);
-	if (page == NULL) {
+	page = kvm_vcpu_gpa_to_page(vcpu, vmptr);
+	if (is_error_page(page)) {
 		nested_vmx_failInvalid(vcpu);
 		return kvm_skip_emulated_instruction(vcpu);
 	}
@@ -7564,8 +7556,8 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 	if (vmx->nested.current_vmptr != vmptr) {
 		struct vmcs12 *new_vmcs12;
 		struct page *page;
-		page = nested_get_page(vcpu, vmptr);
-		if (page == NULL) {
+		page = kvm_vcpu_gpa_to_page(vcpu, vmptr);
+		if (is_error_page(page)) {
 			nested_vmx_failInvalid(vcpu);
 			return kvm_skip_emulated_instruction(vcpu);
 		}
@@ -9524,6 +9516,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 					struct vmcs12 *vmcs12)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct page *page;
 	u64 hpa;

 	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
@@ -9535,15 +9528,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 		 */
 		if (vmx->nested.apic_access_page) /* shouldn't happen */
 			nested_release_page(vmx->nested.apic_access_page);
-		vmx->nested.apic_access_page =
-			nested_get_page(vcpu, vmcs12->apic_access_addr);
+		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
 		/*
 		 * If translation failed, no matter: This feature asks
 		 * to exit when accessing the given address, and if it
 		 * can never be accessed, this feature won't do
 		 * anything anyway.
 		 */
-		if (vmx->nested.apic_access_page) {
+		if (!is_error_page(page)) {
+			vmx->nested.apic_access_page = page;
 			hpa = page_to_phys(vmx->nested.apic_access_page);
 			vmcs_write64(APIC_ACCESS_ADDR, hpa);
 		} else {
@@ -9560,8 +9553,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
 		if (vmx->nested.virtual_apic_page) /* shouldn't happen */
 			nested_release_page(vmx->nested.virtual_apic_page);
-		vmx->nested.virtual_apic_page =
-			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
+		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);

 		/*
 		 * If translation failed, VM entry will fail because
@@ -9576,7 +9568,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 		 * control. But such a configuration is useless, so
 		 * let's keep the code simple.
 		 */
-		if (vmx->nested.virtual_apic_page) {
+		if (!is_error_page(page)) {
+			vmx->nested.virtual_apic_page = page;
 			hpa = page_to_phys(vmx->nested.virtual_apic_page);
 			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa);
 		}
@@ -9587,14 +9580,11 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 			kunmap(vmx->nested.pi_desc_page);
 			nested_release_page(vmx->nested.pi_desc_page);
 		}
-		vmx->nested.pi_desc_page =
-			nested_get_page(vcpu, vmcs12->posted_intr_desc_addr);
-		vmx->nested.pi_desc =
-			(struct pi_desc *)kmap(vmx->nested.pi_desc_page);
-		if (!vmx->nested.pi_desc) {
-			nested_release_page_clean(vmx->nested.pi_desc_page);
+		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
+		if (is_error_page(page))
 			return;
-		}
+		vmx->nested.pi_desc_page = page;
+		vmx->nested.pi_desc = kmap(vmx->nested.pi_desc_page);
 		vmx->nested.pi_desc =
 			(struct pi_desc *)((void *)vmx->nested.pi_desc +
 			(unsigned long)(vmcs12->posted_intr_desc_addr &
@@ -9676,8 +9666,8 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
 	if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
 		return false;

-	page = nested_get_page(vcpu, vmcs12->msr_bitmap);
-	if (!page)
+	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
+	if (is_error_page(page))
 		return false;
 	msr_bitmap_l1 = (unsigned long *)kmap(page);

@@ -11287,8 +11277,8 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 		gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;

-		page = nested_get_page(vcpu, vmcs12->pml_address);
-		if (!page)
+		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address);
+		if (is_error_page(page))
 			return 0;

 		pml_address = kmap(page);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 890b706..6ed27df 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -983,6 +983,12 @@ static inline hpa_t pfn_to_hpa(kvm_pfn_t pfn)
 	return (hpa_t)pfn << PAGE_SHIFT;
 }

+static inline struct page *kvm_vcpu_gpa_to_page(struct kvm_vcpu *vcpu,
+						gpa_t gpa)
+{
+	return kvm_vcpu_gfn_to_page(vcpu, gpa_to_gfn(gpa));
+}
+
 static inline bool kvm_is_error_gpa(struct kvm *kvm, gpa_t gpa)
 {
 	unsigned long hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
--
2.9.4
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Radim Krčmář @ 2017-08-03 15:36 UTC (permalink / raw)
To: David Hildenbrand; +Cc: kvm, Paolo Bonzini

2017-08-03 16:09+0200, David Hildenbrand:
> nested_get_page() just sounds confusing. All we want is a page from G1.
> This is even unrelated to nested.
>
> Let's introduce kvm_vcpu_gpa_to_page() so we don't get too lengthy
> lines.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---

I like the cleanup, but a subtle change in behavior that makes me wary:

> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -9535,15 +9528,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  		 */
>  		if (vmx->nested.apic_access_page) /* shouldn't happen */
>  			nested_release_page(vmx->nested.apic_access_page);
> -		vmx->nested.apic_access_page =
> -			nested_get_page(vcpu, vmcs12->apic_access_addr);
> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);

If what shouldn't happen happened and then kvm_vcpu_gpa_to_page()
failed, we'd be calling put_page() twice on the same page. I think the
situation currently can happen if VM entry fails after this point.

Assigning 'vmx->nested.apic_access_page = NULL' when releasing the page
sounds safer.

Unless I'm reading something wrong, the "shouldn't happen" really
shouldn't happen if we did something like this:

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e34373838b31..d26e6693f748 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10462,8 +10462,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
 		return 1;
 	}

-	nested_get_vmcs12_pages(vcpu, vmcs12);
-
 	msr_entry_idx = nested_vmx_load_msr(vcpu,
 					    vmcs12->vm_entry_msr_load_addr,
 					    vmcs12->vm_entry_msr_load_count);
@@ -10475,6 +10473,8 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
 		return 1;
 	}

+	nested_get_vmcs12_pages(vcpu, vmcs12);
+
 	/*
 	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
 	 * we are no longer running L1, and VMLAUNCH/VMRESUME has not yet

>  		/*
>  		 * If translation failed, no matter: This feature asks
>  		 * to exit when accessing the given address, and if it
>  		 * can never be accessed, this feature won't do
>  		 * anything anyway.
>  		 */
> -		if (vmx->nested.apic_access_page) {
> +		if (!is_error_page(page)) {
> +			vmx->nested.apic_access_page = page;
>  			hpa = page_to_phys(vmx->nested.apic_access_page);
>  			vmcs_write64(APIC_ACCESS_ADDR, hpa);
>  		} else {
> @@ -9560,8 +9553,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>  	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
>  		if (vmx->nested.virtual_apic_page) /* shouldn't happen */
>  			nested_release_page(vmx->nested.virtual_apic_page);

Ditto,

thanks.

> -		vmx->nested.virtual_apic_page =
> -			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
>
>  		/*
>  		 * If translation failed, VM entry will fail because
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: David Hildenbrand @ 2017-08-03 15:58 UTC (permalink / raw)
To: Radim Krčmář; +Cc: kvm, Paolo Bonzini

On 03.08.2017 17:36, Radim Krčmář wrote:
> 2017-08-03 16:09+0200, David Hildenbrand:
>> nested_get_page() just sounds confusing. All we want is a page from G1.
>> This is even unrelated to nested.
>>
>> Let's introduce kvm_vcpu_gpa_to_page() so we don't get too lengthy
>> lines.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>
> I like the cleanup, but a subtle change in behavior that makes me wary:
>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> @@ -9535,15 +9528,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>>  		 */
>>  		if (vmx->nested.apic_access_page) /* shouldn't happen */
>>  			nested_release_page(vmx->nested.apic_access_page);
>> -		vmx->nested.apic_access_page =
>> -			nested_get_page(vcpu, vmcs12->apic_access_addr);
>> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
>
> If what shouldn't happen happened and then kvm_vcpu_gpa_to_page()
> failed, we'd be calling put_page() twice on the same page. I think the
> situation currently can happen if VM entry fails after this point.
>
> Assigning 'vmx->nested.apic_access_page = NULL' when releasing the page
> sounds safer.
>
> Unless I'm reading something wrong, the "shouldn't happen" really
> shouldn't happen if we did something like this:

Very good point. I'll just clear the respective field whenever we
release a page, this way we are on the safe side.

Good catch! Thanks!

--

Thanks,

David
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Jim Mattson @ 2017-08-03 16:05 UTC (permalink / raw)
To: Radim Krčmář; +Cc: David Hildenbrand, kvm list, Paolo Bonzini

On Thu, Aug 3, 2017 at 8:36 AM, Radim Krčmář <rkrcmar@redhat.com> wrote:
> 2017-08-03 16:09+0200, David Hildenbrand:
>> nested_get_page() just sounds confusing. All we want is a page from G1.
>> This is even unrelated to nested.
>>
>> Let's introduce kvm_vcpu_gpa_to_page() so we don't get too lengthy
>> lines.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>
> I like the cleanup, but a subtle change in behavior that makes me wary:
>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> @@ -9535,15 +9528,15 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>>  		 */
>>  		if (vmx->nested.apic_access_page) /* shouldn't happen */
>>  			nested_release_page(vmx->nested.apic_access_page);
>> -		vmx->nested.apic_access_page =
>> -			nested_get_page(vcpu, vmcs12->apic_access_addr);
>> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
>
> If what shouldn't happen happened and then kvm_vcpu_gpa_to_page()
> failed, we'd be calling put_page() twice on the same page. I think the
> situation currently can happen if VM entry fails after this point.
>
> Assigning 'vmx->nested.apic_access_page = NULL' when releasing the page
> sounds safer.
>
> Unless I'm reading something wrong, the "shouldn't happen" really
> shouldn't happen if we did something like this:
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e34373838b31..d26e6693f748 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -10462,8 +10462,6 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  		return 1;
>  	}
>
> -	nested_get_vmcs12_pages(vcpu, vmcs12);
> -
>  	msr_entry_idx = nested_vmx_load_msr(vcpu,
>  					    vmcs12->vm_entry_msr_load_addr,
>  					    vmcs12->vm_entry_msr_load_count);
> @@ -10475,6 +10473,8 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
>  		return 1;
>  	}
>
> +	nested_get_vmcs12_pages(vcpu, vmcs12);
> +
>  	/*
>  	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
>  	 * we are no longer running L1, and VMLAUNCH/VMRESUME has not yet
>
>>  		/*
>>  		 * If translation failed, no matter: This feature asks
>>  		 * to exit when accessing the given address, and if it
>>  		 * can never be accessed, this feature won't do
>>  		 * anything anyway.
>>  		 */

This comment is incorrect. On real hardware, the APIC access page
doesn't have to exist (i.e. be backed by actual memory), because the
APIC access page is never accessed. Think of the APIC access page as a
sentinel value that the hypervisor can put in the page tables (EPT
page tables if they are in use, x86 page tables otherwise) to trigger
APIC virtualization. If there is an access, it is to the page at the
virtual APIC address, not the APIC access page.

Similarly, in a VM, there need not be a mapping for the APIC access
page for the feature to work as architected. (Or, at least, that's the
way it should work. :-)

>> -		if (vmx->nested.apic_access_page) {
>> +		if (!is_error_page(page)) {
>> +			vmx->nested.apic_access_page = page;
>>  			hpa = page_to_phys(vmx->nested.apic_access_page);
>>  			vmcs_write64(APIC_ACCESS_ADDR, hpa);
>>  		} else {
>> @@ -9560,8 +9553,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
>>  	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
>>  		if (vmx->nested.virtual_apic_page) /* shouldn't happen */
>>  			nested_release_page(vmx->nested.virtual_apic_page);
>
> Ditto,
>
> thanks.
>
>> -		vmx->nested.virtual_apic_page =
>> -			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
>> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
>>
>>  		/*
>>  		 * If translation failed, VM entry will fail because
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Radim Krčmář @ 2017-08-03 17:41 UTC (permalink / raw)
To: Jim Mattson; +Cc: David Hildenbrand, kvm list, Paolo Bonzini

2017-08-03 09:05-0700, Jim Mattson:
> On Thu, Aug 3, 2017 at 8:36 AM, Radim Krčmář <rkrcmar@redhat.com> wrote:
> > 2017-08-03 16:09+0200, David Hildenbrand:
> >> 		/*
> >> 		 * If translation failed, no matter: This feature asks
> >> 		 * to exit when accessing the given address, and if it
> >> 		 * can never be accessed, this feature won't do
> >> 		 * anything anyway.
> >> 		 */
>
> This comment is incorrect. On real hardware, the APIC access page
> doesn't have to exist (i.e. be backed by actual memory), because the
> APIC access page is never accessed. Think of the APIC access page as a
> sentinel value that the hypervisor can put in the page tables (EPT
> page tables if they are in use, x86 page tables otherwise) to trigger
> APIC virtualization. If there is an access, it is to the page at the
> virtual APIC address, not the APIC access page.

Right,

> Similarly, in a VM, there need not be a mapping for the APIC access
> page for the feature to work as architected. (Or, at least, that's the
> way it should work. :-)

the APIC_ACCESS_ADDR is always an L0 physical address, so we somehow
need to map the L1 physical address somewhere in order to recognize
accesses from L2.

I think the correct way would be to create a new mapping if the chosen
L1 physical address has no L0 physical address yet.
The code was made for the common case where hypervisors select a page
that is mapped by KVM ...

Do you wish to send patches? :)

Thanks.
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Jim Mattson @ 2017-08-03 18:42 UTC (permalink / raw)
To: Radim Krčmář; +Cc: David Hildenbrand, kvm list, Paolo Bonzini

On Thu, Aug 3, 2017 at 10:41 AM, Radim Krčmář <rkrcmar@redhat.com> wrote:
> 2017-08-03 09:05-0700, Jim Mattson:
>> On Thu, Aug 3, 2017 at 8:36 AM, Radim Krčmář <rkrcmar@redhat.com> wrote:
>> > 2017-08-03 16:09+0200, David Hildenbrand:
>> >> 		/*
>> >> 		 * If translation failed, no matter: This feature asks
>> >> 		 * to exit when accessing the given address, and if it
>> >> 		 * can never be accessed, this feature won't do
>> >> 		 * anything anyway.
>> >> 		 */
>>
>> This comment is incorrect. On real hardware, the APIC access page
>> doesn't have to exist (i.e. be backed by actual memory), because the
>> APIC access page is never accessed. Think of the APIC access page as a
>> sentinel value that the hypervisor can put in the page tables (EPT
>> page tables if they are in use, x86 page tables otherwise) to trigger
>> APIC virtualization. If there is an access, it is to the page at the
>> virtual APIC address, not the APIC access page.
>
> Right,
>
>> Similarly, in a VM, there need not be a mapping for the APIC access
>> page for the feature to work as architected. (Or, at least, that's the
>> way it should work. :-)
>
> the APIC_ACCESS_ADDR is always an L0 physical address, so we somehow
> need to map the L1 physical address somewhere in order to recognize
> accesses from L2.
>
> I think the correct way would be to create a new mapping if the chosen
> L1 physical address has no L0 physical address yet.
> The code was made for the common case where hypervisors select a page
> that is mapped by KVM ...

Yes, I think that's safest.

> Do you wish to send patches? :)

Unless someone beats me to it!
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Konrad Rzeszutek Wilk @ 2017-08-03 18:40 UTC (permalink / raw)
To: David Hildenbrand; +Cc: kvm, Paolo Bonzini, Radim Krčmář

On Thu, Aug 03, 2017 at 04:09:06PM +0200, David Hildenbrand wrote:
> nested_get_page() just sounds confusing. All we want is a page from G1.

What is G1? Is that the same thing as L1? Or do you mean L2?
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: David Hildenbrand @ 2017-08-03 19:42 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: kvm, Paolo Bonzini, Radim Krčmář

On 03.08.2017 20:40, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 03, 2017 at 04:09:06PM +0200, David Hildenbrand wrote:
>> nested_get_page() just sounds confusing. All we want is a page from G1.
>
> What is G1? Is that the same thing as L1? Or do you mean L2?

L1 == Level 1
G1 == Guest 1 == Level 1

(valid question, I am used to the GX terminology we used on s390x. L1
seems to be used in the context of VMX, so I'll try to use it for VMX
in the future)

--

Thanks,

David
* Re: [PATCH v1 1/2] KVM: nVMX: get rid of nested_get_page()
From: Paolo Bonzini @ 2017-08-07 10:48 UTC (permalink / raw)
To: David Hildenbrand, kvm; +Cc: Radim Krčmář

On 03/08/2017 16:09, David Hildenbrand wrote:
> -		vmx->nested.virtual_apic_page =
> -			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
> +		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);

Ouch, I missed this. :(

Thanks Wanpeng for the quick fix.

Paolo
* [PATCH v1 2/2] KVM: nVMX: get rid of nested_release_page*
From: David Hildenbrand @ 2017-08-03 14:09 UTC (permalink / raw)
To: kvm; +Cc: Paolo Bonzini, Radim Krčmář, david

Let's also just use the underlying functions directly here.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/vmx.c | 41 +++++++++++++++--------------------------
 1 file changed, 15 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 54484cf..d138020 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -894,17 +894,6 @@ static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
 	return to_vmx(vcpu)->nested.cached_vmcs12;
 }

-
-static void nested_release_page(struct page *page)
-{
-	kvm_release_page_dirty(page);
-}
-
-static void nested_release_page_clean(struct page *page)
-{
-	kvm_release_page_clean(page);
-}
-
 static bool nested_ept_ad_enabled(struct kvm_vcpu *vcpu);
 static unsigned long nested_ept_get_cr3(struct kvm_vcpu *vcpu);
 static u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
@@ -7094,12 +7083,12 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
 	}
 	if (*(u32 *)kmap(page) != VMCS12_REVISION) {
 		kunmap(page);
-		nested_release_page_clean(page);
+		kvm_release_page_clean(page);
 		nested_vmx_failInvalid(vcpu);
 		return kvm_skip_emulated_instruction(vcpu);
 	}
 	kunmap(page);
-	nested_release_page_clean(page);
+	kvm_release_page_clean(page);

 	vmx->nested.vmxon_ptr = vmptr;
 	ret = enter_vmx_operation(vcpu);
@@ -7151,7 +7140,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
 	       VMCS12_SIZE);

 	kunmap(vmx->nested.current_vmcs12_page);
-	nested_release_page(vmx->nested.current_vmcs12_page);
+	kvm_release_page_dirty(vmx->nested.current_vmcs12_page);
 	vmx->nested.current_vmptr = -1ull;
 	vmx->nested.current_vmcs12 = NULL;
 }
@@ -7180,16 +7169,16 @@ static void free_nested(struct vcpu_vmx *vmx)
 	kfree(vmx->nested.cached_vmcs12);
 	/* Unpin physical memory we referred to in current vmcs02 */
 	if (vmx->nested.apic_access_page) {
-		nested_release_page(vmx->nested.apic_access_page);
+		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
 	if (vmx->nested.virtual_apic_page) {
-		nested_release_page(vmx->nested.virtual_apic_page);
+		kvm_release_page_dirty(vmx->nested.virtual_apic_page);
 		vmx->nested.virtual_apic_page = NULL;
 	}
 	if (vmx->nested.pi_desc_page) {
 		kunmap(vmx->nested.pi_desc_page);
-		nested_release_page(vmx->nested.pi_desc_page);
+		kvm_release_page_dirty(vmx->nested.pi_desc_page);
 		vmx->nested.pi_desc_page = NULL;
 		vmx->nested.pi_desc = NULL;
 	}
@@ -7564,7 +7553,7 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 		new_vmcs12 = kmap(page);
 		if (new_vmcs12->revision_id != VMCS12_REVISION) {
 			kunmap(page);
-			nested_release_page_clean(page);
+			kvm_release_page_clean(page);
 			nested_vmx_failValid(vcpu,
 				VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);
 			return kvm_skip_emulated_instruction(vcpu);
@@ -9527,7 +9516,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 		 * to it so we can release it later.
 		 */
 		if (vmx->nested.apic_access_page) /* shouldn't happen */
-			nested_release_page(vmx->nested.apic_access_page);
+			kvm_release_page_dirty(vmx->nested.apic_access_page);
 		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
 		/*
 		 * If translation failed, no matter: This feature asks
@@ -9552,7 +9541,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
 		if (vmx->nested.virtual_apic_page) /* shouldn't happen */
-			nested_release_page(vmx->nested.virtual_apic_page);
+			kvm_release_page_dirty(vmx->nested.virtual_apic_page);
 		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);

 		/*
@@ -9578,7 +9567,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
 	if (nested_cpu_has_posted_intr(vmcs12)) {
 		if (vmx->nested.pi_desc_page) { /* shouldn't happen */
 			kunmap(vmx->nested.pi_desc_page);
-			nested_release_page(vmx->nested.pi_desc_page);
+			kvm_release_page_dirty(vmx->nested.pi_desc_page);
 		}
 		page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
 		if (is_error_page(page))
@@ -9697,7 +9686,7 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
 		}
 	}
 	kunmap(page);
-	nested_release_page_clean(page);
+	kvm_release_page_clean(page);

 	return true;
 }
@@ -11092,16 +11081,16 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	/* Unpin physical memory we referred to in vmcs02 */
 	if (vmx->nested.apic_access_page) {
-		nested_release_page(vmx->nested.apic_access_page);
+		kvm_release_page_dirty(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
 	if (vmx->nested.virtual_apic_page) {
-		nested_release_page(vmx->nested.virtual_apic_page);
+		kvm_release_page_dirty(vmx->nested.virtual_apic_page);
 		vmx->nested.virtual_apic_page = NULL;
 	}
 	if (vmx->nested.pi_desc_page) {
 		kunmap(vmx->nested.pi_desc_page);
-		nested_release_page(vmx->nested.pi_desc_page);
+		kvm_release_page_dirty(vmx->nested.pi_desc_page);
 		vmx->nested.pi_desc_page = NULL;
 		vmx->nested.pi_desc = NULL;
 	}
@@ -11284,7 +11273,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 		pml_address = kmap(page);
 		pml_address[vmcs12->guest_pml_index--] = gpa;
 		kunmap(page);
-		nested_release_page_clean(page);
+		kvm_release_page_clean(page);
 	}

 	return 0;
--
2.9.4
* Re: [PATCH v1 0/2] KVM: nVMX: remove nested_get_page()
From: Paolo Bonzini @ 2017-08-03 15:14 UTC (permalink / raw)
To: David Hildenbrand, kvm; +Cc: Radim Krčmář

On 03/08/2017 16:09, David Hildenbrand wrote:
> Let's just use the ordinary functions directly. The "nested" at that point
> is just confusing. All we want is a page from G1.
>
> David Hildenbrand (2):
>   KVM: nVMX: get rid of nested_get_page()
>   KVM: nVMX: get rid of nested_release_page*
>
>  arch/x86/kvm/vmx.c       | 89 ++++++++++++++++++------------------------
>  include/linux/kvm_host.h |  6 ++++
>  2 files changed, 40 insertions(+), 55 deletions(-)

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>