From: Paolo Bonzini <pbonzini@redhat.com>
To: Eugene Korenevsky <ekorenevsky@gmail.com>, kvm@vger.kernel.org
Subject: Re: [PATCH 1/3] KVM: x86: optimization: cache physical address width to avoid excessive enumerations of CPUID entries
Date: Tue, 07 Apr 2015 13:25:06 +0200
Message-ID: <5523BE92.3090107@redhat.com>
In-Reply-To: <20150329205612.GA1223@gnote>
On 29/03/2015 22:56, Eugene Korenevsky wrote:
> cpuid_maxphyaddr(), which performs a lot of memory accesses, is called extensively
> across KVM, especially in nVMX code.
> This patch caches the value of maxphyaddr in vcpu.arch to reduce the pressure on the
> CPU cache and simplify the code of cpuid_maxphyaddr() callers. The cached value is
> initialized in kvm_arch_vcpu_init() and reloaded every time CPUID is updated from
> userspace, which happens infrequently.
>
> Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
> ---
> arch/x86/include/asm/kvm_host.h | 4 +++-
> arch/x86/kvm/cpuid.c | 33 ++++++++++++++++++---------------
> arch/x86/kvm/cpuid.h | 6 ++++++
> arch/x86/kvm/x86.c | 2 ++
> 4 files changed, 29 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a236e39..2362a60 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -431,6 +431,9 @@ struct kvm_vcpu_arch {
>
> int cpuid_nent;
> struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];
> +
> + int maxphyaddr;
> +
> /* emulate context */
>
> struct x86_emulate_ctxt emulate_ctxt;
> @@ -1128,7 +1131,6 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
> int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
> int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
> -int cpuid_maxphyaddr(struct kvm_vcpu *vcpu);
> int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
> int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
> int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 8a80737..59b69f6 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -104,6 +104,9 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
> ((best->eax & 0xff00) >> 8) != 0)
> return -EINVAL;
>
> + /* Update physical-address width */
> + vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
> +
> kvm_pmu_cpuid_update(vcpu);
> return 0;
> }
> @@ -135,6 +138,21 @@ static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
> }
> }
>
> +int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_cpuid_entry2 *best;
> +
> + best = kvm_find_cpuid_entry(vcpu, 0x80000000, 0);
> + if (!best || best->eax < 0x80000008)
> + goto not_found;
> + best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
> + if (best)
> + return best->eax & 0xff;
> +not_found:
> + return 36;
> +}
> +EXPORT_SYMBOL_GPL(cpuid_query_maxphyaddr);
> +
> /* when an old userspace process fills a new kernel module */
> int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
> struct kvm_cpuid *cpuid,
> @@ -757,21 +775,6 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
> }
> EXPORT_SYMBOL_GPL(kvm_find_cpuid_entry);
>
> -int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
> -{
> - struct kvm_cpuid_entry2 *best;
> -
> - best = kvm_find_cpuid_entry(vcpu, 0x80000000, 0);
> - if (!best || best->eax < 0x80000008)
> - goto not_found;
> - best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
> - if (best)
> - return best->eax & 0xff;
> -not_found:
> - return 36;
> -}
> -EXPORT_SYMBOL_GPL(cpuid_maxphyaddr);
> -
> /*
> * If no match is found, check whether we exceed the vCPU's limit
> * and return the content of the highest valid _standard_ leaf instead.
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 4452eed..78b61b4 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -20,6 +20,12 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
> struct kvm_cpuid_entry2 __user *entries);
> void kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx, u32 *ecx, u32 *edx);
>
> +int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
> +
> +static inline int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
> +{
> + return vcpu->arch.maxphyaddr;
> +}
>
> static inline bool guest_cpuid_has_xsave(struct kvm_vcpu *vcpu)
> {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index bd7a70b..084e1d5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7289,6 +7289,8 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
> vcpu->arch.guest_supported_xcr0 = 0;
> vcpu->arch.guest_xstate_size = XSAVE_HDR_SIZE + XSAVE_HDR_OFFSET;
>
> + vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
> +
> kvm_async_pf_hash_reset(vcpu);
> kvm_pmu_init(vcpu);
>
>
Applied series, thanks.
Paolo
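
For context, a minimal sketch of the kind of hot-path check this cache speeds
up. gpa_within_maxphyaddr() is a hypothetical helper, not part of this series;
the pattern mirrors how nVMX validates guest-physical addresses against
MAXPHYADDR. With the cached accessor, each call is a single field load rather
than a walk over the vCPU's CPUID entry array:

    /* Hypothetical caller, assuming the inline cpuid_maxphyaddr() accessor
     * added by this patch. A gpa is valid only if no bits at or above the
     * guest's physical-address width are set. */
    static bool gpa_within_maxphyaddr(struct kvm_vcpu *vcpu, gpa_t gpa)
    {
            return !(gpa >> cpuid_maxphyaddr(vcpu));
    }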