From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Jon Doron <arilou@gmail.com>
Cc: kvm@vger.kernel.org, linux-hyperv@vger.kernel.org
Subject: Re: [PATCH v9 2/6] x86/kvm/hyper-v: Simplify addition for custom cpuid leafs
Date: Mon, 23 Mar 2020 11:16:39 +0100 [thread overview]
Message-ID: <87wo7b9y0o.fsf@vitty.brq.redhat.com> (raw)
In-Reply-To: <20200320172839.1144395-3-arilou@gmail.com>
Jon Doron <arilou@gmail.com> writes:
> Simplify the code to define a new cpuid leaf group per enabled feature.
>
> This also fixes a bug in which the max cpuid leaf was always set to
> HYPERV_CPUID_NESTED_FEATURES regardless of whether nesting is supported.
>
> Any new CPUID group needs to consider the max leaf and be added in the
> correct order. With this method there are two rules:
> 1. Each cpuid leaf group must be sorted in ascending order.
> 2. Appending cpuid leaf groups by feature must also happen in
> ascending order.
>
> Signed-off-by: Jon Doron <arilou@gmail.com>
> ---
> arch/x86/kvm/hyperv.c | 46 ++++++++++++++++++++++++++++++-------------
> 1 file changed, 32 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index a86fda7a1d03..7383c7e7d4af 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1785,27 +1785,45 @@ int kvm_vm_ioctl_hv_eventfd(struct kvm *kvm, struct kvm_hyperv_eventfd *args)
> return kvm_hv_eventfd_assign(kvm, args->conn_id, args->fd);
> }
>
> +// Must be sorted in ascending order by function
scripts/checkpatch.pl should've complained here; kernel coding style
always requires /* */ comments.
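i.e. the kernel-style equivalent would simply be:

	/* Must be sorted in ascending order by function */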
> +static struct kvm_cpuid_entry2 core_cpuid_entries[] = {
> + { .function = HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS },
> + { .function = HYPERV_CPUID_INTERFACE },
> + { .function = HYPERV_CPUID_VERSION },
> + { .function = HYPERV_CPUID_FEATURES },
> + { .function = HYPERV_CPUID_ENLIGHTMENT_INFO },
> + { .function = HYPERV_CPUID_IMPLEMENT_LIMITS },
> +};
> +
> +static struct kvm_cpuid_entry2 evmcs_cpuid_entries[] = {
> + { .function = HYPERV_CPUID_NESTED_FEATURES },
> +};
> +
> +#define HV_MAX_CPUID_ENTRIES \
> + ARRAY_SIZE(core_cpuid_entries) +\
> + ARRAY_SIZE(evmcs_cpuid_entries)
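
Another nitpick: it may be safer to parenthesize the sum so the macro
expands correctly inside any expression, e.g. something like:

	#define HV_MAX_CPUID_ENTRIES \
		(ARRAY_SIZE(core_cpuid_entries) + \
		 ARRAY_SIZE(evmcs_cpuid_entries))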
> +
> int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
> struct kvm_cpuid_entry2 __user *entries)
> {
> uint16_t evmcs_ver = 0;
> - struct kvm_cpuid_entry2 cpuid_entries[] = {
> - { .function = HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS },
> - { .function = HYPERV_CPUID_INTERFACE },
> - { .function = HYPERV_CPUID_VERSION },
> - { .function = HYPERV_CPUID_FEATURES },
> - { .function = HYPERV_CPUID_ENLIGHTMENT_INFO },
> - { .function = HYPERV_CPUID_IMPLEMENT_LIMITS },
> - { .function = HYPERV_CPUID_NESTED_FEATURES },
> - };
> - int i, nent = ARRAY_SIZE(cpuid_entries);
> + struct kvm_cpuid_entry2 cpuid_entries[HV_MAX_CPUID_ENTRIES];
> + int i, nent = 0;
> +
> + /* Set the core cpuid entries required for Hyper-V */
> + memcpy(&cpuid_entries[nent], &core_cpuid_entries,
> + sizeof(core_cpuid_entries));
> + nent += ARRAY_SIZE(core_cpuid_entries);
Strictly speaking "+=" is not needed here as nent is zero.
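i.e. the first copy could just do:

	nent = ARRAY_SIZE(core_cpuid_entries);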
>
> if (kvm_x86_ops->nested_get_evmcs_version)
> evmcs_ver = kvm_x86_ops->nested_get_evmcs_version(vcpu);
>
> - /* Skip NESTED_FEATURES if eVMCS is not supported */
> - if (!evmcs_ver)
> - --nent;
> + if (evmcs_ver) {
> + /* EVMCS is enabled, add the required EVMCS CPUID leafs */
> + memcpy(&cpuid_entries[nent], &evmcs_cpuid_entries,
> + sizeof(evmcs_cpuid_entries));
> + nent += ARRAY_SIZE(evmcs_cpuid_entries);
> + }
>
> if (cpuid->nent < nent)
> return -E2BIG;
> @@ -1821,7 +1839,7 @@ int kvm_vcpu_ioctl_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
> case HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS:
> memcpy(signature, "Linux KVM Hv", 12);
>
> - ent->eax = HYPERV_CPUID_NESTED_FEATURES;
> + ent->eax = cpuid_entries[nent - 1].function;
> ent->ebx = signature[0];
> ent->ecx = signature[1];
> ent->edx = signature[2];
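This works out because both groups are appended in ascending function
order, so the last populated entry always holds the maximum leaf; e.g.
with eVMCS unsupported only the core group is copied and we report:

	/* nent == ARRAY_SIZE(core_cpuid_entries) */
	ent->eax = cpuid_entries[nent - 1].function;
	/* == HYPERV_CPUID_IMPLEMENT_LIMITS, not NESTED_FEATURES */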
With the nitpicks mentioned above:
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly