public inbox for linux-kernel@vger.kernel.org
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Eric Li <ercli@ucdavis.edu>, David Matlack <dmatlack@google.com>,
	Oliver Upton <oupton@google.com>
Subject: Re: [PATCH v5 05/15] KVM: nVMX: Let userspace set nVMX MSR to any _host_ supported value
Date: Tue, 1 Nov 2022 00:39:07 +0800	[thread overview]
Message-ID: <20221031163907.w64vyg5twzvv2nho@linux.intel.com> (raw)
In-Reply-To: <20220607213604.3346000-6-seanjc@google.com>

Hi Sean & Paolo,

On Tue, Jun 07, 2022 at 09:35:54PM +0000, Sean Christopherson wrote:
> Restrict the nVMX MSRs based on KVM's config, not based on the guest's
> current config.  Using the guest's config to audit the new config
> prevents userspace from restoring the original config (KVM's config) if
> at any point in the past the guest's config was restricted in any way.

May I ask for an example to explain why we use KVM's config here
instead of the guest's? I mean, the guest's config can be adjusted
after CPUID updates by vmx_vcpu_after_set_cpuid(), yet the MSR
settings in vmcs_config.nested might be outdated by then.

Another question is about the setting of secondary_ctls_high in
nested_vmx_setup_ctls_msrs().  I saw there's a comment saying:
	"Do not include those that depend on CPUID bits, they are
	added later by vmx_vcpu_after_set_cpuid."

But since CPUID updates can adjust vmx->nested.msrs.secondary_ctls_high,
do we really need to clear those flags from secondary_ctls_high in this
global config? Could we just set
	msrs->secondary_ctls_high = vmcs_conf->cpu_based_2nd_exec_ctrl;

If yes, code (in nested_vmx_setup_ctls_msrs()) such as
	if (enable_ept) {
		/* nested EPT: emulate EPT also to L1 */
		msrs->secondary_ctls_high |=
			SECONDARY_EXEC_ENABLE_EPT;
or
	if (cpu_has_vmx_vmfunc()) {
		msrs->secondary_ctls_high |=
			SECONDARY_EXEC_ENABLE_VMFUNC;
and other similar ones may also be unnecessary.

B.R.
Yu

> 
> Fixes: 62cc6b9dc61e ("KVM: nVMX: support restore of VMX capability MSRs")
> Cc: stable@vger.kernel.org
> Cc: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 100 ++++++++++++++++++++------------------
>  1 file changed, 52 insertions(+), 48 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 00c7b00c017a..fca30e79b3a0 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1223,7 +1223,7 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
>  		BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
>  		/* reserved */
>  		BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
> -	u64 vmx_basic = vmx->nested.msrs.basic;
> +	u64 vmx_basic = vmcs_config.nested.basic;
>  
>  	if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
>  		return -EINVAL;
> @@ -1246,36 +1246,42 @@ static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
>  	return 0;
>  }
>  
> +static void vmx_get_control_msr(struct nested_vmx_msrs *msrs, u32 msr_index,
> +				u32 **low, u32 **high)
> +{
> +	switch (msr_index) {
> +	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> +		*low = &msrs->pinbased_ctls_low;
> +		*high = &msrs->pinbased_ctls_high;
> +		break;
> +	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> +		*low = &msrs->procbased_ctls_low;
> +		*high = &msrs->procbased_ctls_high;
> +		break;
> +	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> +		*low = &msrs->exit_ctls_low;
> +		*high = &msrs->exit_ctls_high;
> +		break;
> +	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> +		*low = &msrs->entry_ctls_low;
> +		*high = &msrs->entry_ctls_high;
> +		break;
> +	case MSR_IA32_VMX_PROCBASED_CTLS2:
> +		*low = &msrs->secondary_ctls_low;
> +		*high = &msrs->secondary_ctls_high;
> +		break;
> +	default:
> +		BUG();
> +	}
> +}
> +
>  static int
>  vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
>  {
> -	u64 supported;
>  	u32 *lowp, *highp;
> +	u64 supported;
>  
> -	switch (msr_index) {
> -	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
> -		lowp = &vmx->nested.msrs.pinbased_ctls_low;
> -		highp = &vmx->nested.msrs.pinbased_ctls_high;
> -		break;
> -	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
> -		lowp = &vmx->nested.msrs.procbased_ctls_low;
> -		highp = &vmx->nested.msrs.procbased_ctls_high;
> -		break;
> -	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
> -		lowp = &vmx->nested.msrs.exit_ctls_low;
> -		highp = &vmx->nested.msrs.exit_ctls_high;
> -		break;
> -	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
> -		lowp = &vmx->nested.msrs.entry_ctls_low;
> -		highp = &vmx->nested.msrs.entry_ctls_high;
> -		break;
> -	case MSR_IA32_VMX_PROCBASED_CTLS2:
> -		lowp = &vmx->nested.msrs.secondary_ctls_low;
> -		highp = &vmx->nested.msrs.secondary_ctls_high;
> -		break;
> -	default:
> -		BUG();
> -	}
> +	vmx_get_control_msr(&vmcs_config.nested, msr_index, &lowp, &highp);
>  
>  	supported = vmx_control_msr(*lowp, *highp);
>  
> @@ -1287,6 +1293,7 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
>  	if (!is_bitwise_subset(supported, data, GENMASK_ULL(63, 32)))
>  		return -EINVAL;
>  
> +	vmx_get_control_msr(&vmx->nested.msrs, msr_index, &lowp, &highp);
>  	*lowp = data;
>  	*highp = data >> 32;
>  	return 0;
> @@ -1300,10 +1307,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
>  		BIT_ULL(28) | BIT_ULL(29) | BIT_ULL(30) |
>  		/* reserved */
>  		GENMASK_ULL(13, 9) | BIT_ULL(31);
> -	u64 vmx_misc;
> -
> -	vmx_misc = vmx_control_msr(vmx->nested.msrs.misc_low,
> -				   vmx->nested.msrs.misc_high);
> +	u64 vmx_misc = vmx_control_msr(vmcs_config.nested.misc_low,
> +				       vmcs_config.nested.misc_high);
>  
>  	if (!is_bitwise_subset(vmx_misc, data, feature_and_reserved_bits))
>  		return -EINVAL;
> @@ -1331,10 +1336,8 @@ static int vmx_restore_vmx_misc(struct vcpu_vmx *vmx, u64 data)
>  
>  static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
>  {
> -	u64 vmx_ept_vpid_cap;
> -
> -	vmx_ept_vpid_cap = vmx_control_msr(vmx->nested.msrs.ept_caps,
> -					   vmx->nested.msrs.vpid_caps);
> +	u64 vmx_ept_vpid_cap = vmx_control_msr(vmcs_config.nested.ept_caps,
> +					       vmcs_config.nested.vpid_caps);
>  
>  	/* Every bit is either reserved or a feature bit. */
>  	if (!is_bitwise_subset(vmx_ept_vpid_cap, data, -1ULL))
> @@ -1345,20 +1348,21 @@ static int vmx_restore_vmx_ept_vpid_cap(struct vcpu_vmx *vmx, u64 data)
>  	return 0;
>  }
>  
> +static u64 *vmx_get_fixed0_msr(struct nested_vmx_msrs *msrs, u32 msr_index)
> +{
> +	switch (msr_index) {
> +	case MSR_IA32_VMX_CR0_FIXED0:
> +		return &msrs->cr0_fixed0;
> +	case MSR_IA32_VMX_CR4_FIXED0:
> +		return &msrs->cr4_fixed0;
> +	default:
> +		BUG();
> +	}
> +}
> +
>  static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
>  {
> -	u64 *msr;
> -
> -	switch (msr_index) {
> -	case MSR_IA32_VMX_CR0_FIXED0:
> -		msr = &vmx->nested.msrs.cr0_fixed0;
> -		break;
> -	case MSR_IA32_VMX_CR4_FIXED0:
> -		msr = &vmx->nested.msrs.cr4_fixed0;
> -		break;
> -	default:
> -		BUG();
> -	}
> +	const u64 *msr = vmx_get_fixed0_msr(&vmcs_config.nested, msr_index);
>  
>  	/*
>  	 * 1 bits (which indicates bits which "must-be-1" during VMX operation)
> @@ -1367,7 +1371,7 @@ static int vmx_restore_fixed0_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
>  	if (!is_bitwise_subset(data, *msr, -1ULL))
>  		return -EINVAL;
>  
> -	*msr = data;
> +	*vmx_get_fixed0_msr(&vmx->nested.msrs, msr_index) = data;
>  	return 0;
>  }
>  
> @@ -1428,7 +1432,7 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
>  		vmx->nested.msrs.vmcs_enum = data;
>  		return 0;
>  	case MSR_IA32_VMX_VMFUNC:
> -		if (data & ~vmx->nested.msrs.vmfunc_controls)
> +		if (data & ~vmcs_config.nested.vmfunc_controls)
>  			return -EINVAL;
>  		vmx->nested.msrs.vmfunc_controls = data;
>  		return 0;
> -- 
> 2.36.1.255.ge46751e96f-goog
> 

Thread overview: 31+ messages
2022-06-07 21:35 [PATCH v5 00/15] KVM: nVMX: VMX MSR quirk+fixes, CR4 fixes Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 01/15] KVM: x86: Split kvm_is_valid_cr4() and export only the non-vendor bits Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 02/15] KVM: nVMX: Account for KVM reserved CR4 bits in consistency checks Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 03/15] KVM: nVMX: Inject #UD if VMXON is attempted with incompatible CR0/CR4 Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 04/15] KVM: nVMX: Rename handle_vm{on,off}() to handle_vmx{on,off}() Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 05/15] KVM: nVMX: Let userspace set nVMX MSR to any _host_ supported value Sean Christopherson
2022-10-31 16:39   ` Yu Zhang [this message]
2022-10-31 17:11     ` Sean Christopherson
2022-11-01 10:18       ` Yu Zhang
2022-11-01 17:58         ` Sean Christopherson
2022-11-02  8:54           ` Yu Zhang
2022-11-03 16:53             ` Sean Christopherson
2022-11-07  8:28               ` Yu Zhang
2022-11-07 15:06                 ` Sean Christopherson
2022-11-08 10:21                   ` Yu Zhang
2022-11-08 18:35                     ` Sean Christopherson
2022-11-10  8:44                       ` Yu Zhang
2022-11-10 16:08                         ` Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 06/15] KVM: nVMX: Keep KVM updates to BNDCFGS ctrl bits across MSR write Sean Christopherson
2022-07-22  9:06   ` Paolo Bonzini
2022-06-07 21:35 ` [PATCH v5 07/15] KVM: VMX: Add helper to check if the guest PMU has PERF_GLOBAL_CTRL Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 08/15] KVM: nVMX: Keep KVM updates to PERF_GLOBAL_CTRL ctrl bits across MSR write Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 09/15] KVM: nVMX: Drop nested_vmx_pmu_refresh() Sean Christopherson
2022-06-07 21:35 ` [PATCH v5 10/15] KVM: nVMX: Add a quirk for KVM tweaks to VMX MSRs Sean Christopherson
2022-06-07 21:36 ` [PATCH v5 11/15] KVM: nVMX: Set UMIP bit CR4_FIXED1 MSR when emulating UMIP Sean Christopherson
2022-07-22  9:49   ` Paolo Bonzini
2022-06-07 21:36 ` [PATCH v5 12/15] KVM: nVMX: Extend VMX MSRs quirk to CR0/4 fixed1 bits Sean Christopherson
2022-07-22  9:50   ` Paolo Bonzini
2022-06-07 21:36 ` [PATCH v5 13/15] KVM: selftests: Add test to verify KVM's VMX MSRs quirk for controls Sean Christopherson
2022-06-07 21:36 ` [PATCH v5 14/15] KVM: selftests: Extend VMX MSRs test to cover CR4_FIXED1 (and its quirks) Sean Christopherson
2022-06-07 21:36 ` [PATCH v5 15/15] KVM: selftests: Verify VMX MSRs can be restored to KVM-supported values Sean Christopherson
