public inbox for kvm@vger.kernel.org
From: Xiaoyao Li <xiaoyao.li@intel.com>
To: Adrian Hunter <adrian.hunter@intel.com>,
	pbonzini@redhat.com, seanjc@google.com
Cc: kvm@vger.kernel.org, rick.p.edgecombe@intel.com,
	kai.huang@intel.com, reinette.chatre@intel.com,
	tony.lindgren@linux.intel.com, binbin.wu@linux.intel.com,
	dmatlack@google.com, isaku.yamahata@intel.com,
	nik.borisov@suse.com, linux-kernel@vger.kernel.org,
	yan.y.zhao@intel.com, chao.gao@intel.com,
	weijiang.yang@intel.com
Subject: Re: [PATCH V2 02/12] KVM: x86: Allow the use of kvm_load_host_xsave_state() with guest_state_protected
Date: Tue, 25 Feb 2025 13:56:41 +0800	[thread overview]
Message-ID: <27e31afd-2f8e-4f2e-92e3-92e52b956751@intel.com> (raw)
In-Reply-To: <96cc48a7-157b-4c42-a7d4-79181f55eed8@intel.com>

On 2/24/2025 7:38 PM, Adrian Hunter wrote:
> On 20/02/25 12:50, Xiaoyao Li wrote:
>> On 1/29/2025 5:58 PM, Adrian Hunter wrote:
>>> From: Sean Christopherson <seanjc@google.com>
>>>
>>> Allow the use of kvm_load_host_xsave_state() with
>>> vcpu->arch.guest_state_protected == true. This will allow TDX to reuse
>>> kvm_load_host_xsave_state() instead of creating its own version.
>>>
>>> For consistency, amend kvm_load_guest_xsave_state() also.
>>>
>>> Ensure that guest state that kvm_load_host_xsave_state() depends upon,
>>> such as MSR_IA32_XSS, cannot be changed by user space, if
>>> guest_state_protected.
>>>
>>> [Adrian: wrote commit message]
>>>
>>> Link: https://lore.kernel.org/r/Z2GiQS_RmYeHU09L@google.com
>>> Signed-off-by: Sean Christopherson <seanjc@google.com>
>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>> ---
>>> TD vcpu enter/exit v2:
>>>    - New patch
>>> ---
>>>    arch/x86/kvm/svm/svm.c |  7 +++++--
>>>    arch/x86/kvm/x86.c     | 18 +++++++++++-------
>>>    2 files changed, 16 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>>> index 7640a84e554a..b4bcfe15ad5e 100644
>>> --- a/arch/x86/kvm/svm/svm.c
>>> +++ b/arch/x86/kvm/svm/svm.c
>>> @@ -4253,7 +4253,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
>>>            svm_set_dr6(svm, DR6_ACTIVE_LOW);
>>>          clgi();
>>> -    kvm_load_guest_xsave_state(vcpu);
>>> +
>>> +    if (!vcpu->arch.guest_state_protected)
>>> +        kvm_load_guest_xsave_state(vcpu);
>>>          kvm_wait_lapic_expire(vcpu);
>>>
>>> @@ -4282,7 +4284,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
>>>        if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
>>>            kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
>>>
>>> -    kvm_load_host_xsave_state(vcpu);
>>> +    if (!vcpu->arch.guest_state_protected)
>>> +        kvm_load_host_xsave_state(vcpu);
>>>        stgi();
>>>          /* Any pending NMI will happen here */
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index bbb6b7f40b3a..5cf9f023fd4b 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -1169,11 +1169,9 @@ EXPORT_SYMBOL_GPL(kvm_lmsw);
>>>      void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
>>>    {
>>> -    if (vcpu->arch.guest_state_protected)
>>> -        return;
>>> +    WARN_ON_ONCE(vcpu->arch.guest_state_protected);
>>>          if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {
>>> -
>>>            if (vcpu->arch.xcr0 != kvm_host.xcr0)
>>>                xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
>>>
>>> @@ -1192,13 +1190,11 @@ EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
>>>      void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
>>>    {
>>> -    if (vcpu->arch.guest_state_protected)
>>> -        return;
>>> -
>>>        if (cpu_feature_enabled(X86_FEATURE_PKU) &&
>>>            ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
>>>             kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE))) {
>>> -        vcpu->arch.pkru = rdpkru();
>>> +        if (!vcpu->arch.guest_state_protected)
>>> +            vcpu->arch.pkru = rdpkru();
>>
>> this needs justification.
> 
> It was proposed by Sean here:
> 
> 	https://lore.kernel.org/all/Z2WZ091z8GmGjSbC@google.com/
> 
> which is part of the email thread referenced by the "Link:" tag above

IMHO, this change needs to be put in patch 07, which is the better place 
to justify it.

>>
>>>            if (vcpu->arch.pkru != vcpu->arch.host_pkru)
>>>                wrpkru(vcpu->arch.host_pkru);
>>>        }
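As an aside, the guarded PKRU handling in the hunk above can be modeled with a small userspace sketch (hypothetical simplified types and stub rdpkru()/wrpkru(); the real code lives in arch/x86/kvm/x86.c): when guest state is protected, the guest's PKRU cannot be observed, so the helper skips rdpkru() and compares the cached value against the host value instead.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified stand-ins for the KVM structures. */
struct vcpu_arch {
	bool guest_state_protected;
	unsigned int pkru;       /* cached guest PKRU */
	unsigned int host_pkru;  /* host PKRU saved before guest entry */
};

static unsigned int cpu_pkru;           /* models the PKRU register */
static unsigned int rdpkru(void)        { return cpu_pkru; }
static void wrpkru(unsigned int val)    { cpu_pkru = val; }

/*
 * Models the hunk under discussion: for a protected guest the live
 * PKRU value is not readable as guest state, so skip rdpkru() and
 * use the cached vcpu->arch.pkru when deciding whether to restore
 * the host value.
 */
static void load_host_pkru(struct vcpu_arch *arch)
{
	if (!arch->guest_state_protected)
		arch->pkru = rdpkru();
	if (arch->pkru != arch->host_pkru)
		wrpkru(arch->host_pkru);
}
```

Either way, the host PKRU value is restored; only the rdpkru() read is skipped for protected guests.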
>>
>>
>>> @@ -3916,6 +3912,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>            if (!msr_info->host_initiated &&
>>>                !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
>>>                return 1;
>>> +
>>> +        if (vcpu->arch.guest_state_protected)
>>> +            return 1;
>>> +
>>
>> this and the change below need to be in a separate patch, so that we can discuss them independently.
>>
>> I see no reason to make MSR_IA32_XSS more special than other MSRs. When guest_state_protected, most of the MSRs that aren't emulated by KVM are inaccessible to KVM.
> 
> Yes, TDX will block access to MSR_IA32_XSS anyway because
> tdx_has_emulated_msr() will return false for MSR_IA32_XSS.
> 
> However kvm_load_host_xsave_state() is not TDX-specific code and it
> relies upon vcpu->arch.ia32_xss, so there is reason to block
> access to it when vcpu->arch.guest_state_protected is true.

It is TDX-specific logic that requires vcpu->arch.ia32_xss to remain 
unchanged, since TDX is going to use kvm_load_host_xsave_state() to 
restore host xsave state and relies on vcpu->arch.ia32_xss always being 
the value of XFAM & XSS_MASK.

So please put this change into the TDX-specific patch with a clear 
justification.
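For reference, the invariant being argued over here — once guest state is protected, vcpu->arch.ia32_xss must stay fixed because kvm_load_host_xsave_state() depends on it — can be sketched in a minimal userspace model (hypothetical names and values, not the actual KVM code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative value only; for TDX the real value is XFAM & XSS_MASK. */
#define TD_FIXED_XSS 0x100ULL

/* Hypothetical simplified stand-in for the KVM structure. */
struct vcpu_arch {
	bool guest_state_protected;
	uint64_t ia32_xss;
};

/* Returns 1 to signal failure, mirroring kvm_set_msr_common(). */
static int set_msr_ia32_xss(struct vcpu_arch *arch, uint64_t data)
{
	/*
	 * Once guest state is protected, ia32_xss is fixed and
	 * kvm_load_host_xsave_state() relies on it, so refuse any
	 * further writes (including host-initiated ones).
	 */
	if (arch->guest_state_protected)
		return 1;
	arch->ia32_xss = data;
	return 0;
}
```

The disagreement in the thread is not about this behavior but about which patch should carry it and justify it.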

>>
>>>            /*
>>>             * KVM supports exposing PT to the guest, but does not support
>>>             * IA32_XSS[bit 8]. Guests have to use RDMSR/WRMSR rather than
>>> @@ -4375,6 +4375,10 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>            if (!msr_info->host_initiated &&
>>>                !guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
>>>                return 1;
>>> +
>>> +        if (vcpu->arch.guest_state_protected)
>>> +            return 1;
>>> +
>>>            msr_info->data = vcpu->arch.ia32_xss;
>>>            break;
>>>        case MSR_K7_CLK_CTL:
>>
> 


Thread overview: 38+ messages
2025-01-29  9:58 [PATCH V2 00/12] KVM: TDX: TD vcpu enter/exit Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 01/12] x86/virt/tdx: Make tdh_vp_enter() noinstr Adrian Hunter
2025-02-16 18:26   ` Paolo Bonzini
2025-02-27 14:13     ` Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 02/12] KVM: x86: Allow the use of kvm_load_host_xsave_state() with guest_state_protected Adrian Hunter
2025-02-20 10:50   ` Xiaoyao Li
2025-02-24 11:38     ` Adrian Hunter
2025-02-25  5:56       ` Xiaoyao Li [this message]
2025-02-27 14:14         ` Adrian Hunter
2025-03-06 18:04     ` Paolo Bonzini
2025-03-06 20:43       ` Sean Christopherson
2025-03-06 22:34         ` Paolo Bonzini
2025-03-07 23:04           ` Sean Christopherson
2025-03-10 19:08             ` Paolo Bonzini
2025-01-29  9:58 ` [PATCH V2 03/12] KVM: TDX: Set arch.has_protected_state to true Adrian Hunter
2025-02-20 12:35   ` Xiaoyao Li
2025-02-27 14:17     ` Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 04/12] KVM: VMX: Move common fields of struct vcpu_{vmx,tdx} to a struct Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 05/12] KVM: TDX: Implement TDX vcpu enter/exit path Adrian Hunter
2025-02-20 13:16   ` Xiaoyao Li
2025-02-24 12:27     ` Adrian Hunter
2025-02-25  6:15       ` Xiaoyao Li
2025-02-27 18:37         ` Adrian Hunter
2025-03-06 18:19           ` Paolo Bonzini
2025-03-06 19:13             ` Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 06/12] KVM: TDX: vcpu_run: save/restore host state(host kernel gs) Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 07/12] KVM: TDX: restore host xsave state when exit from the guest TD Adrian Hunter
2025-02-25  6:43   ` Xiaoyao Li
2025-02-27 14:29     ` Adrian Hunter
2025-02-28  1:58       ` Xiaoyao Li
2025-01-29  9:58 ` [PATCH V2 08/12] KVM: x86: Allow to update cached values in kvm_user_return_msrs w/o wrmsr Adrian Hunter
2025-02-25  7:00   ` Xiaoyao Li
2025-01-29  9:58 ` [PATCH V2 09/12] KVM: TDX: restore user ret MSRs Adrian Hunter
2025-02-25  7:01   ` Xiaoyao Li
2025-02-27 14:19     ` Adrian Hunter
2025-01-29  9:58 ` [PATCH V2 10/12] KVM: TDX: Disable support for TSX and WAITPKG Adrian Hunter
2025-01-29  9:59 ` [PATCH V2 11/12] KVM: TDX: Save and restore IA32_DEBUGCTL Adrian Hunter
2025-01-29  9:59 ` [PATCH V2 12/12] KVM: x86: Add a switch_db_regs flag to handle TDX's auto-switched behavior Adrian Hunter
