From: Xiaoyao Li <xiaoyao.li@intel.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"seanjc@google.com" <seanjc@google.com>,
"Zhao, Yan Y" <yan.y.zhao@intel.com>
Cc: "Shutemov, Kirill" <kirill.shutemov@intel.com>,
"quic_eberman@quicinc.com" <quic_eberman@quicinc.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"Hansen, Dave" <dave.hansen@intel.com>,
"david@redhat.com" <david@redhat.com>,
"thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
"tabba@google.com" <tabba@google.com>,
"Li, Zhiquan1" <zhiquan1.li@intel.com>,
"Du, Fan" <fan.du@intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"michael.roth@amd.com" <michael.roth@amd.com>,
"Weiny, Ira" <ira.weiny@intel.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"binbin.wu@linux.intel.com" <binbin.wu@linux.intel.com>,
"ackerleytng@google.com" <ackerleytng@google.com>,
"Yamahata, Isaku" <isaku.yamahata@intel.com>,
"Peng, Chao P" <chao.p.peng@intel.com>,
"Annapurve, Vishal" <vannapurve@google.com>,
"jroedel@suse.de" <jroedel@suse.de>,
"Miao, Jun" <jun.miao@intel.com>,
"pgonda@google.com" <pgonda@google.com>,
"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH 12/21] KVM: TDX: Determine max mapping level according to vCPU's ACCEPT level
Date: Fri, 16 May 2025 14:12:21 +0800
Message-ID: <f105f60b-5823-451f-ae26-8a103aa003d4@intel.com>
In-Reply-To: <7307f6308d6e506bad00749307400fc6e65bc6e4.camel@intel.com>
On 5/14/2025 5:20 AM, Edgecombe, Rick P wrote:
> On Thu, 2025-04-24 at 11:07 +0800, Yan Zhao wrote:
>> Determine the max mapping level of a private GFN according to the vCPU's
>> ACCEPT level specified in the TDCALL TDG.MEM.PAGE.ACCEPT.
>>
>> When an EPT violation occurs due to a vCPU invoking TDG.MEM.PAGE.ACCEPT
>> before any actual memory access, the vCPU's ACCEPT level is available in
>> the extended exit qualification. Set the vCPU's ACCEPT level as the max
>> mapping level for the faulting GFN. This is necessary because if KVM
>> specifies a mapping level greater than the vCPU's ACCEPT level, and no
>> other vCPUs are accepting at KVM's mapping level, TDG.MEM.PAGE.ACCEPT will
>> produce another EPT violation on the vCPU after re-entering the TD, with
>> the vCPU's ACCEPT level indicated in the extended exit qualification.
>
> Maybe a little more info would help. It's because the TDX module wants to
> "accept" the smaller size in the real S-EPT, but KVM created a huge page. It
> can't demote to do this without help from KVM.
>
>>
>> Introduce "violation_gfn_start", "violation_gfn_end", and
>> "violation_request_level" in "struct vcpu_tdx" to pass the vCPU's ACCEPT
>> level to TDX's private_max_mapping_level hook for determining the max
>> mapping level.
>>
>> Instead of taking some bits of the error_code passed to
>> kvm_mmu_page_fault() and requiring KVM MMU core to check the error_code for
>> a fault's max_level, having TDX's private_max_mapping_level hook check for
>> request level avoids changes to the KVM MMU core. This approach also
>> accommodates future scenarios where the requested mapping level is unknown
>> at the start of tdx_handle_ept_violation() (i.e., before invoking
>> kvm_mmu_page_fault()).
>>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
>> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
>> ---
>> arch/x86/kvm/vmx/tdx.c | 36 +++++++++++++++++++++++++++++++++++-
>> arch/x86/kvm/vmx/tdx.h | 4 ++++
>> arch/x86/kvm/vmx/tdx_arch.h | 3 +++
>> 3 files changed, 42 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
>> index 86775af85cd8..dd63a634e633 100644
>> --- a/arch/x86/kvm/vmx/tdx.c
>> +++ b/arch/x86/kvm/vmx/tdx.c
>> @@ -1859,10 +1859,34 @@ static inline bool tdx_is_sept_violation_unexpected_pending(struct kvm_vcpu *vcp
>> return !(eq & EPT_VIOLATION_PROT_MASK) && !(eq & EPT_VIOLATION_EXEC_FOR_RING3_LIN);
>> }
>>
>> +static inline void tdx_get_accept_level(struct kvm_vcpu *vcpu, gpa_t gpa)
>> +{
>> + struct vcpu_tdx *tdx = to_tdx(vcpu);
>> + int level = -1;
>> +
>> + u64 eeq_type = tdx->ext_exit_qualification & TDX_EXT_EXIT_QUAL_TYPE_MASK;
>> +
>> + u32 eeq_info = (tdx->ext_exit_qualification & TDX_EXT_EXIT_QUAL_INFO_MASK) >>
>> + TDX_EXT_EXIT_QUAL_INFO_SHIFT;
>> +
>> + if (eeq_type == TDX_EXT_EXIT_QUAL_TYPE_ACCEPT) {
>> + level = (eeq_info & GENMASK(2, 0)) + 1;
>> +
>> + tdx->violation_gfn_start = gfn_round_for_level(gpa_to_gfn(gpa), level);
>> + tdx->violation_gfn_end = tdx->violation_gfn_start + KVM_PAGES_PER_HPAGE(level);
>> + tdx->violation_request_level = level;
>> + } else {
>> + tdx->violation_gfn_start = -1;
>> + tdx->violation_gfn_end = -1;
>> + tdx->violation_request_level = -1;
>
> We had some internal conversations on how KVM used to stuff a bunch of fault
> stuff in the vcpu so it didn't have to pass it around, but now uses the fault
> struct for this. The point was (IIRC) to prevent stale data from getting
> confused on future faults, and it being hard to track what came from where.
>
> In the TDX case, I think the potential for confusion is still there. The MMU
> code could use stale data if an accept EPT violation happens and control returns
> to userspace, at which point userspace does a KVM_PRE_FAULT_MEMORY. Then it will
> see the stale tdx->violation_*. Not exactly a common case, but better to not
> have loose ends if we can avoid it.
>
> Looking more closely, I don't see why it's too hard to pass in a max_fault_level
> into the fault struct. Totally untested rough idea, what do you think?
The original huge page support patch did encode the level info in the
error_code, so this approach has my vote:
https://lore.kernel.org/all/4d61104bff388a081ff8f6ae4ac71e05a13e53c3.1708933624.git.isaku.yamahata@intel.com/
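
On the TDX side, the ACCEPT-level decoding this patch does in
tdx_get_accept_level() could then feed straight into the max_fault_level
argument you add below, instead of always passing KVM_MAX_HUGEPAGE_LEVEL.
A rough, untested sketch (the helper name is made up; it reuses the
ext_exit_qualification fields and masks from this patch):

static u8 tdx_violation_max_fault_level(struct kvm_vcpu *vcpu)
{
	struct vcpu_tdx *tdx = to_tdx(vcpu);
	u64 eeq_type = tdx->ext_exit_qualification & TDX_EXT_EXIT_QUAL_TYPE_MASK;
	u32 eeq_info = (tdx->ext_exit_qualification & TDX_EXT_EXIT_QUAL_INFO_MASK) >>
		       TDX_EXT_EXIT_QUAL_INFO_SHIFT;

	/* Violations not caused by TDG.MEM.PAGE.ACCEPT don't constrain the level. */
	if (eeq_type != TDX_EXT_EXIT_QUAL_TYPE_ACCEPT)
		return KVM_MAX_HUGEPAGE_LEVEL;

	/* Same decoding of bits 2:0 as tdx_get_accept_level() above. */
	return (eeq_info & GENMASK(2, 0)) + 1;
}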
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index faae82eefd99..3dc476da6391 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -282,7 +282,11 @@ enum x86_intercept_stage;
> * when the guest was accessing private memory.
> */
> #define PFERR_PRIVATE_ACCESS BIT_ULL(49)
> -#define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
> +
> +#define PFERR_FAULT_LEVEL_MASK (BIT_ULL(50) | BIT_ULL(51) | BIT_ULL(52))
> +#define PFERR_FAULT_LEVEL_SHIFT 50
> +
> +#define PFERR_SYNTHETIC_MASK (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS | PFERR_FAULT_LEVEL_MASK)
>
> /* apic attention bits */
> #define KVM_APIC_CHECK_VAPIC 0
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 1c1764f46e66..bdb1b0eabd67 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -361,7 +361,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> .nx_huge_page_workaround_enabled =
> is_nx_huge_page_enabled(vcpu->kvm),
>
> - .max_level = KVM_MAX_HUGEPAGE_LEVEL,
> + .max_level = (err & PFERR_FAULT_LEVEL_MASK) >> PFERR_FAULT_LEVEL_SHIFT,
> .req_level = PG_LEVEL_4K,
> .goal_level = PG_LEVEL_4K,
> .is_private = err & PFERR_PRIVATE_ACCESS,
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> index 8f46a06e2c44..2f22b294ef8b 100644
> --- a/arch/x86/kvm/vmx/common.h
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -83,7 +83,8 @@ static inline bool vt_is_tdx_private_gpa(struct kvm *kvm, gpa_t gpa)
> }
>
> static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
> - unsigned long exit_qualification)
> + unsigned long exit_qualification,
> + u8 max_fault_level)
> {
> u64 error_code;
>
> @@ -107,6 +108,10 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
> if (vt_is_tdx_private_gpa(vcpu->kvm, gpa))
> error_code |= PFERR_PRIVATE_ACCESS;
>
> + BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL >= (1 << hweight64(PFERR_FAULT_LEVEL_MASK)));
> +
> + error_code |= (u64)max_fault_level << PFERR_FAULT_LEVEL_SHIFT;
> +
> return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
> }
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index e994a6c08a75..19047de4d98d 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -2027,7 +2027,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
> * handle retries locally in their EPT violation handlers.
> */
> while (1) {
> - ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual);
> + ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual, KVM_MAX_HUGEPAGE_LEVEL);
>
> if (ret != RET_PF_RETRY || !local_retry)
> break;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ef2d7208dd20..b70a2ff35884 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5782,7 +5782,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
> if (unlikely(allow_smaller_maxphyaddr && !kvm_vcpu_is_legal_gpa(vcpu, gpa)))
> return kvm_emulate_instruction(vcpu, 0);
>
> - return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
> + return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, KVM_MAX_HUGEPAGE_LEVEL);
> }
>
> static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
>
>
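
With a helper like the tdx_violation_max_fault_level() sketch above,
tdx_handle_ept_violation() could pass the decoded level through instead of
the hard-coded KVM_MAX_HUGEPAGE_LEVEL, e.g. (again untested):

	ret = __vmx_handle_ept_violation(vcpu, gpa, exit_qual,
					 tdx_violation_max_fault_level(vcpu));

That would keep the ACCEPT level out of struct vcpu_tdx entirely, so there
is nothing left to go stale across a later KVM_PRE_FAULT_MEMORY.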