kvm.vger.kernel.org archive mirror
* [PATCH v6 00/16] KVM TDX: TDP MMU: large page support
@ 2023-11-07 15:00 isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists isaku.yamahata
                   ` (15 more replies)
  0 siblings, 16 replies; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang

From: Isaku Yamahata <isaku.yamahata@intel.com>

This patch series is based on "v17 KVM TDX: basic feature support".  It
implements large page support for the TDP MMU by allowing large pages to be
populated and by splitting them when necessary.

Feedback on options for merging sub-pages into a large page is welcome.

Remaining TODOs
===============
* 1GB huge page support.  This is out of the scope of this patch series and
  will be addressed as a follow-up.

Splitting large pages when necessary
====================================
* KVM already tracks whether a GFN is private or shared.  When the attribute
  changes, update lpage_info to disallow a large page for the affected range
  (see the sketch after this list).
* TDX provides the page level on Secure EPT violation.  Pass the page level
  around to the lower-level functions that need it.
* Even if the page is a large page on the host, only some of its sub-pages may
  be mapped at the EPT level.  In such cases, give up on mapping a large page
  and step down to the sub-page level, unlike the conventional EPT.
* When zapping an SPTE that maps a large page, split it and then zap it,
  unlike the conventional EPT, because otherwise the protected page contents
  would be lost.
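
As an illustration only, here is a minimal sketch of the check that disallows
a large private mapping when the covering 2MB range has mixed private/shared
sub-pages.  kvm_hugepage_test_mixed() is introduced later in this series; the
wrapper function below is hypothetical:

  /*
   * Sketch: pick the maximum mapping level for a private GFN.  Fall back
   * to 4KB if the covering 2MB range mixes private and shared pages.
   */
  static enum pg_level private_gfn_max_level(struct kvm_memory_slot *slot,
                                             gfn_t gfn)
  {
          if (kvm_hugepage_test_mixed(slot, gfn, PG_LEVEL_2M))
                  return PG_LEVEL_4K;
          return PG_LEVEL_2M;
  }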

Merging small pages into a large page if possible
=================================================
On a normal EPT violation, after mapping the page, check whether the
surrounding small pages can be merged into a large page.

TDX operation
=============
The following describes the TDX operation procedures.

* EPT violation trick
The usual trick (zapping an EPT entry to trigger an EPT violation) doesn't
work for TDX.  For TDX, zapping a page loses the contents of the protected
page because the protected guest page becomes dis-associated from the guest
TD.  Instead, TDX provides a different way to trigger an EPT violation
without losing the page contents, so that the VMM can detect guest TD
activity: blocking and unblocking a Secure-EPT entry with TDH.MEM.RANGE.BLOCK
and TDH.MEM.RANGE.UNBLOCK.  They correspond to clearing and setting the
present bit in an EPT entry while the page contents are kept.  With
TDH.MEM.RANGE.BLOCK and a TLB shoot-down, the VMM can cause the guest TD to
trigger an EPT violation.  After that, the VMM can unblock the entry with
TDH.MEM.RANGE.UNBLOCK and resume guest TD execution.  The procedure is as
follows.

  - Block Secure-EPT entry by TDH.MEM.RANGE.BLOCK.
  - TLB shoot down.
  - Wait for guest TD to trigger EPT violation.
  - Unblock Secure-EPT entry by TDH.MEM.RANGE.UNBLOCK to resume the guest TD.
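
For illustration, the same sequence as a rough pseudo-C sketch.  The
tdh_mem_range_block() wrapper exists in tdx_ops.h; tdh_mem_range_unblock()
and the exact arguments are assumptions here, and all error handling is
omitted:

  /* Sketch: make the guest TD re-fault on a private GPA range without
   * losing the page contents. */
  static void tdx_track_sketch(struct kvm *kvm, hpa_t tdr, gpa_t gpa, int level)
  {
          struct tdx_module_args out;

          /* Clear the present state; the page contents are preserved. */
          tdh_mem_range_block(tdr, gpa, level, &out);     /* args simplified */
          /* Invalidate stale translations so the guest actually faults. */
          kvm_flush_remote_tlbs(kvm);
          /* ... guest TD touches the range, takes a Secure-EPT violation ... */
          /* Make the range accessible again; guest TD execution resumes. */
          tdh_mem_range_unblock(tdr, gpa, level, &out);   /* assumed wrapper */
  }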

* Merging sub-pages into a large page
The following steps are needed.
- Ensure that all sub-pages are mapped.
- TLB shoot down.
- Merge the sub-pages into a large page (TDH.MEM.PAGE.PROMOTE).
  This requires that all sub-pages are mapped.
- Cache flush the Secure EPT page that was used to map the sub-pages.
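
For illustration, a pseudo-C sketch of the promotion path.  The
tdh_mem_page_promote() wrapper and its arguments are assumptions; the real
implementation is in the "Implement merge pages into a large page" patch of
this series:

  /* Sketch: merge 512 mapped 4KB private pages into one 2MB mapping. */
  static int tdx_merge_2m_sketch(struct kvm *kvm, hpa_t tdr, gpa_t gpa,
                                 hpa_t sept_page)
  {
          struct tdx_module_args out;
          u64 err;

          /* Precondition: all 512 sub-pages are already mapped. */
          tdh_mem_range_block(tdr, gpa, /* 2MB level */ 1, &out); /* simplified */
          kvm_flush_remote_tlbs(kvm);

          /* Merge the sub-pages into a single 2MB Secure-EPT mapping. */
          err = tdh_mem_page_promote(tdr, gpa, &out);     /* assumed wrapper */
          if (err)
                  return -EIO;

          /* Flush the cache of the Secure-EPT page that mapped the 4KB
           * entries before it is reclaimed (simplified). */
          tdx_clflush_page(sept_page, PG_LEVEL_4K);
          return 0;
  }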

Thanks,
Changes from v5:
- Switched to TDX module 1.5 base.

Changes from v4:
- Rebased to v16 TDX KVM v6.6-rc2 base

Changes from v3:
- Rebased to v15 TDX KVM v6.5-rc1 base

Changes from v2:
- implemented page merging path
- rebased to TDX KVM v11

Changes from v1:
- implemented page merging path
- rebased to UPM v10
- rebased to TDX KVM v10
- rebased to kvm.git queue + v6.1-rc8

Isaku Yamahata (4):
  KVM: x86/tdp_mmu: Allocate private page table for large page split
  KVM: x86/tdp_mmu: Try to merge pages into a large page
  KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page
  KVM: x86/mmu: Make kvm fault handler aware of large page of private
    memslot

Xiaoyao Li (12):
  KVM: TDP_MMU: Go to next level if smaller private mapping exists
  KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
  KVM: TDX: Pass KVM page level to tdh_mem_page_add() and
    tdh_mem_page_aug()
  KVM: TDX: Pass size to tdx_measure_page()
  KVM: TDX: Pass size to reclaim_page()
  KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large
    page
  KVM: MMU: Introduce level info in PFERR code
  KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
  KVM: TDX: Pass desired page level in err code for page fault handler
  KVM: x86/tdp_mmu: Split the large page when zap leaf
  KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it
    converted to shared
  KVM: TDX: Allow 2MB large page for TD GUEST

 arch/x86/include/asm/kvm-x86-ops.h |   3 +
 arch/x86/include/asm/kvm_host.h    |  11 ++
 arch/x86/kvm/Kconfig               |   1 +
 arch/x86/kvm/mmu/mmu.c             |  45 +++--
 arch/x86/kvm/mmu/mmu_internal.h    |  35 +++-
 arch/x86/kvm/mmu/tdp_iter.c        |  37 +++-
 arch/x86/kvm/mmu/tdp_iter.h        |   2 +
 arch/x86/kvm/mmu/tdp_mmu.c         | 283 +++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/common.h          |   6 +-
 arch/x86/kvm/vmx/tdx.c             | 230 +++++++++++++++++------
 arch/x86/kvm/vmx/tdx_arch.h        |  21 +++
 arch/x86/kvm/vmx/tdx_errno.h       |   2 +
 arch/x86/kvm/vmx/tdx_ops.h         |  50 +++--
 arch/x86/kvm/vmx/vmx.c             |   2 +-
 14 files changed, 609 insertions(+), 119 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-16  1:32   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL isaku.yamahata
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

A private page cannot be mapped as a large page if any smaller mapping
already exists.

KVM has to wait for all of the not-yet-mapped smaller pages to be mapped,
and can then promote them to a larger mapping.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2c5257628881..d806574f7f2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		int r;
 
-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
 
 		/*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-16  5:36   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug() isaku.yamahata
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

tdh_mem_page_aug() will support 2MB large pages in the near future.  The
cache flush also needs to cover 2MB instead of 4KB in that case.  Introduce
a helper function that flushes the cache with page size info, in preparation
for large pages.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index fd73a1731bf8..e726102d3523 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -6,6 +6,7 @@
 
 #include <linux/compiler.h>
 
+#include <asm/pgtable_types.h>
 #include <asm/archrandom.h>
 #include <asm/cacheflush.h>
 #include <asm/asm.h>
@@ -62,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
 #endif
 
+static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
+{
+	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
+}
+
 /*
  * TDX module acquires its internal lock for resources.  It doesn't spin to get
  * locks because of its restrictions of allowed execution time.  Instead, it
@@ -94,21 +100,21 @@ static inline u64 tdx_seamcall_sept(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 
 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }
 
 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
 }
 
 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(page), PAGE_SIZE);
+	tdx_clflush_page(page, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
 }
 
@@ -126,21 +132,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,
 
 static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
 }
 
 static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 					struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }
 
 static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
 }
 
@@ -157,13 +163,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr)
 
 static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
 {
-	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	tdx_clflush_page(tdr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
 }
 
 static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
 {
-	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-16  8:18   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page() isaku.yamahata
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

Level info is needed in tdx_clflush_page() to flush the correct page size.

Besides, explicitly pass the level info to the SEAMCALL instead of assuming
it is zero, so that it works naturally when 2MB support lands.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c     |  7 ++++---
 arch/x86/kvm/vmx/tdx_ops.h | 19 ++++++++++++-------
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8b58d91bda4e..2d5c86e06c5f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1468,7 +1468,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	union tdx_sept_entry entry;
 	u64 err;
 
-	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
+	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
 		tdx_unpin(kvm, pfn);
 		return -EAGAIN;
@@ -1497,6 +1497,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 			     enum pg_level level, kvm_pfn_t pfn)
 {
+	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
@@ -1531,8 +1532,8 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	kvm_tdx->source_pa = INVALID_PAGE;
 
 	do {
-		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
-				       &out);
+		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, tdx_level, hpa,
+				       source_pa, &out);
 		/*
 		 * This path is executed during populating initial guest memory
 		 * image. i.e. before running any vcpu.  Race is rare.
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index e726102d3523..0f2df7198bde 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -63,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
 #endif
 
+static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
+{
+	return tdx_level + 1;
+}
+
 static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
 {
 	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
@@ -104,11 +109,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }
 
-static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
-				   struct tdx_module_args *out)
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
+				   hpa_t source, struct tdx_module_args *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source, out);
 }
 
 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
@@ -143,11 +148,11 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }
 
-static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
 				   struct tdx_module_args *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0, out);
 }
 
 static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page()
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (2 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug() isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-16  8:57   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page() isaku.yamahata
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

Extend tdx_measure_page() to take size info so that it can measure large
pages as well.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2d5c86e06c5f..a728175c4a6d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1434,13 +1434,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
-static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 {
 	struct tdx_module_args out;
 	u64 err;
 	int i;
 
-	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
+
+	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
 		err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
 		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
 			pr_tdx_error(TDH_MR_EXTEND, err, &out);
@@ -1544,7 +1546,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 		tdx_unpin(kvm, pfn);
 		return -EIO;
 	} else if (measure)
-		tdx_measure_page(kvm_tdx, gpa);
+		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
 
 	return 0;
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page()
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (3 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page() isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-19  6:42   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page isaku.yamahata
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

A 2MB large page can be tdh_mem_page_aug()'ed to a TD directly.  In that
case, the page needs to be reclaimed and cleared at 2MB size.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a728175c4a6d..0fca863faeee 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -200,12 +200,13 @@ static void tdx_disassociate_vp_on_cpu(struct kvm_vcpu *vcpu)
 	smp_call_function_single(cpu, tdx_disassociate_vp_arg, vcpu, 1);
 }
 
-static void tdx_clear_page(unsigned long page_pa)
+static void tdx_clear_page(unsigned long page_pa, int size)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	void *page = __va(page_pa);
 	unsigned long i;
 
+	WARN_ON_ONCE(size % PAGE_SIZE);
 	/*
 	 * When re-assign one page from old keyid to a new keyid, MOVDIR64B is
 	 * required to clear/write the page with new keyid to prevent integrity
@@ -214,7 +215,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	 * clflush doesn't flush cache with HKID set.  The cache line could be
 	 * poisoned (even without MKTME-i), clear the poison bit.
 	 */
-	for (i = 0; i < PAGE_SIZE; i += 64)
+	for (i = 0; i < size; i += 64)
 		movdir64b(page + i, zero_page);
 	/*
 	 * MOVDIR64B store uses WC buffer.  Prevent following memory reads
@@ -223,7 +224,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	__mb();
 }
 
-static int __tdx_reclaim_page(hpa_t pa)
+static int __tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	struct tdx_module_args out;
 	u64 err;
@@ -241,17 +242,19 @@ static int __tdx_reclaim_page(hpa_t pa)
 		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
 		return -EIO;
 	}
+	/* out.r8 == tdx sept page level */
+	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));
 
 	return 0;
 }
 
-static int tdx_reclaim_page(hpa_t pa)
+static int tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	int r;
 
-	r = __tdx_reclaim_page(pa);
+	r = __tdx_reclaim_page(pa, level);
 	if (!r)
-		tdx_clear_page(pa);
+		tdx_clear_page(pa, KVM_HPAGE_SIZE(level));
 	return r;
 }
 
@@ -265,7 +268,7 @@ static void tdx_reclaim_td_page(unsigned long td_page_pa)
 	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
 	 * cache doesn't need to be flushed again.
 	 */
-	if (tdx_reclaim_page(td_page_pa))
+	if (tdx_reclaim_page(td_page_pa, PG_LEVEL_4K))
 		/*
 		 * Leak the page on failure:
 		 * tdx_reclaim_page() returns an error if and only if there's an
@@ -497,7 +500,7 @@ void tdx_vm_free(struct kvm *kvm)
 
 	if (!kvm_tdx->tdr_pa)
 		return;
-	if (__tdx_reclaim_page(kvm_tdx->tdr_pa))
+	if (__tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K))
 		return;
 	/*
 	 * TDX module maps TDR with TDX global HKID.  TDX module may access TDR
@@ -510,7 +513,7 @@ void tdx_vm_free(struct kvm *kvm)
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return;
 	}
-	tdx_clear_page(kvm_tdx->tdr_pa);
+	tdx_clear_page(kvm_tdx->tdr_pa, PAGE_SIZE);
 
 	free_page((unsigned long)__va(kvm_tdx->tdr_pa));
 	kvm_tdx->tdr_pa = 0;
@@ -1597,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		 * The HKID assigned to this TD was already freed and cache
 		 * was already flushed. We don't have to flush again.
 		 */
-		err = tdx_reclaim_page(hpa);
+		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
 		tdx_unpin(kvm, pfn);
@@ -1630,7 +1633,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return -EIO;
 	}
-	tdx_clear_page(hpa);
+	tdx_clear_page(hpa, PAGE_SIZE);
 	tdx_unpin(kvm, pfn);
 	return 0;
 }
@@ -1742,7 +1745,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 	 * already flushed. We don't have to flush again.
 	 */
 	if (!is_hkid_assigned(kvm_tdx))
-		return tdx_reclaim_page(__pa(private_spt));
+		return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K);
 
 	/*
 	 * free_private_spt() is (obviously) called when a shadow page is being
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (4 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page() isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code isaku.yamahata
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

Allow large page level AUG and REMOVE for TDX pages.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 66 ++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0fca863faeee..31598b84811f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1454,11 +1454,12 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }
 
-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
+static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
 {
-	struct page *page = pfn_to_page(pfn);
+	int i;
 
-	put_page(page);
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		put_page(pfn_to_page(pfn + i));
 }
 
 static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
@@ -1475,7 +1476,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 
 	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EAGAIN;
 	}
 	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
@@ -1492,7 +1493,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	}
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	}
 
@@ -1528,7 +1529,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EINVAL;
 	}
 
@@ -1546,7 +1547,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1559,10 +1560,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, kvm_pfn_t pfn)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;
 
 	/*
 	 * Because restricted mem doesn't support page migration with
@@ -1572,7 +1570,8 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * TODO: Once restricted mem introduces callback on page migration,
 	 * implement it and remove get_page/put_page().
 	 */
-	get_page(pfn_to_page(pfn));
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		get_page(pfn_to_page(pfn + i));
 
 	if (likely(is_td_finalized(kvm_tdx)))
 		return tdx_sept_page_aug(kvm, gfn, level, pfn);
@@ -1589,11 +1588,9 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	gpa_t gpa = gfn_to_gpa(gfn);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	hpa_t hpa_with_hkid;
+	int r = 0;
 	u64 err;
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;
 
 	if (unlikely(!is_hkid_assigned(kvm_tdx))) {
 		/*
@@ -1603,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return 0;
 	}
 
@@ -1620,22 +1617,27 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		return -EIO;
 	}
 
-	hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
-	do {
-		/*
-		 * TDX_OPERAND_BUSY can happen on locking PAMT entry.  Because
-		 * this page was removed above, other thread shouldn't be
-		 * repeatedly operating on this page.  Just retry loop.
-		 */
-		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
-	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
-		return -EIO;
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+		do {
+			/*
+			 * TDX_OPERAND_BUSY can happen on locking PAMT entry.
+			 * Because this page was removed above, other thread
+			 * shouldn't be repeatedly operating on this page.
+			 * Simple retry should work.
+			 */
+			err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+		} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			r = -EIO;
+		} else {
+			tdx_clear_page(hpa, PAGE_SIZE);
+			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+		}
+		hpa += PAGE_SIZE;
 	}
-	tdx_clear_page(hpa, PAGE_SIZE);
-	tdx_unpin(kvm, pfn);
-	return 0;
+	return r;
 }
 
 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (5 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-20 10:54   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs isaku.yamahata
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

For TDX, an EPT violation can happen on TDG.MEM.PAGE.ACCEPT, and
TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants to
accept the page.

1. KVM may map it with a 4KB page while the TD guest wants to accept a 2MB page.

  The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry the accept
  at 4KB size.

2. KVM may map it with a 2MB page while the TD guest wants to accept a 4KB page.

  KVM needs to honor the request because
  a) there is no way to tell the guest that KVM maps it at 2MB size, and
  b) the guest accepts it at 4KB size because it knows some other 4KB page
     in the same 2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's page
fault handler.  Use bits 31:29 of the KVM PF error code for this purpose.
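
For example, with the macros added by this patch, encoding the level into
the error code and decoding it in kvm_tdp_page_fault() looks like the
following (snippet condensed from the diff below):

  /* Encode: stash the desired level into bits 31:29 of the error code. */
  error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;

  /* Decode in the fault handler; 0 means "no level hint". */
  u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
  if (err_level)
          fault->max_level = min(fault->max_level, err_level);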

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 +++++
 arch/x86/kvm/vmx/common.h       |  6 +++++-
 arch/x86/kvm/vmx/tdx.c          | 15 ++++++++++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index edcafcd650db..eed36c1eedb7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -261,6 +261,8 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
 #define PFERR_SGX_BIT 15
+#define PFERR_LEVEL_START_BIT 29
+#define PFERR_LEVEL_END_BIT 31
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 #define PFERR_GUEST_ENC_BIT 34
@@ -273,6 +275,7 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_MASK	BIT(PFERR_FETCH_BIT)
 #define PFERR_PK_MASK		BIT(PFERR_PK_BIT)
 #define PFERR_SGX_MASK		BIT(PFERR_SGX_BIT)
+#define PFERR_LEVEL_MASK	GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
 #define PFERR_GUEST_FINAL_MASK	BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK	BIT_ULL(PFERR_GUEST_PAGE_BIT)
 #define PFERR_GUEST_ENC_MASK	BIT_ULL(PFERR_GUEST_ENC_BIT)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eb17a508c5d1..265177cedf37 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4615,6 +4615,11 @@ bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma)
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
+
+	if (err_level)
+		fault->max_level = min(fault->max_level, err_level);
+
 	/*
 	 * If the guest's MTRRs may be used to compute the "real" memtype,
 	 * restrict the mapping level to ensure KVM uses a consistent memtype
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 027aa4175d2c..bb00433932ee 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -67,7 +67,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 }
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
-					     unsigned long exit_qualification)
+					     unsigned long exit_qualification,
+					     int err_page_level)
 {
 	u64 error_code;
 
@@ -90,6 +91,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (kvm_is_private_gpa(vcpu->kvm, gpa))
 		error_code |= PFERR_GUEST_ENC_MASK;
 
+	if (err_page_level > 0)
+		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
+
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 31598b84811f..e4167f08b58b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1803,7 +1803,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 
 static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 {
+	union tdx_ext_exit_qualification ext_exit_qual;
 	unsigned long exit_qual;
+	int err_page_level = 0;
+
+	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
+
+	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
+		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
+			tdexit_gpa(vcpu), ext_exit_qual.type);
+		kvm_vm_bugged(vcpu->kvm);
+		return 0;
+	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
+		err_page_level = tdx_sept_level_to_pg_level(ext_exit_qual.req_sept_level);
+	}
 
 	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
 		/*
@@ -1830,7 +1843,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 	}
 
 	trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual);
-	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
 }
 
 static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 54c3f6b83571..37ee944c36a1 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -72,6 +72,25 @@ union tdx_exit_reason {
 	u64 full;
 };
 
+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		: 4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	: 3;
+		u64 err_sept_level	: 3;
+		u64 err_sept_state	: 8;
+		u64 err_sept_is_leaf	: 1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 28732925792e..ae9ba0731521 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (6 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-20 11:05   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler isaku.yamahata
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

At kvm_faultin_pfn() time, KVM doesn't yet know at which page level the GFN
will be mapped, so it doesn't know whether to pin a 4KB page or a 2MB page.

Move the guest private page pinning logic to right before
TDH_MEM_PAGE_ADD/AUG(), since the page level is known at that point.
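
Concretely, once the level is known, pinning becomes a per-4KB-page loop
over the whole mapping (this loop is added earlier in the series; reproduced
here for illustration):

  /* Pin every 4KB page backing the mapping at 'level' right before
   * TDH.MEM.PAGE.ADD/AUG consumes it. */
  for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
          get_page(pfn_to_page(pfn + i));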

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/vmx/tdx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e4167f08b58b..7b81811eb404 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1454,7 +1454,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }
 
-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
+static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+		      enum pg_level level)
 {
 	int i;
 
@@ -1476,7 +1477,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 
 	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EAGAIN;
 	}
 	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
@@ -1493,7 +1494,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	}
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EIO;
 	}
 
@@ -1529,7 +1530,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EINVAL;
 	}
 
@@ -1547,7 +1548,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1600,7 +1601,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return 0;
 	}
 
@@ -1633,7 +1634,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 			r = -EIO;
 		} else {
 			tdx_clear_page(hpa, PAGE_SIZE);
-			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+			tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
 		}
 		hpa += PAGE_SIZE;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (7 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-20 11:24   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split isaku.yamahata
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

For TDX, an EPT violation can happen on TDG.MEM.PAGE.ACCEPT, and
TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants to
accept the page.

1. KVM may map it with a 4KB page while the TD guest wants to accept a 2MB page.

  The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry the accept
  at 4KB size.

2. KVM may map it with a 2MB page while the TD guest wants to accept a 4KB page.

  KVM needs to honor the request because
  a) there is no way to tell the guest that KVM maps it at 2MB size, and
  b) the guest accepts it at 4KB size because it knows some other 4KB page
     in the same 2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's page
fault handler.  Use bits 31:29 of the KVM PF error code for this purpose.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/common.h       |  2 +-
 arch/x86/kvm/vmx/tdx.c          |  7 ++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 -------------------
 arch/x86/kvm/vmx/tdx_arch.h     | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index eed36c1eedb7..c16823f3326e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -285,6 +285,8 @@ enum x86_intercept_stage;
 				 PFERR_WRITE_MASK |		\
 				 PFERR_PRESENT_MASK)
 
+#define PFERR_LEVEL(err_code)	(((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
 /*
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index bb00433932ee..787f59c44abc 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -91,7 +91,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (kvm_is_private_gpa(vcpu->kvm, gpa))
 		error_code |= PFERR_GUEST_ENC_MASK;
 
-	if (err_page_level > 0)
+	if (err_page_level > PG_LEVEL_NONE)
 		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
 
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 7b81811eb404..c614ab20c191 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2713,6 +2713,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	struct kvm_tdx_init_mem_region region;
 	struct kvm_vcpu *vcpu;
 	struct page *page;
+	u64 error_code;
 	int idx, ret = 0;
 	bool added = false;
 
@@ -2770,7 +2771,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
 				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
 
-		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+		/* TODO: large page support. */
+		error_code = TDX_SEPT_PFERR;
+		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
+			PFERR_LEVEL_MASK;
+		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
 					   PG_LEVEL_4K);
 		put_page(page);
 		if (ret)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 37ee944c36a1..54c3f6b83571 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -72,25 +72,6 @@ union tdx_exit_reason {
 	u64 full;
 };
 
-union tdx_ext_exit_qualification {
-	struct {
-		u64 type		: 4;
-		u64 reserved0		: 28;
-		u64 req_sept_level	: 3;
-		u64 err_sept_level	: 3;
-		u64 err_sept_state	: 8;
-		u64 err_sept_is_leaf	: 1;
-		u64 reserved1		: 17;
-	};
-	u64 full;
-};
-
-enum tdx_ext_exit_qualification_type {
-	EXT_EXIT_QUAL_NONE,
-	EXT_EXIT_QUAL_ACCEPT,
-	NUM_EXT_EXIT_QUAL,
-};
-
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 9f93250d22b9..ba41fefa47ee 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -218,4 +218,23 @@ union tdx_sept_level_state {
 	u64 raw;
 };
 
+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		:  4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	:  3;
+		u64 err_sept_level	:  3;
+		u64 err_sept_state	:  8;
+		u64 err_sept_is_leaf	:  1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE = 0,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 #endif /* __KVM_X86_TDX_ARCH_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae9ba0731521..fb3913df6a5d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (8 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf isaku.yamahata
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang

From: Isaku Yamahata <isaku.yamahata@intel.com>

Make tdp_mmu_alloc_sp_for_split() aware of private page tables.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d8d46a4d00ff..1da98be74ad2 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -202,6 +202,15 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 	}
 }
 
+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	gfp &= ~__GFP_ZERO;
+	sp->private_spt = (void *)__get_free_page(gfp);
+	if (!sp->private_spt)
+		return -ENOMEM;
+	return 0;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 	if (sp->private_spt)
@@ -230,6 +239,11 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 {
 }
 
+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	return -ENOMEM;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d806574f7f2d..7873e9ee82ad 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1583,8 +1583,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm
 
 	sp->role = role;
 	sp->spt = (void *)__get_free_page(gfp);
-	/* TODO: large page support for private GPA. */
-	WARN_ON_ONCE(kvm_mmu_page_role_is_private(role));
+	if (kvm_mmu_page_role_is_private(role)) {
+		if (kvm_alloc_private_spt_for_split(sp, gfp)) {
+			free_page((unsigned long)sp->spt);
+			sp->spt = NULL;
+		}
+	}
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (9 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-21  9:57   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared isaku.yamahata
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

When TDX is enabled, a large page cannot be zapped if it contains mixed
private/shared pages.  In that case, the large page has to be split first.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          |  6 +--
 arch/x86/kvm/mmu/mmu_internal.h |  9 +++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 68 +++++++++++++++++++++++++++++++--
 4 files changed, 78 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index b0f103641547..557479737962 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -93,6 +93,7 @@ config KVM_INTEL
 	tristate "KVM for Intel (and compatible) processors support"
 	depends on KVM && IA32_FEAT_CTL
 	select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST
+	select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST
 	select KVM_PRIVATE_MEM if INTEL_TDX_HOST
 	help
 	  Provides support for KVM on processors equipped with Intel's VT
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 265177cedf37..0bf043812644 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7463,8 +7463,8 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	return kvm_unmap_gfn_range(kvm, range);
 }
 
-static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
-				int level)
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+			     int level)
 {
 	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
 }
@@ -7491,7 +7491,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
 		return kvm_range_has_memory_attributes(kvm, start, end, attrs);
 
 	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
-		if (hugepage_test_mixed(slot, gfn, level - 1) ||
+		if (kvm_hugepage_test_mixed(slot, gfn, level - 1) ||
 		    attrs != kvm_get_memory_attributes(kvm, gfn))
 			return false;
 	}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1da98be74ad2..653e96769956 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -460,4 +460,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+#else
+static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	return false;
+}
+#endif
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7873e9ee82ad..a209a67decae 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -964,6 +964,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp, bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
@@ -975,13 +983,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			      gfn_t start, gfn_t end, bool can_yield, bool flush,
 			      bool zap_private)
 {
+	bool is_private = is_private_sp(root);
+	struct kvm_mmu_page *split_sp = NULL;
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
-	WARN_ON_ONCE(zap_private && !is_private_sp(root));
+	WARN_ON_ONCE(zap_private && !is_private);
 	if (!zap_private && is_private_sp(root))
 		return false;
 
@@ -1006,12 +1016,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (is_private && kvm_gfn_shared_mask(kvm) &&
+		    is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				WARN_ON_ONCE(!can_yield);
+				if (split_sp) {
+					sp = split_sp;
+					split_sp = NULL;
+					sp->role = tdp_iter_child_role(&iter);
+				} else {
+					WARN_ON(iter.yielded);
+					if (flush && can_yield) {
+						kvm_flush_remote_tlbs(kvm);
+						flush = false;
+					}
+					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+					if (iter.yielded) {
+						split_sp = sp;
+						continue;
+					}
+				}
+				KVM_BUG_ON(!sp, kvm);
+
+				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
+				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
+					kvm_flush_remote_tlbs(kvm);
+					flush = false;
+					/* force retry on this gfn. */
+					iter.yielded = true;
+				} else
+					flush = true;
+				continue;
+			}
+		}
+
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
 	rcu_read_unlock();
 
+	if (split_sp) {
+		WARN_ON(!can_yield);
+		if (flush) {
+			kvm_flush_remote_tlbs(kvm);
+			flush = false;
+		}
+
+		write_unlock(&kvm->mmu_lock);
+		tdp_mmu_free_sp(split_sp);
+		write_lock(&kvm->mmu_lock);
+	}
+
 	/*
 	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
 	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
@@ -1606,8 +1670,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 
 	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
 		   is_private_sptep(iter->sptep), kvm);
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (10 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-22  5:45   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page isaku.yamahata
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

When mapping a shared page for TDX, the private alias of the page needs to
be zapped.

In the case that the private page is mapped as a large page (2MB), it can be
removed directly only when the whole 2MB range is converted to shared.
Otherwise, the 2MB page has to be split into 512 4KB pages, and only the
pages that were converted to shared are removed.

When a present large leaf spte is switched to a present non-leaf spte, TDX
needs to split the corresponding SEPT page to reflect it.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
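(Not part of the patch: a condensed sketch of the split sequence described
above, using the hooks this patch introduces.  The function name
split_private_2m_sketch is made up for illustration and error handling is
simplified.)

/* Sketch: demote a private 2MB mapping so individual 4KB pages can be zapped. */
static int split_private_2m_sketch(struct kvm *kvm, gfn_t gfn, void *private_spt)
{
	int ret;

	/* Block the 2MB Secure-EPT entry (TDH.MEM.RANGE.BLOCK). */
	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, PG_LEVEL_2M);
	if (ret)
		return ret;

	/* Flush TLBs so the blocked translation is no longer cached. */
	kvm_flush_remote_tlbs(kvm);

	/*
	 * Demote 2MB -> 512 x 4KB (TDH.MEM.PAGE.DEMOTE).  private_spt becomes
	 * the Secure-EPT page backing the new 4KB level.
	 */
	return static_call(kvm_x86_split_private_spt)(kvm, gfn, PG_LEVEL_2M,
						      private_spt);
}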
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c         | 21 ++++++++++++++++-----
 arch/x86/kvm/vmx/tdx.c             | 25 +++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx_arch.h        |  1 +
 arch/x86/kvm/vmx/tdx_ops.h         |  7 +++++++
 6 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8ef0ed217f6e..3deb6ab4f291 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -103,6 +103,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
+KVM_X86_OP_OPTIONAL(split_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c16823f3326e..e75a461bdea7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1753,6 +1753,8 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				void *private_spt);
+	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				  void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				 kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a209a67decae..734ee822b43c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -599,23 +599,34 @@ static int __must_check __set_private_spte_present(struct kvm *kvm, tdp_ptep_t s
 {
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
+	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
+	void *private_spt;
 	int ret = 0;
 
 	lockdep_assert_held(&kvm->mmu_lock);
-	/* TDP MMU doesn't change present -> present */
-	KVM_BUG_ON(was_present, kvm);
 
 	/*
 	 * Use different call to either set up middle level
 	 * private page table, or leaf.
 	 */
-	if (is_leaf)
+	if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
+		/*
+		 * Splitting a large page into 4KB pages:
+		 * tdp_mmu_split_huge_page() => tdp_mmu_link_sp()
+		 */
+		private_spt = get_private_spt(gfn, new_spte, level);
+		KVM_BUG_ON(!private_spt, kvm);
+		ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+		kvm_flush_remote_tlbs(kvm);
+		if (!ret)
+			ret = static_call(kvm_x86_split_private_spt)(kvm, gfn,
+								     level, private_spt);
+	} else if (is_leaf)
 		ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
 	else {
-		void *private_spt = get_private_spt(gfn, new_spte, level);
-
+		private_spt = get_private_spt(gfn, new_spte, level);
 		KVM_BUG_ON(!private_spt, kvm);
 		ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
 	}
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c614ab20c191..91eca578a7da 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1662,6 +1662,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	hpa_t hpa = __pa(private_spt);
+	struct tdx_module_args out;
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				      enum pg_level level)
 {
@@ -1675,8 +1697,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (unlikely(!is_hkid_assigned(kvm_tdx)))
 		return 0;
 
-	/* For now large page isn't supported yet. */
-	WARN_ON_ONCE(level != PG_LEVEL_4K);
 	err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
 		return -EAGAIN;
@@ -3183,6 +3203,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
+	x86_ops->split_private_spt = tdx_sept_split_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index ba41fefa47ee..cab6a74446a0 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -21,6 +21,7 @@
 #define TDH_MNG_CREATE			9
 #define TDH_VP_CREATE			10
 #define TDH_MNG_RD			11
+#define TDH_MEM_PAGE_DEMOTE		15
 #define TDH_MR_EXTEND			16
 #define TDH_MR_FINALIZE			17
 #define TDH_VP_FLUSH			18
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 0f2df7198bde..38ab0ab1509c 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -183,6 +183,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_args *out)
 	return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
 }
 
+static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				      struct tdx_module_args *out)
+{
+	tdx_clflush_page(page, PG_LEVEL_4K);
+	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_args *out)
 {
-- 
2.25.1



* [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (11 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-22  7:24   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement " isaku.yamahata
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang

From: Isaku Yamahata <isaku.yamahata@intel.com>

When a large page is passed to the KVM page fault handler and some of the
sub pages are already populated, try to merge the sub pages into a large
page.  This situation can happen when the guest converts small pages to
shared and then converts them back to private.

When a large page is passed to the KVM MMU page fault handler and the spte
corresponding to the page is non-leaf (one or more of the sub pages are
already populated at a lower page level), the current KVM MMU zaps the
non-leaf spte at the large page level and populates a leaf spte at that
level.  Thus small pages are converted into a large page.  However, this
doesn't work for TDX because zapping and re-populating would zero the page
contents.  Instead, populate all small pages and merge them into a large
page.

Merging pages into a large page can fail when some sub pages are accepted
and some are not.  In such a case, with the assumption that the guest tries
to accept pages at the large page size for performance when possible, don't
try to be smart about identifying which page is still pending; map all pages
at the lower page level and let the vcpu re-execute.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
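(Not part of the patch: a condensed sketch of the merge sequence described
above.  The function name merge_private_2m_sketch is made up; SPTE freezing,
iterator handling and error paths are simplified; the hooks are the ones
added by this patch and implemented in the following patch.)

/* Sketch: promote 512 already-present 4KB private mappings into one 2MB mapping. */
static int merge_private_2m_sketch(struct kvm *kvm, gfn_t gfn,
				   struct kvm_mmu_page *child_sp)
{
	int ret;

	/* 1. All 512 child SPTEs must be present before promotion. */

	/* 2. Block the 2MB range (TDH.MEM.RANGE.BLOCK) and flush TLBs. */
	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, PG_LEVEL_2M);
	if (ret)
		return ret;
	kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(PG_LEVEL_2M),
				    KVM_PAGES_PER_HPAGE(PG_LEVEL_2M));

	/* 3. Merge the 4KB mappings into a 2MB mapping (TDH.MEM.PAGE.PROMOTE). */
	ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, PG_LEVEL_2M,
						     kvm_mmu_private_spt(child_sp));
	if (ret) {
		/* Some pages may still be pending acceptance: unblock and retry later. */
		static_call(kvm_x86_unzap_private_spte)(kvm, gfn, PG_LEVEL_2M);
		return ret;
	}

	/*
	 * 4. The Secure-EPT page of the 4KB level was freed by PROMOTE; the
	 *    caller installs the 2MB leaf SPTE and frees child_sp.
	 */
	return 0;
}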
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |   4 +
 arch/x86/kvm/mmu/tdp_iter.c        |  37 +++++--
 arch/x86/kvm/mmu/tdp_iter.h        |   2 +
 arch/x86/kvm/mmu/tdp_mmu.c         | 172 ++++++++++++++++++++++++++++-
 5 files changed, 207 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 3deb6ab4f291..9a7d4db304c7 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -104,9 +104,11 @@ KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
 KVM_X86_OP_OPTIONAL(split_private_spt)
+KVM_X86_OP_OPTIONAL(merge_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)
+KVM_X86_OP_OPTIONAL(unzap_private_spte)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e75a461bdea7..0254c382f4f0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -146,6 +146,7 @@
 #define KVM_MAX_HUGEPAGE_LEVEL	PG_LEVEL_1G
 #define KVM_NR_PAGE_SIZES	(KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1)
 #define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
+#define KVM_HPAGE_GFN_MASK(x)	(~((1UL << KVM_HPAGE_GFN_SHIFT(x)) - 1))
 #define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
 #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
@@ -1755,11 +1756,14 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				  void *private_spt);
+	int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				 void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				 kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				    kvm_pfn_t pfn);
 	int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
+	int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
 
 	bool (*has_wbinvd_exit)(void);
 
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index bd30ebfb2f2c..f33226fcd62a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -71,6 +71,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level)
 	return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
 }
 
+static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	iter->level--;
+	iter->pt_path[iter->level - 1] = child_pt;
+	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+}
+
 /*
  * Steps down one level in the paging structure towards the goal GFN. Returns
  * true if the iterator was able to step down a level, false otherwise.
@@ -92,14 +100,28 @@ static bool try_step_down(struct tdp_iter *iter)
 	if (!child_pt)
 		return false;
 
-	iter->level--;
-	iter->pt_path[iter->level - 1] = child_pt;
-	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
-	tdp_iter_refresh_sptep(iter);
-
+	step_down(iter, child_pt);
 	return true;
 }
 
+/* Step down to a frozen spte.  Don't re-read sptep because it was frozen. */
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	WARN_ON_ONCE(!child_pt);
+	WARN_ON_ONCE(iter->yielded);
+	WARN_ON_ONCE(iter->level == iter->min_level);
+
+	step_down(iter, child_pt);
+}
+
+void tdp_iter_step_side(struct tdp_iter *iter)
+{
+	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+	iter->next_last_level_gfn = iter->gfn;
+	iter->sptep++;
+	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+}
+
 /*
  * Steps to the next entry in the current page table, at the current page table
  * level. The next entry could point to a page backing guest memory or another
@@ -117,10 +139,7 @@ static bool try_step_side(struct tdp_iter *iter)
 	    (SPTE_ENT_PER_PAGE - 1))
 		return false;
 
-	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
-	iter->next_last_level_gfn = iter->gfn;
-	iter->sptep++;
-	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+	tdp_iter_step_side(iter);
 
 	return true;
 }
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index a9c9cd0db20a..ca00db799a50 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -134,6 +134,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
+void tdp_iter_step_side(struct tdp_iter *iter);
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt);
 
 static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter *iter)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 734ee822b43c..c8a4bd052c71 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1224,6 +1224,176 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm, bool skip_private)
 	}
 }
 
+static int tdp_mmu_iter_step_side(int i, struct tdp_iter *iter)
+{
+	i++;
+
+	/*
+	 * If i == SPTE_ENT_PER_PAGE, tdp_iter_step_side() would read
+	 * beyond the last entry.
+	 */
+	if (i < SPTE_ENT_PER_PAGE)
+		tdp_iter_step_side(iter);
+
+	return i;
+}
+
+static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu,
+				     struct kvm_page_fault *fault,
+				     struct tdp_iter *iter, u64 new_spte)
+{
+	u64 *sptep = rcu_dereference(iter->sptep);
+	u64 old_spte = iter->old_spte;
+	struct kvm_mmu_page *child_sp;
+	struct kvm *kvm = vcpu->kvm;
+	struct tdp_iter child_iter;
+	int level = iter->level;
+	gfn_t gfn = iter->gfn;
+	tdp_ptep_t child_pt;
+	u64 child_spte;
+	int ret = 0;
+	int i;
+
+	/*
+	 * TDX KVM supports only 2MB large page.  It's not supported to merge
+	 * 2MB pages into 1GB page at the moment.
+	 */
+	WARN_ON_ONCE(fault->goal_level != PG_LEVEL_2M);
+	WARN_ON_ONCE(iter->level != PG_LEVEL_2M);
+	WARN_ON_ONCE(!is_large_pte(new_spte));
+
+	/* Freeze the spte to prevent other threads from working on it. */
+	if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE))
+		return -EBUSY;
+
+	/*
+	 * Step down to the child spte.  Because tdp_iter_next() assumes the
+	 * parent spte isn't frozen, do it manually.
+	 */
+	child_pt = spte_to_child_pt(iter->old_spte, iter->level);
+	child_sp = sptep_to_sp(child_pt);
+	WARN_ON_ONCE(child_sp->role.level != PG_LEVEL_4K);
+	WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role));
+
+	/* Don't modify iter as the caller will use iter after this function. */
+	child_iter = *iter;
+	/* Adjust the target gfn to the head gfn of the large page. */
+	child_iter.next_last_level_gfn &= -KVM_PAGES_PER_HPAGE(level);
+	tdp_iter_step_down(&child_iter, child_pt);
+
+	/*
+	 * All child pages are required to be populated for merging them into a
+	 * large page.  Populate all child spte.
+	 */
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i = tdp_mmu_iter_step_side(i, &child_iter)) {
+		int tmp;
+
+		WARN_ON_ONCE(child_iter.level != PG_LEVEL_4K);
+
+		if (is_shadow_present_pte(child_iter.old_spte)) {
+			/* TODO: relocate page for huge page. */
+			if (WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) !=
+					 spte_to_pfn(new_spte) + i)) {
+				if (!ret)
+					ret = -EAGAIN;
+				continue;
+			}
+			/*
+			 * When SEPT_VE_DISABLE=true and the page state is
+			 * pending, this case can happen.  Just resume the vcpu
+			 * again with the expectation that another vcpu will
+			 * accept this page.
+			 */
+			if (child_iter.gfn == fault->gfn) {
+				if (!ret)
+					ret = -EAGAIN;
+			}
+			continue;
+		}
+
+		child_spte = make_huge_page_split_spte(kvm, new_spte, child_sp->role, i);
+		/*
+		 * Because another thread may have started to operate on this spte
+		 * before the parent spte was frozen, use the atomic version to
+		 * prevent races.
+		 */
+		tmp = tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte);
+		if (tmp == -EBUSY || tmp == -EAGAIN) {
+			/*
+			 * There was a race.  Populate the remaining 4K sptes
+			 * to resolve fault->gfn and guarantee forward
+			 * progress.
+			 */
+			if (!ret)
+				ret = tmp;
+		} else if (tmp) {
+			ret = tmp;
+			goto out;
+		}
+	}
+	if (ret)
+		goto out;
+
+	/* Prevent the Secure-EPT entry from being used. */
+	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+	if (ret)
+		goto out;
+	kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(level),
+				    KVM_PAGES_PER_HPAGE(level));
+
+	/* Merge pages into a large page. */
+	ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level,
+						     kvm_mmu_private_spt(child_sp));
+	/*
+	 * Failed to merge pages because some pages are accepted and some are
+	 * pending.  Since the child pages were mapped above, let the vcpu run.
+	 */
+	if (ret) {
+		if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level))
+			old_spte = SHADOW_NONPRESENT_VALUE |
+				(spte_to_pfn(old_spte) << PAGE_SHIFT) |
+				PT_PAGE_SIZE_MASK;
+		goto out;
+	}
+
+	/* Unfreeze spte. */
+	iter->old_spte = new_spte;
+	__kvm_tdp_mmu_write_spte(sptep, new_spte);
+
+	/*
+	 * Free unused child sp.  Secure-EPT page was already freed at TDX level
+	 * by kvm_x86_merge_private_spt().
+	 */
+	tdp_unaccount_mmu_page(kvm, child_sp);
+	tdp_mmu_free_sp(child_sp);
+	return -EAGAIN;
+
+out:
+	iter->old_spte = old_spte;
+	__kvm_tdp_mmu_write_spte(sptep, old_spte);
+	return ret;
+}
+
+static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
+					     struct kvm_page_fault *fault,
+					     struct tdp_iter *iter, u64 new_spte)
+{
+	/*
+	 * The private range is mapped with smaller-size pages.  For example, the
+	 * child pages were converted from shared back to private, and now the
+	 * range can be mapped as a large page.  Try to merge the small pages
+	 * into a large page.
+	 */
+	if (fault->slot &&
+	    kvm_gfn_shared_mask(vcpu->kvm) &&
+	    iter->level > PG_LEVEL_4K &&
+	    kvm_is_private_gpa(vcpu->kvm, fault->addr) &&
+	    is_shadow_present_pte(iter->old_spte) &&
+	    !is_large_pte(iter->old_spte))
+		return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte);
+
+	return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte);
+}
+
 /*
  * Installs a last-level SPTE to handle a TDP page fault.
  * (NPT/EPT violation/misconfiguration)
@@ -1265,7 +1435,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
-	else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
+	else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte))
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
-- 
2.25.1



* [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (12 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-22  7:50   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot isaku.yamahata
  2023-11-07 15:00 ` [PATCH v6 16/16] KVM: TDX: Allow 2MB large page for TD GUEST isaku.yamahata
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang

From: Isaku Yamahata <isaku.yamahata@intel.com>

Implement the merge_private_spt callback.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
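(Not part of the patch: a hedged sketch of how a caller is expected to react
to the return values of the merge hook implemented below.  The values match
tdx_sept_merge_private_spt(); the variables are assumed to be in scope and
the snippet itself is only illustrative.)

/* Sketch: expected caller handling of the merge hook's return value. */
ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level, private_spt);
switch (ret) {
case 0:
	/*
	 * Promotion succeeded.  The old Secure-EPT page has been flushed and
	 * cleared, so the caller can install the 2MB leaf SPTE.
	 */
	break;
case -EAGAIN:
	/*
	 * SEPT busy, or some 4KB pages are accepted while others are still
	 * pending guest acceptance: keep the 4KB mappings, unblock the range
	 * and let the vcpu retry.
	 */
	break;
default:
	/* -EIO: unexpected TDX module error, already flagged via KVM_BUG_ON(). */
	break;
}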
 arch/x86/kvm/vmx/tdx.c       | 72 ++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx_arch.h  |  1 +
 arch/x86/kvm/vmx/tdx_errno.h |  2 +
 arch/x86/kvm/vmx/tdx_ops.h   |  6 +++
 4 files changed, 81 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 91eca578a7da..df53a971ff21 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1684,6 +1684,49 @@ static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct tdx_module_args out;
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	err = tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (unlikely(err == (TDX_EPT_INVALID_PROMOTE_CONDITIONS |
+			     TDX_OPERAND_ID_RCX)))
+		/*
+		 * Some pages are accepted, some are still pending.  Need to wait
+		 * for the TD to accept all pages.  Tell the caller.
+		 */
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out);
+		return -EIO;
+	}
+	WARN_ON_ONCE(out.rcx != __pa(private_spt));
+
+	/*
+	 * TDH.MEM.PAGE.PROMOTE frees the Secure-EPT page for the lower level.
+	 * Flush cache for reuse.
+	 */
+	do {
+		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt),
+							     to_kvm_tdx(kvm)->hkid));
+	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+		return -EIO;
+	}
+
+	tdx_clear_page(__pa(private_spt), PAGE_SIZE);
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				      enum pg_level level)
 {
@@ -1758,6 +1801,33 @@ static void tdx_track(struct kvm *kvm)
 
 }
 
+static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn,
+				       enum pg_level level)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	struct tdx_module_args out;
+	u64 err;
+
+	do {
+		err = tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+
+		/*
+		 * tdh_mem_range_block() is accompanied by tdx_track() via the kvm
+		 * remote tlb flush.  Wait for the caller of
+		 * tdh_mem_range_block() to complete the TDX track.
+		 */
+	} while (err == (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT));
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out);
+		return -EIO;
+	}
+	return 0;
+}
+
 static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, void *private_spt)
 {
@@ -3204,9 +3274,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
 	x86_ops->split_private_spt = tdx_sept_split_private_spt;
+	x86_ops->merge_private_spt = tdx_sept_merge_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
+	x86_ops->unzap_private_spte = tdx_sept_unzap_private_spte;
 
 	return 0;
 
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index cab6a74446a0..fb7e54953a85 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -29,6 +29,7 @@
 #define TDH_MNG_KEY_FREEID		20
 #define TDH_MNG_INIT			21
 #define TDH_VP_INIT			22
+#define TDH_MEM_PAGE_PROMOTE		23
 #define TDH_MEM_SEPT_RD			25
 #define TDH_VP_RD			26
 #define TDH_MNG_KEY_RECLAIMID		27
diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
index bb093e292fef..940d6de332eb 100644
--- a/arch/x86/kvm/vmx/tdx_errno.h
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -23,6 +23,8 @@
 #define TDX_FLUSHVP_NOT_DONE			0x8000082400000000ULL
 #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
 #define TDX_EPT_ENTRY_NOT_FREE			0xC0000B0200000000ULL
+#define TDX_TLB_TRACKING_NOT_DONE		0xC0000B0800000000ULL
+#define TDX_EPT_INVALID_PROMOTE_CONDITIONS	0xC0000B0900000000ULL
 #define TDX_EPT_ENTRY_STATE_INCORRECT		0xC0000B0D00000000ULL
 
 /*
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 38ab0ab1509c..774fee3b2d46 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -190,6 +190,12 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t pag
 	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
 }
 
+static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level,
+				       struct tdx_module_args *out)
+{
+	return tdx_seamcall_sept(TDH_MEM_PAGE_PROMOTE, gpa | level, tdr, 0, 0, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_args *out)
 {
-- 
2.25.1



* [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (13 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement " isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  2023-11-22  9:05   ` Binbin Wu
  2023-11-07 15:00 ` [PATCH v6 16/16] KVM: TDX: Allow 2MB large page for TD GUEST isaku.yamahata
  15 siblings, 1 reply; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang

From: Isaku Yamahata <isaku.yamahata@intel.com>

struct kvm_page_fault.req_level is the page level that determines the
faulted-in page size.  For now it is calculated only for conventional kvm
memslots, by host_pfn_mapping_level(), which walks the host page table.

However, host_pfn_mapping_level() cannot be used for a private kvm memslot
because pages of a private kvm memslot aren't mapped into the user virtual
address space.  Instead, the page order is provided when getting the pfn.
Remember it in struct kvm_page_fault and use it.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
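(Not part of the patch: the order-to-level mapping assumed here is sketched
below.  kvm_max_level_for_order() is the existing helper used in the hunk
that follows; the name max_level_for_order_sketch is made up and the real
helper may differ in detail.)

/* Sketch: largest mapping level a given backing-page order allows. */
static u8 max_level_for_order_sketch(int order)
{
	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
		return PG_LEVEL_1G;
	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
		return PG_LEVEL_2M;
	return PG_LEVEL_4K;
}

/*
 * The fault handler then remembers the result and caps the request, as in
 * the hunk below:
 *	fault->host_level = max_level;
 *	fault->max_level  = min(max_level, fault->max_level);
 */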
 arch/x86/kvm/mmu/mmu.c          | 34 +++++++++++++++++----------------
 arch/x86/kvm/mmu/mmu_internal.h | 12 +++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0bf043812644..0aec7c11f4e2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3158,10 +3158,10 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 
 static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+				       gfn_t gfn, int max_level, int host_level,
+				       bool is_private)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3170,24 +3170,23 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
-
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	if (!is_private) {
+		WARN_ON_ONCE(host_level != PG_LEVEL_NONE);
+		host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	}
+	WARN_ON_ONCE(host_level == PG_LEVEL_NONE);
 	return min(host_level, max_level);
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level)
+			      int max_level, bool faultin_private)
 {
-	bool is_private = kvm_slot_can_be_private(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
-
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level,
+					   PG_LEVEL_NONE, faultin_private);
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3212,7 +3211,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 */
 	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
 						       fault->gfn, fault->max_level,
-						       fault->is_private);
+						       fault->host_level,
+						       kvm_is_faultin_private(fault));
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
@@ -4336,6 +4336,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 				   struct kvm_page_fault *fault)
 {
 	int max_order, r;
+	u8 max_level;
 
 	if (!kvm_slot_can_be_private(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4349,8 +4350,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 		return r;
 	}
 
-	fault->max_level = min(kvm_max_level_for_order(max_order),
-			       fault->max_level);
+	max_level = kvm_max_level_for_order(max_order);
+	fault->host_level = max_level;
+	fault->max_level = min(max_level, fault->max_level);
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
 
 	return RET_PF_CONTINUE;
@@ -4400,7 +4402,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return -EFAULT;
 	}
 
-	if (fault->is_private)
+	if (kvm_is_faultin_private(fault))
 		return kvm_faultin_pfn_private(vcpu, fault);
 
 	async = false;
@@ -6809,7 +6811,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 */
 		if (sp->role.direct &&
 		    sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
-							       PG_LEVEL_NUM)) {
+							       PG_LEVEL_NUM, false)) {
 			kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
 			if (kvm_available_flush_remote_tlbs_range())
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 653e96769956..6b540a10fd67 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -357,6 +357,9 @@ struct kvm_page_fault {
 	 * is changing its own translation in the guest page tables.
 	 */
 	bool write_fault_to_shadow_pgtable;
+
+	/* valid only for private memslot && private gfn */
+	enum pg_level host_level;
 };
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
@@ -451,7 +454,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level);
+			      int max_level, bool faultin_private);
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
 
@@ -469,4 +472,11 @@ static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t g
 }
 #endif
 
+static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fault)
+{
+	if (IS_ENABLED(CONFIG_KVM_GENERIC_PRIVATE_MEM))
+		return fault->is_private && kvm_slot_can_be_private(fault->slot);
+	return false;
+}
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c8a4bd052c71..173e4e9053fc 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -2179,7 +2179,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 			continue;
 
 		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-							      iter.gfn, PG_LEVEL_NUM);
+							      iter.gfn, PG_LEVEL_NUM, false);
 		if (max_mapping_level < iter.level)
 			continue;
 
-- 
2.25.1



* [PATCH v6 16/16] KVM: TDX: Allow 2MB large page for TD GUEST
  2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
                   ` (14 preceding siblings ...)
  2023-11-07 15:00 ` [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot isaku.yamahata
@ 2023-11-07 15:00 ` isaku.yamahata
  15 siblings, 0 replies; 39+ messages in thread
From: isaku.yamahata @ 2023-11-07 15:00 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: isaku.yamahata, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li

From: Xiaoyao Li <xiaoyao.li@intel.com>

Now everything is in place to support 2MB pages for a TD guest.  Because the
TDX module's TDH.MEM.PAGE.AUG supports both 4KB and 2MB pages, set struct
kvm_arch.tdp_max_page_level to the 2MB page level.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 9 ++-------
 arch/x86/kvm/vmx/tdx.c     | 4 ++--
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 173e4e9053fc..3ec823d66c79 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1560,14 +1560,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 		sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
 
-		if (is_shadow_present_pte(iter.old_spte)) {
-			/*
-			 * TODO: large page support.
-			 * Doesn't support large page for TDX now
-			 */
-			KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
+		if (is_shadow_present_pte(iter.old_spte))
 			r = tdp_mmu_split_huge_page(kvm, &iter, sp, true);
-		} else
+		else
 			r = tdp_mmu_link_sp(kvm, &iter, sp, true);
 
 		/*
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index df53a971ff21..9550a310177a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -560,8 +560,8 @@ int tdx_vm_init(struct kvm *kvm)
 	 */
 	kvm_mmu_set_mmio_spte_value(kvm, 0);
 
-	/* TODO: Enable 2mb and 1gb large page support. */
-	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
+	/* TDH.MEM.PAGE.AUG supports up to 2MB page. */
+	kvm->arch.tdp_max_page_level = PG_LEVEL_2M;
 
 	/*
 	 * This function initializes only KVM software construct.  It doesn't
-- 
2.25.1



* Re: [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists
  2023-11-07 15:00 ` [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists isaku.yamahata
@ 2023-11-16  1:32   ` Binbin Wu
  2023-11-17  1:05     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-16  1:32 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> Cannot map a private page as large page if any smaller mapping exists.
>
> It has to wait for all the not-mapped smaller page to be mapped and
> promote it to larger mapping.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 2c5257628881..d806574f7f2d 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>   	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
>   		int r;
>   
> -		if (fault->nx_huge_page_workaround_enabled)
> +		if (fault->nx_huge_page_workaround_enabled ||
> +		    kvm_gfn_shared_mask(vcpu->kvm))
As I mentioned in 
https://lore.kernel.org/kvm/fef75d54-e319-5170-5f76-f5abc4856315@linux.intel.com/,
The change of this patch will not take effect.
If "fault->nx_huge_page_workaround_enabled" is false, the condition
"spte_to_child_sp(spte)->nx_huge_page_disallowed" will not be true.

IIUC, the function disallowed_hugepage_adjust() currently is only to handle
nx_huge_page_workaround, it seems no special handling needed for TD.
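For reference, the gating check in disallowed_hugepage_adjust() is roughly
the sketch below (paraphrased, shown only to illustrate why the added
condition is a no-op when nx_huge_page_disallowed is never set):

	/* Sketch of the relevant check in disallowed_hugepage_adjust(): */
	if (cur_level > PG_LEVEL_4K &&
	    cur_level == fault->goal_level &&
	    is_shadow_present_pte(spte) &&
	    !is_large_pte(spte) &&
	    spte_to_child_sp(spte)->nx_huge_page_disallowed) {
		/* Force the fault to map one level lower instead of a huge page. */
		fault->goal_level--;
	}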
>   			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
>   
>   		/*



* Re: [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
  2023-11-07 15:00 ` [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL isaku.yamahata
@ 2023-11-16  5:36   ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-16  5:36 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> tdh_mem_page_aug() will support 2MB large page in the near future.  Cache
> flush also needs to be 2MB instead of 4KB in such cases.  Introduce a
> helper function to flush cache with page size info in preparation for large
> pages.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>

Nit: About the shortlog, is it clearer to say "Flush cache for a page 
based on page size before TDX SEAMCALL"?

Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>

> ---
>   arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
>   1 file changed, 14 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> index fd73a1731bf8..e726102d3523 100644
> --- a/arch/x86/kvm/vmx/tdx_ops.h
> +++ b/arch/x86/kvm/vmx/tdx_ops.h
> @@ -6,6 +6,7 @@
>   
>   #include <linux/compiler.h>
>   
> +#include <asm/pgtable_types.h>
>   #include <asm/archrandom.h>
>   #include <asm/cacheflush.h>
>   #include <asm/asm.h>
> @@ -62,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
>   void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
>   #endif
>   
> +static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
> +{
> +	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
> +}
> +
>   /*
>    * TDX module acquires its internal lock for resources.  It doesn't spin to get
>    * locks because of its restrictions of allowed execution time.  Instead, it
> @@ -94,21 +100,21 @@ static inline u64 tdx_seamcall_sept(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
>   
>   static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
>   {
> -	clflush_cache_range(__va(addr), PAGE_SIZE);
> +	tdx_clflush_page(addr, PG_LEVEL_4K);
>   	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
>   }
>   
>   static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
>   				   struct tdx_module_args *out)
>   {
> -	clflush_cache_range(__va(hpa), PAGE_SIZE);
> +	tdx_clflush_page(hpa, PG_LEVEL_4K);
>   	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
>   }
>   
>   static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
>   				   struct tdx_module_args *out)
>   {
> -	clflush_cache_range(__va(page), PAGE_SIZE);
> +	tdx_clflush_page(page, PG_LEVEL_4K);
>   	return tdx_seamcall_sept(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
>   }
>   
> @@ -126,21 +132,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,
>   
>   static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
>   {
> -	clflush_cache_range(__va(addr), PAGE_SIZE);
> +	tdx_clflush_page(addr, PG_LEVEL_4K);
>   	return tdx_seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
>   }
>   
>   static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
>   					struct tdx_module_args *out)
>   {
> -	clflush_cache_range(__va(hpa), PAGE_SIZE);
> +	tdx_clflush_page(hpa, PG_LEVEL_4K);
>   	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
>   }
>   
>   static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
>   				   struct tdx_module_args *out)
>   {
> -	clflush_cache_range(__va(hpa), PAGE_SIZE);
> +	tdx_clflush_page(hpa, PG_LEVEL_4K);
>   	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
>   }
>   
> @@ -157,13 +163,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr)
>   
>   static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
>   {
> -	clflush_cache_range(__va(tdr), PAGE_SIZE);
> +	tdx_clflush_page(tdr, PG_LEVEL_4K);
>   	return tdx_seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
>   }
>   
>   static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
>   {
> -	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
> +	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
>   	return tdx_seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
>   }
>   



* Re: [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
  2023-11-07 15:00 ` [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug() isaku.yamahata
@ 2023-11-16  8:18   ` Binbin Wu
  2023-11-17  0:23     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-16  8:18 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> Level info is needed in tdh_clflush_page() to generate the correct page
> size.

tdh_clflush_page() -> tdx_clflush_page()

>
> Besides, explicitly pass level info to SEAMCALL instead of assuming
> it's zero. It works naturally when 2MB support lands.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c     |  7 ++++---
>   arch/x86/kvm/vmx/tdx_ops.h | 19 ++++++++++++-------
>   2 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 8b58d91bda4e..2d5c86e06c5f 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1468,7 +1468,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   	union tdx_sept_entry entry;
>   	u64 err;
>   
> -	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
> +	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
>   	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
>   		tdx_unpin(kvm, pfn);
>   		return -EAGAIN;
> @@ -1497,6 +1497,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   			     enum pg_level level, kvm_pfn_t pfn)
>   {
> +	int tdx_level = pg_level_to_tdx_sept_level(level);
>   	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
>   	hpa_t hpa = pfn_to_hpa(pfn);
>   	gpa_t gpa = gfn_to_gpa(gfn);
> @@ -1531,8 +1532,8 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   	kvm_tdx->source_pa = INVALID_PAGE;
>   
>   	do {
> -		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
> -				       &out);
> +		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, tdx_level, hpa,
> +				       source_pa, &out);
>   		/*
>   		 * This path is executed during populating initial guest memory
>   		 * image. i.e. before running any vcpu.  Race is rare.
> diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> index e726102d3523..0f2df7198bde 100644
> --- a/arch/x86/kvm/vmx/tdx_ops.h
> +++ b/arch/x86/kvm/vmx/tdx_ops.h
> @@ -63,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
>   void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
>   #endif
>   
> +static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
> +{
> +	return tdx_level + 1;
> +}
> +
>   static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
>   {
>   	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
> @@ -104,11 +109,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
>   	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
>   }
>   
> -static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
> -				   struct tdx_module_args *out)
> +static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
> +				   hpa_t source, struct tdx_module_args *out)
>   {
> -	tdx_clflush_page(hpa, PG_LEVEL_4K);
> -	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
> +	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
> +	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source, out);
>   }

For TDH_MEM_PAGE_ADD, only 4K pages are supported; is this change necessary?
Or could huge pages be supported by TDH_MEM_PAGE_ADD in the future?

>   
>   static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
> @@ -143,11 +148,11 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
>   	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
>   }
>   
> -static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
> +static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
>   				   struct tdx_module_args *out)
>   {
> -	tdx_clflush_page(hpa, PG_LEVEL_4K);
> -	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
> +	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
> +	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0, out);
>   }
>   
>   static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,



* Re: [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page()
  2023-11-07 15:00 ` [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page() isaku.yamahata
@ 2023-11-16  8:57   ` Binbin Wu
  2023-11-17  0:36     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-16  8:57 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> Extend tdx_measure_page() to pass size info so that it can measure
> large page as well.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 2d5c86e06c5f..a728175c4a6d 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1434,13 +1434,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
>   	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
>   }
>   
> -static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
> +static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
IMHO, it's better to pass kvm page level instead of size here to align with
other APIs.

>   {
>   	struct tdx_module_args out;
>   	u64 err;
>   	int i;
>   
> -	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
> +	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);

If passed level instead of size, then no need to check KVM_HPAGE_SIZE(level)
against TDX_EXTENDMR_CHUNKSIZE

But same question: tdx_measure_page() is only used by tdh_mem_page_add(), so is
this change necessary?

> +
> +	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
>   		err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
>   		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
>   			pr_tdx_error(TDH_MR_EXTEND, err, &out);
> @@ -1544,7 +1546,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   		tdx_unpin(kvm, pfn);
>   		return -EIO;
>   	} else if (measure)
> -		tdx_measure_page(kvm_tdx, gpa);
> +		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
>   
>   	return 0;
>   



* Re: [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
  2023-11-16  8:18   ` Binbin Wu
@ 2023-11-17  0:23     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-17  0:23 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Thu, Nov 16, 2023 at 04:18:28PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> > diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> > index e726102d3523..0f2df7198bde 100644
> > --- a/arch/x86/kvm/vmx/tdx_ops.h
> > +++ b/arch/x86/kvm/vmx/tdx_ops.h
> > @@ -63,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
> >   void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
> >   #endif
> > +static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
> > +{
> > +	return tdx_level + 1;
> > +}
> > +
> >   static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
> >   {
> >   	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
> > @@ -104,11 +109,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
> >   	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
> >   }
> > -static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
> > -				   struct tdx_module_args *out)
> > +static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
> > +				   hpa_t source, struct tdx_module_args *out)
> >   {
> > -	tdx_clflush_page(hpa, PG_LEVEL_4K);
> > -	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
> > +	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
> > +	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source, out);
> >   }
> 
> For TDH_MEM_PAGE_ADD, only 4K pages are supported; is this change necessary?
> Or could huge pages be supported by TDH_MEM_PAGE_ADD in the future?


No and no. Will drop the argument. Nice catch.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>


* Re: [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page()
  2023-11-16  8:57   ` Binbin Wu
@ 2023-11-17  0:36     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-17  0:36 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Thu, Nov 16, 2023 at 04:57:26PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> 
> 
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > Extend tdx_measure_page() to pass size info so that it can measure
> > large page as well.
> > 
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> > ---
> >   arch/x86/kvm/vmx/tdx.c | 8 +++++---
> >   1 file changed, 5 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index 2d5c86e06c5f..a728175c4a6d 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -1434,13 +1434,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
> >   	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
> >   }
> > -static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
> > +static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
> IMHO, it's better to pass kvm page level instead of size here to align with
> other APIs.
> 
> >   {
> >   	struct tdx_module_args out;
> >   	u64 err;
> >   	int i;
> > -	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
> > +	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
> 
> If passed level instead of size, then no need to check KVM_HPAGE_SIZE(level)
> against TDX_EXTENDMR_CHUNKSIZE
> 
> But same question: tdx_measure_page() is only used by tdh_mem_page_add(), so is
> this change necessary?

You're right. As tdx_mem_page_add() is the only caller of tdx_measure_page(),
open-code it into tdx_mem_page_add() and drop this patch.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>


* Re: [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists
  2023-11-16  1:32   ` Binbin Wu
@ 2023-11-17  1:05     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-17  1:05 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Thu, Nov 16, 2023 at 09:32:22AM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> 
> 
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > Cannot map a private page as large page if any smaller mapping exists.
> > 
> > It has to wait for all the not-mapped smaller page to be mapped and
> > promote it to larger mapping.
> > 
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > ---
> >   arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 2c5257628881..d806574f7f2d 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >   	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
> >   		int r;
> > -		if (fault->nx_huge_page_workaround_enabled)
> > +		if (fault->nx_huge_page_workaround_enabled ||
> > +		    kvm_gfn_shared_mask(vcpu->kvm))
> As I mentioned in https://lore.kernel.org/kvm/fef75d54-e319-5170-5f76-f5abc4856315@linux.intel.com/,
> The change of this patch will not take effect.
> If "fault->nx_huge_page_workaround_enabled" is false, the condition
> "spte_to_child_sp(spte)->nx_huge_page_disallowed" will not be true.
> 
> IIUC, the function disallowed_hugepage_adjust() currently is only to handle
> nx_huge_page_workaround, it seems no special handling needed for TD.
> >   			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
> >   		/*

You're correct. Now that guest memfd memory attributes take care of large page
mapping, this patch is unnecessary.  Will drop this patch.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>


* Re: [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page()
  2023-11-07 15:00 ` [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page() isaku.yamahata
@ 2023-11-19  6:42   ` Binbin Wu
  2023-11-19  6:58     ` Binbin Wu
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-19  6:42 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> A 2MB large page can be tdh_mem_page_aug()'ed to TD directly. In this case,
> it needs to reclaim and clear the page as 2MB size.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c | 27 +++++++++++++++------------
>   1 file changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index a728175c4a6d..0fca863faeee 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -200,12 +200,13 @@ static void tdx_disassociate_vp_on_cpu(struct kvm_vcpu *vcpu)
>   	smp_call_function_single(cpu, tdx_disassociate_vp_arg, vcpu, 1);
>   }
>   
> -static void tdx_clear_page(unsigned long page_pa)
> +static void tdx_clear_page(unsigned long page_pa, int size)
Should use "unsigned long" instead of "int" for size to avoid implicit 
type conversion.
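
I.e., a prototype along the lines of (sketch):

	static void tdx_clear_page(unsigned long page_pa, unsigned long size)

so that size has the same type as the loop counter i.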

>   {
>   	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
>   	void *page = __va(page_pa);
>   	unsigned long i;
>   
> +	WARN_ON_ONCE(size % PAGE_SIZE);
>   	/*
>   	 * When re-assign one page from old keyid to a new keyid, MOVDIR64B is
>   	 * required to clear/write the page with new keyid to prevent integrity
> @@ -214,7 +215,7 @@ static void tdx_clear_page(unsigned long page_pa)
>   	 * clflush doesn't flush cache with HKID set.  The cache line could be
>   	 * poisoned (even without MKTME-i), clear the poison bit.
>   	 */
> -	for (i = 0; i < PAGE_SIZE; i += 64)
> +	for (i = 0; i < size; i += 64)
>   		movdir64b(page + i, zero_page);
>   	/*
>   	 * MOVDIR64B store uses WC buffer.  Prevent following memory reads
> @@ -223,7 +224,7 @@ static void tdx_clear_page(unsigned long page_pa)
>   	__mb();
>   }
>   
> -static int __tdx_reclaim_page(hpa_t pa)
> +static int __tdx_reclaim_page(hpa_t pa, enum pg_level level)
>   {
>   	struct tdx_module_args out;
>   	u64 err;
> @@ -241,17 +242,19 @@ static int __tdx_reclaim_page(hpa_t pa)
>   		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
>   		return -EIO;
>   	}
> +	/* out.r8 == tdx sept page level */
> +	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));
>   
>   	return 0;
>   }
>   
> -static int tdx_reclaim_page(hpa_t pa)
> +static int tdx_reclaim_page(hpa_t pa, enum pg_level level)
>   {
>   	int r;
>   
> -	r = __tdx_reclaim_page(pa);
> +	r = __tdx_reclaim_page(pa, level);
>   	if (!r)
> -		tdx_clear_page(pa);
> +		tdx_clear_page(pa, KVM_HPAGE_SIZE(level));
>   	return r;
>   }
>   
> @@ -265,7 +268,7 @@ static void tdx_reclaim_td_page(unsigned long td_page_pa)
>   	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
>   	 * cache doesn't need to be flushed again.
>   	 */
> -	if (tdx_reclaim_page(td_page_pa))
> +	if (tdx_reclaim_page(td_page_pa, PG_LEVEL_4K))
>   		/*
>   		 * Leak the page on failure:
>   		 * tdx_reclaim_page() returns an error if and only if there's an
> @@ -497,7 +500,7 @@ void tdx_vm_free(struct kvm *kvm)
>   
>   	if (!kvm_tdx->tdr_pa)
>   		return;
> -	if (__tdx_reclaim_page(kvm_tdx->tdr_pa))
> +	if (__tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K))
>   		return;
>   	/*
>   	 * TDX module maps TDR with TDX global HKID.  TDX module may access TDR
> @@ -510,7 +513,7 @@ void tdx_vm_free(struct kvm *kvm)
>   		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
>   		return;
>   	}
> -	tdx_clear_page(kvm_tdx->tdr_pa);
> +	tdx_clear_page(kvm_tdx->tdr_pa, PAGE_SIZE);
>   
>   	free_page((unsigned long)__va(kvm_tdx->tdr_pa));
>   	kvm_tdx->tdr_pa = 0;
> @@ -1597,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   		 * The HKID assigned to this TD was already freed and cache
>   		 * was already flushed. We don't have to flush again.
>   		 */
> -		err = tdx_reclaim_page(hpa);
> +		err = tdx_reclaim_page(hpa, level);
>   		if (KVM_BUG_ON(err, kvm))
>   			return -EIO;
>   		tdx_unpin(kvm, pfn);
> @@ -1630,7 +1633,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
>   		return -EIO;
>   	}
> -	tdx_clear_page(hpa);
> +	tdx_clear_page(hpa, PAGE_SIZE);
Should this be KVM_HPAGE_SIZE(level) instead of PAGE_SIZE?

>   	tdx_unpin(kvm, pfn);
>   	return 0;
>   }
> @@ -1742,7 +1745,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
>   	 * already flushed. We don't have to flush again.
>   	 */
>   	if (!is_hkid_assigned(kvm_tdx))
> -		return tdx_reclaim_page(__pa(private_spt));
> +		return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K);
>   
>   	/*
>   	 * free_private_spt() is (obviously) called when a shadow page is being


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page()
  2023-11-19  6:42   ` Binbin Wu
@ 2023-11-19  6:58     ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-19  6:58 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/19/2023 2:42 PM, Binbin Wu wrote:
>
>
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
>> @@ -1597,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>>            * The HKID assigned to this TD was already freed and cache
>>            * was already flushed. We don't have to flush again.
>>            */
>> -        err = tdx_reclaim_page(hpa);
>> +        err = tdx_reclaim_page(hpa, level);
>>           if (KVM_BUG_ON(err, kvm))
>>               return -EIO;
>>           tdx_unpin(kvm, pfn);
>> @@ -1630,7 +1633,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>>           pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
>>           return -EIO;
>>       }
>> -    tdx_clear_page(hpa);
>> +    tdx_clear_page(hpa, PAGE_SIZE);
> Should this be KVM_HPAGE_SIZE(level) instead of PAGE_SIZE?

OK, please ignore this comment; I see this is handled by the following patch.

>
>>       tdx_unpin(kvm, pfn);
>>       return 0;
>>   }
>> @@ -1742,7 +1745,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
>>        * already flushed. We don't have to flush again.
>>        */
>>       if (!is_hkid_assigned(kvm_tdx))
>> -        return tdx_reclaim_page(__pa(private_spt));
>> +        return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K);
>>         /*
>>       * free_private_spt() is (obviously) called when a shadow page is being
>
>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code
  2023-11-07 15:00 ` [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code isaku.yamahata
@ 2023-11-20 10:54   ` Binbin Wu
  2023-11-21 10:02     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-20 10:54 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT.
> And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest.
>
> 1. KVM can map it with 4KB page while TD guest wants to accept 2MB page.
>
>    TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept
s/geust/guest

>    4KB size.
>
> 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page.
>
>    KVM needs to honor it because
>    a) there is no way to tell guest KVM maps it as 2MB size. And
>    b) guest accepts it in 4KB size since guest knows some other 4KB page
>       in the same 2MB range will be used as shared page.
>
> For case 2, it need to pass desired page level to MMU's
> page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose.

The level info is needed not only for case 2; KVM also needs it so that
it can map a 2MB page when the TD guest wants to accept a 2MB page.


>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/include/asm/kvm_host.h |  3 +++
>   arch/x86/kvm/mmu/mmu.c          |  5 +++++
>   arch/x86/kvm/vmx/common.h       |  6 +++++-
>   arch/x86/kvm/vmx/tdx.c          | 15 ++++++++++++++-
>   arch/x86/kvm/vmx/tdx.h          | 19 +++++++++++++++++++
>   arch/x86/kvm/vmx/vmx.c          |  2 +-
>   6 files changed, 47 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index edcafcd650db..eed36c1eedb7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -261,6 +261,8 @@ enum x86_intercept_stage;
>   #define PFERR_FETCH_BIT 4
>   #define PFERR_PK_BIT 5
>   #define PFERR_SGX_BIT 15
> +#define PFERR_LEVEL_START_BIT 29
> +#define PFERR_LEVEL_END_BIT 31
>   #define PFERR_GUEST_FINAL_BIT 32
>   #define PFERR_GUEST_PAGE_BIT 33
>   #define PFERR_GUEST_ENC_BIT 34
> @@ -273,6 +275,7 @@ enum x86_intercept_stage;
>   #define PFERR_FETCH_MASK	BIT(PFERR_FETCH_BIT)
>   #define PFERR_PK_MASK		BIT(PFERR_PK_BIT)
>   #define PFERR_SGX_MASK		BIT(PFERR_SGX_BIT)
> +#define PFERR_LEVEL_MASK	GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
>   #define PFERR_GUEST_FINAL_MASK	BIT_ULL(PFERR_GUEST_FINAL_BIT)
>   #define PFERR_GUEST_PAGE_MASK	BIT_ULL(PFERR_GUEST_PAGE_BIT)
>   #define PFERR_GUEST_ENC_MASK	BIT_ULL(PFERR_GUEST_ENC_BIT)
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index eb17a508c5d1..265177cedf37 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4615,6 +4615,11 @@ bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma)
>   
>   int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>   {
> +	u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
> +
> +	if (err_level)
> +		fault->max_level = min(fault->max_level, err_level);
> +
>   	/*
>   	 * If the guest's MTRRs may be used to compute the "real" memtype,
>   	 * restrict the mapping level to ensure KVM uses a consistent memtype
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> index 027aa4175d2c..bb00433932ee 100644
> --- a/arch/x86/kvm/vmx/common.h
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -67,7 +67,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
>   }
>   
>   static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
> -					     unsigned long exit_qualification)
> +					     unsigned long exit_qualification,
> +					     int err_page_level)
>   {
>   	u64 error_code;
>   
> @@ -90,6 +91,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
>   	if (kvm_is_private_gpa(vcpu->kvm, gpa))
>   		error_code |= PFERR_GUEST_ENC_MASK;
>   
> +	if (err_page_level > 0)
> +		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
> +
>   	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
>   }
>   
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 31598b84811f..e4167f08b58b 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1803,7 +1803,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
>   
>   static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
>   {
> +	union tdx_ext_exit_qualification ext_exit_qual;
>   	unsigned long exit_qual;
> +	int err_page_level = 0;
> +
> +	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
> +
> +	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
Can we add an unlikely() hint here?
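
E.g., a sketch of the suggested hint on the code above:

	if (unlikely(ext_exit_qual.type >= NUM_EXT_EXIT_QUAL)) {
		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
		       tdexit_gpa(vcpu), ext_exit_qual.type);
		kvm_vm_bugged(vcpu->kvm);
		return 0;
	}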

> +		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
> +			tdexit_gpa(vcpu), ext_exit_qual.type);
> +		kvm_vm_bugged(vcpu->kvm);
> +		return 0;
> +	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
> +		err_page_level = tdx_sept_level_to_pg_level(ext_exit_qual.req_sept_level);
> +	}
>   
>   	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
>   		/*
> @@ -1830,7 +1843,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
>   	}
>   
>   	trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual);
> -	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
> +	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
>   }
>   
>   static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index 54c3f6b83571..37ee944c36a1 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -72,6 +72,25 @@ union tdx_exit_reason {
>   	u64 full;
>   };
>   
> +union tdx_ext_exit_qualification {
> +	struct {
> +		u64 type		: 4;
> +		u64 reserved0		: 28;
> +		u64 req_sept_level	: 3;
> +		u64 err_sept_level	: 3;
> +		u64 err_sept_state	: 8;
> +		u64 err_sept_is_leaf	: 1;
> +		u64 reserved1		: 17;
> +	};
> +	u64 full;
> +};
> +
> +enum tdx_ext_exit_qualification_type {
> +	EXT_EXIT_QUAL_NONE,
> +	EXT_EXIT_QUAL_ACCEPT,
> +	NUM_EXT_EXIT_QUAL,
> +};
> +
>   struct vcpu_tdx {
>   	struct kvm_vcpu	vcpu;
>   
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 28732925792e..ae9ba0731521 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>   	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
>   		return kvm_emulate_instruction(vcpu, 0);
>   
> -	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
> +	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
>   }
>   
>   static int handle_ept_misconfig(struct kvm_vcpu *vcpu)


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
  2023-11-07 15:00 ` [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs isaku.yamahata
@ 2023-11-20 11:05   ` Binbin Wu
  2023-11-21 10:04     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-20 11:05 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> When kvm_faultin_pfn(), it doesn't have the info regarding which page level
> will the gfn be mapped at. Hence it doesn't know to pin a 4K page or a
> 2M page.
>
> Move the guest private pages pinning logic right before
> TDH_MEM_PAGE_ADD/AUG() since at that time it knows the page level info.
This patch looks strange; the code has nothing to do with the shortlog.
It seems the change in this patch has already been covered by 06/16.

Did something go wrong when formatting the patch?

>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c | 15 ++++++++-------
>   1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index e4167f08b58b..7b81811eb404 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1454,7 +1454,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
>   	}
>   }
>   
> -static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
> +static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
> +		      enum pg_level level)
>   {
>   	int i;
>   
> @@ -1476,7 +1477,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   
>   	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
>   	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EAGAIN;
>   	}
>   	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
> @@ -1493,7 +1494,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   	}
>   	if (KVM_BUG_ON(err, kvm)) {
>   		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EIO;
>   	}
>   
> @@ -1529,7 +1530,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   	 * always uses vcpu 0's page table and protected by vcpu->mutex).
>   	 */
>   	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EINVAL;
>   	}
>   
> @@ -1547,7 +1548,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
>   	if (KVM_BUG_ON(err, kvm)) {
>   		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EIO;
>   	} else if (measure)
>   		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
> @@ -1600,7 +1601,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   		err = tdx_reclaim_page(hpa, level);
>   		if (KVM_BUG_ON(err, kvm))
>   			return -EIO;
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return 0;
>   	}
>   
> @@ -1633,7 +1634,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   			r = -EIO;
>   		} else {
>   			tdx_clear_page(hpa, PAGE_SIZE);
> -			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
> +			tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
>   		}
>   		hpa += PAGE_SIZE;
>   	}


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler
  2023-11-07 15:00 ` [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler isaku.yamahata
@ 2023-11-20 11:24   ` Binbin Wu
  2023-11-21 10:27     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-20 11:24 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT.
> And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest.
>
> 1. KVM can map it with 4KB page while TD guest wants to accept 2MB page.
>
>    TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept
>    4KB size.
>
> 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page.
>
>    KVM needs to honor it because
>    a) there is no way to tell guest KVM maps it as 2MB size. And
>    b) guest accepts it in 4KB size since guest knows some other 4KB page
>       in the same 2MB range will be used as shared page.
>
> For case 2, it need to pass desired page level to MMU's
> page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose.
The shortlog is the same as patch 7/16..., I am a bit confused by the 
structure of this patch series...
Can this patch be squashed into 7/16?

>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/include/asm/kvm_host.h |  2 ++
>   arch/x86/kvm/vmx/common.h       |  2 +-
>   arch/x86/kvm/vmx/tdx.c          |  7 ++++++-
>   arch/x86/kvm/vmx/tdx.h          | 19 -------------------
>   arch/x86/kvm/vmx/tdx_arch.h     | 19 +++++++++++++++++++
>   arch/x86/kvm/vmx/vmx.c          |  2 +-
>   6 files changed, 29 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index eed36c1eedb7..c16823f3326e 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -285,6 +285,8 @@ enum x86_intercept_stage;
>   				 PFERR_WRITE_MASK |		\
>   				 PFERR_PRESENT_MASK)
>   
> +#define PFERR_LEVEL(err_code)	(((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
It's defined, but never used?


> +
>   /* apic attention bits */
>   #define KVM_APIC_CHECK_VAPIC	0
>   /*
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> index bb00433932ee..787f59c44abc 100644
> --- a/arch/x86/kvm/vmx/common.h
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -91,7 +91,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
>   	if (kvm_is_private_gpa(vcpu->kvm, gpa))
>   		error_code |= PFERR_GUEST_ENC_MASK;
>   
> -	if (err_page_level > 0)
> +	if (err_page_level > PG_LEVEL_NONE)
>   		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
>   
>   	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 7b81811eb404..c614ab20c191 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -2713,6 +2713,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
>   	struct kvm_tdx_init_mem_region region;
>   	struct kvm_vcpu *vcpu;
>   	struct page *page;
> +	u64 error_code;
>   	int idx, ret = 0;
>   	bool added = false;
>   
> @@ -2770,7 +2771,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
>   		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
>   				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
>   
> -		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
> +		/* TODO: large page support. */
> +		error_code = TDX_SEPT_PFERR;
> +		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
> +			PFERR_LEVEL_MASK;
> +		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
>   					   PG_LEVEL_4K);
>   		put_page(page);
>   		if (ret)
> diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
> index 37ee944c36a1..54c3f6b83571 100644
> --- a/arch/x86/kvm/vmx/tdx.h
> +++ b/arch/x86/kvm/vmx/tdx.h
> @@ -72,25 +72,6 @@ union tdx_exit_reason {
>   	u64 full;
>   };
>   
> -union tdx_ext_exit_qualification {
> -	struct {
> -		u64 type		: 4;
> -		u64 reserved0		: 28;
> -		u64 req_sept_level	: 3;
> -		u64 err_sept_level	: 3;
> -		u64 err_sept_state	: 8;
> -		u64 err_sept_is_leaf	: 1;
> -		u64 reserved1		: 17;
> -	};
> -	u64 full;
> -};
> -
> -enum tdx_ext_exit_qualification_type {
> -	EXT_EXIT_QUAL_NONE,
> -	EXT_EXIT_QUAL_ACCEPT,
> -	NUM_EXT_EXIT_QUAL,
> -};
> -
>   struct vcpu_tdx {
>   	struct kvm_vcpu	vcpu;
>   
> diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
> index 9f93250d22b9..ba41fefa47ee 100644
> --- a/arch/x86/kvm/vmx/tdx_arch.h
> +++ b/arch/x86/kvm/vmx/tdx_arch.h
> @@ -218,4 +218,23 @@ union tdx_sept_level_state {
>   	u64 raw;
>   };
>   
> +union tdx_ext_exit_qualification {
> +	struct {
> +		u64 type		:  4;
> +		u64 reserved0		: 28;
> +		u64 req_sept_level	:  3;
> +		u64 err_sept_level	:  3;
> +		u64 err_sept_state	:  8;
> +		u64 err_sept_is_leaf	:  1;
> +		u64 reserved1		: 17;
> +	};
> +	u64 full;
> +};
> +
> +enum tdx_ext_exit_qualification_type {
> +	EXT_EXIT_QUAL_NONE = 0,
> +	EXT_EXIT_QUAL_ACCEPT,
Since this value should be fixed to 1, maybe better to initialize it to
1 explicitly to be future-proof?

> +	NUM_EXT_EXIT_QUAL,
> +};
> +
>   #endif /* __KVM_X86_TDX_ARCH_H */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ae9ba0731521..fb3913df6a5d 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
>   	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
>   		return kvm_emulate_instruction(vcpu, 0);
>   
> -	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
> +	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
>   }
>   
>   static int handle_ept_misconfig(struct kvm_vcpu *vcpu)


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf
  2023-11-07 15:00 ` [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf isaku.yamahata
@ 2023-11-21  9:57   ` Binbin Wu
  2023-11-21 11:00     ` Isaku Yamahata
  0 siblings, 1 reply; 39+ messages in thread
From: Binbin Wu @ 2023-11-21  9:57 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> When TDX enabled, a large page cannot be zapped if it contains mixed
> pages. In this case, it has to split the large page.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/Kconfig            |  1 +
>   arch/x86/kvm/mmu/mmu.c          |  6 +--
>   arch/x86/kvm/mmu/mmu_internal.h |  9 +++++
>   arch/x86/kvm/mmu/tdp_mmu.c      | 68 +++++++++++++++++++++++++++++++--
>   4 files changed, 78 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index b0f103641547..557479737962 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -93,6 +93,7 @@ config KVM_INTEL
>   	tristate "KVM for Intel (and compatible) processors support"
>   	depends on KVM && IA32_FEAT_CTL
>   	select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST
> +	select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST
>   	select KVM_PRIVATE_MEM if INTEL_TDX_HOST
>   	help
>   	  Provides support for KVM on processors equipped with Intel's VT
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 265177cedf37..0bf043812644 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -7463,8 +7463,8 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
>   	return kvm_unmap_gfn_range(kvm, range);
>   }
>   
> -static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> -				int level)
> +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> +			     int level)
>   {
>   	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
>   }
> @@ -7491,7 +7491,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
>   		return kvm_range_has_memory_attributes(kvm, start, end, attrs);
>   
>   	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
> -		if (hugepage_test_mixed(slot, gfn, level - 1) ||
> +		if (kvm_hugepage_test_mixed(slot, gfn, level - 1) ||
>   		    attrs != kvm_get_memory_attributes(kvm, gfn))
>   			return false;
>   	}
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 1da98be74ad2..653e96769956 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -460,4 +460,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>   void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>   void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>   
> +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
> +#else
> +static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
> +{
> +	return false;
> +}
> +#endif
> +
>   #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7873e9ee82ad..a209a67decae 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -964,6 +964,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>   	return true;
>   }
>   
> +
> +static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> +						       struct tdp_iter *iter,
> +						       bool shared);
> +
> +static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
> +				   struct kvm_mmu_page *sp, bool shared);
> +
>   /*
>    * If can_yield is true, will release the MMU lock and reschedule if the
>    * scheduler needs the CPU or there is contention on the MMU lock. If this
> @@ -975,13 +983,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   			      gfn_t start, gfn_t end, bool can_yield, bool flush,
>   			      bool zap_private)
>   {
> +	bool is_private = is_private_sp(root);
> +	struct kvm_mmu_page *split_sp = NULL;
>   	struct tdp_iter iter;
>   
>   	end = min(end, tdp_mmu_max_gfn_exclusive());
>   
>   	lockdep_assert_held_write(&kvm->mmu_lock);
>   
> -	WARN_ON_ONCE(zap_private && !is_private_sp(root));
> +	WARN_ON_ONCE(zap_private && !is_private);
>   	if (!zap_private && is_private_sp(root))
Can use is_private instead of is_private_sp(root) here as well.

>   		return false;
>   
> @@ -1006,12 +1016,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>   		    !is_last_spte(iter.old_spte, iter.level))
>   			continue;
>   
> +		if (is_private && kvm_gfn_shared_mask(kvm) &&
> +		    is_large_pte(iter.old_spte)) {
> +			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
> +			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
> +			struct kvm_memory_slot *slot;
> +			struct kvm_mmu_page *sp;
> +
> +			slot = gfn_to_memslot(kvm, gfn);
> +			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
> +			    (gfn & mask) < start ||
> +			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
> +				WARN_ON_ONCE(!can_yield);
> +				if (split_sp) {
> +					sp = split_sp;
> +					split_sp = NULL;
> +					sp->role = tdp_iter_child_role(&iter);
> +				} else {
> +					WARN_ON(iter.yielded);
> +					if (flush && can_yield) {
> +						kvm_flush_remote_tlbs(kvm);
> +						flush = false;
> +					}
Is it necessary to do the flush here?

> +					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
> +					if (iter.yielded) {
> +						split_sp = sp;
> +						continue;
> +					}
> +				}
> +				KVM_BUG_ON(!sp, kvm);
> +
> +				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
> +				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
> +					kvm_flush_remote_tlbs(kvm);
> +					flush = false;
Why does it need to flush the TLB immediately if tdp_mmu_split_huge_page() fails?

Also, when the KVM MMU write lock is held, it seems tdp_mmu_split_huge_page()
will not fail. But assuming this condition can be triggered: since sp is a
local variable, it will lose its value after the continue, and split_sp is
also NULL, so it will try to allocate a new sp. Memory leak here?

> +					/* force retry on this gfn. */
> +					iter.yielded = true;
> +				} else
> +					flush = true;
> +				continue;
> +			}
> +		}
> +
>   		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
>   		flush = true;
>   	}
>   
>   	rcu_read_unlock();
>   
> +	if (split_sp) {
> +		WARN_ON(!can_yield);
> +		if (flush) {
> +			kvm_flush_remote_tlbs(kvm);
> +			flush = false;
> +		}
Same here, why do we need to do the flush here?
Can we delay it until the caller does the flush?

> +
> +		write_unlock(&kvm->mmu_lock);
> +		tdp_mmu_free_sp(split_sp);
> +		write_lock(&kvm->mmu_lock);
> +	}
> +
>   	/*
>   	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
>   	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
> @@ -1606,8 +1670,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>   
>   	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
>   		   is_private_sptep(iter->sptep), kvm);
> -	/* TODO: Large page isn't supported for private SPTE yet. */
> -	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
>   
>   	/*
>   	 * Since we are allocating while under the MMU lock we have to be


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code
  2023-11-20 10:54   ` Binbin Wu
@ 2023-11-21 10:02     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-21 10:02 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Mon, Nov 20, 2023 at 06:54:07PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> 
> 
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT.
> > And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest.
> > 
> > 1. KVM can map it with 4KB page while TD guest wants to accept 2MB page.
> > 
> >    TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept
> s/geust/guest
> 
> >    4KB size.
> > 
> > 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page.
> > 
> >    KVM needs to honor it because
> >    a) there is no way to tell guest KVM maps it as 2MB size. And
> >    b) guest accepts it in 4KB size since guest knows some other 4KB page
> >       in the same 2MB range will be used as shared page.
> > 
> > For case 2, it need to pass desired page level to MMU's
> > page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose.
> 
> The level info is needed not only for case 2; KVM also needs it so that
> it can map a 2MB page when the TD guest wants to accept a 2MB page.

"MMU's page_fault_handler" = KVM MMU page fault handler, isn't it?
I'll replace it with KVM MMU page fault handler for clarity.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
  2023-11-20 11:05   ` Binbin Wu
@ 2023-11-21 10:04     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-21 10:04 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Mon, Nov 20, 2023 at 07:05:39PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> 
> 
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > When kvm_faultin_pfn(), it doesn't have the info regarding which page level
> > will the gfn be mapped at. Hence it doesn't know to pin a 4K page or a
> > 2M page.
> > 
> > Move the guest private pages pinning logic right before
> > TDH_MEM_PAGE_ADD/AUG() since at that time it knows the page level info.
> This patch looks strange; the code has nothing to do with the shortlog.
> It seems the change in this patch has already been covered by 06/16.
> 
> Did something go wrong when formatting the patch?

Oh, right. This patch doesn't make sense anymore. I'll drop this patch.
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler
  2023-11-20 11:24   ` Binbin Wu
@ 2023-11-21 10:27     ` Isaku Yamahata
  0 siblings, 0 replies; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-21 10:27 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Mon, Nov 20, 2023 at 07:24:51PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> 
> 
> On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> > From: Xiaoyao Li <xiaoyao.li@intel.com>
> > 
> > For TDX, EPT violation can happen when TDG.MEM.PAGE.ACCEPT.
> > And TDG.MEM.PAGE.ACCEPT contains the desired accept page level of TD guest.
> > 
> > 1. KVM can map it with 4KB page while TD guest wants to accept 2MB page.
> > 
> >    TD geust will get TDX_PAGE_SIZE_MISMATCH and it should try to accept
> >    4KB size.
> > 
> > 2. KVM can map it with 2MB page while TD guest wants to accept 4KB page.
> > 
> >    KVM needs to honor it because
> >    a) there is no way to tell guest KVM maps it as 2MB size. And
> >    b) guest accepts it in 4KB size since guest knows some other 4KB page
> >       in the same 2MB range will be used as shared page.
> > 
> > For case 2, it need to pass desired page level to MMU's
> > page_fault_handler. Use bit 29:31 of kvm PF error code for this purpose.
> The shortlog is the same as patch 7/16..., I am a bit confused by the
> structure of this patch series...
> Can this patch be squashed into 7/16?

Patch 7 should include the changes to arch/x86/include/asm/kvm_host.h and
arch/x86/kvm/mmu/mmu.c, and use PFERR_LEVEL().
Patch 9 should include the changes to arch/x86/kvm/vmx/*.

I'll fix them.


> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index eed36c1eedb7..c16823f3326e 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -285,6 +285,8 @@ enum x86_intercept_stage;
> >   				 PFERR_WRITE_MASK |		\
> >   				 PFERR_PRESENT_MASK)
> > +#define PFERR_LEVEL(err_code)	(((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
> It's defined, but never used?

I'll make kvm_tdp_page_fault() use it.
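
Something along these lines in kvm_tdp_page_fault(), as a sketch of the planned
cleanup:

	u8 err_level = PFERR_LEVEL(fault->error_code);

	if (err_level)
		fault->max_level = min(fault->max_level, err_level);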


> > diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
> > index 9f93250d22b9..ba41fefa47ee 100644
> > --- a/arch/x86/kvm/vmx/tdx_arch.h
> > +++ b/arch/x86/kvm/vmx/tdx_arch.h
> > @@ -218,4 +218,23 @@ union tdx_sept_level_state {
> >   	u64 raw;
> >   };
> > +union tdx_ext_exit_qualification {
> > +	struct {
> > +		u64 type		:  4;
> > +		u64 reserved0		: 28;
> > +		u64 req_sept_level	:  3;
> > +		u64 err_sept_level	:  3;
> > +		u64 err_sept_state	:  8;
> > +		u64 err_sept_is_leaf	:  1;
> > +		u64 reserved1		: 17;
> > +	};
> > +	u64 full;
> > +};
> > +
> > +enum tdx_ext_exit_qualification_type {
> > +	EXT_EXIT_QUAL_NONE = 0,
> > +	EXT_EXIT_QUAL_ACCEPT,
> Since this value should be fixed to 1, maybe better to initialize it to 1
> explicitly to be future-proof?

ok.
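
I.e. (sketch):

enum tdx_ext_exit_qualification_type {
	EXT_EXIT_QUAL_NONE	= 0,
	EXT_EXIT_QUAL_ACCEPT	= 1,
	NUM_EXT_EXIT_QUAL,
};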
-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf
  2023-11-21  9:57   ` Binbin Wu
@ 2023-11-21 11:00     ` Isaku Yamahata
  2023-11-22  2:18       ` Binbin Wu
  0 siblings, 1 reply; 39+ messages in thread
From: Isaku Yamahata @ 2023-11-21 11:00 UTC (permalink / raw)
  To: Binbin Wu
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li,
	isaku.yamahata

On Tue, Nov 21, 2023 at 05:57:28PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 7873e9ee82ad..a209a67decae 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -964,6 +964,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
> >   	return true;
> >   }
> > +
> > +static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> > +						       struct tdp_iter *iter,
> > +						       bool shared);
> > +
> > +static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
> > +				   struct kvm_mmu_page *sp, bool shared);
> > +
> >   /*
> >    * If can_yield is true, will release the MMU lock and reschedule if the
> >    * scheduler needs the CPU or there is contention on the MMU lock. If this
> > @@ -975,13 +983,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> >   			      gfn_t start, gfn_t end, bool can_yield, bool flush,
> >   			      bool zap_private)
> >   {
> > +	bool is_private = is_private_sp(root);
> > +	struct kvm_mmu_page *split_sp = NULL;
> >   	struct tdp_iter iter;
> >   	end = min(end, tdp_mmu_max_gfn_exclusive());
> >   	lockdep_assert_held_write(&kvm->mmu_lock);
> > -	WARN_ON_ONCE(zap_private && !is_private_sp(root));
> > +	WARN_ON_ONCE(zap_private && !is_private);
> >   	if (!zap_private && is_private_sp(root))
> Can use is_private instead of is_private_sp(root) here as well.

I'll update it.

> 
> >   		return false;
> > @@ -1006,12 +1016,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
> >   		    !is_last_spte(iter.old_spte, iter.level))
> >   			continue;
> > +		if (is_private && kvm_gfn_shared_mask(kvm) &&
> > +		    is_large_pte(iter.old_spte)) {
> > +			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
> > +			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
> > +			struct kvm_memory_slot *slot;
> > +			struct kvm_mmu_page *sp;
> > +
> > +			slot = gfn_to_memslot(kvm, gfn);
> > +			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
> > +			    (gfn & mask) < start ||
> > +			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
> > +				WARN_ON_ONCE(!can_yield);
> > +				if (split_sp) {
> > +					sp = split_sp;
> > +					split_sp = NULL;
> > +					sp->role = tdp_iter_child_role(&iter);
> > +				} else {
> > +					WARN_ON(iter.yielded);
> > +					if (flush && can_yield) {
> > +						kvm_flush_remote_tlbs(kvm);
> > +						flush = false;
> > +					}
> Is it necessary to do the flush here?

Because tdp_mmu_alloc_sp_for_split() may unlock mmu_lock and block.
While we are blocked, another thread can operate on the KVM MMU and get
confused by the stale TLB entries.


> > +					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
> > +					if (iter.yielded) {
> > +						split_sp = sp;
> > +						continue;
> > +					}
> > +				}
> > +				KVM_BUG_ON(!sp, kvm);
> > +
> > +				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
> > +				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
> > +					kvm_flush_remote_tlbs(kvm);
> > +					flush = false;
> Why does it need to flush the TLB immediately if tdp_mmu_split_huge_page() fails?

Hmm, we don't need it.  When breaking up the page table, we need to flush the TLB
before issuing TDH.MEM.PAGE.DEMOTE(), not after it.  Will remove those two lines.


> Also, when the KVM MMU write lock is held, it seems tdp_mmu_split_huge_page()
> will not fail.

This can happen with TDX_OPERAND_BUSY when the secure-EPT tree lock is contended
with other vcpus' TDH.VP.ENTER().  TDH.VP.ENTER() can take an exclusive lock on the secure-EPT.


> But assuming this condition can be triggered: since sp is a local
> variable, it will lose its value after the continue, and split_sp is
> also NULL, so it will try to allocate a new sp. Memory leak here?

Nice catch. I'll add split_sp = sp;
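
For clarity, a rough sketch of how that error path could end up looking with
both changes (keep the allocated sp for the retry and drop the immediate
flush); not the final code:

	tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
	if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
		/*
		 * Keep the sp so that the retry of this gfn doesn't
		 * allocate (and leak) another one.  The TLB flush is
		 * done before TDH.MEM.PAGE.DEMOTE, not here.
		 */
		split_sp = sp;
		/* force retry on this gfn. */
		iter.yielded = true;
	} else {
		flush = true;
	}
	continue;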


> > +					/* force retry on this gfn. */
> > +					iter.yielded = true;
> > +				} else
> > +					flush = true;
> > +				continue;
> > +			}
> > +		}
> > +
> >   		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
> >   		flush = true;
> >   	}
> >   	rcu_read_unlock();
> > +	if (split_sp) {
> > +		WARN_ON(!can_yield);
> > +		if (flush) {
> > +			kvm_flush_remote_tlbs(kvm);
> > +			flush = false;
> > +		}
> Same here, why do we need to do the flush here?
> Can we delay it until the caller does the flush?

No. Because we unlock mmu_lock and may block when freeing memory.


> > +
> > +		write_unlock(&kvm->mmu_lock);
> > +		tdp_mmu_free_sp(split_sp);
> > +		write_lock(&kvm->mmu_lock);
> > +	}
> > +
> >   	/*
> >   	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
> >   	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
> > @@ -1606,8 +1670,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
> >   	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
> >   		   is_private_sptep(iter->sptep), kvm);
> > -	/* TODO: Large page isn't supported for private SPTE yet. */
> > -	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
> >   	/*
> >   	 * Since we are allocating while under the MMU lock we have to be
> 
> 

-- 
Isaku Yamahata <isaku.yamahata@linux.intel.com>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf
  2023-11-21 11:00     ` Isaku Yamahata
@ 2023-11-22  2:18       ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-22  2:18 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: isaku.yamahata, kvm, linux-kernel, isaku.yamahata, Paolo Bonzini,
	erdemaktas, Sean Christopherson, Sagi Shahar, David Matlack,
	Kai Huang, Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/21/2023 7:00 PM, Isaku Yamahata wrote:
> On Tue, Nov 21, 2023 at 05:57:28PM +0800,
> Binbin Wu <binbin.wu@linux.intel.com> wrote:
>
>>> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
>>> index 7873e9ee82ad..a209a67decae 100644
>>> --- a/arch/x86/kvm/mmu/tdp_mmu.c
>>> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
>>> @@ -964,6 +964,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
>>>    	return true;
>>>    }
>>> +
>>> +static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>>> +						       struct tdp_iter *iter,
>>> +						       bool shared);
>>> +
>>> +static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
>>> +				   struct kvm_mmu_page *sp, bool shared);
>>> +
>>>    /*
>>>     * If can_yield is true, will release the MMU lock and reschedule if the
>>>     * scheduler needs the CPU or there is contention on the MMU lock. If this
>>> @@ -975,13 +983,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>>>    			      gfn_t start, gfn_t end, bool can_yield, bool flush,
>>>    			      bool zap_private)
>>>    {
>>> +	bool is_private = is_private_sp(root);
>>> +	struct kvm_mmu_page *split_sp = NULL;
>>>    	struct tdp_iter iter;
>>>    	end = min(end, tdp_mmu_max_gfn_exclusive());
>>>    	lockdep_assert_held_write(&kvm->mmu_lock);
>>> -	WARN_ON_ONCE(zap_private && !is_private_sp(root));
>>> +	WARN_ON_ONCE(zap_private && !is_private);
>>>    	if (!zap_private && is_private_sp(root))
>> Can use is_private instead of is_private_sp(root) here as well.
> I'll update it.
>
>>>    		return false;
>>> @@ -1006,12 +1016,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>>>    		    !is_last_spte(iter.old_spte, iter.level))
>>>    			continue;
>>> +		if (is_private && kvm_gfn_shared_mask(kvm) &&
>>> +		    is_large_pte(iter.old_spte)) {
>>> +			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
>>> +			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
>>> +			struct kvm_memory_slot *slot;
>>> +			struct kvm_mmu_page *sp;
>>> +
>>> +			slot = gfn_to_memslot(kvm, gfn);
>>> +			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
>>> +			    (gfn & mask) < start ||
>>> +			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
>>> +				WARN_ON_ONCE(!can_yield);
>>> +				if (split_sp) {
>>> +					sp = split_sp;
>>> +					split_sp = NULL;
>>> +					sp->role = tdp_iter_child_role(&iter);
>>> +				} else {
>>> +					WARN_ON(iter.yielded);
>>> +					if (flush && can_yield) {
>>> +						kvm_flush_remote_tlbs(kvm);
>>> +						flush = false;
>>> +					}
>> Is it necessary to do the flush here?
> Because tdp_mmu_alloc_sp_for_split() may unlock mmu_lock and block.
> While we are blocked, another thread can operate on the KVM MMU and get
> confused by the stale TLB entries.
>
>
>>> +					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
>>> +					if (iter.yielded) {
>>> +						split_sp = sp;
>>> +						continue;
>>> +					}
>>> +				}
>>> +				KVM_BUG_ON(!sp, kvm);
>>> +
>>> +				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
>>> +				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
>>> +					kvm_flush_remote_tlbs(kvm);
>>> +					flush = false;
>> Why does it need to flush the TLB immediately if tdp_mmu_split_huge_page() fails?
> Hmm, we don't need it.  When breaking up the page table, we need to flush the TLB
> before issuing TDH.MEM.PAGE.DEMOTE(), not after it.  Will remove those two lines.
>
>
>> Also, when the KVM MMU write lock is held, it seems tdp_mmu_split_huge_page()
>> will not fail.
> This can happen with TDX_OPERAND_BUSY when the secure-EPT tree lock is contended
> with other vcpus' TDH.VP.ENTER().  TDH.VP.ENTER() can take an exclusive lock on the secure-EPT.
>
>
>> But assuming this condition can be triggered: since sp is a local
>> variable, it will lose its value after the continue, and split_sp is
>> also NULL, so it will try to allocate a new sp. Memory leak here?
> Nice catch. I'll add split_sp = sp;
>
>
>>> +					/* force retry on this gfn. */
>>> +					iter.yielded = true;
>>> +				} else
>>> +					flush = true;
>>> +				continue;
>>> +			}
>>> +		}
>>> +
>>>    		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
>>>    		flush = true;
>>>    	}
>>>    	rcu_read_unlock();
>>> +	if (split_sp) {
>>> +		WARN_ON(!can_yield);
>>> +		if (flush) {
>>> +			kvm_flush_remote_tlbs(kvm);
>>> +			flush = false;
>>> +		}
>> Same here, why do we need to do the flush here?
>> Can we delay it until the caller does the flush?
> No. Because we unlock mmu_lock and may block when freeing memory.
But I don't see where it may block while freeing the memory.
Did I miss anything?


>
>>> +
>>> +		write_unlock(&kvm->mmu_lock);
>>> +		tdp_mmu_free_sp(split_sp);
>>> +		write_lock(&kvm->mmu_lock);
>>> +	}
>>> +
>>>    	/*
>>>    	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
>>>    	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
>>> @@ -1606,8 +1670,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>>>    	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
>>>    		   is_private_sptep(iter->sptep), kvm);
>>> -	/* TODO: Large page isn't supported for private SPTE yet. */
>>> -	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
>>>    	/*
>>>    	 * Since we are allocating while under the MMU lock we have to be
>>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared
  2023-11-07 15:00 ` [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared isaku.yamahata
@ 2023-11-22  5:45   ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-22  5:45 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang, Xiaoyao Li



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> When mapping the shared page for TDX, it needs to zap private alias.
>
> In the case that private page is mapped as large page (2MB), it can be
> removed directly only when the whole 2MB is converted to shared.
> Otherwise, it has to split 2MB page into 512 4KB page, and only remove
> the pages that converted to shared.
>
> When a present large leaf spte switches to present non-leaf spte, TDX needs
> to split the corresponding SEPT page to reflect it.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/include/asm/kvm-x86-ops.h |  1 +
>   arch/x86/include/asm/kvm_host.h    |  2 ++
>   arch/x86/kvm/mmu/tdp_mmu.c         | 21 ++++++++++++++++-----
>   arch/x86/kvm/vmx/tdx.c             | 25 +++++++++++++++++++++++--
>   arch/x86/kvm/vmx/tdx_arch.h        |  1 +
>   arch/x86/kvm/vmx/tdx_ops.h         |  7 +++++++
>   6 files changed, 50 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 8ef0ed217f6e..3deb6ab4f291 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -103,6 +103,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
>   KVM_X86_OP(load_mmu_pgd)
>   KVM_X86_OP_OPTIONAL(link_private_spt)
>   KVM_X86_OP_OPTIONAL(free_private_spt)
> +KVM_X86_OP_OPTIONAL(split_private_spt)
>   KVM_X86_OP_OPTIONAL(set_private_spte)
>   KVM_X86_OP_OPTIONAL(remove_private_spte)
>   KVM_X86_OP_OPTIONAL(zap_private_spte)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index c16823f3326e..e75a461bdea7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1753,6 +1753,8 @@ struct kvm_x86_ops {
>   				void *private_spt);
>   	int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
>   				void *private_spt);
> +	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				  void *private_spt);
>   	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
>   				 kvm_pfn_t pfn);
>   	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index a209a67decae..734ee822b43c 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -599,23 +599,34 @@ static int __must_check __set_private_spte_present(struct kvm *kvm, tdp_ptep_t s
>   {
>   	bool was_present = is_shadow_present_pte(old_spte);
>   	bool is_present = is_shadow_present_pte(new_spte);
> +	bool was_leaf = was_present && is_last_spte(old_spte, level);
>   	bool is_leaf = is_present && is_last_spte(new_spte, level);
>   	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> +	void *private_spt;
>   	int ret = 0;
>   
>   	lockdep_assert_held(&kvm->mmu_lock);
> -	/* TDP MMU doesn't change present -> present */
> -	KVM_BUG_ON(was_present, kvm);
>   
>   	/*
>   	 * Use different call to either set up middle level
>   	 * private page table, or leaf.
>   	 */
> -	if (is_leaf)
> +	if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
> +		/*
> +		 * splitting large page into 4KB.
> +		 * tdp_mmu_split_huage_page() => tdp_mmu_link_sp()
Typo, tdp_mmu_split_huage_page -> tdp_mmu_split_huge_page

> +		 */
> +		private_spt = get_private_spt(gfn, new_spte, level);
> +		KVM_BUG_ON(!private_spt, kvm);
> +		ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
> +		kvm_flush_remote_tlbs(kvm);
> +		if (!ret)
> +			ret = static_call(kvm_x86_split_private_spt)(kvm, gfn,
> +								     level, private_spt);
> +	} else if (is_leaf)
>   		ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
>   	else {
> -		void *private_spt = get_private_spt(gfn, new_spte, level);
> -
> +		private_spt = get_private_spt(gfn, new_spte, level);
>   		KVM_BUG_ON(!private_spt, kvm);
>   		ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
>   	}
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index c614ab20c191..91eca578a7da 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1662,6 +1662,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
>   	return 0;
>   }
>   
> +static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
> +				      enum pg_level level, void *private_spt)
> +{
> +	int tdx_level = pg_level_to_tdx_sept_level(level);
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> +	hpa_t hpa = __pa(private_spt);
> +	struct tdx_module_args out;
> +	u64 err;
> +
> +	/* See comment in tdx_sept_set_private_spte() */
Do you mean the comment about the pages are pinned to prevent migration 
part?
Can you add some specific short information in this comment, in case
tdx_sept_set_private_spte() is extended to have more comments in the future?

> +	err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
> +	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> +		return -EAGAIN;
> +	if (KVM_BUG_ON(err, kvm)) {
> +		pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
>   static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
>   				      enum pg_level level)
>   {
> @@ -1675,8 +1697,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
>   	if (unlikely(!is_hkid_assigned(kvm_tdx)))
>   		return 0;
>   
> -	/* For now large page isn't supported yet. */
> -	WARN_ON_ONCE(level != PG_LEVEL_4K);
>   	err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
>   	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
>   		return -EAGAIN;
> @@ -3183,6 +3203,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>   
>   	x86_ops->link_private_spt = tdx_sept_link_private_spt;
>   	x86_ops->free_private_spt = tdx_sept_free_private_spt;
> +	x86_ops->split_private_spt = tdx_sept_split_private_spt;
>   	x86_ops->set_private_spte = tdx_sept_set_private_spte;
>   	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
>   	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
> diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
> index ba41fefa47ee..cab6a74446a0 100644
> --- a/arch/x86/kvm/vmx/tdx_arch.h
> +++ b/arch/x86/kvm/vmx/tdx_arch.h
> @@ -21,6 +21,7 @@
>   #define TDH_MNG_CREATE			9
>   #define TDH_VP_CREATE			10
>   #define TDH_MNG_RD			11
> +#define TDH_MEM_PAGE_DEMOTE		15
>   #define TDH_MR_EXTEND			16
>   #define TDH_MR_FINALIZE			17
>   #define TDH_VP_FLUSH			18
> diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> index 0f2df7198bde..38ab0ab1509c 100644
> --- a/arch/x86/kvm/vmx/tdx_ops.h
> +++ b/arch/x86/kvm/vmx/tdx_ops.h
> @@ -183,6 +183,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_args *out)
>   	return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
>   }
>   
> +static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
> +				      struct tdx_module_args *out)
> +{
> +	tdx_clflush_page(page, PG_LEVEL_4K);
> +	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
> +}
> +
>   static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
>   				struct tdx_module_args *out)
>   {
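
Putting the hunks above together, the split path for a private huge mapping
boils down to the following sketch (not the exact code; the
callback-to-SEAMCALL mapping is taken from the tdx.c hunks in this patch):

	/* __set_private_spte_present(): a leaf huge spte becomes non-leaf */
	private_spt = get_private_spt(gfn, new_spte, level);
	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);	/* TDH.MEM.RANGE.BLOCK */
	kvm_flush_remote_tlbs(kvm);
	if (!ret)
		ret = static_call(kvm_x86_split_private_spt)(kvm, gfn, level,
							     private_spt);	/* TDH.MEM.PAGE.DEMOTE */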



* Re: [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page
  2023-11-07 15:00 ` [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page isaku.yamahata
@ 2023-11-22  7:24   ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-22  7:24 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> When a large page is passed to the KVM page fault handler and some of the
> sub pages are already populated, try to merge the sub pages into a large
> page.  This situation can happen when the guest converts small pages into
> shared and then converts them back into private.
>
> When a large page is passed to the KVM MMU page fault handler and the spte
> corresponding to the page is non-leaf (one or more of the sub pages are
> already populated at a lower page level), the current KVM MMU zaps the
> non-leaf spte at the large page level and populates a leaf spte at that
> level.  Thus small pages are converted into a large page.  However, this
> doesn't work for TDX because zapping and re-populating results in zeroing
> the page contents.  Instead, populate all small pages and merge them into
> a large page.
>
> Merging pages into a large page can fail when some sub pages are accepted
> and some are not.  In such a case, with the assumption that the guest tries
> to accept at large page size for performance when possible, don't try to be
> smart about identifying which page is still pending; map all pages at the
> lower page level and let the vcpu re-execute.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/include/asm/kvm-x86-ops.h |   2 +
>   arch/x86/include/asm/kvm_host.h    |   4 +
>   arch/x86/kvm/mmu/tdp_iter.c        |  37 +++++--
>   arch/x86/kvm/mmu/tdp_iter.h        |   2 +
>   arch/x86/kvm/mmu/tdp_mmu.c         | 172 ++++++++++++++++++++++++++++-
>   5 files changed, 207 insertions(+), 10 deletions(-)
>
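As a reading aid for the diff that follows, the merge path this patch adds
boils down to the steps below (a sketch, not the exact code; the
callback-to-SEAMCALL mapping comes from the tdx.c implementations elsewhere
in this series):

	/*
	 * tdp_mmu_merge_private_spt(), at a glance:
	 *  1. Freeze the non-leaf parent spte (cmpxchg to REMOVED_SPTE).
	 *  2. Populate every 4KB child spte; all of them must be present.
	 *  3. zap_private_spte()   -> TDH.MEM.RANGE.BLOCK, then a remote TLB flush.
	 *  4. merge_private_spt()  -> TDH.MEM.PAGE.PROMOTE.
	 *  5. On failure: unzap_private_spte() -> TDH.MEM.RANGE.UNBLOCK and keep
	 *     the 4KB mappings so the vcpu can make forward progress.
	 *  6. On success: write the leaf 2MB spte to unfreeze the parent and
	 *     free the now-unused child sp.
	 */
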
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 3deb6ab4f291..9a7d4db304c7 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -104,9 +104,11 @@ KVM_X86_OP(load_mmu_pgd)
>   KVM_X86_OP_OPTIONAL(link_private_spt)
>   KVM_X86_OP_OPTIONAL(free_private_spt)
>   KVM_X86_OP_OPTIONAL(split_private_spt)
> +KVM_X86_OP_OPTIONAL(merge_private_spt)
>   KVM_X86_OP_OPTIONAL(set_private_spte)
>   KVM_X86_OP_OPTIONAL(remove_private_spte)
>   KVM_X86_OP_OPTIONAL(zap_private_spte)
> +KVM_X86_OP_OPTIONAL(unzap_private_spte)
>   KVM_X86_OP(has_wbinvd_exit)
>   KVM_X86_OP(get_l2_tsc_offset)
>   KVM_X86_OP(get_l2_tsc_multiplier)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e75a461bdea7..0254c382f4f0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -146,6 +146,7 @@
>   #define KVM_MAX_HUGEPAGE_LEVEL	PG_LEVEL_1G
>   #define KVM_NR_PAGE_SIZES	(KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1)
>   #define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
> +#define KVM_HPAGE_GFN_MASK(x)	(~((1UL << KVM_HPAGE_GFN_SHIFT(x)) - 1))
>   #define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
>   #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
>   #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
> @@ -1755,11 +1756,14 @@ struct kvm_x86_ops {
>   				void *private_spt);
>   	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
>   				  void *private_spt);
> +	int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> +				 void *private_spt);
>   	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
>   				 kvm_pfn_t pfn);
>   	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
>   				    kvm_pfn_t pfn);
>   	int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
> +	int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
>   
>   	bool (*has_wbinvd_exit)(void);
>   
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index bd30ebfb2f2c..f33226fcd62a 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -71,6 +71,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level)
>   	return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
>   }
>   
> +static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
> +{
> +	iter->level--;
> +	iter->pt_path[iter->level - 1] = child_pt;
> +	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
> +	tdp_iter_refresh_sptep(iter);
> +}
> +
>   /*
>    * Steps down one level in the paging structure towards the goal GFN. Returns
>    * true if the iterator was able to step down a level, false otherwise.
> @@ -92,14 +100,28 @@ static bool try_step_down(struct tdp_iter *iter)
>   	if (!child_pt)
>   		return false;
>   
> -	iter->level--;
> -	iter->pt_path[iter->level - 1] = child_pt;
> -	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
> -	tdp_iter_refresh_sptep(iter);
> -
> +	step_down(iter, child_pt);
>   	return true;
>   }
>   
> +/* Steps down for freezed spte.  Don't re-read sptep because it was freezed. */
> +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
> +{
> +	WARN_ON_ONCE(!child_pt);
> +	WARN_ON_ONCE(iter->yielded);
> +	WARN_ON_ONCE(iter->level == iter->min_level);
> +
> +	step_down(iter, child_pt);
> +}
> +
> +void tdp_iter_step_side(struct tdp_iter *iter)
> +{
> +	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
> +	iter->next_last_level_gfn = iter->gfn;
> +	iter->sptep++;
> +	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
> +}
> +
>   /*
>    * Steps to the next entry in the current page table, at the current page table
>    * level. The next entry could point to a page backing guest memory or another
> @@ -117,10 +139,7 @@ static bool try_step_side(struct tdp_iter *iter)
>   	    (SPTE_ENT_PER_PAGE - 1))
>   		return false;
>   
> -	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
> -	iter->next_last_level_gfn = iter->gfn;
> -	iter->sptep++;
> -	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
> +	tdp_iter_step_side(iter);
>   
>   	return true;
>   }
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index a9c9cd0db20a..ca00db799a50 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -134,6 +134,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
>   		    int min_level, gfn_t next_last_level_gfn);
>   void tdp_iter_next(struct tdp_iter *iter);
>   void tdp_iter_restart(struct tdp_iter *iter);
> +void tdp_iter_step_side(struct tdp_iter *iter);
> +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt);
>   
>   static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter *iter)
>   {
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 734ee822b43c..c8a4bd052c71 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1224,6 +1224,176 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm, bool skip_private)
>   	}
>   }
>   
> +static int tdp_mmu_iter_step_side(int i, struct tdp_iter *iter)
> +{
> +	i++;
> +
> +	/*
> +	 * if i = SPTE_ENT_PER_PAGE, tdp_iter_step_side() results
> +	 * in reading the entry beyond the last entry.
> +	 */
> +	if (i < SPTE_ENT_PER_PAGE)
> +		tdp_iter_step_side(iter);
> +
> +	return i;
> +}
> +
> +static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu,
> +				     struct kvm_page_fault *fault,
> +				     struct tdp_iter *iter, u64 new_spte)
> +{
> +	u64 *sptep = rcu_dereference(iter->sptep);
> +	u64 old_spte = iter->old_spte;
> +	struct kvm_mmu_page *child_sp;
> +	struct kvm *kvm = vcpu->kvm;
> +	struct tdp_iter child_iter;
> +	int level = iter->level;
> +	gfn_t gfn = iter->gfn;
> +	tdp_ptep_t child_pt;
> +	u64 child_spte;
> +	int ret = 0;
> +	int i;
> +
> +	/*
> +	 * TDX KVM supports only 2MB large page.  It's not supported to merge
> +	 * 2MB pages into 1GB page at the moment.
> +	 */
> +	WARN_ON_ONCE(fault->goal_level != PG_LEVEL_2M);
> +	WARN_ON_ONCE(iter->level != PG_LEVEL_2M);
> +	WARN_ON_ONCE(!is_large_pte(new_spte));
> +
> +	/* Freeze the spte to prevent other threads from working spte. */
> +	if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE))
> +		return -EBUSY;
> +
> +	/*
> +	 * Step down to the child spte.  Because tdp_iter_next() assumes the
> +	 * parent spte isn't freezed, do it manually.
s/freezed/frozen

> +	 */
> +	child_pt = spte_to_child_pt(iter->old_spte, iter->level);
> +	child_sp = sptep_to_sp(child_pt);
> +	WARN_ON_ONCE(child_sp->role.level != PG_LEVEL_4K);
> +	WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role));
> +
> +	/* Don't modify iter as the caller will use iter after this function. */
> +	child_iter = *iter;
> +	/* Adjust the target gfn to the head gfn of the large page. */
> +	child_iter.next_last_level_gfn &= -KVM_PAGES_PER_HPAGE(level);
> +	tdp_iter_step_down(&child_iter, child_pt);
> +
> +	/*
> +	 * All child pages are required to be populated for merging them into a
> +	 * large page.  Populate all child spte.
> +	 */
> +	for (i = 0; i < SPTE_ENT_PER_PAGE; i = tdp_mmu_iter_step_side(i, &child_iter)) {
> +		int tmp;
> +
> +		WARN_ON_ONCE(child_iter.level != PG_LEVEL_4K);
> +
> +		if (is_shadow_present_pte(child_iter.old_spte)) {
> +			/* TODO: relocate page for huge page. */
> +			if (WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) !=
> +					 spte_to_pfn(new_spte) + i)) {
> +				if (!ret)
> +					ret = -EAGAIN;
> +				continue;
> +			}
> +			/*
> +			 * When SEPT_VE_DISABLE=true and the page state is
> +			 * pending, this case can happen.  Just resume the vcpu
> +			 * again with the expectation for other vcpu to accept
> +			 * this page.
> +			 */
> +			if (child_iter.gfn == fault->gfn) {
> +				if (!ret)
> +					ret = -EAGAIN;
> +			}
> +			continue;
> +		}
> +
> +		child_spte = make_huge_page_split_spte(kvm, new_spte, child_sp->role, i);
> +		/*
> +		 * Because other thread may have started to operate on this spte
> +		 * before freezing the parent spte,  Use atomic version to
> +		 * prevent race.
> +		 */
> +		tmp = tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte);
> +		if (tmp == -EBUSY || tmp == -EAGAIN) {
> +			/*
> +			 * There was a race condition.  Populate remaining 4K
> +			 * spte to resolve fault->gfn to guarantee the forward
> +			 * progress.
> +			 */
> +			if (!ret)
> +				ret = tmp;
> +		} else if (tmp) {
> +			ret = tmp;
> +			goto out;
> +		}
> +	}
> +	if (ret)
> +		goto out;
> +
> +	/* Prevent the Secure-EPT entry from being used. */
> +	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
> +	if (ret)
> +		goto out;
> +	kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(level),
> +				    KVM_PAGES_PER_HPAGE(level));
> +
> +	/* Merge pages into a large page. */
> +	ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level,
> +						     kvm_mmu_private_spt(child_sp));
> +	/*
> +	 * Failed to merge pages because some pages are accepted and some are
> +	 * pending.  Since the child page was mapped above, let vcpu run.
> +	 */
> +	if (ret) {
> +		if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level))
> +			old_spte = SHADOW_NONPRESENT_VALUE |
> +				(spte_to_pfn(old_spte) << PAGE_SHIFT) |
> +				PT_PAGE_SIZE_MASK;
> +		goto out;
> +	}
> +
> +	/* Unfreeze spte. */
> +	iter->old_spte = new_spte;
> +	__kvm_tdp_mmu_write_spte(sptep, new_spte);
> +
> +	/*
> +	 * Free unused child sp.  Secure-EPT page was already freed at TDX level
> +	 * by kvm_x86_merge_private_spt().
> +	 */
> +	tdp_unaccount_mmu_page(kvm, child_sp);
> +	tdp_mmu_free_sp(child_sp);
> +	return -EAGAIN;
The page was promoted successfully, so why does it still return -EAGAIN?
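
For reference, the tdp_mmu_map_handle_target_level() hunk at the end of this
patch treats any non-zero return as a retry, roughly (sketch based on that
hunk):

	else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte))
		return RET_PF_RETRY;

so the vcpu re-faults even when the promotion itself succeeded.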

> +
> +out:
> +	iter->old_spte = old_spte;
> +	__kvm_tdp_mmu_write_spte(sptep, old_spte);
> +	return ret;
> +}
> +
> +static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> +					     struct kvm_page_fault *fault,
> +					     struct tdp_iter *iter, u64 new_spte)
> +{
> +	/*
> +	 * The private page has smaller-size pages.  For example, the child
> +	 * pages was converted from shared to page, and now it can be mapped as
> +	 * a large page.  Try to merge small pages into a large page.
> +	 */
> +	if (fault->slot &&
> +	    kvm_gfn_shared_mask(vcpu->kvm) &&
> +	    iter->level > PG_LEVEL_4K &&
> +	    kvm_is_private_gpa(vcpu->kvm, fault->addr) &&
> +	    is_shadow_present_pte(iter->old_spte) &&
> +	    !is_large_pte(iter->old_spte))
> +		return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte);
> +
> +	return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte);
> +}
> +
>   /*
>    * Installs a last-level SPTE to handle a TDP page fault.
>    * (NPT/EPT violation/misconfiguration)
> @@ -1265,7 +1435,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
>   
>   	if (new_spte == iter->old_spte)
>   		ret = RET_PF_SPURIOUS;
> -	else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
> +	else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte))
>   		return RET_PF_RETRY;
>   	else if (is_shadow_present_pte(iter->old_spte) &&
>   		 !is_last_spte(iter->old_spte, iter->level))



* Re: [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page
  2023-11-07 15:00 ` [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement " isaku.yamahata
@ 2023-11-22  7:50   ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-22  7:50 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Implement the merge_private_spt callback.

Not just merge_private_spt, but also unzap_private_spte.

Also, for the shortlog, should it be "KVM: VMX" instead of
"KVM: x86/tdp_mmu"?

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c       | 72 ++++++++++++++++++++++++++++++++++++
>   arch/x86/kvm/vmx/tdx_arch.h  |  1 +
>   arch/x86/kvm/vmx/tdx_errno.h |  2 +
>   arch/x86/kvm/vmx/tdx_ops.h   |  6 +++
>   4 files changed, 81 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index 91eca578a7da..df53a971ff21 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1684,6 +1684,49 @@ static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
>   	return 0;
>   }
>   
> +static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn,
> +				      enum pg_level level, void *private_spt)
> +{
> +	int tdx_level = pg_level_to_tdx_sept_level(level);
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	struct tdx_module_args out;
> +	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> +	u64 err;
> +
> +	/* See comment in tdx_sept_set_private_spte() */
> +	err = tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
> +	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> +		return -EAGAIN;
> +	if (unlikely(err == (TDX_EPT_INVALID_PROMOTE_CONDITIONS |
> +			     TDX_OPERAND_ID_RCX)))
> +		/*
> +		 * Some pages are accepted, some pending.  Need to wait for TD
> +		 * to accept all pages.  Tell it the caller.
> +		 */
> +		return -EAGAIN;
> +	if (KVM_BUG_ON(err, kvm)) {
> +		pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out);
> +		return -EIO;
> +	}
> +	WARN_ON_ONCE(out.rcx != __pa(private_spt));
> +
> +	/*
> +	 * TDH.MEM.PAGE.PROMOTE frees the Secure-EPT page for the lower level.
Is it better to use "unlink" instead of "free" to avoid confusion?

> +	 * Flush cache for reuse.
> +	 */
> +	do {
> +		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt),
> +							     to_kvm_tdx(kvm)->hkid));
> +	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
> +	if (WARN_ON_ONCE(err)) {
> +		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
> +		return -EIO;
> +	}
> +
> +	tdx_clear_page(__pa(private_spt), PAGE_SIZE);
> +	return 0;
> +}
> +
>   static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
>   				      enum pg_level level)
>   {
> @@ -1758,6 +1801,33 @@ static void tdx_track(struct kvm *kvm)
>   
>   }
>   
> +static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn,
> +				       enum pg_level level)
> +{
> +	int tdx_level = pg_level_to_tdx_sept_level(level);
> +	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> +	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> +	struct tdx_module_args out;
> +	u64 err;
> +
> +	do {
> +		err = tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
> +
> +		/*
> +		 * tdh_mem_range_block() is accompanied with tdx_track() via kvm
> +		 * remote tlb flush.  Wait for the caller of
> +		 * tdh_mem_range_block() to complete TDX track.
> +		 */
> +	} while (err == (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT));
> +	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> +		return -EAGAIN;
> +	if (KVM_BUG_ON(err, kvm)) {
> +		pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out);
> +		return -EIO;
> +	}
> +	return 0;
> +}
> +
>   static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
>   				     enum pg_level level, void *private_spt)
>   {
> @@ -3204,9 +3274,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>   	x86_ops->link_private_spt = tdx_sept_link_private_spt;
>   	x86_ops->free_private_spt = tdx_sept_free_private_spt;
>   	x86_ops->split_private_spt = tdx_sept_split_private_spt;
> +	x86_ops->merge_private_spt = tdx_sept_merge_private_spt;
>   	x86_ops->set_private_spte = tdx_sept_set_private_spte;
>   	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
>   	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
> +	x86_ops->unzap_private_spte = tdx_sept_unzap_private_spte;
>   
>   	return 0;
>   
> diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
> index cab6a74446a0..fb7e54953a85 100644
> --- a/arch/x86/kvm/vmx/tdx_arch.h
> +++ b/arch/x86/kvm/vmx/tdx_arch.h
> @@ -29,6 +29,7 @@
>   #define TDH_MNG_KEY_FREEID		20
>   #define TDH_MNG_INIT			21
>   #define TDH_VP_INIT			22
> +#define TDH_MEM_PAGE_PROMOTE		23
>   #define TDH_MEM_SEPT_RD			25
>   #define TDH_VP_RD			26
>   #define TDH_MNG_KEY_RECLAIMID		27
> diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
> index bb093e292fef..940d6de332eb 100644
> --- a/arch/x86/kvm/vmx/tdx_errno.h
> +++ b/arch/x86/kvm/vmx/tdx_errno.h
> @@ -23,6 +23,8 @@
>   #define TDX_FLUSHVP_NOT_DONE			0x8000082400000000ULL
>   #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
>   #define TDX_EPT_ENTRY_NOT_FREE			0xC0000B0200000000ULL
> +#define TDX_TLB_TRACKING_NOT_DONE		0xC0000B0800000000ULL
> +#define TDX_EPT_INVALID_PROMOTE_CONDITIONS	0xC0000B0900000000ULL
>   #define TDX_EPT_ENTRY_STATE_INCORRECT		0xC0000B0D00000000ULL
>   
>   /*
> diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> index 38ab0ab1509c..774fee3b2d46 100644
> --- a/arch/x86/kvm/vmx/tdx_ops.h
> +++ b/arch/x86/kvm/vmx/tdx_ops.h
> @@ -190,6 +190,12 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t pag
>   	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
>   }
>   
> +static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level,
> +				       struct tdx_module_args *out)
> +{
> +	return tdx_seamcall_sept(TDH_MEM_PAGE_PROMOTE, gpa | level, tdr, 0, 0, out);
> +}
> +
>   static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
>   				struct tdx_module_args *out)
>   {



* Re: [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot
  2023-11-07 15:00 ` [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot isaku.yamahata
@ 2023-11-22  9:05   ` Binbin Wu
  0 siblings, 0 replies; 39+ messages in thread
From: Binbin Wu @ 2023-11-22  9:05 UTC (permalink / raw)
  To: isaku.yamahata
  Cc: kvm, linux-kernel, isaku.yamahata, Paolo Bonzini, erdemaktas,
	Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo, hang.yuan, tina.zhang



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> struct kvm_page_fault.req_level is the page level which takes the
> faulted-in page size into account.  For now it is calculated only for the
> conventional kvm memslot, by host_pfn_mapping_level(), which traverses the
> host page table.
>
> However, host_pfn_mapping_level() cannot be used for a private kvm memslot
> because pages of a private kvm memslot aren't mapped into user virtual
> address space.

The description here is not accurate.  That a memslot can be private doesn't
mean none of its pages can be mapped into user virtual address space.
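
For example, the check removed by the hunk further down needs both
conditions exactly because of that (copied from this patch):

	bool is_private = kvm_slot_can_be_private(slot) &&	/* the slot can hold private memory */
			  kvm_mem_is_private(kvm, gfn);		/* this particular gfn is currently private */

A slot that merely can be private may still contain shared gfns, which can
be mapped into user virtual address space.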

> Instead, the page order is given when getting the pfn.  Remember it
> in struct kvm_page_fault and use it.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>   arch/x86/kvm/mmu/mmu.c          | 34 +++++++++++++++++----------------
>   arch/x86/kvm/mmu/mmu_internal.h | 12 +++++++++++-
>   arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
>   3 files changed, 30 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0bf043812644..0aec7c11f4e2 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3158,10 +3158,10 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
>   
>   static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
>   				       const struct kvm_memory_slot *slot,
> -				       gfn_t gfn, int max_level, bool is_private)
> +				       gfn_t gfn, int max_level, int host_level,
> +				       bool is_private)
>   {
>   	struct kvm_lpage_info *linfo;
> -	int host_level;
>   
>   	max_level = min(max_level, max_huge_page_level);
>   	for ( ; max_level > PG_LEVEL_4K; max_level--) {
> @@ -3170,24 +3170,23 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
>   			break;
>   	}
>   
> -	if (is_private)
> -		return max_level;
> -
>   	if (max_level == PG_LEVEL_4K)
>   		return PG_LEVEL_4K;
>   
> -	host_level = host_pfn_mapping_level(kvm, gfn, slot);
> +	if (!is_private) {
> +		WARN_ON_ONCE(host_level != PG_LEVEL_NONE);
> +		host_level = host_pfn_mapping_level(kvm, gfn, slot);
> +	}
> +	WARN_ON_ONCE(host_level == PG_LEVEL_NONE);
>   	return min(host_level, max_level);
>   }
>   
>   int kvm_mmu_max_mapping_level(struct kvm *kvm,
>   			      const struct kvm_memory_slot *slot, gfn_t gfn,
> -			      int max_level)
> +			      int max_level, bool faultin_private)

When the parameter "faultin_private" is added, the only value callers pass
is "false".  If a caller passed in "faultin_private = true", it would be a
problem based on this patch.  It seems meaningless and confusing to
introduce the parameter "faultin_private" here.
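
For reference, both call sites touched by this patch pass "false" (arguments
as in the hunks below):

	kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, PG_LEVEL_NUM, false);	/* kvm_mmu_zap_collapsible_spte() */
	kvm_mmu_max_mapping_level(kvm, slot, iter.gfn, PG_LEVEL_NUM, false);	/* zap_collapsible_spte_range() */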

>   {
> -	bool is_private = kvm_slot_can_be_private(slot) &&
> -			  kvm_mem_is_private(kvm, gfn);
> -
> -	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
> +	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level,
> +					   PG_LEVEL_NONE, faultin_private);
>   }
>   
>   void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> @@ -3212,7 +3211,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>   	 */
>   	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
>   						       fault->gfn, fault->max_level,
> -						       fault->is_private);
> +						       fault->host_level,
> +						       kvm_is_faultin_private(fault));
>   	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
>   		return;
>   
> @@ -4336,6 +4336,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
>   				   struct kvm_page_fault *fault)
>   {
>   	int max_order, r;
> +	u8 max_level;
>   
>   	if (!kvm_slot_can_be_private(fault->slot)) {
>   		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> @@ -4349,8 +4350,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
>   		return r;
>   	}
>   
> -	fault->max_level = min(kvm_max_level_for_order(max_order),
> -			       fault->max_level);
> +	max_level = kvm_max_level_for_order(max_order);
> +	fault->host_level = max_level;
> +	fault->max_level = min(max_level, fault->max_level);
>   	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
>   
>   	return RET_PF_CONTINUE;
> @@ -4400,7 +4402,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>   		return -EFAULT;
>   	}
>   
> -	if (fault->is_private)
> +	if (kvm_is_faultin_private(fault))
>   		return kvm_faultin_pfn_private(vcpu, fault);
>   
>   	async = false;
> @@ -6809,7 +6811,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>   		 */
>   		if (sp->role.direct &&
>   		    sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
> -							       PG_LEVEL_NUM)) {
> +							       PG_LEVEL_NUM, false)) {
>   			kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
>   
>   			if (kvm_available_flush_remote_tlbs_range())
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 653e96769956..6b540a10fd67 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -357,6 +357,9 @@ struct kvm_page_fault {
>   	 * is changing its own translation in the guest page tables.
>   	 */
>   	bool write_fault_to_shadow_pgtable;
> +
> +	/* valid only for private memslot && private gfn */
> +	enum pg_level host_level;
>   };
>   
>   int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
> @@ -451,7 +454,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>   
>   int kvm_mmu_max_mapping_level(struct kvm *kvm,
>   			      const struct kvm_memory_slot *slot, gfn_t gfn,
> -			      int max_level);
> +			      int max_level, bool faultin_private);
>   void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
>   void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
>   
> @@ -469,4 +472,11 @@ static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t g
>   }
>   #endif
>   
> +static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fault)
> +{
> +	if (IS_ENABLED(CONFIG_KVM_GENERIC_PRIVATE_MEM))
> +		return fault->is_private && kvm_slot_can_be_private(fault->slot);
> +	return false;
> +}
> +
>   #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index c8a4bd052c71..173e4e9053fc 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -2179,7 +2179,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
>   			continue;
>   
>   		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
> -							      iter.gfn, PG_LEVEL_NUM);
> +							      iter.gfn, PG_LEVEL_NUM, false);
>   		if (max_mapping_level < iter.level)
>   			continue;
>   



end of thread

Thread overview: 39+ messages
2023-11-07 15:00 [PATCH v6 00/16] KVM TDX: TDP MMU: large page support isaku.yamahata
2023-11-07 15:00 ` [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists isaku.yamahata
2023-11-16  1:32   ` Binbin Wu
2023-11-17  1:05     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL isaku.yamahata
2023-11-16  5:36   ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug() isaku.yamahata
2023-11-16  8:18   ` Binbin Wu
2023-11-17  0:23     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page() isaku.yamahata
2023-11-16  8:57   ` Binbin Wu
2023-11-17  0:36     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page() isaku.yamahata
2023-11-19  6:42   ` Binbin Wu
2023-11-19  6:58     ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page isaku.yamahata
2023-11-07 15:00 ` [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code isaku.yamahata
2023-11-20 10:54   ` Binbin Wu
2023-11-21 10:02     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs isaku.yamahata
2023-11-20 11:05   ` Binbin Wu
2023-11-21 10:04     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler isaku.yamahata
2023-11-20 11:24   ` Binbin Wu
2023-11-21 10:27     ` Isaku Yamahata
2023-11-07 15:00 ` [PATCH v6 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split isaku.yamahata
2023-11-07 15:00 ` [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf isaku.yamahata
2023-11-21  9:57   ` Binbin Wu
2023-11-21 11:00     ` Isaku Yamahata
2023-11-22  2:18       ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared isaku.yamahata
2023-11-22  5:45   ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page isaku.yamahata
2023-11-22  7:24   ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement " isaku.yamahata
2023-11-22  7:50   ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot isaku.yamahata
2023-11-22  9:05   ` Binbin Wu
2023-11-07 15:00 ` [PATCH v6 16/16] KVM: TDX: Allow 2MB large page for TD GUEST isaku.yamahata
