From: Binbin Wu <binbin.wu@linux.intel.com>
To: isaku.yamahata@intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
isaku.yamahata@gmail.com, Paolo Bonzini <pbonzini@redhat.com>,
erdemaktas@google.com, Sean Christopherson <seanjc@google.com>,
Sagi Shahar <sagis@google.com>,
David Matlack <dmatlack@google.com>,
Kai Huang <kai.huang@intel.com>,
Zhi Wang <zhi.wang.linux@gmail.com>,
chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com,
Xiaoyao Li <xiaoyao.li@intel.com>
Subject: Re: [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared
Date: Wed, 22 Nov 2023 13:45:46 +0800
Message-ID: <e789b9f5-a7cb-479d-8678-76cfb7bb946e@linux.intel.com>
In-Reply-To: <051d18f03ff70a66387ec37988d1ffd29f43f4f5.1699368363.git.isaku.yamahata@intel.com>
On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> When mapping a shared page for TDX, the private alias needs to be zapped.
>
> If the private page is mapped as a large page (2MB), it can be removed
> directly only when the whole 2MB range is converted to shared.
> Otherwise, the 2MB page has to be split into 512 4KB pages, and only
> the pages converted to shared are removed.
>
> When a present large leaf SPTE switches to a present non-leaf SPTE, TDX
> needs to split the corresponding SEPT page to reflect it.
>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
> arch/x86/include/asm/kvm-x86-ops.h | 1 +
> arch/x86/include/asm/kvm_host.h | 2 ++
> arch/x86/kvm/mmu/tdp_mmu.c | 21 ++++++++++++++++-----
> arch/x86/kvm/vmx/tdx.c | 25 +++++++++++++++++++++++--
> arch/x86/kvm/vmx/tdx_arch.h | 1 +
> arch/x86/kvm/vmx/tdx_ops.h | 7 +++++++
> 6 files changed, 50 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 8ef0ed217f6e..3deb6ab4f291 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -103,6 +103,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
> KVM_X86_OP(load_mmu_pgd)
> KVM_X86_OP_OPTIONAL(link_private_spt)
> KVM_X86_OP_OPTIONAL(free_private_spt)
> +KVM_X86_OP_OPTIONAL(split_private_spt)
> KVM_X86_OP_OPTIONAL(set_private_spte)
> KVM_X86_OP_OPTIONAL(remove_private_spte)
> KVM_X86_OP_OPTIONAL(zap_private_spte)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index c16823f3326e..e75a461bdea7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1753,6 +1753,8 @@ struct kvm_x86_ops {
> void *private_spt);
> int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> void *private_spt);
> + int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> + void *private_spt);
> int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> kvm_pfn_t pfn);
> int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index a209a67decae..734ee822b43c 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -599,23 +599,34 @@ static int __must_check __set_private_spte_present(struct kvm *kvm, tdp_ptep_t s
> {
> bool was_present = is_shadow_present_pte(old_spte);
> bool is_present = is_shadow_present_pte(new_spte);
> + bool was_leaf = was_present && is_last_spte(old_spte, level);
> bool is_leaf = is_present && is_last_spte(new_spte, level);
> kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
> + void *private_spt;
> int ret = 0;
>
> lockdep_assert_held(&kvm->mmu_lock);
> - /* TDP MMU doesn't change present -> present */
> - KVM_BUG_ON(was_present, kvm);
>
> /*
> * Use different call to either set up middle level
> * private page table, or leaf.
> */
> - if (is_leaf)
> + if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
> + /*
> + * splitting large page into 4KB.
> + * tdp_mmu_split_huage_page() => tdp_mmu_link_sp()
Typo, tdp_mmu_split_huage_page -> tdp_mmu_split_huge_page
> + */
> + private_spt = get_private_spt(gfn, new_spte, level);
> + KVM_BUG_ON(!private_spt, kvm);
> + ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
> + kvm_flush_remote_tlbs(kvm);
> + if (!ret)
> + ret = static_call(kvm_x86_split_private_spt)(kvm, gfn,
> + level, private_spt);
> + } else if (is_leaf)
> ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
> else {
> - void *private_spt = get_private_spt(gfn, new_spte, level);
> -
> + private_spt = get_private_spt(gfn, new_spte, level);
> KVM_BUG_ON(!private_spt, kvm);
> ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
> }
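Not a problem, just an observation: the demote sequence might read more
easily factored into a helper.  A sketch only (the helper name is
hypothetical; the calls are exactly the ones above):

    /*
     * Block the private huge mapping, flush stale TLB entries, then
     * have the backend demote the leaf into a page table backed by
     * @private_spt.
     */
    static int demote_private_spte(struct kvm *kvm, gfn_t gfn,
                                   enum pg_level level, void *private_spt)
    {
            int ret;

            ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
            kvm_flush_remote_tlbs(kvm);
            if (ret)
                    return ret;

            return static_call(kvm_x86_split_private_spt)(kvm, gfn, level,
                                                          private_spt);
    }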
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index c614ab20c191..91eca578a7da 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1662,6 +1662,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
> return 0;
> }
>
> +static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
> + enum pg_level level, void *private_spt)
> +{
> + int tdx_level = pg_level_to_tdx_sept_level(level);
> + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> + gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> + hpa_t hpa = __pa(private_spt);
> + struct tdx_module_args out;
> + u64 err;
> +
> + /* See comment in tdx_sept_set_private_spte() */
Do you mean the comment about the pages being pinned to prevent
migration?
Could you add some specific short information to this comment, in case
tdx_sept_set_private_spte() is extended with more comments in the
future?
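For example, assuming the pinning part is indeed what is meant here,
maybe something like (wording suggestion only):

    /*
     * The private page being demoted was pinned via get_page() when it
     * was ADD/AUG'ed, so it can't be migrated under us; see the comment
     * in tdx_sept_set_private_spte().
     */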
> + err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
> + if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> + return -EAGAIN;
> + if (KVM_BUG_ON(err, kvm)) {
> + pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
> + return -EIO;
> + }
> +
> + return 0;
> +}
> +
> static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
> enum pg_level level)
> {
> @@ -1675,8 +1697,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
> if (unlikely(!is_hkid_assigned(kvm_tdx)))
> return 0;
>
> - /* For now large page isn't supported yet. */
> - WARN_ON_ONCE(level != PG_LEVEL_4K);
> err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
> if (unlikely(err == TDX_ERROR_SEPT_BUSY))
> return -EAGAIN;
> @@ -3183,6 +3203,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
>
> x86_ops->link_private_spt = tdx_sept_link_private_spt;
> x86_ops->free_private_spt = tdx_sept_free_private_spt;
> + x86_ops->split_private_spt = tdx_sept_split_private_spt;
> x86_ops->set_private_spte = tdx_sept_set_private_spte;
> x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
> x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
> diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
> index ba41fefa47ee..cab6a74446a0 100644
> --- a/arch/x86/kvm/vmx/tdx_arch.h
> +++ b/arch/x86/kvm/vmx/tdx_arch.h
> @@ -21,6 +21,7 @@
> #define TDH_MNG_CREATE 9
> #define TDH_VP_CREATE 10
> #define TDH_MNG_RD 11
> +#define TDH_MEM_PAGE_DEMOTE 15
> #define TDH_MR_EXTEND 16
> #define TDH_MR_FINALIZE 17
> #define TDH_VP_FLUSH 18
> diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
> index 0f2df7198bde..38ab0ab1509c 100644
> --- a/arch/x86/kvm/vmx/tdx_ops.h
> +++ b/arch/x86/kvm/vmx/tdx_ops.h
> @@ -183,6 +183,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_args *out)
> return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
> }
>
> +static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
> + struct tdx_module_args *out)
> +{
> + tdx_clflush_page(page, PG_LEVEL_4K);
> + return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out);
> +}
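One note for readers matching this wrapper against the call site above:
the SEPT level rides in the low bits of the GPA operand, which only
works because the caller aligns the GPA first.  An illustration (not in
the patch), for a 2MB demote:

    gpa_t gpa     = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(PG_LEVEL_2M);
    int tdx_level = pg_level_to_tdx_sept_level(PG_LEVEL_2M);
    u64 operand   = gpa | tdx_level;  /* level bits occupy the low GPA bits */

And the new S-EPT page itself is always 4KB, hence
tdx_clflush_page(page, PG_LEVEL_4K).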
> +
> static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
> struct tdx_module_args *out)
> {