From: Xiaoyao Li <xiaoyao.li@intel.com>
To: isaku.yamahata@intel.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Paolo Bonzini <pbonzini@redhat.com>,
erdemaktas@google.com, Sean Christopherson <seanjc@google.com>,
Sagi Shahar <sagis@google.com>
Subject: Re: [RFC PATCH 13/13] KVM: x86: remove struct kvm_arch.tdp_max_page_level
Date: Mon, 8 Aug 2022 13:40:53 +0800
Message-ID: <e275d842-d115-d1d2-a4c2-07ddd057ece1@intel.com>
In-Reply-To: <1469a0a4aabcaf51f67ed4b4e25155267e07bfd1.1659854957.git.isaku.yamahata@intel.com>
On 8/8/2022 6:18 AM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> Now that everything is in place to support large pages for TD guests,
> remove tdp_max_page_level from struct kvm_arch, which limits the page size.
Isaku, we cannot remove tdp_max_page_level like this. Instead, we need to
set it to PG_LEVEL_2M, because TDX currently only supports AUG'ing a
4K/2M page; 1G is not supported yet.
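Something like the following in tdx_vm_init() (a minimal sketch of the
idea, untested), rather than dropping the assignment and hardcoding
KVM_MAX_HUGEPAGE_LEVEL in kvm_mmu_do_page_fault():

	/* TODO: Enable 1gb large page support. */
	kvm->arch.tdp_max_page_level = PG_LEVEL_2M;

That keeps the per-VM cap so the fault handler never requests a mapping
level the TDX module cannot AUG, and 1G can be enabled later by simply
raising the cap.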
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/mmu/mmu.c | 1 -
> arch/x86/kvm/mmu/mmu_internal.h | 2 +-
> arch/x86/kvm/vmx/tdx.c | 3 ---
> 4 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index a6bfcabcbbd7..80f2bc3fbf0c 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1190,7 +1190,6 @@ struct kvm_arch {
> unsigned long n_requested_mmu_pages;
> unsigned long n_max_mmu_pages;
> unsigned int indirect_shadow_pages;
> - int tdp_max_page_level;
> u8 mmu_valid_gen;
> struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
> struct list_head active_mmu_pages;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ba21503fa46f..0cbd52c476d7 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6232,7 +6232,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
>
> - kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
> return 0;
> }
>
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index e5d5fea29bfa..82b220c4d1bd 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -395,7 +395,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> is_nx_huge_page_enabled(vcpu->kvm),
> .is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
>
> - .max_level = vcpu->kvm->arch.tdp_max_page_level,
> + .max_level = KVM_MAX_HUGEPAGE_LEVEL,
> .req_level = PG_LEVEL_4K,
> .goal_level = PG_LEVEL_4K,
> };
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index a340caeb9c62..72f21f5f78af 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -460,9 +460,6 @@ int tdx_vm_init(struct kvm *kvm)
> */
> kvm_mmu_set_mmio_spte_mask(kvm, 0, VMX_EPT_RWX_MASK);
>
> - /* TODO: Enable 2mb and 1gb large page support. */
> - kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
> -
> /* vCPUs can't be created until after KVM_TDX_INIT_VM. */
> kvm->max_vcpus = 0;
>