From: Zhi Wang <zhi.wang.linux@gmail.com>
To: Sagi Shahar <sagis@google.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
x86@kernel.org, Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Isaku Yamahata <isaku.yamahata@intel.com>,
Erdem Aktas <erdemaktas@google.com>,
David Matlack <dmatlack@google.com>,
Kai Huang <kai.huang@intel.com>,
Chao Peng <chao.p.peng@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [RFC PATCH 1/5] KVM: Split tdp_mmu_pages to private and shared lists
Date: Mon, 17 Apr 2023 22:36:11 +0300 [thread overview]
Message-ID: <20230417223611.00004aee.zhi.wang.linux@gmail.com> (raw)
In-Reply-To: <20230407201921.2703758-2-sagis@google.com>
On Fri, 7 Apr 2023 20:19:17 +0000
Sagi Shahar <sagis@google.com> wrote:
This patch actually adds a separate counter for accounting private
TDP MMU pages; it does not really introduce a new tdp_mmu_pages list for
private pages. It would be better to refine the title to reflect what
this patch is doing.
> tdp_mmu_pages holds all the active pages used by the mmu. When we
> transfer the state during intra-host migration we need to transfer the
> private pages but not the shared ones.
>
Maybe explain a little bit about how the shared pages are handled; I
guess one sentence is enough.
> Keeping them in separate counters makes this transfer more efficient.
>
> Signed-off-by: Sagi Shahar <sagis@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 5 ++++-
> arch/x86/kvm/mmu/tdp_mmu.c | 11 +++++++++--
> 2 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index ae377eec81987..5ed70cd9d74bf 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1426,9 +1426,12 @@ struct kvm_arch {
> struct task_struct *nx_huge_page_recovery_thread;
>
> #ifdef CONFIG_X86_64
> - /* The number of TDP MMU pages across all roots. */
> + /* The number of non-private TDP MMU pages across all roots. */
> atomic64_t tdp_mmu_pages;
>
> + /* Same as tdp_mmu_pages but only for private pages. */
> + atomic64_t tdp_private_mmu_pages;
> +
> /*
> * List of struct kvm_mmu_pages being used as roots.
> * All struct kvm_mmu_pages in the list should have
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 58a236a69ec72..327dee4f6170e 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -44,6 +44,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
> destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
>
> WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages));
> + WARN_ON(atomic64_read(&kvm->arch.tdp_private_mmu_pages));
> WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
>
> /*
> @@ -373,13 +374,19 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
> static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> {
> kvm_account_pgtable_pages((void *)sp->spt, +1);
> - atomic64_inc(&kvm->arch.tdp_mmu_pages);
> + if (is_private_sp(sp))
> + atomic64_inc(&kvm->arch.tdp_private_mmu_pages);
> + else
> + atomic64_inc(&kvm->arch.tdp_mmu_pages);
> }
>
> static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> {
> kvm_account_pgtable_pages((void *)sp->spt, -1);
> - atomic64_dec(&kvm->arch.tdp_mmu_pages);
> + if (is_private_sp(sp))
> + atomic64_dec(&kvm->arch.tdp_private_mmu_pages);
> + else
> + atomic64_dec(&kvm->arch.tdp_mmu_pages);
> }
>
> /**