From: Sean Christopherson <seanjc@google.com>
To: Yosry Ahmed <yosryahmed@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeelb@google.com>,
	Junaid Shahid <junaids@google.com>,
	David Matlack <dmatlack@google.com>,
	kvm@vger.kernel.org
Subject: Re: [PATCH] KVM: memcg: count KVM page table pages used by KVM in memcg pagetable stats
Date: Wed, 30 Mar 2022 00:48:42 +0000
Message-ID: <YkOo6iM9YUACsNGF@google.com>
In-Reply-To: <20220311001252.195690-1-yosryahmed@google.com>

On Fri, Mar 11, 2022, Yosry Ahmed wrote:
> Count the pages that KVM uses for its page tables in the per-memcg
> pagetable stats in memory.stat.

Why?  Is it problematic to count these as kernel memory as opposed to page tables?
What is gained/lost by tracking these as page table allocations?  E.g. won't this
pollute the information about the host page tables for the userspace process?

When you asked about stats, I thought you meant KVM stats :-)
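For reference, the memcg stat in question is the "pagetables" line in cgroup
v2's memory.stat, reported in bytes, so this patch folds KVM's page table
pages into the same line as the host userspace page tables, e.g. (the cgroup
path here is just a placeholder):

  $ grep pagetables /sys/fs/cgroup/<the VM's cgroup>/memory.stat
  pagetables <nr_bytes>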

> Most pages used for KVM page tables come from mmu_shadow_page_cache; the
> rest come from a few allocations in __kvm_mmu_create() and
> mmu_alloc_special_roots().
> 
> For allocations from the mmu_shadow_page_cache, the pages are counted as
> pagetables when they are actually used by KVM (when
> mmu_memory_cache_alloc_obj() is called), rather than when they are
> allocated in the cache itself. In other words, pages sitting in the
> cache are not counted as pagetables (they are still accounted as kernel
> memory).
> 
> The reason for this is to avoid the complexity and confusion of
> incrementing the stats in the cache layer while decrementing them in the
> cache users when the pages are freed (pages are freed directly and not
> returned to the cache). For the sake of simplicity, the stats are
> incremented and decremented by the users of the cache when they get a
> page and when they free it.
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  7 +++++++
>  arch/x86/kvm/mmu/mmu.c          | 19 +++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  4 ++++
>  virt/kvm/kvm_main.c             | 17 +++++++++++++++++
>  4 files changed, 47 insertions(+)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index f72e80178ffc..4a1dda2f56e1 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -458,6 +458,13 @@ struct kvm_mmu {
>  	*/
>  	u32 pkru_mask;
>  
> +	/*
> +	 * After a page is allocated for any of these roots,
> +	 * increment per-memcg pagetable stats by calling:
> +	 * inc_lruvec_page_state(page, NR_PAGETABLE)
> +	 * Before the page is freed, decrement the stats by calling:
> +	 * dec_lruvec_page_state(page, NR_PAGETABLE).
> +	 */
>  	u64 *pae_root;
>  	u64 *pml4_root;
>  	u64 *pml5_root;

Eh, I would much prefer we don't bother counting these.  They're barely page
tables, more like necessary evils.  And hopefully they'll be gone soon[*].

[*] https://lore.kernel.org/all/20220329153604.507475-1-jiangshanlai@gmail.com

> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3b8da8b0745e..5f87e1b0da91 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1673,7 +1673,10 @@ static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
>  	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
>  	hlist_del(&sp->hash_link);
>  	list_del(&sp->link);
> +
> +	dec_lruvec_page_state(virt_to_page(sp->spt), NR_PAGETABLE);

I would strongly prefer to add new helpers to combine this accounting with KVM's
existing accounting.  E.g. for the legacy (not tdp_mmu.c) MMU code:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1361eb4599b4..c2cb642157cc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1668,6 +1668,18 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
        percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }

+static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+       kvm_mod_used_mmu_pages(kvm, 1);
+       inc_lruvec_page_state(..., NR_PAGETABLE);
+}
+
+static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+       kvm_mod_used_mmu_pages(kvm, -1);
+       dec_lruvec_page_state(..., NR_PAGETABLE);
+}
+
 static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 {
        MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
@@ -1723,7 +1735,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
         */
        sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
        list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-       kvm_mod_used_mmu_pages(vcpu->kvm, +1);
+       kvm_account_mmu_page(vcpu->kvm, sp);
        return sp;
 }

@@ -2339,7 +2351,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
                        list_add(&sp->link, invalid_list);
                else
                        list_move(&sp->link, invalid_list);
-               kvm_mod_used_mmu_pages(kvm, -1);
+               kvm_unaccount_mmu_page(kvm, sp);
        } else {
                /*
                 * Remove the active root from the active page list, the root

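That way the NR_PAGETABLE accounting can't drift out of sync with KVM's own
used-pages bookkeeping, since both are updated at exactly the same sites.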

>  	free_page((unsigned long)sp->spt);
> +

There's a lot of spurious whitespace change in this patch.

>  	if (!sp->role.direct)
>  		free_page((unsigned long)sp->gfns);
>  	kmem_cache_free(mmu_page_header_cache, sp);
> @@ -1711,7 +1714,10 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
>  	struct kvm_mmu_page *sp;
>  
>  	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
> +

More whitespace, though it should just naturally go away.

>  	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
> +	inc_lruvec_page_state(virt_to_page(sp->spt), NR_PAGETABLE);
> +
>  	if (!direct)
>  		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> @@ -3602,6 +3608,10 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
>  	mmu->pml4_root = pml4_root;
>  	mmu->pml5_root = pml5_root;
>  
> +	/* Update per-memcg pagetable stats */
> +	inc_lruvec_page_state(virt_to_page(pae_root), NR_PAGETABLE);
> +	inc_lruvec_page_state(virt_to_page(pml4_root), NR_PAGETABLE);
> +	inc_lruvec_page_state(virt_to_page(pml5_root), NR_PAGETABLE);
>  	return 0;
>  
>  #ifdef CONFIG_X86_64
> @@ -5554,6 +5564,12 @@ static void free_mmu_pages(struct kvm_mmu *mmu)
>  {
>  	if (!tdp_enabled && mmu->pae_root)
>  		set_memory_encrypted((unsigned long)mmu->pae_root, 1);
> +
> +	/* Update per-memcg pagetable stats */
> +	dec_lruvec_page_state(virt_to_page(mmu->pae_root), NR_PAGETABLE);
> +	dec_lruvec_page_state(virt_to_page(mmu->pml4_root), NR_PAGETABLE);
> +	dec_lruvec_page_state(virt_to_page(mmu->pml5_root), NR_PAGETABLE);
> +
>  	free_page((unsigned long)mmu->pae_root);
>  	free_page((unsigned long)mmu->pml4_root);
>  	free_page((unsigned long)mmu->pml5_root);
> @@ -5591,6 +5607,9 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>  	if (!page)
>  		return -ENOMEM;
>  
> +	/* Update per-memcg pagetable stats */
> +	inc_lruvec_page_state(page, NR_PAGETABLE);
> +
>  	mmu->pae_root = page_address(page);
>  
>  	/*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index af60922906ef..ce8930fd0835 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -64,6 +64,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
>  
>  static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
>  {
> +	dec_lruvec_page_state(virt_to_page(sp->spt), NR_PAGETABLE);

I'd prefer to do this in tdp_mmu_{,un}link_sp(); that saves having to add calls
in all paths that allocate TDP MMU pages.
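
E.g. a completely untested sketch, with helper names of my own invention (the
actual calls are the same inc/dec pair used elsewhere in the patch):

static void tdp_account_mmu_page(struct kvm_mmu_page *sp)
{
	inc_lruvec_page_state(virt_to_page(sp->spt), NR_PAGETABLE);
}

static void tdp_unaccount_mmu_page(struct kvm_mmu_page *sp)
{
	dec_lruvec_page_state(virt_to_page(sp->spt), NR_PAGETABLE);
}

with tdp_mmu_link_sp() calling the former and tdp_mmu_unlink_sp() the latter,
so the accounting follows the shadow page's lifetime in the paging structures
rather than its allocation.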
