From: Peter Xu <peterx@redhat.com>
To: David Matlack <dmatlack@google.com>
Cc: Marc Zyngier <maz@kernel.org>, Albert Ou <aou@eecs.berkeley.edu>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS \(KVM/mips\)"
	<kvm@vger.kernel.org>, Huacai Chen <chenhuacai@kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS \(KVM/mips\)"
	<linux-mips@vger.kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	"open list:KERNEL VIRTUAL MACHINE FOR RISC-V \(KVM/riscv\)"
	<kvm-riscv@lists.infradead.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Ben Gardon <bgardon@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	maciej.szmigiero@oracle.com,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 \(KVM/arm64\)"
	<kvmarm@lists.cs.columbia.edu>, Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v2 04/26] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
Date: Tue, 15 Mar 2022 16:50:50 +0800
Message-ID: <YjBTal9gWoEKybxi@xz-m1.local>
In-Reply-To: <20220311002528.2230172-5-dmatlack@google.com>

On Fri, Mar 11, 2022 at 12:25:06AM +0000, David Matlack wrote:
> Decompose kvm_mmu_get_page() into separate helper functions to increase
> readability and prepare for allocating shadow pages without a vcpu
> pointer.
> 
> Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
> functions:
> 
> __kvm_mmu_find_shadow_page() -
>   Walks the page hash checking for any existing mmu pages that match the
>   given gfn and role. Does not attempt to synchronize the page if it is
>   unsync.
> 
> kvm_mmu_find_shadow_page() -
>   Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.
> 
> kvm_mmu_new_shadow_page() -
>   Allocates and initializes an entirely new kvm_mmu_page. This currently
>   requires a vcpu pointer for allocation and looking up the memslot, but
>   that will be removed in a future commit.
> 
>   Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
>   subsequent commit. The name uses "new" rather than the more typical
>   "alloc" to avoid clashing with the existing kvm_mmu_alloc_page().
> 
> No functional change intended.
> 
> Signed-off-by: David Matlack <dmatlack@google.com>

Looks good to me, a few nitpicks and questions below.
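
For reference, the call structure after this patch as I read it:

        kvm_mmu_get_page()
          -> kvm_mmu_find_shadow_page()         /* lookup; sync if unsync */
               -> __kvm_mmu_find_shadow_page()  /* hash walk only */
          -> kvm_mmu_new_shadow_page()          /* allocate when not found */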

> ---
>  arch/x86/kvm/mmu/mmu.c         | 132 ++++++++++++++++++++++++---------
>  arch/x86/kvm/mmu/paging_tmpl.h |   5 +-
>  arch/x86/kvm/mmu/spte.c        |   5 +-
>  3 files changed, 101 insertions(+), 41 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 23c2004c6435..80dbfe07c87b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
>  	__clear_sp_write_flooding_count(sptep_to_sp(spte));
>  }
>  
> -static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
> -					     union kvm_mmu_page_role role)
> +/*
> + * Searches for an existing SP for the given gfn and role. Makes no attempt to
> + * sync the SP if it is marked unsync.
> + *
> + * If creating an upper-level page table, zaps unsynced pages for the same
> + * gfn and adds them to the invalid_list. It's the caller's responsibility
> + * to call kvm_mmu_commit_zap_page() on invalid_list.
> + */
> +static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
> +						       gfn_t gfn,
> +						       union kvm_mmu_page_role role,
> +						       struct list_head *invalid_list)
>  {
>  	struct hlist_head *sp_list;
>  	struct kvm_mmu_page *sp;
>  	int collisions = 0;
> -	LIST_HEAD(invalid_list);
>  
> -	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> -	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
> +	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> +	for_each_valid_sp(kvm, sp, sp_list) {
>  		if (sp->gfn != gfn) {
>  			collisions++;
>  			continue;
> @@ -2053,60 +2062,109 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
>  			 * upper-level page will be write-protected.
>  			 */
>  			if (role.level > PG_LEVEL_4K && sp->unsync)
> -				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
> -							 &invalid_list);
> +				kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
> +
>  			continue;
>  		}
>  
> -		/* unsync and write-flooding only apply to indirect SPs. */
> -		if (sp->role.direct)
> -			goto trace_get_page;
> +		/* Write-flooding is only tracked for indirect SPs. */
> +		if (!sp->role.direct)
> +			__clear_sp_write_flooding_count(sp);
>  
> -		if (sp->unsync) {
> -			/*
> -			 * The page is good, but is stale.  kvm_sync_page does
> -			 * get the latest guest state, but (unlike mmu_unsync_children)
> -			 * it doesn't write-protect the page or mark it synchronized!
> -			 * This way the validity of the mapping is ensured, but the
> -			 * overhead of write protection is not incurred until the
> -			 * guest invalidates the TLB mapping.  This allows multiple
> -			 * SPs for a single gfn to be unsync.
> -			 *
> -			 * If the sync fails, the page is zapped.  If so, break
> -			 * in order to rebuild it.
> -			 */
> -			if (!kvm_sync_page(vcpu, sp, &invalid_list))
> -				break;
> +		goto out;
> +	}
>  
> -			WARN_ON(!list_empty(&invalid_list));
> -			kvm_flush_remote_tlbs(vcpu->kvm);
> -		}
> +	sp = NULL;
>  
> -		__clear_sp_write_flooding_count(sp);
> +out:
> +	if (collisions > kvm->stat.max_mmu_page_hash_collisions)
> +		kvm->stat.max_mmu_page_hash_collisions = collisions;
> +
> +	return sp;
> +}
>  
> -trace_get_page:
> -		trace_kvm_mmu_get_page(sp, false);
> +/*
> + * Looks up an existing SP for the given gfn and role if one exists. The
> + * returned SP is guaranteed to be synced.
> + */
> +static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
> +						     gfn_t gfn,
> +						     union kvm_mmu_page_role role)
> +{
> +	struct kvm_mmu_page *sp;
> +	LIST_HEAD(invalid_list);
> +
> +	sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
> +	if (!sp)
>  		goto out;
> +
> +	if (sp->unsync) {
> +		/*
> +		 * The page is good, but is stale.  kvm_sync_page does
> +		 * get the latest guest state, but (unlike mmu_unsync_children)
> +		 * it doesn't write-protect the page or mark it synchronized!
> +		 * This way the validity of the mapping is ensured, but the
> +		 * overhead of write protection is not incurred until the
> +		 * guest invalidates the TLB mapping.  This allows multiple
> +		 * SPs for a single gfn to be unsync.
> +		 *
> +		 * If the sync fails, the page is zapped and added to the
> +		 * invalid_list.
> +		 */
> +		if (!kvm_sync_page(vcpu, sp, &invalid_list)) {
> +			sp = NULL;
> +			goto out;
> +		}
> +
> +		WARN_ON(!list_empty(&invalid_list));

Not related to this patch since I believe it's pure code movement, but I
have a question on why invalid_list is guaranteed to be empty here.

I'm thinking of the case where, during the lookup, we could have already
called kvm_mmu_prepare_zap_page() in __kvm_mmu_find_shadow_page().  Then
when we reach here (the kvm_sync_page()==true case), invalid_list won't
have been touched by kvm_sync_page(), so it looks possible that it still
contains pages waiting to be committed?
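
To illustrate, a rough sketch of the flow I have in mind (paraphrased from
the hunks above, not the exact code):

        sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
        /* ^ may have zapped unsync pages at the same gfn into invalid_list */

        if (sp && sp->unsync) {
                if (!kvm_sync_page(vcpu, sp, &invalid_list)) {
                        sp = NULL;
                        goto out;
                }
                /*
                 * Sync succeeded, so kvm_sync_page() added nothing itself,
                 * but pages zapped during the lookup above could still be
                 * on the list here?
                 */
                WARN_ON(!list_empty(&invalid_list));
        }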

> +		kvm_flush_remote_tlbs(vcpu->kvm);
>  	}
>  
> +out:

I'm wondering whether this "out" label can be dropped, with something like:

        sp = __kvm_mmu_find_shadow_page(...);

        if (sp && sp->unsync) {
                if (kvm_sync_page(vcpu, sp, &invalid_list)) {
                        ..
                } else {
                        sp = NULL;
                }
        }
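
For completeness, the whole function could then look like below (an
untested sketch; I'm assuming the function tail commits invalid_list, per
the comment on __kvm_mmu_find_shadow_page()):

        static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
                                                             gfn_t gfn,
                                                             union kvm_mmu_page_role role)
        {
                struct kvm_mmu_page *sp;
                LIST_HEAD(invalid_list);

                sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);

                if (sp && sp->unsync) {
                        /* Sync the stale page, or drop it if syncing fails. */
                        if (kvm_sync_page(vcpu, sp, &invalid_list)) {
                                WARN_ON(!list_empty(&invalid_list));
                                kvm_flush_remote_tlbs(vcpu->kvm);
                        } else {
                                sp = NULL;
                        }
                }

                /* Commit any pages zapped during the lookup. */
                kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
                return sp;
        }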

[...]

> +static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
> +					     union kvm_mmu_page_role role)
> +{
> +	struct kvm_mmu_page *sp;
> +	bool created = false;
> +
> +	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
> +	if (sp)
> +		goto out;
> +
> +	created = true;
> +	sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
> +
> +out:
> +	trace_kvm_mmu_get_page(sp, created);
>  	return sp;

Same here; I wonder whether we could drop the "out" label with:

        sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
        if (!sp) {
                created = true;
                sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
        }

        trace_kvm_mmu_get_page(sp, created);
        return sp;

Thanks,

-- 
Peter Xu
