From: Ricardo Koller <ricarkol@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Anup Patel <anup@brainfault.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Sean Christopherson <seanjc@google.com>,
	Andrew Jones <drjones@redhat.com>,
	Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
	maciej.szmigiero@oracle.com,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" 
	<kvmarm@lists.cs.columbia.edu>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<linux-mips@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<kvm@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" 
	<kvm-riscv@lists.infradead.org>,
	Peter Feiner <pfeiner@google.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: [PATCH v6 22/22] KVM: x86/mmu: Extend Eager Page Splitting to nested MMUs
Date: Wed, 1 Jun 2022 14:50:10 -0700
Message-ID: <YpffEuTSJ3QJwLiy@google.com>
In-Reply-To: <20220516232138.1783324-23-dmatlack@google.com>

Hi David,

On Mon, May 16, 2022 at 11:21:38PM +0000, David Matlack wrote:
> Add support for Eager Page Splitting pages that are mapped by nested
> MMUs. Walk through the rmap first splitting all 1GiB pages to 2MiB
> pages, and then splitting all 2MiB pages to 4KiB pages.
> 
> Note, Eager Page Splitting is limited to nested MMUs as a policy rather
> than due to any technical reason (the sp->role.guest_mode check could
> just be deleted and Eager Page Splitting would work correctly for all
> shadow MMU pages). There is really no reason to support Eager Page
> Splitting for tdp_mmu=N, since such support will eventually be phased
> out, and there is no current use case for Eager Page Splitting on
> hosts where TDP is either disabled or unavailable in hardware.
> Furthermore, future improvements to nested MMU scalability may diverge
> the code from the legacy shadow paging implementation. These
> improvements will be simpler to make if Eager Page Splitting does not
> have to worry about legacy shadow paging.
> 
> Splitting huge pages mapped by nested MMUs requires dealing with some
> extra complexity beyond that of the TDP MMU:
> 
> (1) The shadow MMU has a limit on the number of shadow pages that are
>     allowed to be allocated. So, as a policy, Eager Page Splitting
>     refuses to split if there are KVM_MIN_FREE_MMU_PAGES or fewer
>     pages available.
> 
> (2) Splitting a huge page may end up re-using an existing lower level
>     shadow page table. This is unlike the TDP MMU which always allocates
>     new shadow page tables when splitting.
> 
> (3) When installing the lower level SPTEs, they must be added to the
>     rmap which may require allocating additional pte_list_desc structs.
> 
> Case (2) is especially interesting since it may require a TLB flush,
> unlike the TDP MMU which can fully split huge pages without any TLB
> flushes. Specifically, an existing lower level page table may point to
> even lower level page tables that are not fully populated, effectively
> unmapping a portion of the huge page, which requires a flush.
> 
> This commit performs such flushes after dropping the huge page and
> before installing the lower level page table. This TLB flush could
> instead be delayed until the MMU lock is about to be dropped, which
> would batch flushes for multiple splits.  However, these flushes should
> be rare in practice (a huge page must be aliased in multiple SPTEs and
> have been split for NX Huge Pages in only some of them). Flushing
> immediately is simpler to plumb and also reduces the chances of tripping
> over a CPU bug (e.g. see iTLB multihit).
> 
> Suggested-by: Peter Feiner <pfeiner@google.com>
> [ This commit is based off of the original implementation of Eager Page
>   Splitting from Peter in Google's kernel from 2016. ]
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |   3 +-
>  arch/x86/include/asm/kvm_host.h               |  24 ++
>  arch/x86/kvm/mmu/mmu.c                        | 267 +++++++++++++++++-
>  arch/x86/kvm/x86.c                            |   6 +
>  include/linux/kvm_host.h                      |   1 +
>  virt/kvm/kvm_main.c                           |   2 +-
>  6 files changed, 293 insertions(+), 10 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 3f1cc5e317ed..bc3ad3d4df0b 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2387,8 +2387,7 @@
>  			the KVM_CLEAR_DIRTY ioctl, and only for the pages being
>  			cleared.
>  
> -			Eager page splitting currently only supports splitting
> -			huge pages mapped by the TDP MMU.
> +			Eager page splitting is only supported when kvm.tdp_mmu=Y.
>  
>  			Default is Y (on).
>  
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9193a700fe2d..ea99e61cc556 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1265,6 +1265,28 @@ struct kvm_arch {
>  	 * the global KVM_MAX_VCPU_IDS may lead to significant memory waste.
>  	 */
>  	u32 max_vcpu_ids;
> +
> +	/*
> +	 * Memory caches used to allocate shadow pages when performing eager
> +	 * page splitting. No need for a shadowed_info_cache since eager page
> +	 * splitting only allocates direct shadow pages.
> +	 *
> +	 * Protected by kvm->slots_lock.
> +	 */
> +	struct kvm_mmu_memory_cache split_shadow_page_cache;
> +	struct kvm_mmu_memory_cache split_page_header_cache;
> +
> +	/*
> +	 * Memory cache used to allocate pte_list_desc structs while splitting
> +	 * huge pages. In the worst case, to split one huge page, 512
> +	 * pte_list_desc structs are needed to add each lower level leaf sptep
> +	 * to the rmap plus 1 to extend the parent_ptes rmap of the lower level
> +	 * page table.
> +	 *
> +	 * Protected by kvm->slots_lock.
> +	 */
> +#define SPLIT_DESC_CACHE_CAPACITY 513
> +	struct kvm_mmu_memory_cache split_desc_cache;
>  };
>  
>  struct kvm_vm_stat {
> @@ -1639,6 +1661,8 @@ void kvm_mmu_zap_all(struct kvm *kvm);
>  void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
>  void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
>  
> +void free_split_caches(struct kvm *kvm);
> +
>  int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
>  
>  int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 964a8fa63e1b..7c5eab61c4ea 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5966,6 +5966,15 @@ int kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> +
> +	kvm->arch.split_page_header_cache.kmem_cache = mmu_page_header_cache;
> +	kvm->arch.split_page_header_cache.gfp_zero = __GFP_ZERO;
> +
> +	kvm->arch.split_shadow_page_cache.gfp_zero = __GFP_ZERO;
> +
> +	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
> +	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
> +
>  	return 0;
>  }
>  
> @@ -6097,15 +6106,252 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>  		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
>  }
>  
> +void free_split_caches(struct kvm *kvm)
> +{
> +	lockdep_assert_held(&kvm->slots_lock);
> +
> +	kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> +	kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> +	kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
> +}
> +
> +static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
> +{
> +	return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
> +}
> +
> +static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> +{
> +	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
> +		return true;
> +
> +	/*
> +	 * In the worst case, SPLIT_DESC_CACHE_CAPACITY descriptors are needed
> +	 * to split a single huge page. Calculating how many are actually needed
> +	 * is possible but not worth the complexity.
> +	 */
> +	return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_CAPACITY) ||
> +	       need_topup(&kvm->arch.split_page_header_cache, 1) ||
> +	       need_topup(&kvm->arch.split_shadow_page_cache, 1);
> +}
> +
> +static int topup_split_caches(struct kvm *kvm)
> +{
> +	int r;
> +
> +	lockdep_assert_held(&kvm->slots_lock);
> +
> +	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache,
> +					 SPLIT_DESC_CACHE_CAPACITY,
> +					 SPLIT_DESC_CACHE_CAPACITY);
> +	if (r)
> +		return r;
> +
> +	r = kvm_mmu_topup_memory_cache(&kvm->arch.split_page_header_cache, 1);
> +	if (r)
> +		return r;
> +
> +	return kvm_mmu_topup_memory_cache(&kvm->arch.split_shadow_page_cache, 1);
> +}
> +
> +static struct kvm_mmu_page *nested_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
> +{
> +	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> +	struct shadow_page_caches caches = {};
> +	union kvm_mmu_page_role role;
> +	unsigned int access;
> +	gfn_t gfn;
> +
> +	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
> +	access = kvm_mmu_page_get_access(huge_sp, huge_sptep - huge_sp->spt);
> +
> +	/*
> +	 * Note, huge page splitting always uses direct shadow pages, regardless
> +	 * of whether the huge page itself is mapped by a direct or indirect
> +	 * shadow page, since the huge page region itself is being directly
> +	 * mapped with smaller pages.
> +	 */
> +	role = kvm_mmu_child_role(huge_sptep, /*direct=*/true, access);
> +
> +	/* Direct SPs do not require a shadowed_info_cache. */
> +	caches.page_header_cache = &kvm->arch.split_page_header_cache;
> +	caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> +
> +	/* Safe to pass NULL for vCPU since requesting a direct SP. */
> +	return __kvm_mmu_get_shadow_page(kvm, NULL, &caches, gfn, role);
> +}
> +
> +static void nested_mmu_split_huge_page(struct kvm *kvm,
> +				       const struct kvm_memory_slot *slot,
> +				       u64 *huge_sptep)
> +{
> +	struct kvm_mmu_memory_cache *cache = &kvm->arch.split_desc_cache;
> +	u64 huge_spte = READ_ONCE(*huge_sptep);
> +	struct kvm_mmu_page *sp;
> +	bool flush = false;
> +	u64 *sptep, spte;
> +	gfn_t gfn;
> +	int index;
> +
> +	sp = nested_mmu_get_sp_for_split(kvm, huge_sptep);
> +
> +	for (index = 0; index < PT64_ENT_PER_PAGE; index++) {
> +		sptep = &sp->spt[index];
> +		gfn = kvm_mmu_page_get_gfn(sp, index);
> +
> +		/*
> +		 * The SP may already have populated SPTEs, e.g. if this huge
> +		 * page is aliased by multiple sptes with the same access
> +		 * permissions. These entries are guaranteed to map the same
> +		 * gfn-to-pfn translation since the SP is direct, so no need to
> +		 * modify them.
> +		 *
> +		 * However, if a given SPTE points to a lower level page table,
> +		 * that lower level page table may only be partially populated.
> +		 * Installing such SPTEs would effectively unmap a portion of the
> +		 * huge page. Unmapping guest memory always requires a TLB flush
> +		 * since a subsequent operation on the unmapped regions would
> +		 * fail to detect the need to flush.
> +		 */
> +		if (is_shadow_present_pte(*sptep)) {
> +			flush |= !is_last_spte(*sptep, sp->role.level);
> +			continue;
> +		}
> +
> +		spte = make_huge_page_split_spte(huge_spte, sp->role, index);
> +		mmu_spte_set(sptep, spte);
> +		__rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access);
> +	}
> +
> +	/*
> +	 * Replace the huge spte with a pointer to the populated lower level
> +	 * page table. If the lower-level page table identically maps the huge
> +	 * page (i.e. no memory is unmapped), there's no need for a TLB flush.
> +	 * Otherwise, flush TLBs after dropping the huge page and before
> +	 * installing the shadow page table.
> +	 */
> +	__drop_large_spte(kvm, huge_sptep, flush);
> +	__link_shadow_page(cache, huge_sptep, sp);
> +}
> +
> +static int nested_mmu_try_split_huge_page(struct kvm *kvm,
> +					  const struct kvm_memory_slot *slot,
> +					  u64 *huge_sptep)
> +{
> +	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> +	int level, r = 0;
> +	gfn_t gfn;
> +	u64 spte;
> +
> +	/* Grab information for the tracepoint before dropping the MMU lock. */
> +	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
> +	level = huge_sp->role.level;
> +	spte = *huge_sptep;
> +
> +	if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) {
> +		r = -ENOSPC;
> +		goto out;
> +	}
> +
> +	if (need_topup_split_caches_or_resched(kvm)) {
> +		write_unlock(&kvm->mmu_lock);
> +		cond_resched();
> +		/*
> +		 * If the topup succeeds, return -EAGAIN to indicate that the
> +		 * rmap iterator should be restarted because the MMU lock was
> +		 * dropped.
> +		 */
> +		r = topup_split_caches(kvm) ?: -EAGAIN;
> +		write_lock(&kvm->mmu_lock);
> +		goto out;
> +	}
> +
> +	nested_mmu_split_huge_page(kvm, slot, huge_sptep);
> +
> +out:
> +	trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
> +	return r;
> +}
> +
> +static bool nested_mmu_try_split_huge_pages(struct kvm *kvm,
> +					    struct kvm_rmap_head *rmap_head,
> +					    const struct kvm_memory_slot *slot)
> +{
> +	struct rmap_iterator iter;
> +	struct kvm_mmu_page *sp;
> +	u64 *huge_sptep;
> +	int r;
> +
> +restart:
> +	for_each_rmap_spte(rmap_head, &iter, huge_sptep) {
> +		sp = sptep_to_sp(huge_sptep);
> +
> +		/* TDP MMU is enabled, so rmap only contains nested MMU SPs. */
> +		if (WARN_ON_ONCE(!sp->role.guest_mode))
> +			continue;
> +
> +		/* The rmaps should never contain non-leaf SPTEs. */
> +		if (WARN_ON_ONCE(!is_large_pte(*huge_sptep)))
> +			continue;
> +
> +		/* SPs with level >PG_LEVEL_4K should never be unsync. */
> +		if (WARN_ON_ONCE(sp->unsync))
> +			continue;
> +
> +		/* Don't bother splitting huge pages on invalid SPs. */
> +		if (sp->role.invalid)
> +			continue;
> +
> +		r = nested_mmu_try_split_huge_page(kvm, slot, huge_sptep);
> +
> +		/*
> +		 * The split succeeded or needs to be retried because the MMU
> +		 * lock was dropped. Either way, restart the iterator to get it
> +		 * back into a consistent state.
> +		 */
> +		if (!r || r == -EAGAIN)
> +			goto restart;
> +
> +		/* The split failed and shouldn't be retried (e.g. -ENOMEM). */
> +		break;
> +	}
> +
> +	return false;
> +}
> +
> +static void kvm_nested_mmu_try_split_huge_pages(struct kvm *kvm,
> +						const struct kvm_memory_slot *slot,
> +						gfn_t start, gfn_t end,
> +						int target_level)
> +{
> +	int level;
> +
> +	/*
> +	 * Split huge pages starting with KVM_MAX_HUGEPAGE_LEVEL and working
> +	 * down to the target level. This ensures pages are recursively split
> +	 * all the way to the target level. There's no need to split pages
> +	 * already at the target level.
> +	 */
> +	for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) {
> +		slot_handle_level_range(kvm, slot, nested_mmu_try_split_huge_pages,
> +					level, level, start, end - 1, true, false);
> +	}
> +}
> +
>  /* Must be called with the mmu_lock held in write-mode. */
>  void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
>  				   const struct kvm_memory_slot *memslot,
>  				   u64 start, u64 end,
>  				   int target_level)
>  {
> -	if (is_tdp_mmu_enabled(kvm))
> -		kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end,
> -						 target_level, false);
> +	if (!is_tdp_mmu_enabled(kvm))
> +		return;
> +
> +	if (kvm_memslots_have_rmaps(kvm))
> +		kvm_nested_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level);
> +
> +	kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, false);
>  
>  	/*
>  	 * A TLB flush is unnecessary at this point for the same reasons as in
> @@ -6120,12 +6366,19 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
>  	u64 start = memslot->base_gfn;
>  	u64 end = start + memslot->npages;
>  
> -	if (is_tdp_mmu_enabled(kvm)) {
> -		read_lock(&kvm->mmu_lock);
> -		kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);
> -		read_unlock(&kvm->mmu_lock);
> +	if (!is_tdp_mmu_enabled(kvm))
> +		return;
> +
> +	if (kvm_memslots_have_rmaps(kvm)) {
> +		write_lock(&kvm->mmu_lock);
> +		kvm_nested_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level);
> +		write_unlock(&kvm->mmu_lock);
>  	}
>  
> +	read_lock(&kvm->mmu_lock);
> +	kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);
> +	read_unlock(&kvm->mmu_lock);
> +
>  	/*
>  	 * No TLB flush is necessary here. KVM will flush TLBs after
>  	 * write-protecting and/or clearing dirty on the newly split SPTEs to
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 04812eaaf61b..4fe018ddd1cd 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12197,6 +12197,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
>  		 * page faults will create the large-page sptes.
>  		 */
>  		kvm_mmu_zap_collapsible_sptes(kvm, new);
> +
> +		/*
> +		 * Free any memory left behind by eager page splitting. Ignore
> +		 * the module parameter since userspace might have changed it.
> +		 */
> +		free_split_caches(kvm);
>  	} else {
>  		/*
>  		 * Initially-all-set does not require write protecting any page,
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f94f72bbd2d3..17fc9247504d 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1336,6 +1336,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);
>  
>  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);

If you end up with a v7, could you please move this declaration to the
previous commit? That would also mean not making
__kvm_mmu_topup_memory_cache static in that commit, rather than
un-static-ing it here.
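
Concretely, something like the following is what I have in mind (just a
sketch reusing the hunks already in this patch, with diff context
elided), squashed into "KVM: Allow for different capacities in
kvm_mmu_memory_cache structs":

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ @@
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ @@
-static int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)

That way patch 21 exports the helper and this patch just uses it.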

Thanks,
Ricardo

>  int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
>  void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>  void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5e2e75014256..b9573e958a03 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -369,7 +369,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
>  		return (void *)__get_free_page(gfp_flags);
>  }
>  
> -static int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
>  {
>  	gfp_t gfp = GFP_KERNEL_ACCOUNT;
>  	void *obj;
> -- 
> 2.36.0.550.gb090851708-goog
> 
