From: Yan Zhao <yan.y.zhao@intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>, <kvm@vger.kernel.org>,
<linux-kernel@vger.kernel.org>,
Michael Roth <michael.roth@amd.com>
Subject: Re: [PATCH] KVM: x86/mmu: Prevent installing hugepages when mem attributes are changing
Date: Mon, 28 Apr 2025 09:32:19 +0800 [thread overview]
Message-ID: <aA7aozbc1grlevOm@yzhao56-desk.sh.intel.com> (raw)
In-Reply-To: <20250426001056.1025157-1-seanjc@google.com>
On Fri, Apr 25, 2025 at 05:10:56PM -0700, Sean Christopherson wrote:
> When changing memory attributes on a subset of a potential hugepage, add
> the hugepage to the invalidation range tracking to prevent installing a
> hugepage until the attributes are fully updated. Like the actual hugepage
> tracking updates in kvm_arch_post_set_memory_attributes(), process only
> the head and tail pages, as any potential hugepages that are entirely
> covered by the range will already be tracked.
>
> Note, only hugepage chunks whose current attributes are NOT mixed need to
> be added to the invalidation set, as mixed attributes already prevent
> installing a hugepage, and it's perfectly safe to install a smaller
> mapping for a gfn whose attributes aren't changing.
>
> Fixes: 8dd2eee9d526 ("KVM: x86/mmu: Handle page fault for private memory")
> Cc: stable@vger.kernel.org
> Reported-by: Michael Roth <michael.roth@amd.com>
> Tested-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 68 ++++++++++++++++++++++++++++++++----------
> 1 file changed, 52 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 63bb77ee1bb1..218ba866a40e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -7669,9 +7669,30 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
> }
>
> #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> +static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> + int level)
> +{
> + return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
> +}
> +
> +static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> + int level)
> +{
> + lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
> +}
> +
> +static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> + int level)
> +{
> + lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
> +}
> +
> bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> struct kvm_gfn_range *range)
> {
> + struct kvm_memory_slot *slot = range->slot;
> + int level;
> +
> /*
> * Zap SPTEs even if the slot can't be mapped PRIVATE. KVM x86 only
> * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
> @@ -7686,6 +7707,37 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
> return false;
>
> + if (WARN_ON_ONCE(range->end <= range->start))
> + return false;
> +
> + /*
> + * If the head and tail pages of the range currently allow a hugepage,
> + * i.e. reside fully in the slot and don't have mixed attributes, then
> + * add each corresponding hugepage range to the ongoing invalidation,
> + * e.g. to prevent KVM from creating a hugepage in response to a fault
> + * for a gfn whose attributes aren't changing. Note, only the range
> + * of gfns whose attributes are being modified needs to be explicitly
> + * unmapped, as that will unmap any existing hugepages.
> + */
> + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> + gfn_t start = gfn_round_for_level(range->start, level);
> + gfn_t end = gfn_round_for_level(range->end - 1, level);
> + gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
> +
> + if ((start != range->start || start + nr_pages > range->end) &&
> + start >= slot->base_gfn &&
> + start + nr_pages <= slot->base_gfn + slot->npages &&
> + !hugepage_test_mixed(slot, start, level))
Instead of testing only the mixed flag in disallow_lpage, could we check
disallow_lpage directly?
That is, if the mixed flag is not set but disallow_lpage is nonzero, a hugepage
is already disallowed for some other reason, so there's no need to add the
range to the ongoing invalidation.
> + kvm_mmu_invalidate_range_add(kvm, start, start + nr_pages);
> +
> + if (end == start)
> + continue;
> +
> + if ((end + nr_pages) <= (slot->base_gfn + slot->npages) &&
> + !hugepage_test_mixed(slot, end, level))
if ((end + nr_pages > range->end) &&
    ((end + nr_pages) <= (slot->base_gfn + slot->npages)) &&
    !lpage_info_slot(end, slot, level)->disallow_lpage)
?
> + kvm_mmu_invalidate_range_add(kvm, end, end + nr_pages);
> + }
> +
> /* Unmap the old attribute page. */
> if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
> range->attr_filter = KVM_FILTER_SHARED;
> @@ -7695,23 +7747,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> return kvm_unmap_gfn_range(kvm, range);
> }
>
> -static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> - int level)
> -{
> - return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
> -}
>
> -static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> - int level)
> -{
> - lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
> -}
> -
> -static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
> - int level)
> -{
> - lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
> -}
>
> static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
> gfn_t gfn, int level, unsigned long attrs)
>
> base-commit: 2d7124941a273c7233849a7a2bbfbeb7e28f1caa
> --
> 2.49.0.850.g28803427d3-goog
>
>
Thread overview: 6+ messages
2025-04-26 0:10 [PATCH] KVM: x86/mmu: Prevent installing hugepages when mem attributes are changing Sean Christopherson
2025-04-28 1:32 ` Yan Zhao [this message]
2025-04-28 14:50 ` Sean Christopherson
2025-04-29 1:09 ` Yan Zhao
2025-04-29 1:23 ` Yan Zhao
2025-04-29 12:57 ` Michael Roth