Kernel KVM virtualization development
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Chao Gao <chao.gao@intel.com>, Borislav Petkov <bp@alien8.de>,
	Xin Li <xin@zytor.com>, Dapeng Mi <dapeng1.mi@linux.intel.com>,
	Francesco Lavra <francescolavra.fl@gmail.com>,
	Manali Shukla <Manali.Shukla@amd.com>
Subject: [PATCH v2 27/32] KVM: nSVM: Access MSRPM in 4-byte chunks only for merging L0 and L1 bitmaps
Date: Tue, 10 Jun 2025 15:57:32 -0700
Message-ID: <20250610225737.156318-28-seanjc@google.com>
In-Reply-To: <20250610225737.156318-1-seanjc@google.com>

Access the MSRPM using u32/4-byte chunks (and appropriately adjusted
offsets) only when merging L0 and L1 bitmaps as part of emulating VMRUN.
The only reason to batch accesses to MSRPMs is to avoid the overhead of
uaccess operations (e.g. STAC/CLAC and bounds checks) when reading L1's
bitmap pointed at by vmcb12.  For all other uses, either per-bit accesses
are more than fast enough (no uaccess), or KVM is only accessing a single
bit (nested_svm_exit_handled_msr()) and so there's nothing to batch.

In addition to (hopefully) documenting the uniqueness of the merging code,
restricting chunked access to _just_ the merging code will allow for
increasing the chunk size (to unsigned long) with minimal risk.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/nested.c | 52 ++++++++++++++-------------------------
 1 file changed, 18 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f9bda148273e..fb0ac87df00a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -197,29 +197,6 @@ void recalc_intercepts(struct vcpu_svm *svm)
 static int nested_svm_msrpm_merge_offsets[6] __ro_after_init;
 static int nested_svm_nr_msrpm_merge_offsets __ro_after_init;
 
-static const u32 msrpm_ranges[] = {0, 0xc0000000, 0xc0010000};
-
-static u32 svm_msrpm_offset(u32 msr)
-{
-	u32 offset;
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(msrpm_ranges); i++) {
-		if (msr < msrpm_ranges[i] ||
-		    msr >= msrpm_ranges[i] + SVM_MSRS_PER_RANGE)
-			continue;
-
-		offset  = (msr - msrpm_ranges[i]) / SVM_MSRS_PER_BYTE;
-		offset += (i * SVM_MSRPM_BYTES_PER_RANGE);  /* add range offset */
-
-		/* Now we have the u8 offset - but need the u32 offset */
-		return offset / 4;
-	}
-
-	/* MSR not in any range */
-	return MSR_INVALID;
-}
-
 int __init nested_svm_init_msrpm_merge_offsets(void)
 {
 	static const u32 merge_msrs[] __initconst = {
@@ -246,11 +223,18 @@ int __init nested_svm_init_msrpm_merge_offsets(void)
 	int i, j;
 
 	for (i = 0; i < ARRAY_SIZE(merge_msrs); i++) {
-		u32 offset = svm_msrpm_offset(merge_msrs[i]);
+		u32 bit_nr = svm_msrpm_bit_nr(merge_msrs[i]);
+		u32 offset;
 
-		if (WARN_ON(offset == MSR_INVALID))
+		if (WARN_ON(bit_nr == MSR_INVALID))
 			return -EIO;
 
+		/*
+		 * Merging is done in 32-bit chunks to reduce the number of
+		 * accesses to L1's bitmap.
+		 */
+		offset = bit_nr / BITS_PER_BYTE / sizeof(u32);
+
 		for (j = 0; j < nested_svm_nr_msrpm_merge_offsets; j++) {
 			if (nested_svm_msrpm_merge_offsets[j] == offset)
 				break;
@@ -1369,26 +1353,26 @@ void svm_leave_nested(struct kvm_vcpu *vcpu)
 
 static int nested_svm_exit_handled_msr(struct vcpu_svm *svm)
 {
-	u32 offset, msr, value;
-	int write, mask;
+	gpa_t base = svm->nested.ctl.msrpm_base_pa;
+	u32 msr, bit_nr;
+	u8 value, mask;
+	int write;
 
 	if (!(vmcb12_is_intercept(&svm->nested.ctl, INTERCEPT_MSR_PROT)))
 		return NESTED_EXIT_HOST;
 
 	msr    = svm->vcpu.arch.regs[VCPU_REGS_RCX];
-	offset = svm_msrpm_offset(msr);
+	bit_nr = svm_msrpm_bit_nr(msr);
 	write  = svm->vmcb->control.exit_info_1 & 1;
-	mask   = 1 << ((2 * (msr & 0xf)) + write);
 
-	if (offset == MSR_INVALID)
+	if (bit_nr == MSR_INVALID)
 		return NESTED_EXIT_DONE;
 
-	/* Offset is in 32 bit units but need in 8 bit units */
-	offset *= 4;
-
-	if (kvm_vcpu_read_guest(&svm->vcpu, svm->nested.ctl.msrpm_base_pa + offset, &value, 4))
+	if (kvm_vcpu_read_guest(&svm->vcpu, base + bit_nr / BITS_PER_BYTE,
+				&value, sizeof(value)))
 		return NESTED_EXIT_DONE;
 
+	mask = BIT(write) << (bit_nr & (BITS_PER_BYTE - 1));
 	return (value & mask) ? NESTED_EXIT_DONE : NESTED_EXIT_HOST;
 }
 
-- 
2.50.0.rc0.642.g800a2b2222-goog


Thread overview: 46+ messages
2025-06-10 22:57 [PATCH v2 00/32] KVM: x86: Clean up MSR interception code Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 01/32] KVM: SVM: Disable interception of SPEC_CTRL iff the MSR exists for the guest Sean Christopherson
2025-06-11  4:38   ` Binbin Wu
2025-06-11  7:14     ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 02/32] KVM: SVM: Allocate IOPM pages after initial setup in svm_hardware_setup() Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 03/32] KVM: SVM: Don't BUG if setting up the MSR intercept bitmaps fails Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 04/32] KVM: SVM: Tag MSR bitmap initialization helpers with __init Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 05/32] KVM: SVM: Use ARRAY_SIZE() to iterate over direct_access_msrs Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 06/32] KVM: SVM: Kill the VM instead of the host if MSR interception is buggy Sean Christopherson
2025-06-11  2:16   ` Mi, Dapeng
2025-06-10 22:57 ` [PATCH v2 07/32] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Sean Christopherson
2025-06-11  6:38   ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 08/32] KVM: SVM: Massage name and param of helper that merges vmcb01 and vmcb12 MSRPMs Sean Christopherson
2025-06-11  2:22   ` Mi, Dapeng
2025-06-10 22:57 ` [PATCH v2 09/32] KVM: SVM: Clean up macros related to architectural MSRPM definitions Sean Christopherson
2025-06-11  6:09   ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 10/32] KVM: nSVM: Use dedicated array of MSRPM offsets to merge L0 and L1 bitmaps Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 11/32] KVM: nSVM: Omit SEV-ES specific passthrough MSRs from L0+L1 bitmap merge Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 12/32] KVM: nSVM: Don't initialize vmcb02 MSRPM with vmcb01's "always passthrough" Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 13/32] KVM: SVM: Add helpers for accessing MSR bitmap that don't rely on offsets Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 14/32] KVM: SVM: Implement and adopt VMX style MSR intercepts APIs Sean Christopherson
2025-06-11  7:31   ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 15/32] KVM: SVM: Pass through GHCB MSR if and only if VM is an SEV-ES guest Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 16/32] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 17/32] KVM: x86: Move definition of X2APIC_MSR() to lapic.h Sean Christopherson
2025-06-11  2:29   ` Mi, Dapeng
2025-06-10 22:57 ` [PATCH v2 18/32] KVM: VMX: Manually recalc all MSR intercepts on userspace MSR filter change Sean Christopherson
2025-06-11  6:52   ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 19/32] KVM: SVM: " Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 20/32] KVM: x86: Rename msr_filter_changed() => recalc_msr_intercepts() Sean Christopherson
2025-06-11  7:09   ` Binbin Wu
2025-06-10 22:57 ` [PATCH v2 21/32] KVM: SVM: Rename init_vmcb_after_set_cpuid() to make it intercepts specific Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 22/32] KVM: SVM: Fold svm_vcpu_init_msrpm() into its sole caller Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 23/32] KVM: SVM: Merge "after set CPUID" intercept recalc helpers Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 24/32] KVM: SVM: Drop explicit check on MSRPM offset when emulating SEV-ES accesses Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 25/32] KVM: SVM: Move svm_msrpm_offset() to nested.c Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 26/32] KVM: SVM: Store MSRPM pointer as "void *" instead of "u32 *" Sean Christopherson
2025-06-10 22:57 ` Sean Christopherson [this message]
2025-06-10 22:57 ` [PATCH v2 28/32] KVM: SVM: Return -EINVAL instead of MSR_INVALID to signal out-of-range MSR Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 29/32] KVM: nSVM: Merge MSRPM in 64-bit chunks on 64-bit kernels Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 30/32] KVM: SVM: Add a helper to allocate and initialize permissions bitmaps Sean Christopherson
2025-06-10 22:57 ` [PATCH v2 31/32] KVM: x86: Simplify userspace filter logic when disabling MSR interception Sean Christopherson
2025-06-11  2:35   ` Mi, Dapeng
2025-06-10 22:57 ` [PATCH v2 32/32] KVM: selftests: Verify KVM disable interception (for userspace) on filter change Sean Christopherson
2025-06-24 19:38 ` [PATCH v2 00/32] KVM: x86: Clean up MSR interception code Sean Christopherson
2025-06-25 12:03 ` Manali Shukla
