* [PATCH 00/15] Unify MSR intercepts in x86
@ 2024-11-27 20:19 Aaron Lewis
2024-11-27 20:19 ` [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Aaron Lewis
` (15 more replies)
0 siblings, 16 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
The goal of this series is to unify MSR intercepts into common code between
VMX and SVM.
The high-level structure of this series is to:
1. Modify SVM MSR intercepts to adopt how VMX does it.
2. Hoist the newly updated SVM MSR intercept implementation to common x86 code.
3. Hoist the VMX MSR intercept implementation to common x86 code.
Aaron Lewis (8):
KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
KVM: SVM: Track MSRPM as "unsigned long", not "u32"
KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM
KVM: SVM: Don't "NULL terminate" the list of possible passthrough MSRs
KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
KVM: x86: Move ownership of passthrough MSR "shadow" to common x86
KVM: x86: Hoist SVM MSR intercepts to common x86 code
KVM: x86: Hoist VMX MSR intercepts to common x86 code
Anish Ghulati (2):
KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes
KVM: SVM: Delete old SVM MSR management code
Sean Christopherson (5):
KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts
KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps
KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES
KVM: SVM: Drop "always" flag from list of possible passthrough MSRs
KVM: VMX: Make list of possible passthrough MSRs "const"
arch/x86/include/asm/kvm-x86-ops.h | 5 +-
arch/x86/include/asm/kvm_host.h | 18 ++
arch/x86/kvm/svm/sev.c | 11 +-
arch/x86/kvm/svm/svm.c | 300 ++++++++++++-----------------
arch/x86/kvm/svm/svm.h | 30 +--
arch/x86/kvm/vmx/main.c | 30 +++
arch/x86/kvm/vmx/vmx.c | 144 +++-----------
arch/x86/kvm/vmx/vmx.h | 11 +-
arch/x86/kvm/x86.c | 129 ++++++++++++-
arch/x86/kvm/x86.h | 3 +
10 files changed, 358 insertions(+), 323 deletions(-)
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:38 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 02/15] KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps Aaron Lewis
` (14 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc
From: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 8 ++++----
arch/x86/kvm/vmx/vmx.c | 8 ++++----
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd15cc6356553..35bcf3a63b606 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -781,14 +781,14 @@ static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
/* Set the shadow bitmaps to the desired intercept states */
if (read)
- set_bit(slot, svm->shadow_msr_intercept.read);
+ __set_bit(slot, svm->shadow_msr_intercept.read);
else
- clear_bit(slot, svm->shadow_msr_intercept.read);
+ __clear_bit(slot, svm->shadow_msr_intercept.read);
if (write)
- set_bit(slot, svm->shadow_msr_intercept.write);
+ __set_bit(slot, svm->shadow_msr_intercept.write);
else
- clear_bit(slot, svm->shadow_msr_intercept.write);
+ __clear_bit(slot, svm->shadow_msr_intercept.write);
}
static bool valid_msr_intercept(u32 index)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3d4a8d5b0b808..0577a7961b9f0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4015,9 +4015,9 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
idx = vmx_get_passthrough_msr_slot(msr);
if (idx >= 0) {
if (type & MSR_TYPE_R)
- clear_bit(idx, vmx->shadow_msr_intercept.read);
+ __clear_bit(idx, vmx->shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- clear_bit(idx, vmx->shadow_msr_intercept.write);
+ __clear_bit(idx, vmx->shadow_msr_intercept.write);
}
if ((type & MSR_TYPE_R) &&
@@ -4057,9 +4057,9 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
idx = vmx_get_passthrough_msr_slot(msr);
if (idx >= 0) {
if (type & MSR_TYPE_R)
- set_bit(idx, vmx->shadow_msr_intercept.read);
+ __set_bit(idx, vmx->shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- set_bit(idx, vmx->shadow_msr_intercept.write);
+ __set_bit(idx, vmx->shadow_msr_intercept.write);
}
if (type & MSR_TYPE_R)
--
2.47.0.338.g60cca15819-goog
* [PATCH 02/15] KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
2024-11-27 20:19 ` [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" " Aaron Lewis
` (13 subsequent siblings)
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc
From: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 35bcf3a63b606..7433dd2a32925 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -852,8 +852,8 @@ static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
BUG_ON(offset == MSR_INVALID);
- read ? clear_bit(bit_read, &tmp) : set_bit(bit_read, &tmp);
- write ? clear_bit(bit_write, &tmp) : set_bit(bit_write, &tmp);
+ read ? __clear_bit(bit_read, &tmp) : __set_bit(bit_read, &tmp);
+ write ? __clear_bit(bit_write, &tmp) : __set_bit(bit_write, &tmp);
msrpm[offset] = tmp;
--
2.47.0.338.g60cca15819-goog
* [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
2024-11-27 20:19 ` [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Aaron Lewis
2024-11-27 20:19 ` [PATCH 02/15] KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:42 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 04/15] KVM: SVM: Track MSRPM as "unsigned long", not "u32" Aaron Lewis
` (12 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Note, a "FIXME" tag was added to svm_msr_filter_changed(). This will
be addressed later in the series after the VMX style MSR intercepts
are added to SVM.
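The inverted polarity can be sketched in user space as follows. This is an illustrative model, not the kernel code: the bitmap helpers are toy stand-ins for the kernel's __set_bit()/__clear_bit(), and the MAX_DIRECT_ACCESS_MSRS value is assumed for the example.

```c
/*
 * Toy model of the inverted "shadow" bitmap polarity: a set bit now
 * means "intercepted", so a freshly created vCPU starts with every
 * bit set (bitmap_fill() in the patch), and passing an MSR through
 * clears its bit.
 */
#include <assert.h>
#include <limits.h>
#include <string.h>

#define MAX_DIRECT_ACCESS_MSRS 48	/* illustrative value */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS ((MAX_DIRECT_ACCESS_MSRS + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long shadow_read[BITMAP_LONGS];

/* Non-atomic bit helpers standing in for __clear_bit()/test_bit(). */
static void clear_bit_np(int nr, unsigned long *map)
{
	map[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

static int test_bit_np(int nr, const unsigned long *map)
{
	return !!(map[nr / BITS_PER_LONG] & (1UL << (nr % BITS_PER_LONG)));
}

/* All MSRs start out in the "intercepted" state. */
static void shadow_init(void)
{
	memset(shadow_read, 0xff, sizeof(shadow_read));
}

/* Inverted polarity: clearing a bit means "pass the MSR through". */
static void shadow_disable_read_intercept(int slot)
{
	clear_bit_np(slot, shadow_read);
}

static int shadow_read_intercepted(int slot)
{
	return test_bit_np(slot, shadow_read);
}
```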
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/svm/svm.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7433dd2a32925..f534cdbba0585 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -781,14 +781,14 @@ static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
/* Set the shadow bitmaps to the desired intercept states */
if (read)
- __set_bit(slot, svm->shadow_msr_intercept.read);
- else
__clear_bit(slot, svm->shadow_msr_intercept.read);
+ else
+ __set_bit(slot, svm->shadow_msr_intercept.read);
if (write)
- __set_bit(slot, svm->shadow_msr_intercept.write);
- else
__clear_bit(slot, svm->shadow_msr_intercept.write);
+ else
+ __set_bit(slot, svm->shadow_msr_intercept.write);
}
static bool valid_msr_intercept(u32 index)
@@ -934,9 +934,10 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
*/
for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
u32 msr = direct_access_msrs[i].index;
- u32 read = test_bit(i, svm->shadow_msr_intercept.read);
- u32 write = test_bit(i, svm->shadow_msr_intercept.write);
+ u32 read = !test_bit(i, svm->shadow_msr_intercept.read);
+ u32 write = !test_bit(i, svm->shadow_msr_intercept.write);
+ /* FIXME: Align the polarity of the bitmaps and params. */
set_msr_interception_bitmap(vcpu, svm->msrpm, msr, read, write);
}
}
@@ -1453,6 +1454,10 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
if (err)
goto error_free_vmsa_page;
+ /* All MSRs start out in the "intercepted" state. */
+ bitmap_fill(svm->shadow_msr_intercept.read, MAX_DIRECT_ACCESS_MSRS);
+ bitmap_fill(svm->shadow_msr_intercept.write, MAX_DIRECT_ACCESS_MSRS);
+
svm->msrpm = svm_vcpu_alloc_msrpm();
if (!svm->msrpm) {
err = -ENOMEM;
--
2.47.0.338.g60cca15819-goog
* [PATCH 04/15] KVM: SVM: Track MSRPM as "unsigned long", not "u32"
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (2 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" " Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM Aaron Lewis
` (11 subsequent siblings)
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Use "unsigned long" instead of "u32" to track MSRPM to match the
bitmap API.
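The offset arithmetic this change touches can be sketched as a hypothetical user-space model (not the kernel code): each MSR consumes two permission bits, so four MSRs fit per byte, and the byte offset is then scaled down to an index into an array of unsigned long rather than u32.

```c
#include <assert.h>

/* Each MSR uses 2 bits (read + write), so 4 MSRs fit in one byte. */
static unsigned int msrpm_byte_offset(unsigned int msr_index_in_range)
{
	return msr_index_in_range / 4;
}

/*
 * After the switch, the MSRPM is indexed as unsigned long, so the byte
 * offset is divided by sizeof(unsigned long) instead of by 4 (u32).
 */
static unsigned int msrpm_ulong_offset(unsigned int msr_index_in_range)
{
	return msrpm_byte_offset(msr_index_in_range) / sizeof(unsigned long);
}
```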
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/svm/svm.c | 18 +++++++++---------
arch/x86/kvm/svm/svm.h | 12 ++++++------
2 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f534cdbba0585..5dd621f78e474 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -276,8 +276,8 @@ u32 svm_msrpm_offset(u32 msr)
offset = (msr - msrpm_ranges[i]) / 4; /* 4 msrs per u8 */
offset += (i * MSRS_RANGE_SIZE); /* add range offset */
- /* Now we have the u8 offset - but need the u32 offset */
- return offset / 4;
+ /* Now we have the u8 offset - but need the ulong offset */
+ return offset / sizeof(unsigned long);
}
/* MSR not in any range */
@@ -799,9 +799,9 @@ static bool valid_msr_intercept(u32 index)
static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
{
u8 bit_write;
+ unsigned long *msrpm;
unsigned long tmp;
u32 offset;
- u32 *msrpm;
/*
* For non-nested case:
@@ -824,7 +824,7 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
return test_bit(bit_write, &tmp);
}
-static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
+static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, unsigned long *msrpm,
u32 msr, int read, int write)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -861,18 +861,18 @@ static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
svm->nested.force_msr_bitmap_recalc = true;
}
-void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
+void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
int read, int write)
{
set_shadow_msr_intercept(vcpu, msr, read, write);
set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
}
-u32 *svm_vcpu_alloc_msrpm(void)
+unsigned long *svm_vcpu_alloc_msrpm(void)
{
unsigned int order = get_order(MSRPM_SIZE);
struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, order);
- u32 *msrpm;
+ unsigned long *msrpm;
if (!pages)
return NULL;
@@ -883,7 +883,7 @@ u32 *svm_vcpu_alloc_msrpm(void)
return msrpm;
}
-void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm)
+void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
{
int i;
@@ -917,7 +917,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
svm->x2avic_msrs_intercepted = intercept;
}
-void svm_vcpu_free_msrpm(u32 *msrpm)
+void svm_vcpu_free_msrpm(unsigned long *msrpm)
{
__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16eb191..d73b184675641 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -185,7 +185,7 @@ struct svm_nested_state {
u64 last_vmcb12_gpa;
/* These are the merged vectors */
- u32 *msrpm;
+ unsigned long *msrpm;
/* A VMRUN has started but has not yet been performed, so
* we cannot inject a nested vmexit yet. */
@@ -266,7 +266,7 @@ struct vcpu_svm {
*/
u64 virt_spec_ctrl;
- u32 *msrpm;
+ unsigned long *msrpm;
ulong nmi_iret_rip;
@@ -596,9 +596,9 @@ static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
extern bool dump_invalid_vmcb;
u32 svm_msrpm_offset(u32 msr);
-u32 *svm_vcpu_alloc_msrpm(void);
-void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm);
-void svm_vcpu_free_msrpm(u32 *msrpm);
+unsigned long *svm_vcpu_alloc_msrpm(void);
+void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm);
+void svm_vcpu_free_msrpm(unsigned long *msrpm);
void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
void svm_enable_lbrv(struct kvm_vcpu *vcpu);
void svm_update_lbrv(struct kvm_vcpu *vcpu);
@@ -612,7 +612,7 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
void svm_set_gif(struct vcpu_svm *svm, bool value);
int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
-void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
+void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
int read, int write);
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
--
2.47.0.338.g60cca15819-goog
* [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (3 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 04/15] KVM: SVM: Track MSRPM as "unsigned long", not "u32" Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:43 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes Aaron Lewis
` (10 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis, Anish Ghulati
VMX MSR interception is done via three functions:
vmx_disable_intercept_for_msr(vcpu, msr, type)
vmx_enable_intercept_for_msr(vcpu, msr, type)
vmx_set_intercept_for_msr(vcpu, msr, type, value)
While SVM uses:
set_msr_interception(vcpu, msrpm, msr, read, write)
The SVM code is not intuitive (it uses 0 to enable interception and 1 to
disable it), and it forces both the read and write intercepts to be
updated on every call, which is not always required.
Add helper functions to SVM to match VMX:
svm_disable_intercept_for_msr(vcpu, msr, type)
svm_enable_intercept_for_msr(vcpu, msr, type)
svm_set_intercept_for_msr(vcpu, msr, type, enable_intercept)
Additionally, update calls to set_msr_interception() to use the new
functions. This update is only made to calls that toggle interception
for both read and write.
Keep the old paths for now, they will be deleted once all code is
converted to the new helpers.
Opportunistically, add svm_get_msr_bitmap_entries() to abstract the MSR
bitmap away from the intercept functions. This will be needed later in
the series when the code is hoisted to common x86 code.
No functional change.
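As a rough user-space model of the new interface (the function names mirror the patch, but the toy per-MSR state is invented for illustration), the set wrapper simply dispatches on the desired intercept state instead of taking raw 0/1 read/write parameters:

```c
#include <assert.h>
#include <stdbool.h>

#define MSR_TYPE_R	1
#define MSR_TYPE_W	2
#define MSR_TYPE_RW	(MSR_TYPE_R | MSR_TYPE_W)

/* Toy per-MSR intercept state standing in for the real bitmaps. */
static bool read_intercepted  = true;
static bool write_intercepted = true;

static void enable_intercept_for_msr(int type)
{
	if (type & MSR_TYPE_R)
		read_intercepted = true;
	if (type & MSR_TYPE_W)
		write_intercepted = true;
}

static void disable_intercept_for_msr(int type)
{
	if (type & MSR_TYPE_R)
		read_intercepted = false;
	if (type & MSR_TYPE_W)
		write_intercepted = false;
}

/* Mirrors the shape of the new svm_set_intercept_for_msr() wrapper. */
static void set_intercept_for_msr(int type, bool enable_intercept)
{
	if (enable_intercept)
		enable_intercept_for_msr(type);
	else
		disable_intercept_for_msr(type);
}
```

Callers that only care about one direction can now pass MSR_TYPE_R or MSR_TYPE_W alone, which the old read/write-pair API could not express per call.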
Suggested-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Anish Ghulati <aghulati@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/svm/sev.c | 11 ++--
arch/x86/kvm/svm/svm.c | 144 ++++++++++++++++++++++++++++++++++-------
arch/x86/kvm/svm/svm.h | 12 ++++
3 files changed, 138 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c6c8524859001..cdd3799e71f24 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4448,7 +4448,8 @@ static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm)
bool v_tsc_aux = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) ||
guest_cpuid_has(vcpu, X86_FEATURE_RDPID);
- set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, v_tsc_aux, v_tsc_aux);
+ if (v_tsc_aux)
+ svm_disable_intercept_for_msr(vcpu, MSR_TSC_AUX, MSR_TYPE_RW);
}
/*
@@ -4466,9 +4467,9 @@ static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm)
*/
if (guest_can_use(vcpu, X86_FEATURE_XSAVES) &&
guest_cpuid_has(vcpu, X86_FEATURE_XSAVES))
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_XSS, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_XSS, MSR_TYPE_RW);
else
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_XSS, 0, 0);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_XSS, MSR_TYPE_RW);
}
void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm)
@@ -4540,8 +4541,8 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
svm_clr_intercept(svm, INTERCEPT_XSETBV);
/* Clear intercepts on selected MSRs */
- set_msr_interception(vcpu, svm->msrpm, MSR_EFER, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_CR_PAT, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_EFER, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_CR_PAT, MSR_TYPE_RW);
}
void sev_init_vmcb(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5dd621f78e474..b982729ef7638 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -868,6 +868,102 @@ void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
}
+static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+ unsigned long **read_map, u8 *read_bit,
+ unsigned long **write_map, u8 *write_bit)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ u32 offset;
+
+ offset = svm_msrpm_offset(msr);
+ *read_bit = 2 * (msr & 0x0f);
+ *write_bit = 2 * (msr & 0x0f) + 1;
+ BUG_ON(offset == MSR_INVALID);
+
+ *read_map = &svm->msrpm[offset];
+ *write_map = &svm->msrpm[offset];
+}
+
+#define BUILD_SVM_MSR_BITMAP_HELPER(fn, bitop, access) \
+static inline void fn(struct kvm_vcpu *vcpu, u32 msr) \
+{ \
+ unsigned long *read_map, *write_map; \
+ u8 read_bit, write_bit; \
+ \
+ svm_get_msr_bitmap_entries(vcpu, msr, &read_map, &read_bit, \
+ &write_map, &write_bit); \
+ bitop(access##_bit, access##_map); \
+}
+
+BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_read, __set_bit, read)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_write, __set_bit, write)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_read, __clear_bit, read)
+BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_write, __clear_bit, write)
+
+void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ int slot;
+
+ slot = direct_access_msr_slot(msr);
+ WARN_ON(slot == -ENOENT);
+ if (slot >= 0) {
+ /* Set the shadow bitmaps to the desired intercept states */
+ if (type & MSR_TYPE_R)
+ __clear_bit(slot, svm->shadow_msr_intercept.read);
+ if (type & MSR_TYPE_W)
+ __clear_bit(slot, svm->shadow_msr_intercept.write);
+ }
+
+ /*
+ * Don't disable interception for the MSR if userspace wants to
+ * handle it.
+ */
+ if ((type & MSR_TYPE_R) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
+ svm_set_msr_bitmap_read(vcpu, msr);
+ type &= ~MSR_TYPE_R;
+ }
+
+ if ((type & MSR_TYPE_W) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
+ svm_set_msr_bitmap_write(vcpu, msr);
+ type &= ~MSR_TYPE_W;
+ }
+
+ if (type & MSR_TYPE_R)
+ svm_clear_msr_bitmap_read(vcpu, msr);
+
+ if (type & MSR_TYPE_W)
+ svm_clear_msr_bitmap_write(vcpu, msr);
+
+ svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
+ svm->nested.force_msr_bitmap_recalc = true;
+}
+
+void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ int slot;
+
+ slot = direct_access_msr_slot(msr);
+ WARN_ON(slot == -ENOENT);
+ if (slot >= 0) {
+ /* Set the shadow bitmaps to the desired intercept states */
+ if (type & MSR_TYPE_R)
+ __set_bit(slot, svm->shadow_msr_intercept.read);
+ if (type & MSR_TYPE_W)
+ __set_bit(slot, svm->shadow_msr_intercept.write);
+ }
+
+ if (type & MSR_TYPE_R)
+ svm_set_msr_bitmap_read(vcpu, msr);
+
+ if (type & MSR_TYPE_W)
+ svm_set_msr_bitmap_write(vcpu, msr);
+
+ svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
+ svm->nested.force_msr_bitmap_recalc = true;
+}
+
unsigned long *svm_vcpu_alloc_msrpm(void)
{
unsigned int order = get_order(MSRPM_SIZE);
@@ -890,7 +986,8 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
if (!direct_access_msrs[i].always)
continue;
- set_msr_interception(vcpu, msrpm, direct_access_msrs[i].index, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
+ MSR_TYPE_RW);
}
}
@@ -910,8 +1007,8 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
if ((index < APIC_BASE_MSR) ||
(index > APIC_BASE_MSR + 0xff))
continue;
- set_msr_interception(&svm->vcpu, svm->msrpm, index,
- !intercept, !intercept);
+
+ svm_set_intercept_for_msr(&svm->vcpu, index, MSR_TYPE_RW, intercept);
}
svm->x2avic_msrs_intercepted = intercept;
@@ -1001,13 +1098,13 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);
if (sev_es_guest(vcpu->kvm))
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW);
/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
if (is_guest_mode(vcpu))
@@ -1021,10 +1118,10 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 0, 0);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 0, 0);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);
/*
* Move the LBR msrs back to the vmcb01 to avoid copying them
@@ -1216,8 +1313,8 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
svm_set_intercept(svm, INTERCEPT_VMSAVE);
svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+ svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
} else {
/*
* If hardware supports Virtual VMLOAD VMSAVE then enable it
@@ -1229,8 +1326,8 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
}
/* No need to intercept these MSRs */
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
}
}
@@ -1359,7 +1456,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
* of MSR_IA32_SPEC_CTRL.
*/
if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
if (kvm_vcpu_apicv_active(vcpu))
avic_init_vmcb(svm, vmcb);
@@ -3092,7 +3189,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
* We update the L1 MSR bit as well since it will end up
* touching the MSR anyway now.
*/
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL,
+ MSR_TYPE_RW);
break;
case MSR_AMD64_VIRT_SPEC_CTRL:
if (!msr->host_initiated &&
@@ -4430,13 +4528,11 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
svm_recalc_instruction_intercepts(vcpu, svm);
- if (boot_cpu_has(X86_FEATURE_IBPB))
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_PRED_CMD, 0,
- !!guest_has_pred_cmd_msr(vcpu));
+ if (boot_cpu_has(X86_FEATURE_IBPB) && guest_has_pred_cmd_msr(vcpu))
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);
- if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
- set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
- !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+ if (boot_cpu_has(X86_FEATURE_FLUSH_L1D) && guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
if (sev_guest(vcpu->kvm))
sev_vcpu_after_set_cpuid(svm);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d73b184675641..b008c190188a2 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -618,6 +618,18 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
int trig_mode, int vec);
+void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+
+static inline void svm_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
+ int type, bool enable_intercept)
+{
+ if (enable_intercept)
+ svm_enable_intercept_for_msr(vcpu, msr, type);
+ else
+ svm_disable_intercept_for_msr(vcpu, msr, type);
+}
+
/* nested.c */
#define NESTED_EXIT_HOST 0 /* Exit handled on host level */
--
2.47.0.338.g60cca15819-goog
* [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (4 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:47 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 07/15] KVM: SVM: Delete old SVM MSR management code Aaron Lewis
` (9 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Anish Ghulati, Aaron Lewis
From: Anish Ghulati <aghulati@google.com>
For all direct access MSRs, disable the MSR interception explicitly.
svm_disable_intercept_for_msr() checks the new MSR filter and ensures that
KVM enables interception if userspace wants to filter the MSR.
This change is similar to the VMX change:
d895f28ed6da ("KVM: VMX: Skip filter updates for MSRs that KVM is already intercepting")
Adopt it in SVM to align the two implementations.
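The refresh logic can be sketched as follows; this is a toy user-space model where the inverted-polarity shadow bitmap is a bool array and the filter re-check inside the disable path is stubbed out with a counter:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_MSRS 4

/* Toy shadow state: true = KVM wants the MSR intercepted (inverted polarity). */
static bool shadow_read[NR_MSRS] = { true, false, true, false };
static int refresh_calls;

/*
 * Stand-in for svm_disable_intercept_for_msr(), which re-checks the new
 * userspace MSR filter and re-enables interception when required.
 */
static void disable_intercept(int msr)
{
	(void)msr;
	refresh_calls++;
}

/*
 * Only pass-through MSRs need a refresh on a filter change; MSRs KVM is
 * already intercepting stay intercepted regardless of the filter.
 */
static void msr_filter_changed(void)
{
	for (int i = 0; i < NR_MSRS; i++) {
		if (!shadow_read[i])
			disable_intercept(i);
	}
}
```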
Suggested-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Anish Ghulati <aghulati@google.com>
---
arch/x86/kvm/svm/svm.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b982729ef7638..37b8683849ed2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1025,17 +1025,21 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
u32 i;
/*
- * Set intercept permissions for all direct access MSRs again. They
- * will automatically get filtered through the MSR filter, so we are
- * back in sync after this.
+ * Redo intercept permissions for MSRs that KVM is passing through to
+ * the guest. Disabling interception will check the new MSR filter and
+ * ensure that KVM enables interception if userspace wants to filter
+ * the MSR. MSRs that KVM is already intercepting don't need to be
+ * refreshed since KVM is going to intercept them regardless of what
+ * userspace wants.
*/
for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
u32 msr = direct_access_msrs[i].index;
- u32 read = !test_bit(i, svm->shadow_msr_intercept.read);
- u32 write = !test_bit(i, svm->shadow_msr_intercept.write);
- /* FIXME: Align the polarity of the bitmaps and params. */
- set_msr_interception_bitmap(vcpu, svm->msrpm, msr, read, write);
+ if (!test_bit(i, svm->shadow_msr_intercept.read))
+ svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
+
+ if (!test_bit(i, svm->shadow_msr_intercept.write))
+ svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
}
}
--
2.47.0.338.g60cca15819-goog
* [PATCH 07/15] KVM: SVM: Delete old SVM MSR management code
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (5 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES Aaron Lewis
` (8 subsequent siblings)
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Anish Ghulati
From: Anish Ghulati <aghulati@google.com>
Delete the old SVM code to manage MSR interception. There are no more
calls to these functions:
set_msr_interception_bitmap()
set_msr_interception()
set_shadow_msr_intercept()
valid_msr_intercept()
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Anish Ghulati <aghulati@google.com>
---
arch/x86/kvm/svm/svm.c | 70 ------------------------------------------
arch/x86/kvm/svm/svm.h | 2 --
2 files changed, 72 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 37b8683849ed2..2380059727168 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -770,32 +770,6 @@ static int direct_access_msr_slot(u32 msr)
return -ENOENT;
}
-static void set_shadow_msr_intercept(struct kvm_vcpu *vcpu, u32 msr, int read,
- int write)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- int slot = direct_access_msr_slot(msr);
-
- if (slot == -ENOENT)
- return;
-
- /* Set the shadow bitmaps to the desired intercept states */
- if (read)
- __clear_bit(slot, svm->shadow_msr_intercept.read);
- else
- __set_bit(slot, svm->shadow_msr_intercept.read);
-
- if (write)
- __clear_bit(slot, svm->shadow_msr_intercept.write);
- else
- __set_bit(slot, svm->shadow_msr_intercept.write);
-}
-
-static bool valid_msr_intercept(u32 index)
-{
- return direct_access_msr_slot(index) != -ENOENT;
-}
-
static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
{
u8 bit_write;
@@ -824,50 +798,6 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
return test_bit(bit_write, &tmp);
}
-static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, unsigned long *msrpm,
- u32 msr, int read, int write)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- u8 bit_read, bit_write;
- unsigned long tmp;
- u32 offset;
-
- /*
- * If this warning triggers extend the direct_access_msrs list at the
- * beginning of the file
- */
- WARN_ON(!valid_msr_intercept(msr));
-
- /* Enforce non allowed MSRs to trap */
- if (read && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ))
- read = 0;
-
- if (write && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE))
- write = 0;
-
- offset = svm_msrpm_offset(msr);
- bit_read = 2 * (msr & 0x0f);
- bit_write = 2 * (msr & 0x0f) + 1;
- tmp = msrpm[offset];
-
- BUG_ON(offset == MSR_INVALID);
-
- read ? __clear_bit(bit_read, &tmp) : __set_bit(bit_read, &tmp);
- write ? __clear_bit(bit_write, &tmp) : __set_bit(bit_write, &tmp);
-
- msrpm[offset] = tmp;
-
- svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
- svm->nested.force_msr_bitmap_recalc = true;
-}
-
-void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
- int read, int write)
-{
- set_shadow_msr_intercept(vcpu, msr, read, write);
- set_msr_interception_bitmap(vcpu, msrpm, msr, read, write);
-}
-
static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
unsigned long **read_map, u8 *read_bit,
unsigned long **write_map, u8 *write_bit)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index b008c190188a2..2513990c5b6e6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -612,8 +612,6 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu);
bool svm_interrupt_blocked(struct kvm_vcpu *vcpu);
void svm_set_gif(struct vcpu_svm *svm, bool value);
int svm_invoke_exit_handler(struct kvm_vcpu *vcpu, u64 exit_code);
-void set_msr_interception(struct kvm_vcpu *vcpu, unsigned long *msrpm, u32 msr,
- int read, int write);
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool disable);
void svm_complete_interrupt_delivery(struct kvm_vcpu *vcpu, int delivery_mode,
int trig_mode, int vec);
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
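[Editorial sketch, not part of the posted series: the MSRPM bit-pair encoding that the deleted set_msr_interception_bitmap() manipulated — read bit at 2 * (msr & 0x0f), write bit at that + 1, within the unsigned long chunk selected by svm_msrpm_offset() — can be modeled in standalone userspace C. All names below are illustrative, not kernel APIs.]

```c
/*
 * Userspace model (illustrative only, not kernel code) of one 16-MSR
 * chunk of the SVM MSR permission map (MSRPM).  A set bit means
 * "intercept"; a clear bit means "pass through".
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static inline unsigned int msrpm_read_bit(uint32_t msr)
{
	return 2 * (msr & 0x0f);
}

static inline unsigned int msrpm_write_bit(uint32_t msr)
{
	return 2 * (msr & 0x0f) + 1;
}

/* All 16 MSRs of a chunk start out intercepted for reads and writes. */
static inline unsigned long msrpm_chunk_init(void)
{
	return ~0UL;
}

/*
 * Mirror the deleted helper's polarity: read/write == 1 requests
 * pass-through, so the corresponding intercept bit is cleared.
 */
static inline unsigned long msrpm_update(unsigned long chunk, uint32_t msr,
					 int read, int write)
{
	if (read)
		chunk &= ~(1UL << msrpm_read_bit(msr));
	else
		chunk |= 1UL << msrpm_read_bit(msr);

	if (write)
		chunk &= ~(1UL << msrpm_write_bit(msr));
	else
		chunk |= 1UL << msrpm_write_bit(msr);

	return chunk;
}

static inline bool msrpm_write_intercepted(unsigned long chunk, uint32_t msr)
{
	return chunk & (1UL << msrpm_write_bit(msr));
}
```

For MSR_STAR (0xC0000081), `msr & 0x0f` is 1, so its read/write bits are 2 and 3 of the chunk.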
* [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (6 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 07/15] KVM: SVM: Delete old SVM MSR management code Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-12-03 21:21 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs Aaron Lewis
` (7 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc
From: Sean Christopherson <seanjc@google.com>
Mark MSR_AMD64_SEV_ES_GHCB as a conditional passthrough MSR, i.e. drop
its "always" flag, and instead disable interception of the GHCB MSR in
svm_vcpu_init_msrpm() if and only if the VM is an SEV-ES guest, as the
GHCB MSR is relevant only to SEV-ES guests.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2380059727168..25d41709a0eaa 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -108,7 +108,7 @@ static const struct svm_direct_access_msrs {
{ .index = MSR_IA32_XSS, .always = false },
{ .index = MSR_EFER, .always = false },
{ .index = MSR_IA32_CR_PAT, .always = false },
- { .index = MSR_AMD64_SEV_ES_GHCB, .always = true },
+ { .index = MSR_AMD64_SEV_ES_GHCB, .always = false },
{ .index = MSR_TSC_AUX, .always = false },
{ .index = X2APIC_MSR(APIC_ID), .always = false },
{ .index = X2APIC_MSR(APIC_LVR), .always = false },
@@ -919,6 +919,9 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
MSR_TYPE_RW);
}
+
+ if (sev_es_guest(vcpu->kvm))
+ svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
}
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (7 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-12-03 21:26 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the " Aaron Lewis
` (6 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc
From: Sean Christopherson <seanjc@google.com>
Drop the "always" flag from direct_access_msrs, converting the array to
a plain list of u32 indices, and open code the MSRs that are always
passed through in svm_vcpu_init_msrpm().
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 134 ++++++++++++++++++++---------------------
1 file changed, 67 insertions(+), 67 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 25d41709a0eaa..3813258497e49 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -81,51 +81,48 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
#define X2APIC_MSR(x) (APIC_BASE_MSR + (x >> 4))
-static const struct svm_direct_access_msrs {
- u32 index; /* Index of the MSR */
- bool always; /* True if intercept is initially cleared */
-} direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
- { .index = MSR_STAR, .always = true },
- { .index = MSR_IA32_SYSENTER_CS, .always = true },
- { .index = MSR_IA32_SYSENTER_EIP, .always = false },
- { .index = MSR_IA32_SYSENTER_ESP, .always = false },
+static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
+ MSR_STAR,
+ MSR_IA32_SYSENTER_CS,
+ MSR_IA32_SYSENTER_EIP,
+ MSR_IA32_SYSENTER_ESP,
#ifdef CONFIG_X86_64
- { .index = MSR_GS_BASE, .always = true },
- { .index = MSR_FS_BASE, .always = true },
- { .index = MSR_KERNEL_GS_BASE, .always = true },
- { .index = MSR_LSTAR, .always = true },
- { .index = MSR_CSTAR, .always = true },
- { .index = MSR_SYSCALL_MASK, .always = true },
+ MSR_GS_BASE,
+ MSR_FS_BASE,
+ MSR_KERNEL_GS_BASE,
+ MSR_LSTAR,
+ MSR_CSTAR,
+ MSR_SYSCALL_MASK,
#endif
- { .index = MSR_IA32_SPEC_CTRL, .always = false },
- { .index = MSR_IA32_PRED_CMD, .always = false },
- { .index = MSR_IA32_FLUSH_CMD, .always = false },
- { .index = MSR_IA32_DEBUGCTLMSR, .always = false },
- { .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
- { .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
- { .index = MSR_IA32_LASTINTFROMIP, .always = false },
- { .index = MSR_IA32_LASTINTTOIP, .always = false },
- { .index = MSR_IA32_XSS, .always = false },
- { .index = MSR_EFER, .always = false },
- { .index = MSR_IA32_CR_PAT, .always = false },
- { .index = MSR_AMD64_SEV_ES_GHCB, .always = false },
- { .index = MSR_TSC_AUX, .always = false },
- { .index = X2APIC_MSR(APIC_ID), .always = false },
- { .index = X2APIC_MSR(APIC_LVR), .always = false },
- { .index = X2APIC_MSR(APIC_TASKPRI), .always = false },
- { .index = X2APIC_MSR(APIC_ARBPRI), .always = false },
- { .index = X2APIC_MSR(APIC_PROCPRI), .always = false },
- { .index = X2APIC_MSR(APIC_EOI), .always = false },
- { .index = X2APIC_MSR(APIC_RRR), .always = false },
- { .index = X2APIC_MSR(APIC_LDR), .always = false },
- { .index = X2APIC_MSR(APIC_DFR), .always = false },
- { .index = X2APIC_MSR(APIC_SPIV), .always = false },
- { .index = X2APIC_MSR(APIC_ISR), .always = false },
- { .index = X2APIC_MSR(APIC_TMR), .always = false },
- { .index = X2APIC_MSR(APIC_IRR), .always = false },
- { .index = X2APIC_MSR(APIC_ESR), .always = false },
- { .index = X2APIC_MSR(APIC_ICR), .always = false },
- { .index = X2APIC_MSR(APIC_ICR2), .always = false },
+ MSR_IA32_SPEC_CTRL,
+ MSR_IA32_PRED_CMD,
+ MSR_IA32_FLUSH_CMD,
+ MSR_IA32_DEBUGCTLMSR,
+ MSR_IA32_LASTBRANCHFROMIP,
+ MSR_IA32_LASTBRANCHTOIP,
+ MSR_IA32_LASTINTFROMIP,
+ MSR_IA32_LASTINTTOIP,
+ MSR_IA32_XSS,
+ MSR_EFER,
+ MSR_IA32_CR_PAT,
+ MSR_AMD64_SEV_ES_GHCB,
+ MSR_TSC_AUX,
+ X2APIC_MSR(APIC_ID),
+ X2APIC_MSR(APIC_LVR),
+ X2APIC_MSR(APIC_TASKPRI),
+ X2APIC_MSR(APIC_ARBPRI),
+ X2APIC_MSR(APIC_PROCPRI),
+ X2APIC_MSR(APIC_EOI),
+ X2APIC_MSR(APIC_RRR),
+ X2APIC_MSR(APIC_LDR),
+ X2APIC_MSR(APIC_DFR),
+ X2APIC_MSR(APIC_SPIV),
+ X2APIC_MSR(APIC_ISR),
+ X2APIC_MSR(APIC_TMR),
+ X2APIC_MSR(APIC_IRR),
+ X2APIC_MSR(APIC_ESR),
+ X2APIC_MSR(APIC_ICR),
+ X2APIC_MSR(APIC_ICR2),
/*
* Note:
@@ -134,15 +131,15 @@ static const struct svm_direct_access_msrs {
* the AVIC hardware would generate GP fault. Therefore, always
* intercept the MSR 0x832, and do not setup direct_access_msr.
*/
- { .index = X2APIC_MSR(APIC_LVTTHMR), .always = false },
- { .index = X2APIC_MSR(APIC_LVTPC), .always = false },
- { .index = X2APIC_MSR(APIC_LVT0), .always = false },
- { .index = X2APIC_MSR(APIC_LVT1), .always = false },
- { .index = X2APIC_MSR(APIC_LVTERR), .always = false },
- { .index = X2APIC_MSR(APIC_TMICT), .always = false },
- { .index = X2APIC_MSR(APIC_TMCCT), .always = false },
- { .index = X2APIC_MSR(APIC_TDCR), .always = false },
- { .index = MSR_INVALID, .always = false },
+ X2APIC_MSR(APIC_LVTTHMR),
+ X2APIC_MSR(APIC_LVTPC),
+ X2APIC_MSR(APIC_LVT0),
+ X2APIC_MSR(APIC_LVT1),
+ X2APIC_MSR(APIC_LVTERR),
+ X2APIC_MSR(APIC_TMICT),
+ X2APIC_MSR(APIC_TMCCT),
+ X2APIC_MSR(APIC_TDCR),
+ MSR_INVALID,
};
/*
@@ -763,9 +760,10 @@ static int direct_access_msr_slot(u32 msr)
{
u32 i;
- for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++)
- if (direct_access_msrs[i].index == msr)
+ for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+ if (direct_access_msrs[i] == msr)
return i;
+ }
return -ENOENT;
}
@@ -911,15 +909,17 @@ unsigned long *svm_vcpu_alloc_msrpm(void)
void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
{
- int i;
-
- for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
- if (!direct_access_msrs[i].always)
- continue;
- svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
- MSR_TYPE_RW);
- }
+ svm_disable_intercept_for_msr(vcpu, MSR_STAR, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+#ifdef CONFIG_X86_64
+ svm_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_LSTAR, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_CSTAR, MSR_TYPE_RW);
+ svm_disable_intercept_for_msr(vcpu, MSR_SYSCALL_MASK, MSR_TYPE_RW);
+#endif
if (sev_es_guest(vcpu->kvm))
svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
}
@@ -935,7 +935,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
return;
for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
- int index = direct_access_msrs[i].index;
+ int index = direct_access_msrs[i];
if ((index < APIC_BASE_MSR) ||
(index > APIC_BASE_MSR + 0xff))
@@ -965,8 +965,8 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
* refreshed since KVM is going to intercept them regardless of what
* userspace wants.
*/
- for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
- u32 msr = direct_access_msrs[i].index;
+ for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+ u32 msr = direct_access_msrs[i];
if (!test_bit(i, svm->shadow_msr_intercept.read))
svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
@@ -1009,10 +1009,10 @@ static void init_msrpm_offsets(void)
memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
- for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
+ for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
u32 offset;
- offset = svm_msrpm_offset(direct_access_msrs[i].index);
+ offset = svm_msrpm_offset(direct_access_msrs[i]);
BUG_ON(offset == MSR_INVALID);
add_msr_offset(offset);
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the list of possible passthrough MSRs
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (8 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-12-03 21:30 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 11/15] KVM: VMX: Make list of possible passthrough MSRs "const" Aaron Lewis
` (5 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Drop the MSR_INVALID sentinel entry from direct_access_msrs and bound
all loops with ARRAY_SIZE() instead, i.e. derive the list's length from
the list itself rather than from a terminator.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/kvm/svm/svm.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3813258497e49..4e30efe90c541 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -81,7 +81,7 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
#define X2APIC_MSR(x) (APIC_BASE_MSR + (x >> 4))
-static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
+static const u32 direct_access_msrs[] = {
MSR_STAR,
MSR_IA32_SYSENTER_CS,
MSR_IA32_SYSENTER_EIP,
@@ -139,7 +139,6 @@ static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
X2APIC_MSR(APIC_TMICT),
X2APIC_MSR(APIC_TMCCT),
X2APIC_MSR(APIC_TDCR),
- MSR_INVALID,
};
/*
@@ -760,7 +759,7 @@ static int direct_access_msr_slot(u32 msr)
{
u32 i;
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+ for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
if (direct_access_msrs[i] == msr)
return i;
}
@@ -934,7 +933,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
if (!x2avic_enabled)
return;
- for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
+ for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
int index = direct_access_msrs[i];
if ((index < APIC_BASE_MSR) ||
@@ -965,7 +964,7 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
* refreshed since KVM is going to intercept them regardless of what
* userspace wants.
*/
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+ for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
u32 msr = direct_access_msrs[i];
if (!test_bit(i, svm->shadow_msr_intercept.read))
@@ -1009,7 +1008,7 @@ static void init_msrpm_offsets(void)
memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
+ for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
u32 offset;
offset = svm_msrpm_offset(direct_access_msrs[i]);
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
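[Editorial sketch, not part of the posted series: the sentinel-to-ARRAY_SIZE() conversion above can be modeled in standalone C. `demo_msrs` and `demo_msr_slot` are hypothetical names; the MSR indices are the architectural values.]

```c
/*
 * Userspace model of the post-patch lookup: the list's length comes
 * from ARRAY_SIZE() rather than an MSR_INVALID terminator, and a miss
 * returns -ENOENT, matching direct_access_msr_slot().
 */
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

/* Illustrative subset of direct_access_msrs. */
static const uint32_t demo_msrs[] = {
	0xC0000081,	/* MSR_STAR */
	0x174,		/* MSR_IA32_SYSENTER_CS */
	0x175,		/* MSR_IA32_SYSENTER_ESP */
};

static int demo_msr_slot(uint32_t msr)
{
	uint32_t i;

	for (i = 0; i < ARRAY_SIZE(demo_msrs); i++) {
		if (demo_msrs[i] == msr)
			return i;
	}

	return -ENOENT;
}
```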
* [PATCH 11/15] KVM: VMX: Make list of possible passthrough MSRs "const"
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (9 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the " Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops Aaron Lewis
` (4 subsequent siblings)
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc
From: Sean Christopherson <seanjc@google.com>
Tag VMX's list of possible passthrough MSRs as "const", as KVM never
modifies the list after it is defined.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0577a7961b9f0..bc64e7cc02704 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -167,7 +167,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
* List of MSRs that can be directly passed to the guest.
* In addition to these x2apic, PT and LBR MSRs are handled specially.
*/
-static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
+static const u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
MSR_IA32_SPEC_CTRL,
MSR_IA32_PRED_CMD,
MSR_IA32_FLUSH_CMD,
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (10 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 11/15] KVM: VMX: Make list of possible passthrough MSRs "const" Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 21:57 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 13/15] KVM: x86: Move ownership of passthrough MSR "shadow" to common x86 Aaron Lewis
` (3 subsequent siblings)
15 siblings, 1 reply; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Move the list of possible passthrough MSRs into kvm_x86_ops so that it
can be accessed from common x86 code.
In order to set the passthrough MSRs in kvm_x86_ops for VMX,
"vmx_possible_passthrough_msrs" had to be relocated to main.c, and
vmx_msr_filter_changed() had to move along with it because it uses
"vmx_possible_passthrough_msrs".
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/include/asm/kvm_host.h | 3 ++
arch/x86/kvm/svm/svm.c | 18 ++-------
arch/x86/kvm/vmx/main.c | 58 ++++++++++++++++++++++++++++
arch/x86/kvm/vmx/vmx.c | 67 ++-------------------------------
arch/x86/kvm/x86.c | 13 +++++++
arch/x86/kvm/x86.h | 1 +
6 files changed, 83 insertions(+), 77 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e8afc82ae2fb..7e9fee4d36cc2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1817,6 +1817,9 @@ struct kvm_x86_ops {
int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
void (*migrate_timers)(struct kvm_vcpu *vcpu);
+
+ const u32 * const possible_passthrough_msrs;
+ const u32 nr_possible_passthrough_msrs;
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4e30efe90c541..23e6515bb7904 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -755,18 +755,6 @@ static void clr_dr_intercepts(struct vcpu_svm *svm)
recalc_intercepts(svm);
}
-static int direct_access_msr_slot(u32 msr)
-{
- u32 i;
-
- for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
- if (direct_access_msrs[i] == msr)
- return i;
- }
-
- return -ENOENT;
-}
-
static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
{
u8 bit_write;
@@ -832,7 +820,7 @@ void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
struct vcpu_svm *svm = to_svm(vcpu);
int slot;
- slot = direct_access_msr_slot(msr);
+ slot = kvm_passthrough_msr_slot(msr);
WARN_ON(slot == -ENOENT);
if (slot >= 0) {
/* Set the shadow bitmaps to the desired intercept states */
@@ -871,7 +859,7 @@ void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
struct vcpu_svm *svm = to_svm(vcpu);
int slot;
- slot = direct_access_msr_slot(msr);
+ slot = kvm_passthrough_msr_slot(msr);
WARN_ON(slot == -ENOENT);
if (slot >= 0) {
/* Set the shadow bitmaps to the desired intercept states */
@@ -5165,6 +5153,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+ .possible_passthrough_msrs = direct_access_msrs,
+ .nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
.msr_filter_changed = svm_msr_filter_changed,
.complete_emulated_msr = svm_complete_emulated_msr,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 92d35cc6cd15d..6d52693b0fd6c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -7,6 +7,62 @@
#include "pmu.h"
#include "posted_intr.h"
+/*
+ * List of MSRs that can be directly passed to the guest.
+ * In addition to these x2apic, PT and LBR MSRs are handled specially.
+ */
+static const u32 vmx_possible_passthrough_msrs[] = {
+ MSR_IA32_SPEC_CTRL,
+ MSR_IA32_PRED_CMD,
+ MSR_IA32_FLUSH_CMD,
+ MSR_IA32_TSC,
+#ifdef CONFIG_X86_64
+ MSR_FS_BASE,
+ MSR_GS_BASE,
+ MSR_KERNEL_GS_BASE,
+ MSR_IA32_XFD,
+ MSR_IA32_XFD_ERR,
+#endif
+ MSR_IA32_SYSENTER_CS,
+ MSR_IA32_SYSENTER_ESP,
+ MSR_IA32_SYSENTER_EIP,
+ MSR_CORE_C1_RES,
+ MSR_CORE_C3_RESIDENCY,
+ MSR_CORE_C6_RESIDENCY,
+ MSR_CORE_C7_RESIDENCY,
+};
+
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ u32 i;
+
+ if (!cpu_has_vmx_msr_bitmap())
+ return;
+
+ /*
+ * Redo intercept permissions for MSRs that KVM is passing through to
+ * the guest. Disabling interception will check the new MSR filter and
+ * ensure that KVM enables interception if userspace wants to filter
+ * the MSR. MSRs that KVM is already intercepting don't need to be
+ * refreshed since KVM is going to intercept them regardless of what
+ * userspace wants.
+ */
+ for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
+ u32 msr = vmx_possible_passthrough_msrs[i];
+
+ if (!test_bit(i, vmx->shadow_msr_intercept.read))
+ vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
+
+ if (!test_bit(i, vmx->shadow_msr_intercept.write))
+ vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
+ }
+
+ /* PT MSRs can be passed through iff PT is exposed to the guest. */
+ if (vmx_pt_mode_is_host_guest())
+ pt_update_intercept_for_msr(vcpu);
+}
+
#define VMX_REQUIRED_APICV_INHIBITS \
(BIT(APICV_INHIBIT_REASON_DISABLED) | \
BIT(APICV_INHIBIT_REASON_ABSENT) | \
@@ -152,6 +208,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
.migrate_timers = vmx_migrate_timers,
+ .possible_passthrough_msrs = vmx_possible_passthrough_msrs,
+ .nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
.msr_filter_changed = vmx_msr_filter_changed,
.complete_emulated_msr = kvm_complete_insn_gp,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bc64e7cc02704..1c2c0c06f3d35 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -163,31 +163,6 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED | \
RTIT_STATUS_BYTECNT))
-/*
- * List of MSRs that can be directly passed to the guest.
- * In addition to these x2apic, PT and LBR MSRs are handled specially.
- */
-static const u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
- MSR_IA32_SPEC_CTRL,
- MSR_IA32_PRED_CMD,
- MSR_IA32_FLUSH_CMD,
- MSR_IA32_TSC,
-#ifdef CONFIG_X86_64
- MSR_FS_BASE,
- MSR_GS_BASE,
- MSR_KERNEL_GS_BASE,
- MSR_IA32_XFD,
- MSR_IA32_XFD_ERR,
-#endif
- MSR_IA32_SYSENTER_CS,
- MSR_IA32_SYSENTER_ESP,
- MSR_IA32_SYSENTER_EIP,
- MSR_CORE_C1_RES,
- MSR_CORE_C3_RESIDENCY,
- MSR_CORE_C6_RESIDENCY,
- MSR_CORE_C7_RESIDENCY,
-};
-
/*
* These 2 parameters are used to config the controls for Pause-Loop Exiting:
* ple_gap: upper bound on the amount of time between two successive
@@ -669,7 +644,7 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
static int vmx_get_passthrough_msr_slot(u32 msr)
{
- int i;
+ int r;
switch (msr) {
case 0x800 ... 0x8ff:
@@ -692,13 +667,10 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
return -ENOENT;
}
- for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
- if (vmx_possible_passthrough_msrs[i] == msr)
- return i;
- }
+ r = kvm_passthrough_msr_slot(msr);
- WARN(1, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
- return -ENOENT;
+ WARN(r == -ENOENT, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
+ return r;
}
struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -4145,37 +4117,6 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
}
}
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 i;
-
- if (!cpu_has_vmx_msr_bitmap())
- return;
-
- /*
- * Redo intercept permissions for MSRs that KVM is passing through to
- * the guest. Disabling interception will check the new MSR filter and
- * ensure that KVM enables interception if userspace wants to filter
- * the MSR. MSRs that KVM is already intercepting don't need to be
- * refreshed since KVM is going to intercept them regardless of what
- * userspace wants.
- */
- for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
- u32 msr = vmx_possible_passthrough_msrs[i];
-
- if (!test_bit(i, vmx->shadow_msr_intercept.read))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
- if (!test_bit(i, vmx->shadow_msr_intercept.write))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
- }
-
- /* PT MSRs can be passed through iff PT is exposed to the guest. */
- if (vmx_pt_mode_is_host_guest())
- pt_update_intercept_for_msr(vcpu);
-}
-
static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
int pi_vec)
{
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8637bc0010965..20b6cce793af5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1806,6 +1806,19 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type)
}
EXPORT_SYMBOL_GPL(kvm_msr_allowed);
+int kvm_passthrough_msr_slot(u32 msr)
+{
+ u32 i;
+
+ for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
+ if (kvm_x86_ops.possible_passthrough_msrs[i] == msr)
+ return i;
+ }
+
+ return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
+
/*
* Write @data into the MSR specified by @index. Select MSR specific fault
* checks are bypassed if @host_initiated is %true.
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ec623d23d13d2..208f0698c64e2 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -555,6 +555,7 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
struct x86_exception *e);
int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
+int kvm_passthrough_msr_slot(u32 msr);
enum kvm_msr_access {
MSR_TYPE_R = BIT(0),
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
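[Editorial sketch, not part of the posted series: the interplay the hoisted vmx_msr_filter_changed() loop relies on — the shadow bitmaps record KVM's *desired* intercept state, while the effective hardware state additionally honors the userspace MSR filter, and a filter change only re-evaluates MSRs KVM wants passed through — can be modeled in standalone C. All structures and names below are illustrative, not kernel APIs.]

```c
/* Userspace model of shadow intercept state vs. the userspace filter. */
#include <assert.h>
#include <stdbool.h>

#define NR_SLOTS 4

struct vcpu_model {
	/* KVM's desired state: true == intercept (a set shadow bit). */
	bool shadow_read[NR_SLOTS];
	bool shadow_write[NR_SLOTS];
	/* Effective state after applying the userspace MSR filter. */
	bool hw_read[NR_SLOTS];
	bool hw_write[NR_SLOTS];
	/* Userspace filter: true == access allowed (may pass through). */
	bool (*filter_allowed)(int slot, bool write);
};

static void vcpu_model_init(struct vcpu_model *v,
			    bool (*filter)(int slot, bool write))
{
	for (int i = 0; i < NR_SLOTS; i++) {
		v->shadow_read[i] = v->shadow_write[i] = true;
		v->hw_read[i] = v->hw_write[i] = true;
	}
	v->filter_allowed = filter;
}

/*
 * Mirror of *_disable_intercept_for_msr(): record the desired state in
 * the shadow, but keep the hardware intercept if the filter denies.
 */
static void disable_intercept(struct vcpu_model *v, int slot, bool write)
{
	bool allowed = v->filter_allowed(slot, write);

	if (write) {
		v->shadow_write[slot] = false;
		v->hw_write[slot] = !allowed;
	} else {
		v->shadow_read[slot] = false;
		v->hw_read[slot] = !allowed;
	}
}

/*
 * Mirror of the msr_filter_changed() loop: only slots KVM wants passed
 * through are refreshed; intercepted slots stay intercepted.
 */
static void msr_filter_changed(struct vcpu_model *v)
{
	for (int i = 0; i < NR_SLOTS; i++) {
		if (!v->shadow_read[i])
			disable_intercept(v, i, false);
		if (!v->shadow_write[i])
			disable_intercept(v, i, true);
	}
}

/* Example filter: userspace denies writes to slot 0, allows all else. */
static bool deny_writes_to_slot0(int slot, bool write)
{
	return !(slot == 0 && write);
}
```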
* [PATCH 13/15] KVM: x86: Move ownership of passthrough MSR "shadow" to common x86
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (11 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 14/15] KVM: x86: Hoist SVM MSR intercepts to common x86 code Aaron Lewis
` (2 subsequent siblings)
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Move the "shadow" MSR intercept bitmaps from the vendor structures into
kvm_vcpu_arch so that common x86 code owns KVM's desired intercept
state, and handle MSR filter updates in common code. With the shadow
bitmaps and the refresh loop in common x86, msr_filter_changed() becomes
an optional vendor hook, and a new disable_intercept_for_msr() hook is
added in its place.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Co-developed-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 3 ++-
arch/x86/include/asm/kvm_host.h | 11 +++++++++
arch/x86/kvm/svm/svm.c | 38 ++++--------------------------
arch/x86/kvm/svm/svm.h | 6 -----
arch/x86/kvm/vmx/main.c | 32 +------------------------
arch/x86/kvm/vmx/vmx.c | 22 ++++++++++-------
arch/x86/kvm/vmx/vmx.h | 7 ------
arch/x86/kvm/x86.c | 37 ++++++++++++++++++++++++++++-
8 files changed, 69 insertions(+), 87 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5aff7222e40fa..124c2e1e42026 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -131,7 +131,8 @@ KVM_X86_OP(check_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
KVM_X86_OP_OPTIONAL(migrate_timers)
-KVM_X86_OP(msr_filter_changed)
+KVM_X86_OP_OPTIONAL(msr_filter_changed)
+KVM_X86_OP(disable_intercept_for_msr)
KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7e9fee4d36cc2..808b5365e4bd2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -777,6 +777,16 @@ struct kvm_vcpu_arch {
u64 arch_capabilities;
u64 perf_capabilities;
+ /*
+ * KVM's "shadow" of the MSR intercepts, i.e. bitmaps that track KVM's
+ * desired behavior irrespective of userspace MSR filtering.
+ */
+#define KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS 64
+ struct {
+ DECLARE_BITMAP(read, KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ DECLARE_BITMAP(write, KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ } shadow_msr_intercept;
+
/*
* Paging state of the vcpu
*
@@ -1820,6 +1830,7 @@ struct kvm_x86_ops {
const u32 * const possible_passthrough_msrs;
const u32 nr_possible_passthrough_msrs;
+ void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 23e6515bb7904..31ed6c68e8194 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -825,9 +825,9 @@ void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
if (slot >= 0) {
/* Set the shadow bitmaps to the desired intercept states */
if (type & MSR_TYPE_R)
- __clear_bit(slot, svm->shadow_msr_intercept.read);
+ __clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- __clear_bit(slot, svm->shadow_msr_intercept.write);
+ __clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
}
/*
@@ -864,9 +864,9 @@ void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
if (slot >= 0) {
/* Set the shadow bitmaps to the desired intercept states */
if (type & MSR_TYPE_R)
- __set_bit(slot, svm->shadow_msr_intercept.read);
+ __set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- __set_bit(slot, svm->shadow_msr_intercept.write);
+ __set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
}
if (type & MSR_TYPE_R)
@@ -939,30 +939,6 @@ void svm_vcpu_free_msrpm(unsigned long *msrpm)
__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
}
-static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- u32 i;
-
- /*
- * Redo intercept permissions for MSRs that KVM is passing through to
- * the guest. Disabling interception will check the new MSR filter and
- * ensure that KVM enables interception if userspace wants to filter
- * the MSR. MSRs that KVM is already intercepting don't need to be
- * refreshed since KVM is going to intercept them regardless of what
- * userspace wants.
- */
- for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
- u32 msr = direct_access_msrs[i];
-
- if (!test_bit(i, svm->shadow_msr_intercept.read))
- svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
- if (!test_bit(i, svm->shadow_msr_intercept.write))
- svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
- }
-}
-
static void add_msr_offset(u32 offset)
{
int i;
@@ -1475,10 +1451,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
if (err)
goto error_free_vmsa_page;
- /* All MSRs start out in the "intercepted" state. */
- bitmap_fill(svm->shadow_msr_intercept.read, MAX_DIRECT_ACCESS_MSRS);
- bitmap_fill(svm->shadow_msr_intercept.write, MAX_DIRECT_ACCESS_MSRS);
-
svm->msrpm = svm_vcpu_alloc_msrpm();
if (!svm->msrpm) {
err = -ENOMEM;
@@ -5155,7 +5127,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.possible_passthrough_msrs = direct_access_msrs,
.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
- .msr_filter_changed = svm_msr_filter_changed,
+ .disable_intercept_for_msr = svm_disable_intercept_for_msr,
.complete_emulated_msr = svm_complete_emulated_msr,
.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2513990c5b6e6..a73da8ca73b49 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -313,12 +313,6 @@ struct vcpu_svm {
struct list_head ir_list;
spinlock_t ir_list_lock;
- /* Save desired MSR intercept (read: pass-through) state */
- struct {
- DECLARE_BITMAP(read, MAX_DIRECT_ACCESS_MSRS);
- DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS);
- } shadow_msr_intercept;
-
struct vcpu_sev_es_state sev_es;
bool guest_state_loaded;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 6d52693b0fd6c..5279c82648fe6 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -32,37 +32,6 @@ static const u32 vmx_possible_passthrough_msrs[] = {
MSR_CORE_C7_RESIDENCY,
};
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 i;
-
- if (!cpu_has_vmx_msr_bitmap())
- return;
-
- /*
- * Redo intercept permissions for MSRs that KVM is passing through to
- * the guest. Disabling interception will check the new MSR filter and
- * ensure that KVM enables interception if usersepace wants to filter
- * the MSR. MSRs that KVM is already intercepting don't need to be
- * refreshed since KVM is going to intercept them regardless of what
- * userspace wants.
- */
- for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
- u32 msr = vmx_possible_passthrough_msrs[i];
-
- if (!test_bit(i, vmx->shadow_msr_intercept.read))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
- if (!test_bit(i, vmx->shadow_msr_intercept.write))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
- }
-
- /* PT MSRs can be passed through iff PT is exposed to the guest. */
- if (vmx_pt_mode_is_host_guest())
- pt_update_intercept_for_msr(vcpu);
-}
-
#define VMX_REQUIRED_APICV_INHIBITS \
(BIT(APICV_INHIBIT_REASON_DISABLED) | \
BIT(APICV_INHIBIT_REASON_ABSENT) | \
@@ -210,6 +179,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.possible_passthrough_msrs = vmx_possible_passthrough_msrs,
.nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
+ .disable_intercept_for_msr = vmx_disable_intercept_for_msr,
.msr_filter_changed = vmx_msr_filter_changed,
.complete_emulated_msr = kvm_complete_insn_gp,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1c2c0c06f3d35..4cb3e9a8df2c0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3987,9 +3987,9 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
idx = vmx_get_passthrough_msr_slot(msr);
if (idx >= 0) {
if (type & MSR_TYPE_R)
- __clear_bit(idx, vmx->shadow_msr_intercept.read);
+ __clear_bit(idx, vcpu->arch.shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- __clear_bit(idx, vmx->shadow_msr_intercept.write);
+ __clear_bit(idx, vcpu->arch.shadow_msr_intercept.write);
}
if ((type & MSR_TYPE_R) &&
@@ -4029,9 +4029,9 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
idx = vmx_get_passthrough_msr_slot(msr);
if (idx >= 0) {
if (type & MSR_TYPE_R)
- __set_bit(idx, vmx->shadow_msr_intercept.read);
+ __set_bit(idx, vcpu->arch.shadow_msr_intercept.read);
if (type & MSR_TYPE_W)
- __set_bit(idx, vmx->shadow_msr_intercept.write);
+ __set_bit(idx, vcpu->arch.shadow_msr_intercept.write);
}
if (type & MSR_TYPE_R)
@@ -4117,6 +4117,16 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
}
}
+void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+ if (!cpu_has_vmx_msr_bitmap())
+ return;
+
+ /* PT MSRs can be passed through iff PT is exposed to the guest. */
+ if (vmx_pt_mode_is_host_guest())
+ pt_update_intercept_for_msr(vcpu);
+}
+
static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
int pi_vec)
{
@@ -7513,10 +7523,6 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
evmcs->hv_enlightenments_control.msr_bitmap = 1;
}
- /* The MSR bitmap starts with all ones */
- bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-
vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
#ifdef CONFIG_X86_64
vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 43f573f6ca46a..c40e7c880764f 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -353,13 +353,6 @@ struct vcpu_vmx {
struct pt_desc pt_desc;
struct lbr_desc lbr_desc;
- /* Save desired MSR intercept (read: pass-through) state */
-#define MAX_POSSIBLE_PASSTHROUGH_MSRS 16
- struct {
- DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- } shadow_msr_intercept;
-
/* ve_info must be page aligned. */
struct vmx_ve_information *ve_info;
};
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 20b6cce793af5..2082ae8dc5db1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1819,6 +1819,31 @@ int kvm_passthrough_msr_slot(u32 msr)
}
EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
+static void kvm_msr_filter_changed(struct kvm_vcpu *vcpu)
+{
+ u32 msr, i;
+
+ /*
+ * Redo intercept permissions for MSRs that KVM is passing through to
+ * the guest. Disabling interception will check the new MSR filter and
+ * ensure that KVM enables interception if userspace wants to filter
+ * the MSR. MSRs that KVM is already intercepting don't need to be
+ * refreshed since KVM is going to intercept them regardless of what
+ * userspace wants.
+ */
+ for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
+ msr = kvm_x86_ops.possible_passthrough_msrs[i];
+
+ if (!test_bit(i, vcpu->arch.shadow_msr_intercept.read))
+ static_call(kvm_x86_disable_intercept_for_msr)(vcpu, msr, MSR_TYPE_R);
+
+ if (!test_bit(i, vcpu->arch.shadow_msr_intercept.write))
+ static_call(kvm_x86_disable_intercept_for_msr)(vcpu, msr, MSR_TYPE_W);
+ }
+
+ static_call_cond(kvm_x86_msr_filter_changed)(vcpu);
+}
+
/*
* Write @data into the MSR specified by @index. Select MSR specific fault
* checks are bypassed if @host_initiated is %true.
@@ -9747,6 +9772,10 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
rdmsrl(MSR_IA32_ARCH_CAPABILITIES, kvm_host.arch_capabilities);
+ if (ops->runtime_ops->nr_possible_passthrough_msrs >
+ KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS)
+ return -E2BIG;
+
r = ops->hardware_setup();
if (r != 0)
goto out_mmu_exit;
@@ -10851,7 +10880,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
kvm_check_async_pf_completion(vcpu);
if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
- kvm_x86_call(msr_filter_changed)(vcpu);
+ kvm_msr_filter_changed(vcpu);
if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
kvm_x86_call(update_cpu_dirty_logging)(vcpu);
@@ -12305,6 +12334,12 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
vcpu->arch.hv_root_tdp = INVALID_PAGE;
#endif
+ /* All MSRs start out in the "intercepted" state. */
+ bitmap_fill(vcpu->arch.shadow_msr_intercept.read,
+ KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ bitmap_fill(vcpu->arch.shadow_msr_intercept.write,
+ KVM_MAX_POSSIBLE_PASSTHROUGH_MSRS);
+
r = kvm_x86_call(vcpu_create)(vcpu);
if (r)
goto free_guest_fpu;
--
2.47.0.338.g60cca15819-goog
* [PATCH 14/15] KVM: x86: Hoist SVM MSR intercepts to common x86 code
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (12 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 13/15] KVM: x86: Move ownership of passthrough MSR "shadow" to common x86 Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:19 ` [PATCH 15/15] KVM: x86: Hoist VMX " Aaron Lewis
2024-11-27 20:56 ` [PATCH 00/15] Unify MSR intercepts in x86 Sean Christopherson
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Now that the SVM and VMX implementations for MSR intercepts are the
same, hoist the SVM implementation to common x86 code.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 3 ++
arch/x86/kvm/svm/svm.c | 73 ++---------------------------
arch/x86/kvm/x86.c | 75 ++++++++++++++++++++++++++++++
arch/x86/kvm/x86.h | 2 +
5 files changed, 86 insertions(+), 68 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 124c2e1e42026..3f10ce4957f74 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -132,6 +132,7 @@ KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
KVM_X86_OP_OPTIONAL(migrate_timers)
KVM_X86_OP_OPTIONAL(msr_filter_changed)
+KVM_X86_OP_OPTIONAL(get_msr_bitmap_entries)
KVM_X86_OP(disable_intercept_for_msr)
KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 808b5365e4bd2..763fc054a2c56 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1830,6 +1830,9 @@ struct kvm_x86_ops {
const u32 * const possible_passthrough_msrs;
const u32 nr_possible_passthrough_msrs;
+ void (*get_msr_bitmap_entries)(struct kvm_vcpu *vcpu, u32 msr,
+ unsigned long **read_map, u8 *read_bit,
+ unsigned long **write_map, u8 *write_bit);
void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 31ed6c68e8194..aaf244e233b90 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -799,84 +799,20 @@ static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
*write_map = &svm->msrpm[offset];
}
-#define BUILD_SVM_MSR_BITMAP_HELPER(fn, bitop, access) \
-static inline void fn(struct kvm_vcpu *vcpu, u32 msr) \
-{ \
- unsigned long *read_map, *write_map; \
- u8 read_bit, write_bit; \
- \
- svm_get_msr_bitmap_entries(vcpu, msr, &read_map, &read_bit, \
- &write_map, &write_bit); \
- bitop(access##_bit, access##_map); \
-}
-
-BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_read, __set_bit, read)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_set_msr_bitmap_write, __set_bit, write)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_read, __clear_bit, read)
-BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_write, __clear_bit, write)
-
void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
- struct vcpu_svm *svm = to_svm(vcpu);
- int slot;
-
- slot = kvm_passthrough_msr_slot(msr);
- WARN_ON(slot == -ENOENT);
- if (slot >= 0) {
- /* Set the shadow bitmaps to the desired intercept states */
- if (type & MSR_TYPE_R)
- __clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
- }
-
- /*
- * Don't disabled interception for the MSR if userspace wants to
- * handle it.
- */
- if ((type & MSR_TYPE_R) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
- svm_set_msr_bitmap_read(vcpu, msr);
- type &= ~MSR_TYPE_R;
- }
-
- if ((type & MSR_TYPE_W) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
- svm_set_msr_bitmap_write(vcpu, msr);
- type &= ~MSR_TYPE_W;
- }
-
- if (type & MSR_TYPE_R)
- svm_clear_msr_bitmap_read(vcpu, msr);
-
- if (type & MSR_TYPE_W)
- svm_clear_msr_bitmap_write(vcpu, msr);
+ kvm_disable_intercept_for_msr(vcpu, msr, type);
svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
- svm->nested.force_msr_bitmap_recalc = true;
+ to_svm(vcpu)->nested.force_msr_bitmap_recalc = true;
}
void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
- struct vcpu_svm *svm = to_svm(vcpu);
- int slot;
-
- slot = kvm_passthrough_msr_slot(msr);
- WARN_ON(slot == -ENOENT);
- if (slot >= 0) {
- /* Set the shadow bitmaps to the desired intercept states */
- if (type & MSR_TYPE_R)
- __set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
- }
-
- if (type & MSR_TYPE_R)
- svm_set_msr_bitmap_read(vcpu, msr);
-
- if (type & MSR_TYPE_W)
- svm_set_msr_bitmap_write(vcpu, msr);
+ kvm_enable_intercept_for_msr(vcpu, msr, type);
svm_hv_vmcb_dirty_nested_enlightenments(vcpu);
- svm->nested.force_msr_bitmap_recalc = true;
+ to_svm(vcpu)->nested.force_msr_bitmap_recalc = true;
}
unsigned long *svm_vcpu_alloc_msrpm(void)
@@ -5127,6 +5063,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.possible_passthrough_msrs = direct_access_msrs,
.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
+ .get_msr_bitmap_entries = svm_get_msr_bitmap_entries,
.disable_intercept_for_msr = svm_disable_intercept_for_msr,
.complete_emulated_msr = svm_complete_emulated_msr,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2082ae8dc5db1..1e607a0eb58a0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1819,6 +1819,81 @@ int kvm_passthrough_msr_slot(u32 msr)
}
EXPORT_SYMBOL_GPL(kvm_passthrough_msr_slot);
+#define BUILD_KVM_MSR_BITMAP_HELPER(fn, bitop, access) \
+static inline void fn(struct kvm_vcpu *vcpu, u32 msr) \
+{ \
+ unsigned long *read_map, *write_map; \
+ u8 read_bit, write_bit; \
+ \
+ static_call(kvm_x86_get_msr_bitmap_entries)(vcpu, msr, \
+ &read_map, &read_bit, \
+ &write_map, &write_bit); \
+ bitop(access##_bit, access##_map); \
+}
+
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_set_msr_bitmap_read, __set_bit, read)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_set_msr_bitmap_write, __set_bit, write)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_clear_msr_bitmap_read, __clear_bit, read)
+BUILD_KVM_MSR_BITMAP_HELPER(kvm_clear_msr_bitmap_write, __clear_bit, write)
+
+void kvm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+ int slot;
+
+ slot = kvm_passthrough_msr_slot(msr);
+ WARN_ON(slot == -ENOENT);
+ if (slot >= 0) {
+ /* Set the shadow bitmaps to the desired intercept states */
+ if (type & MSR_TYPE_R)
+ __clear_bit(slot, vcpu->arch.shadow_msr_intercept.read);
+ if (type & MSR_TYPE_W)
+ __clear_bit(slot, vcpu->arch.shadow_msr_intercept.write);
+ }
+
+ /*
+ * Don't disable interception for the MSR if userspace wants to
+ * handle it.
+ */
+ if ((type & MSR_TYPE_R) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
+ kvm_set_msr_bitmap_read(vcpu, msr);
+ type &= ~MSR_TYPE_R;
+ }
+
+ if ((type & MSR_TYPE_W) && !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
+ kvm_set_msr_bitmap_write(vcpu, msr);
+ type &= ~MSR_TYPE_W;
+ }
+
+ if (type & MSR_TYPE_R)
+ kvm_clear_msr_bitmap_read(vcpu, msr);
+
+ if (type & MSR_TYPE_W)
+ kvm_clear_msr_bitmap_write(vcpu, msr);
+}
+EXPORT_SYMBOL_GPL(kvm_disable_intercept_for_msr);
+
+void kvm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+ int slot;
+
+ slot = kvm_passthrough_msr_slot(msr);
+ WARN_ON(slot == -ENOENT);
+ if (slot >= 0) {
+ /* Set the shadow bitmaps to the desired intercept states */
+ if (type & MSR_TYPE_R)
+ __set_bit(slot, vcpu->arch.shadow_msr_intercept.read);
+ if (type & MSR_TYPE_W)
+ __set_bit(slot, vcpu->arch.shadow_msr_intercept.write);
+ }
+
+ if (type & MSR_TYPE_R)
+ kvm_set_msr_bitmap_read(vcpu, msr);
+
+ if (type & MSR_TYPE_W)
+ kvm_set_msr_bitmap_write(vcpu, msr);
+}
+EXPORT_SYMBOL_GPL(kvm_enable_intercept_for_msr);
+
static void kvm_msr_filter_changed(struct kvm_vcpu *vcpu)
{
u32 msr, i;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 208f0698c64e2..239cc4de49c58 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -556,6 +556,8 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva);
bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
int kvm_passthrough_msr_slot(u32 msr);
+void kvm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+void kvm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
enum kvm_msr_access {
MSR_TYPE_R = BIT(0),
--
2.47.0.338.g60cca15819-goog
* [PATCH 15/15] KVM: x86: Hoist VMX MSR intercepts to common x86 code
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (13 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 14/15] KVM: x86: Hoist SVM MSR intercepts to common x86 code Aaron Lewis
@ 2024-11-27 20:19 ` Aaron Lewis
2024-11-27 20:56 ` [PATCH 00/15] Unify MSR intercepts in x86 Sean Christopherson
15 siblings, 0 replies; 32+ messages in thread
From: Aaron Lewis @ 2024-11-27 20:19 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Complete the unification of MSR intercepts for x86 by hoisting the VMX
implementation to common x86 code.
The only new addition to the common implementation over what SVM
already contributed is the check for is_valid_passthrough_msr(), which
VMX uses to disallow certain MSRs from being used as possible
passthrough MSRs. To distinguish MSRs that are not valid from MSRs
that are merely missing from the list, kvm_passthrough_msr_slot()
returns -EINVAL for MSRs that are not allowed to be in the list and
-ENOENT for MSRs that are expected to be in the list but aren't. In
the latter case, KVM warns.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/svm.c | 6 ++
arch/x86/kvm/vmx/main.c | 2 +
arch/x86/kvm/vmx/vmx.c | 91 +++++++++---------------------
arch/x86/kvm/vmx/vmx.h | 4 ++
arch/x86/kvm/x86.c | 4 ++
7 files changed, 45 insertions(+), 64 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 3f10ce4957f74..db1e0fc002805 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -134,6 +134,7 @@ KVM_X86_OP_OPTIONAL(migrate_timers)
KVM_X86_OP_OPTIONAL(msr_filter_changed)
KVM_X86_OP_OPTIONAL(get_msr_bitmap_entries)
KVM_X86_OP(disable_intercept_for_msr)
+KVM_X86_OP(is_valid_passthrough_msr)
KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 763fc054a2c56..22ae4dfa94f2c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1834,6 +1834,7 @@ struct kvm_x86_ops {
unsigned long **read_map, u8 *read_bit,
unsigned long **write_map, u8 *write_bit);
void (*disable_intercept_for_msr)(struct kvm_vcpu *vcpu, u32 msr, int type);
+ bool (*is_valid_passthrough_msr)(u32 msr);
void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index aaf244e233b90..2e746abeda215 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -799,6 +799,11 @@ static void svm_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
*write_map = &svm->msrpm[offset];
}
+static bool svm_is_valid_passthrough_msr(u32 msr)
+{
+ return true;
+}
+
void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
kvm_disable_intercept_for_msr(vcpu, msr, type);
@@ -5065,6 +5070,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.nr_possible_passthrough_msrs = ARRAY_SIZE(direct_access_msrs),
.get_msr_bitmap_entries = svm_get_msr_bitmap_entries,
.disable_intercept_for_msr = svm_disable_intercept_for_msr,
+ .is_valid_passthrough_msr = svm_is_valid_passthrough_msr,
.complete_emulated_msr = svm_complete_emulated_msr,
.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 5279c82648fe6..e89c472179dd5 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -179,7 +179,9 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.possible_passthrough_msrs = vmx_possible_passthrough_msrs,
.nr_possible_passthrough_msrs = ARRAY_SIZE(vmx_possible_passthrough_msrs),
+ .get_msr_bitmap_entries = vmx_get_msr_bitmap_entries,
.disable_intercept_for_msr = vmx_disable_intercept_for_msr,
+ .is_valid_passthrough_msr = vmx_is_valid_passthrough_msr,
.msr_filter_changed = vmx_msr_filter_changed,
.complete_emulated_msr = kvm_complete_insn_gp,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4cb3e9a8df2c0..5493a24febd50 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -642,14 +642,12 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
return flexpriority_enabled && lapic_in_kernel(vcpu);
}
-static int vmx_get_passthrough_msr_slot(u32 msr)
+bool vmx_is_valid_passthrough_msr(u32 msr)
{
- int r;
-
switch (msr) {
case 0x800 ... 0x8ff:
/* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
- return -ENOENT;
+ return false;
case MSR_IA32_RTIT_STATUS:
case MSR_IA32_RTIT_OUTPUT_BASE:
case MSR_IA32_RTIT_OUTPUT_MASK:
@@ -664,13 +662,10 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
- return -ENOENT;
+ return false;
}
- r = kvm_passthrough_msr_slot(msr);
-
- WARN(!r, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
- return r;
+ return true;
}
struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
@@ -3969,76 +3964,44 @@ static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
vmx->nested.force_msr_bitmap_recalc = true;
}
-void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+void vmx_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+ unsigned long **read_map, u8 *read_bit,
+ unsigned long **write_map, u8 *write_bit)
{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
- int idx;
-
- if (!cpu_has_vmx_msr_bitmap())
- return;
+ unsigned long *bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
+ u32 offset;
- vmx_msr_bitmap_l01_changed(vmx);
+ *read_bit = *write_bit = msr & 0x1fff;
- /*
- * Mark the desired intercept state in shadow bitmap, this is needed
- * for resync when the MSR filters change.
- */
- idx = vmx_get_passthrough_msr_slot(msr);
- if (idx >= 0) {
- if (type & MSR_TYPE_R)
- __clear_bit(idx, vcpu->arch.shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __clear_bit(idx, vcpu->arch.shadow_msr_intercept.write);
- }
+ if (msr <= 0x1fff)
+ offset = 0;
+ else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
+ offset = 0x400;
+ else
+ BUG();
- if ((type & MSR_TYPE_R) &&
- !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
- vmx_set_msr_bitmap_read(msr_bitmap, msr);
- type &= ~MSR_TYPE_R;
- }
+ *read_map = bitmap + (0 + offset) / sizeof(unsigned long);
+ *write_map = bitmap + (0x800 + offset) / sizeof(unsigned long);
+}
- if ((type & MSR_TYPE_W) &&
- !kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_WRITE)) {
- vmx_set_msr_bitmap_write(msr_bitmap, msr);
- type &= ~MSR_TYPE_W;
- }
+void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
+{
+ if (!cpu_has_vmx_msr_bitmap())
+ return;
- if (type & MSR_TYPE_R)
- vmx_clear_msr_bitmap_read(msr_bitmap, msr);
+ kvm_disable_intercept_for_msr(vcpu, msr, type);
- if (type & MSR_TYPE_W)
- vmx_clear_msr_bitmap_write(msr_bitmap, msr);
+ vmx_msr_bitmap_l01_changed(to_vmx(vcpu));
}
void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
- int idx;
-
if (!cpu_has_vmx_msr_bitmap())
return;
- vmx_msr_bitmap_l01_changed(vmx);
-
- /*
- * Mark the desired intercept state in shadow bitmap, this is needed
- * for resync when the MSR filter changes.
- */
- idx = vmx_get_passthrough_msr_slot(msr);
- if (idx >= 0) {
- if (type & MSR_TYPE_R)
- __set_bit(idx, vcpu->arch.shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __set_bit(idx, vcpu->arch.shadow_msr_intercept.write);
- }
-
- if (type & MSR_TYPE_R)
- vmx_set_msr_bitmap_read(msr_bitmap, msr);
+ kvm_enable_intercept_for_msr(vcpu, msr, type);
- if (type & MSR_TYPE_W)
- vmx_set_msr_bitmap_write(msr_bitmap, msr);
+ vmx_msr_bitmap_l01_changed(to_vmx(vcpu));
}
static void vmx_update_msr_bitmap_x2apic(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c40e7c880764f..6b87dcab46e48 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -409,8 +409,12 @@ bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
+void vmx_get_msr_bitmap_entries(struct kvm_vcpu *vcpu, u32 msr,
+ unsigned long **read_map, u8 *read_bit,
+ unsigned long **write_map, u8 *write_bit);
void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
+bool vmx_is_valid_passthrough_msr(u32 msr);
u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1e607a0eb58a0..3c4a580d51517 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1810,6 +1810,10 @@ int kvm_passthrough_msr_slot(u32 msr)
{
u32 i;
+ if (!static_call(kvm_x86_is_valid_passthrough_msr)(msr)) {
+ return -EINVAL;
+ }
+
for (i = 0; i < kvm_x86_ops.nr_possible_passthrough_msrs; i++) {
if (kvm_x86_ops.possible_passthrough_msrs[i] == msr)
return i;
--
2.47.0.338.g60cca15819-goog
* Re: [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts
2024-11-27 20:19 ` [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Aaron Lewis
@ 2024-11-27 20:38 ` Sean Christopherson
0 siblings, 0 replies; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 20:38 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> From: Sean Christopherson <seanjc@google.com>
Heh, I'll write changelogs for the patches I authored.
> Signed-off-by: Sean Christopherson <seanjc@google.com>
When sending patches authored by someone else, you need to provide your SoB.
* Re: [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
2024-11-27 20:19 ` [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" " Aaron Lewis
@ 2024-11-27 20:42 ` Sean Christopherson
2024-12-03 21:08 ` Tom Lendacky
0 siblings, 1 reply; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 20:42 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson
On Wed, Nov 27, 2024, Aaron Lewis wrote:
I'll write a changelog for this too.
> Note, a "FIXME" tag was added to svm_msr_filter_changed(). This will
Write changelogs in imperative mood, i.e. state what the patch is doing as a
command. Don't describe what will have happened after the patch is applied.
Using imperative mood allows for using indicative mood to describe what was
already there, and/or what happened in the past.
> be addressed later in the series after the VMX style MSR intercepts
> are added to SVM.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Co-developed-by: Aaron Lewis <aaronlewis@google.com>
Your SoB is needed here too. See "When to use Acked-by:, Cc:, and Co-developed-by:"
in Documentation/process/submitting-patches.rst.
* Re: [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM
2024-11-27 20:19 ` [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM Aaron Lewis
@ 2024-11-27 20:43 ` Sean Christopherson
0 siblings, 0 replies; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 20:43 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, Anish Ghulati
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> VMX MSR interception is done via three functions:
>
> vmx_disable_intercept_for_msr(vcpu, msr, type)
> vmx_enable_intercept_for_msr(vcpu, msr, type)
> vmx_set_intercept_for_msr(vcpu, msr, type, value)
>
> While SVM uses
>
> set_msr_interception(vcpu, msrpm, msr, read, write)
>
> The SVM code is not very intuitive (using 0 for enable and 1 for
> disable), and forces both read and write changes with each call which
> is not always required.
>
> Add helper functions to SVM to match VMX:
>
> svm_disable_intercept_for_msr(vcpu, msr, type)
> svm_enable_intercept_for_msr(vcpu, msr, type)
> svm_set_intercept_for_msr(vcpu, msr, type, enable_intercept)
>
> Additionally, update calls to set_msr_interception() to use the new
> functions. This update is only made to calls that toggle interception
> for both read and write.
>
> Keep the old paths for now, they will be deleted once all code is
> converted to the new helpers.
>
> Opportunistically, the function svm_get_msr_bitmap_entries() is added
> to abstract the MSR bitmap from the intercept functions. This will be
> needed later in the series when this code is hoisted to common code.
>
> No functional change.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Co-Developed-by: Anish Ghulati <aghulati@google.com>
Needs Anish's SoB.
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
* Re: [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes
2024-11-27 20:19 ` [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes Aaron Lewis
@ 2024-11-27 20:47 ` Sean Christopherson
0 siblings, 0 replies; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 20:47 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, Anish Ghulati
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> From: Anish Ghulati <aghulati@google.com>
>
> For all direct access MSRs, disable the MSR interception explicitly.
> svm_disable_intercept_for_msr() checks the new MSR filter and ensures that
> KVM enables interception if userspace wants to filter the MSR.
>
> This change is similar to the VMX change:
> d895f28ed6da ("KVM: VMX: Skip filter updates for MSRs that KVM is already intercepting")
>
> Adopting in SVM to align the implementations.
Wording and mood are all funky.
Give SVM the same treatment as was given VMX in commit d895f28ed6da ("KVM:
VMX: Skip filter updates for MSRs that KVM is already intercepting"), and
explicitly disable MSR interception when reacting to an MSR filter change.
There is no need to change anything for MSRs KVM is already intercepting,
and svm_disable_intercept_for_msr() performs the necessary filter checks.
>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Co-developed-by: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Anish Ghulati <aghulati@google.com>
See the docs again. The order is wrong, and your SoB is missing.
* Re: [PATCH 00/15] Unify MSR intercepts in x86
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
` (14 preceding siblings ...)
2024-11-27 20:19 ` [PATCH 15/15] KVM: x86: Hoist VMX " Aaron Lewis
@ 2024-11-27 20:56 ` Sean Christopherson
15 siblings, 0 replies; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 20:56 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> The goal of this series is to unify MSR intercepts into common code between
> VMX and SVM.
>
> The high level structure of this series is to:
> 1. Modify SVM MSR intercepts to adopt how VMX does it.
> 2. Hoist the newly updated SVM MSR intercept implementation to common x86 code.
> 3. Hoist the VMX MSR intercept implementation to common x86 code.
>
> Aaron Lewis (8):
> KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
> KVM: SVM: Track MSRPM as "unsigned long", not "u32"
> KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM
> KVM: SVM: Don't "NULL terminate" the list of possible passthrough MSRs
> KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
> KVM: x86: Move ownership of passthrough MSR "shadow" to common x86
> KVM: x86: Hoist SVM MSR intercepts to common x86 code
> KVM: x86: Hoist VMX MSR intercepts to common x86 code
>
> Anish Ghulati (2):
> KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes
> KVM: SVM: Delete old SVM MSR management code
>
> Sean Christopherson (5):
> KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts
> KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps
> KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES
> KVM: SVM: Drop "always" flag from list of possible passthrough MSRs
> KVM: VMX: Make list of possible passthrough MSRs "const"
>
> arch/x86/include/asm/kvm-x86-ops.h | 5 +-
> arch/x86/include/asm/kvm_host.h | 18 ++
> arch/x86/kvm/svm/sev.c | 11 +-
> arch/x86/kvm/svm/svm.c | 300 ++++++++++++-----------------
> arch/x86/kvm/svm/svm.h | 30 +--
> arch/x86/kvm/vmx/main.c | 30 +++
> arch/x86/kvm/vmx/vmx.c | 144 +++-----------
> arch/x86/kvm/vmx/vmx.h | 11 +-
> arch/x86/kvm/x86.c | 129 ++++++++++++-
> arch/x86/kvm/x86.h | 3 +
> 10 files changed, 358 insertions(+), 323 deletions(-)
>
> --
> 2.47.0.338.g60cca15819-goog
Please use `git format-patch` with `--base`, and in general read
Documentation/process/maintainer-kvm-x86.rst and
Documentation/process/maintainer-tip.rst
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-11-27 20:19 ` [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops Aaron Lewis
@ 2024-11-27 21:57 ` Sean Christopherson
2024-11-28 16:46 ` Borislav Petkov
0 siblings, 1 reply; 32+ messages in thread
From: Sean Christopherson @ 2024-11-27 21:57 UTC (permalink / raw)
To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, Xin Li, Borislav Petkov, Dapeng Mi
+Xin, Boris, and Dapeng
On Wed, Nov 27, 2024, Aaron Lewis wrote:
> Move the possible passthrough MSRs to kvm_x86_ops. Doing this allows
> them to be accessed from common x86 code.
...
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 3e8afc82ae2fb..7e9fee4d36cc2 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1817,6 +1817,9 @@ struct kvm_x86_ops {
> int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
>
> void (*migrate_timers)(struct kvm_vcpu *vcpu);
> +
> + const u32 * const possible_passthrough_msrs;
> + const u32 nr_possible_passthrough_msrs;
> void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
> int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
...
> +/*
> + * List of MSRs that can be directly passed to the guest.
> + * In addition to these x2apic, PT and LBR MSRs are handled specially.
> + */
> +static const u32 vmx_possible_passthrough_msrs[] = {
> + MSR_IA32_SPEC_CTRL,
> + MSR_IA32_PRED_CMD,
> + MSR_IA32_FLUSH_CMD,
> + MSR_IA32_TSC,
> +#ifdef CONFIG_X86_64
> + MSR_FS_BASE,
> + MSR_GS_BASE,
> + MSR_KERNEL_GS_BASE,
> + MSR_IA32_XFD,
> + MSR_IA32_XFD_ERR,
> +#endif
> + MSR_IA32_SYSENTER_CS,
> + MSR_IA32_SYSENTER_ESP,
> + MSR_IA32_SYSENTER_EIP,
> + MSR_CORE_C1_RES,
> + MSR_CORE_C3_RESIDENCY,
> + MSR_CORE_C6_RESIDENCY,
> + MSR_CORE_C7_RESIDENCY,
> +};
Looking at this with fresh eyes, the "possible" passthrough MSR list, and SVM's
direct_access_msrs, are confusing and flat out stupid. VMX's list isn't the set
of "possible" passthrough MSRs, it's the set of MSRs for which KVM may disable
interception without dedicated handling in .msr_filter_changed(). Ditto for
SVM's list, but at least SVM's array uses a slightly less awful name.
Xin Li and Boris have been bikeshedding over the VMX array, and it's all a giant
waste of time.
In all cases, KVM *already knows* which MSRs it wants to pass-through to the
guest. In a few cases the logic isn't super intuitive, e.g. for SPEC_CTRL, but
it's always fairly easy to understand what KVM wants to do.
Rather than expose the lists to common code, I think we should pivot after
"KVM: SVM: Drop "always" flag from list of possible passthrough MSRs" and rip
them out entirely.
The attached patch is compile-tested only (the nested interactions in particular
need a bit of scrutiny) and needs to be chunked into multiple patches, but I don't
see any obvious blockers, and the diffstats speak volumes:
arch/x86/include/asm/kvm-x86-ops.h | 2 +-
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/lapic.h | 2 +
arch/x86/kvm/svm/svm.c | 310 ++++++++++++++++++++++++++++++++++++++--------------------------------------------------------------------------------------------
arch/x86/kvm/svm/svm.h | 6 ---
arch/x86/kvm/vmx/main.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 179 ++++++++++++++++++---------------------------------------------------------
arch/x86/kvm/vmx/vmx.h | 9 ----
arch/x86/kvm/vmx/x86_ops.h | 2 +-
arch/x86/kvm/x86.c | 10 ++++-
10 files changed, 147 insertions(+), 377 deletions(-)
[*] https://lore.kernel.org/all/20241001050110.3643764-10-xin@zytor.com
[-- Attachment #2: 0001-tmp.patch --]
[-- Type: text/x-diff, Size: 29690 bytes --]
From 83928fe0ccd81ac46d48b62ec31580e725998436 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@google.com>
Date: Wed, 27 Nov 2024 13:54:37 -0800
Subject: [PATCH] tmp
---
arch/x86/include/asm/kvm-x86-ops.h | 2 +-
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/lapic.h | 2 +
arch/x86/kvm/svm/svm.c | 310 +++++++++--------------------
arch/x86/kvm/svm/svm.h | 6 -
arch/x86/kvm/vmx/main.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 179 ++++-------------
arch/x86/kvm/vmx/vmx.h | 9 -
arch/x86/kvm/vmx/x86_ops.h | 2 +-
arch/x86/kvm/x86.c | 10 +-
10 files changed, 147 insertions(+), 377 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 5aff7222e40f..8750fc49434b 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -131,7 +131,7 @@ KVM_X86_OP(check_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked)
KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
KVM_X86_OP_OPTIONAL(migrate_timers)
-KVM_X86_OP(msr_filter_changed)
+KVM_X86_OP(refresh_msr_intercepts)
KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e159e44a6a1b..a0854c1dbb3e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1819,7 +1819,7 @@ struct kvm_x86_ops {
int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
void (*migrate_timers)(struct kvm_vcpu *vcpu);
- void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
+ void (*refresh_msr_intercepts)(struct kvm_vcpu *vcpu);
int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err);
void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 24add38beaf0..150fcaa8430f 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -21,6 +21,8 @@
#define APIC_BROADCAST 0xFF
#define X2APIC_BROADCAST 0xFFFFFFFFul
+#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
+
enum lapic_mode {
LAPIC_MODE_DISABLED = 0,
LAPIC_MODE_INVALID = X2APIC_ENABLE,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3813258497e4..0b2a88251f10 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -79,69 +79,6 @@ static uint64_t osvw_len = 4, osvw_status;
static DEFINE_PER_CPU(u64, current_tsc_ratio);
-#define X2APIC_MSR(x) (APIC_BASE_MSR + (x >> 4))
-
-static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
- MSR_STAR,
- MSR_IA32_SYSENTER_CS,
- MSR_IA32_SYSENTER_EIP,
- MSR_IA32_SYSENTER_ESP,
-#ifdef CONFIG_X86_64
- MSR_GS_BASE,
- MSR_FS_BASE,
- MSR_KERNEL_GS_BASE,
- MSR_LSTAR,
- MSR_CSTAR,
- MSR_SYSCALL_MASK,
-#endif
- MSR_IA32_SPEC_CTRL,
- MSR_IA32_PRED_CMD,
- MSR_IA32_FLUSH_CMD,
- MSR_IA32_DEBUGCTLMSR,
- MSR_IA32_LASTBRANCHFROMIP,
- MSR_IA32_LASTBRANCHTOIP,
- MSR_IA32_LASTINTFROMIP,
- MSR_IA32_LASTINTTOIP,
- MSR_IA32_XSS,
- MSR_EFER,
- MSR_IA32_CR_PAT,
- MSR_AMD64_SEV_ES_GHCB,
- MSR_TSC_AUX,
- X2APIC_MSR(APIC_ID),
- X2APIC_MSR(APIC_LVR),
- X2APIC_MSR(APIC_TASKPRI),
- X2APIC_MSR(APIC_ARBPRI),
- X2APIC_MSR(APIC_PROCPRI),
- X2APIC_MSR(APIC_EOI),
- X2APIC_MSR(APIC_RRR),
- X2APIC_MSR(APIC_LDR),
- X2APIC_MSR(APIC_DFR),
- X2APIC_MSR(APIC_SPIV),
- X2APIC_MSR(APIC_ISR),
- X2APIC_MSR(APIC_TMR),
- X2APIC_MSR(APIC_IRR),
- X2APIC_MSR(APIC_ESR),
- X2APIC_MSR(APIC_ICR),
- X2APIC_MSR(APIC_ICR2),
-
- /*
- * Note:
- * AMD does not virtualize APIC TSC-deadline timer mode, but it is
- * emulated by KVM. When setting APIC LVTT (0x832) register bit 18,
- * the AVIC hardware would generate GP fault. Therefore, always
- * intercept the MSR 0x832, and do not setup direct_access_msr.
- */
- X2APIC_MSR(APIC_LVTTHMR),
- X2APIC_MSR(APIC_LVTPC),
- X2APIC_MSR(APIC_LVT0),
- X2APIC_MSR(APIC_LVT1),
- X2APIC_MSR(APIC_LVTERR),
- X2APIC_MSR(APIC_TMICT),
- X2APIC_MSR(APIC_TMCCT),
- X2APIC_MSR(APIC_TDCR),
- MSR_INVALID,
-};
-
/*
* These 2 parameters are used to config the controls for Pause-Loop Exiting:
* pause_filter_count: On processors that support Pause filtering(indicated
@@ -756,18 +693,6 @@ static void clr_dr_intercepts(struct vcpu_svm *svm)
recalc_intercepts(svm);
}
-static int direct_access_msr_slot(u32 msr)
-{
- u32 i;
-
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
- if (direct_access_msrs[i] == msr)
- return i;
- }
-
- return -ENOENT;
-}
-
static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
{
u8 bit_write;
@@ -831,17 +756,6 @@ BUILD_SVM_MSR_BITMAP_HELPER(svm_clear_msr_bitmap_write, __clear_bit, write)
void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_svm *svm = to_svm(vcpu);
- int slot;
-
- slot = direct_access_msr_slot(msr);
- WARN_ON(slot == -ENOENT);
- if (slot >= 0) {
- /* Set the shadow bitmaps to the desired intercept states */
- if (type & MSR_TYPE_R)
- __clear_bit(slot, svm->shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __clear_bit(slot, svm->shadow_msr_intercept.write);
- }
/*
* Don't disabled interception for the MSR if userspace wants to
@@ -870,17 +784,6 @@ void svm_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
void svm_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_svm *svm = to_svm(vcpu);
- int slot;
-
- slot = direct_access_msr_slot(msr);
- WARN_ON(slot == -ENOENT);
- if (slot >= 0) {
- /* Set the shadow bitmaps to the desired intercept states */
- if (type & MSR_TYPE_R)
- __set_bit(slot, svm->shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __set_bit(slot, svm->shadow_msr_intercept.write);
- }
if (type & MSR_TYPE_R)
svm_set_msr_bitmap_read(vcpu, msr);
@@ -907,6 +810,20 @@ unsigned long *svm_vcpu_alloc_msrpm(void)
return msrpm;
}
+static void svm_refresh_lbr_msr_intercepts(struct kvm_vcpu *vcpu)
+{
+ bool intercept = !(to_svm(vcpu)->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK);
+
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW, intercept);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW, intercept);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW, intercept);
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW, intercept);
+
+ if (sev_es_guest(vcpu->kvm))
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW, intercept);
+
+}
+
void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
{
svm_disable_intercept_for_msr(vcpu, MSR_STAR, MSR_TYPE_RW);
@@ -924,8 +841,76 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
}
+static void svm_refresh_msr_intercepts(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ svm_vcpu_init_msrpm(vcpu, svm->msrpm);
+
+ if (lbrv)
+ svm_refresh_lbr_msr_intercepts(vcpu);
+
+ if (boot_cpu_has(X86_FEATURE_IBPB) && guest_has_pred_cmd_msr(vcpu))
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);
+
+ if (boot_cpu_has(X86_FEATURE_FLUSH_L1D) && guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
+
+ /*
+ * If the host supports V_SPEC_CTRL then disable the interception
+ * of MSR_IA32_SPEC_CTRL.
+ */
+ if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
+ svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
+ else
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW, !svm->spec_ctrl);
+
+ /*
+ * Intercept SYSENTER_EIP and SYSENTER_ESP when emulating an Intel CPU,
+	 * as AMD hardware only stores 32 bits, whereas Intel CPUs track 64 bits.
+ */
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW,
+ guest_cpuid_is_intel_compatible(vcpu));
+ svm_set_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW,
+ guest_cpuid_is_intel_compatible(vcpu));
+}
+
void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
{
+ static const u32 x2avic_passthrough_msrs[] = {
+ X2APIC_MSR(APIC_ID),
+ X2APIC_MSR(APIC_LVR),
+ X2APIC_MSR(APIC_TASKPRI),
+ X2APIC_MSR(APIC_ARBPRI),
+ X2APIC_MSR(APIC_PROCPRI),
+ X2APIC_MSR(APIC_EOI),
+ X2APIC_MSR(APIC_RRR),
+ X2APIC_MSR(APIC_LDR),
+ X2APIC_MSR(APIC_DFR),
+ X2APIC_MSR(APIC_SPIV),
+ X2APIC_MSR(APIC_ISR),
+ X2APIC_MSR(APIC_TMR),
+ X2APIC_MSR(APIC_IRR),
+ X2APIC_MSR(APIC_ESR),
+ X2APIC_MSR(APIC_ICR),
+ X2APIC_MSR(APIC_ICR2),
+
+ /*
+ * Note:
+ * AMD does not virtualize APIC TSC-deadline timer mode, but it is
+ * emulated by KVM. When setting APIC LVTT (0x832) register bit 18,
+ * the AVIC hardware would generate GP fault. Therefore, always
+ * intercept the MSR 0x832, and do not setup direct_access_msr.
+ */
+ X2APIC_MSR(APIC_LVTTHMR),
+ X2APIC_MSR(APIC_LVTPC),
+ X2APIC_MSR(APIC_LVT0),
+ X2APIC_MSR(APIC_LVT1),
+ X2APIC_MSR(APIC_LVTERR),
+ X2APIC_MSR(APIC_TMICT),
+ X2APIC_MSR(APIC_TMCCT),
+ X2APIC_MSR(APIC_TDCR),
+ };
int i;
if (intercept == svm->x2avic_msrs_intercepted)
@@ -934,15 +919,9 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
if (!x2avic_enabled)
return;
- for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
- int index = direct_access_msrs[i];
-
- if ((index < APIC_BASE_MSR) ||
- (index > APIC_BASE_MSR + 0xff))
- continue;
-
- svm_set_intercept_for_msr(&svm->vcpu, index, MSR_TYPE_RW, intercept);
- }
+ for (i = 0; i < ARRAY_SIZE(x2avic_passthrough_msrs); i++)
+ svm_set_intercept_for_msr(&svm->vcpu, x2avic_passthrough_msrs[i],
+ MSR_TYPE_RW, intercept);
svm->x2avic_msrs_intercepted = intercept;
}
@@ -952,73 +931,6 @@ void svm_vcpu_free_msrpm(unsigned long *msrpm)
__free_pages(virt_to_page(msrpm), get_order(MSRPM_SIZE));
}
-static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- u32 i;
-
- /*
- * Redo intercept permissions for MSRs that KVM is passing through to
- * the guest. Disabling interception will check the new MSR filter and
- * ensure that KVM enables interception if usersepace wants to filter
- * the MSR. MSRs that KVM is already intercepting don't need to be
- * refreshed since KVM is going to intercept them regardless of what
- * userspace wants.
- */
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
- u32 msr = direct_access_msrs[i];
-
- if (!test_bit(i, svm->shadow_msr_intercept.read))
- svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
- if (!test_bit(i, svm->shadow_msr_intercept.write))
- svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
- }
-}
-
-static void add_msr_offset(u32 offset)
-{
- int i;
-
- for (i = 0; i < MSRPM_OFFSETS; ++i) {
-
- /* Offset already in list? */
- if (msrpm_offsets[i] == offset)
- return;
-
- /* Slot used by another offset? */
- if (msrpm_offsets[i] != MSR_INVALID)
- continue;
-
- /* Add offset to list */
- msrpm_offsets[i] = offset;
-
- return;
- }
-
- /*
- * If this BUG triggers the msrpm_offsets table has an overflow. Just
- * increase MSRPM_OFFSETS in this case.
- */
- BUG();
-}
-
-static void init_msrpm_offsets(void)
-{
- int i;
-
- memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
-
- for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
- u32 offset;
-
- offset = svm_msrpm_offset(direct_access_msrs[i]);
- BUG_ON(offset == MSR_INVALID);
-
- add_msr_offset(offset);
- }
-}
-
void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
{
to_vmcb->save.dbgctl = from_vmcb->save.dbgctl;
@@ -1035,13 +947,7 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);
-
- if (sev_es_guest(vcpu->kvm))
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_DEBUGCTLMSR, MSR_TYPE_RW);
+ svm_refresh_lbr_msr_intercepts(vcpu);
/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
if (is_guest_mode(vcpu))
@@ -1053,12 +959,8 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
-
svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHFROMIP, MSR_TYPE_RW);
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTBRANCHTOIP, MSR_TYPE_RW);
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTFROMIP, MSR_TYPE_RW);
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_LASTINTTOIP, MSR_TYPE_RW);
+ svm_refresh_lbr_msr_intercepts(vcpu);
/*
* Move the LBR msrs back to the vmcb01 to avoid copying them
@@ -1241,17 +1143,9 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
if (guest_cpuid_is_intel_compatible(vcpu)) {
- /*
- * We must intercept SYSENTER_EIP and SYSENTER_ESP
- * accesses because the processor only stores 32 bits.
- * For the same reason we cannot use virtual VMLOAD/VMSAVE.
- */
svm_set_intercept(svm, INTERCEPT_VMLOAD);
svm_set_intercept(svm, INTERCEPT_VMSAVE);
svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
-
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
- svm_enable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
} else {
/*
* If hardware supports Virtual VMLOAD VMSAVE then enable it
@@ -1262,9 +1156,6 @@ static inline void init_vmcb_after_set_cpuid(struct kvm_vcpu *vcpu)
svm_clr_intercept(svm, INTERCEPT_VMSAVE);
svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
}
- /* No need to intercept these MSRs */
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
}
}
@@ -1388,13 +1279,6 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
svm_recalc_instruction_intercepts(vcpu, svm);
- /*
- * If the host supports V_SPEC_CTRL then disable the interception
- * of MSR_IA32_SPEC_CTRL.
- */
- if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
-
if (kvm_vcpu_apicv_active(vcpu))
avic_init_vmcb(svm, vmcb);
@@ -1422,8 +1306,6 @@ static void __svm_vcpu_reset(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- svm_vcpu_init_msrpm(vcpu, svm->msrpm);
-
svm_init_osvw(vcpu);
if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS))
@@ -1448,6 +1330,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
sev_snp_init_protected_guest_state(vcpu);
init_vmcb(vcpu);
+ svm_refresh_msr_intercepts(vcpu);
if (!init_event)
__svm_vcpu_reset(vcpu);
@@ -1488,10 +1371,6 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
if (err)
goto error_free_vmsa_page;
- /* All MSRs start out in the "intercepted" state. */
- bitmap_fill(svm->shadow_msr_intercept.read, MAX_DIRECT_ACCESS_MSRS);
- bitmap_fill(svm->shadow_msr_intercept.write, MAX_DIRECT_ACCESS_MSRS);
-
svm->msrpm = svm_vcpu_alloc_msrpm();
if (!svm->msrpm) {
err = -ENOMEM;
@@ -3193,8 +3072,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
/*
* TSC_AUX is usually changed only during boot and never read
- * directly. Intercept TSC_AUX instead of exposing it to the
- * guest via direct_access_msrs, and switch it via user return.
+ * directly. Intercept TSC_AUX and switch it via user return.
*/
preempt_disable();
ret = kvm_set_user_return_msr(tsc_aux_uret_slot, data, -1ull);
@@ -4465,12 +4343,6 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
svm_recalc_instruction_intercepts(vcpu, svm);
- if (boot_cpu_has(X86_FEATURE_IBPB) && guest_has_pred_cmd_msr(vcpu))
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W);
-
- if (boot_cpu_has(X86_FEATURE_FLUSH_L1D) && guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
- svm_disable_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W);
-
if (sev_guest(vcpu->kvm))
sev_vcpu_after_set_cpuid(svm);
@@ -5166,7 +5038,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
- .msr_filter_changed = svm_msr_filter_changed,
+ .refresh_msr_intercepts = svm_refresh_msr_intercepts,
.complete_emulated_msr = svm_complete_emulated_msr,
.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
@@ -5324,8 +5196,6 @@ static __init int svm_hardware_setup(void)
memset(iopm_va, 0xff, PAGE_SIZE * (1 << order));
iopm_base = __sme_page_pa(iopm_pages);
- init_msrpm_offsets();
-
kvm_caps.supported_xcr0 &= ~(XFEATURE_MASK_BNDREGS |
XFEATURE_MASK_BNDCSR);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2513990c5b6e..a73da8ca73b4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -313,12 +313,6 @@ struct vcpu_svm {
struct list_head ir_list;
spinlock_t ir_list_lock;
- /* Save desired MSR intercept (read: pass-through) state */
- struct {
- DECLARE_BITMAP(read, MAX_DIRECT_ACCESS_MSRS);
- DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS);
- } shadow_msr_intercept;
-
struct vcpu_sev_es_state sev_es;
bool guest_state_loaded;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 92d35cc6cd15..915df0f5f1eb 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -152,7 +152,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.apic_init_signal_blocked = vmx_apic_init_signal_blocked,
.migrate_timers = vmx_migrate_timers,
- .msr_filter_changed = vmx_msr_filter_changed,
+ .refresh_msr_intercepts = vmx_refresh_msr_intercepts,
.complete_emulated_msr = kvm_complete_insn_gp,
.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0701bf32e59e..88f71b66e673 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -163,31 +163,6 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
RTIT_STATUS_ERROR | RTIT_STATUS_STOPPED | \
RTIT_STATUS_BYTECNT))
-/*
- * List of MSRs that can be directly passed to the guest.
- * In addition to these x2apic, PT and LBR MSRs are handled specially.
- */
-static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
- MSR_IA32_SPEC_CTRL,
- MSR_IA32_PRED_CMD,
- MSR_IA32_FLUSH_CMD,
- MSR_IA32_TSC,
-#ifdef CONFIG_X86_64
- MSR_FS_BASE,
- MSR_GS_BASE,
- MSR_KERNEL_GS_BASE,
- MSR_IA32_XFD,
- MSR_IA32_XFD_ERR,
-#endif
- MSR_IA32_SYSENTER_CS,
- MSR_IA32_SYSENTER_ESP,
- MSR_IA32_SYSENTER_EIP,
- MSR_CORE_C1_RES,
- MSR_CORE_C3_RESIDENCY,
- MSR_CORE_C6_RESIDENCY,
- MSR_CORE_C7_RESIDENCY,
-};
-
/*
* These 2 parameters are used to config the controls for Pause-Loop Exiting:
* ple_gap: upper bound on the amount of time between two successive
@@ -669,40 +644,6 @@ static inline bool cpu_need_virtualize_apic_accesses(struct kvm_vcpu *vcpu)
return flexpriority_enabled && lapic_in_kernel(vcpu);
}
-static int vmx_get_passthrough_msr_slot(u32 msr)
-{
- int i;
-
- switch (msr) {
- case 0x800 ... 0x8ff:
- /* x2APIC MSRs. These are handled in vmx_update_msr_bitmap_x2apic() */
- return -ENOENT;
- case MSR_IA32_RTIT_STATUS:
- case MSR_IA32_RTIT_OUTPUT_BASE:
- case MSR_IA32_RTIT_OUTPUT_MASK:
- case MSR_IA32_RTIT_CR3_MATCH:
- case MSR_IA32_RTIT_ADDR0_A ... MSR_IA32_RTIT_ADDR3_B:
- /* PT MSRs. These are handled in pt_update_intercept_for_msr() */
- case MSR_LBR_SELECT:
- case MSR_LBR_TOS:
- case MSR_LBR_INFO_0 ... MSR_LBR_INFO_0 + 31:
- case MSR_LBR_NHM_FROM ... MSR_LBR_NHM_FROM + 31:
- case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
- case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
- case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
- /* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
- return -ENOENT;
- }
-
- for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
- if (vmx_possible_passthrough_msrs[i] == msr)
- return i;
- }
-
- WARN(1, "Invalid MSR %x, please adapt vmx_possible_passthrough_msrs[]", msr);
- return -ENOENT;
-}
-
struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
{
int i;
@@ -4002,25 +3943,12 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
- int idx;
if (!cpu_has_vmx_msr_bitmap())
return;
vmx_msr_bitmap_l01_changed(vmx);
- /*
- * Mark the desired intercept state in shadow bitmap, this is needed
- * for resync when the MSR filters change.
- */
- idx = vmx_get_passthrough_msr_slot(msr);
- if (idx >= 0) {
- if (type & MSR_TYPE_R)
- __clear_bit(idx, vmx->shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __clear_bit(idx, vmx->shadow_msr_intercept.write);
- }
-
if ((type & MSR_TYPE_R) &&
!kvm_msr_allowed(vcpu, msr, KVM_MSR_FILTER_READ)) {
vmx_set_msr_bitmap_read(msr_bitmap, msr);
@@ -4044,25 +3972,12 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
- int idx;
if (!cpu_has_vmx_msr_bitmap())
return;
vmx_msr_bitmap_l01_changed(vmx);
- /*
- * Mark the desired intercept state in shadow bitmap, this is needed
- * for resync when the MSR filter changes.
- */
- idx = vmx_get_passthrough_msr_slot(msr);
- if (idx >= 0) {
- if (type & MSR_TYPE_R)
- __set_bit(idx, vmx->shadow_msr_intercept.read);
- if (type & MSR_TYPE_W)
- __set_bit(idx, vmx->shadow_msr_intercept.write);
- }
-
if (type & MSR_TYPE_R)
vmx_set_msr_bitmap_read(msr_bitmap, msr);
@@ -4146,35 +4061,54 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
}
}
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu)
+void vmx_refresh_msr_intercepts(struct kvm_vcpu *vcpu)
{
- struct vcpu_vmx *vmx = to_vmx(vcpu);
- u32 i;
-
if (!cpu_has_vmx_msr_bitmap())
return;
- /*
- * Redo intercept permissions for MSRs that KVM is passing through to
- * the guest. Disabling interception will check the new MSR filter and
- * ensure that KVM enables interception if usersepace wants to filter
- * the MSR. MSRs that KVM is already intercepting don't need to be
- * refreshed since KVM is going to intercept them regardless of what
- * userspace wants.
- */
- for (i = 0; i < ARRAY_SIZE(vmx_possible_passthrough_msrs); i++) {
- u32 msr = vmx_possible_passthrough_msrs[i];
-
- if (!test_bit(i, vmx->shadow_msr_intercept.read))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
-
- if (!test_bit(i, vmx->shadow_msr_intercept.write))
- vmx_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_W);
+ vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
+#ifdef CONFIG_X86_64
+ vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
+ vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
+ vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
+#endif
+ vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+ vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
+ vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+ if (kvm_cstate_in_guest(vcpu->kvm)) {
+ vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C1_RES, MSR_TYPE_R);
+ vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
+ vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
+ vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
}
/* PT MSRs can be passed through iff PT is exposed to the guest. */
if (vmx_pt_mode_is_host_guest())
pt_update_intercept_for_msr(vcpu);
+
+ if (vcpu->arch.xfd_no_write_intercept)
+ vmx_disable_intercept_for_msr(vcpu, MSR_IA32_XFD, MSR_TYPE_RW);
+
+
+ vmx_set_intercept_for_msr(vcpu, MSR_IA32_SPEC_CTRL, MSR_TYPE_RW,
+ !to_vmx(vcpu)->spec_ctrl);
+
+ if (kvm_cpu_cap_has(X86_FEATURE_XFD))
+ vmx_set_intercept_for_msr(vcpu, MSR_IA32_XFD_ERR, MSR_TYPE_R,
+ !guest_cpuid_has(vcpu, X86_FEATURE_XFD));
+
+ if (boot_cpu_has(X86_FEATURE_IBPB))
+ vmx_set_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W,
+ !guest_has_pred_cmd_msr(vcpu));
+
+ if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
+ vmx_set_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W,
+ !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+
+ /*
+ * x2APIC and LBR MSR intercepts are modified on-demand and cannot be
+ * filtered by userspace.
+ */
}
static inline void kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
@@ -7566,26 +7500,6 @@ int vmx_vcpu_create(struct kvm_vcpu *vcpu)
evmcs->hv_enlightenments_control.msr_bitmap = 1;
}
- /* The MSR bitmap starts with all ones */
- bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
-
- vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
-#ifdef CONFIG_X86_64
- vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
- vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
- vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
-#endif
- vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
- vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
- vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
- if (kvm_cstate_in_guest(vcpu->kvm)) {
- vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C1_RES, MSR_TYPE_R);
- vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C3_RESIDENCY, MSR_TYPE_R);
- vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C6_RESIDENCY, MSR_TYPE_R);
- vmx_disable_intercept_for_msr(vcpu, MSR_CORE_C7_RESIDENCY, MSR_TYPE_R);
- }
-
vmx->loaded_vmcs = &vmx->vmcs01;
if (cpu_need_virtualize_apic_accesses(vcpu)) {
@@ -7866,18 +7780,6 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
}
}
- if (kvm_cpu_cap_has(X86_FEATURE_XFD))
- vmx_set_intercept_for_msr(vcpu, MSR_IA32_XFD_ERR, MSR_TYPE_R,
- !guest_cpuid_has(vcpu, X86_FEATURE_XFD));
-
- if (boot_cpu_has(X86_FEATURE_IBPB))
- vmx_set_intercept_for_msr(vcpu, MSR_IA32_PRED_CMD, MSR_TYPE_W,
- !guest_has_pred_cmd_msr(vcpu));
-
- if (boot_cpu_has(X86_FEATURE_FLUSH_L1D))
- vmx_set_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD, MSR_TYPE_W,
- !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
-
set_cr4_guest_host_mask(vmx);
vmx_write_encls_bitmap(vcpu, NULL);
@@ -7893,6 +7795,9 @@ void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
vmx->msr_ia32_feature_control_valid_bits &=
~FEAT_CTL_SGX_LC_ENABLED;
+ /* Refresh MSR interception to account for feature changes. */
+ vmx_refresh_msr_intercepts(vcpu);
+
/* Refresh #PF interception to account for MAXPHYADDR changes. */
vmx_update_exception_bitmap(vcpu);
}
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 43f573f6ca46..d38f39935a52 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -17,8 +17,6 @@
#include "run_flags.h"
#include "../mmu.h"
-#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
-
#ifdef CONFIG_X86_64
#define MAX_NR_USER_RETURN_MSRS 7
#else
@@ -353,13 +351,6 @@ struct vcpu_vmx {
struct pt_desc pt_desc;
struct lbr_desc lbr_desc;
- /* Save desired MSR intercept (read: pass-through) state */
-#define MAX_POSSIBLE_PASSTHROUGH_MSRS 16
- struct {
- DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
- } shadow_msr_intercept;
-
/* ve_info must be page aligned. */
struct vmx_ve_information *ve_info;
};
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index a55981c5216e..ee16bbdd9a3e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -54,7 +54,7 @@ void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
int trig_mode, int vector);
void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu);
bool vmx_has_emulated_msr(struct kvm *kvm, u32 index);
-void vmx_msr_filter_changed(struct kvm_vcpu *vcpu);
+void vmx_refresh_msr_intercepts(struct kvm_vcpu *vcpu);
void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
int vmx_get_feature_msr(u32 msr, u64 *data);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e713480933a..5d4e049e5725 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10840,8 +10840,16 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
kvm_vcpu_update_apicv(vcpu);
if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
kvm_check_async_pf_completion(vcpu);
+
+ /*
+ * Refresh intercept permissions for MSRs that KVM is passing
+ * through to the guest, as userspace may want to trap accesses.
+ * Disabling interception will check the new MSR filter and
+ * ensure that KVM enables interception if userspace wants to
+ * filter the MSR.
+ */
if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
- kvm_x86_call(msr_filter_changed)(vcpu);
+ kvm_x86_call(refresh_msr_intercepts)(vcpu);
if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
kvm_x86_call(update_cpu_dirty_logging)(vcpu);
base-commit: c109f5c273abb98684209280d4b07d596ee6a54a
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-11-27 21:57 ` Sean Christopherson
@ 2024-11-28 16:46 ` Borislav Petkov
2024-12-03 19:47 ` Sean Christopherson
0 siblings, 1 reply; 32+ messages in thread
From: Borislav Petkov @ 2024-11-28 16:46 UTC (permalink / raw)
To: Sean Christopherson
Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Wed, Nov 27, 2024 at 01:57:54PM -0800, Sean Christopherson wrote:
> The attached patch is compile-tested only (the nested interactions in particular
> need a bit of scrutiny) and needs to be chunked into multiple patches, but I don't
> see any obvious blockers, and the diffstats speak volumes:
I'd like to apply this and take a closer look but I don't know what it goes
against. Btw, you could point me to some documentation explaining which
branches in the kvm tree people should use to base work on top of.
In any case, the overall idea makes sense to me - SVM and VMX both know which
MSRs should be intercepted and so on.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-11-28 16:46 ` Borislav Petkov
@ 2024-12-03 19:47 ` Sean Christopherson
2024-12-05 17:56 ` Borislav Petkov
2024-12-05 18:06 ` Borislav Petkov
0 siblings, 2 replies; 32+ messages in thread
From: Sean Christopherson @ 2024-12-03 19:47 UTC (permalink / raw)
To: Borislav Petkov; +Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Thu, Nov 28, 2024, Borislav Petkov wrote:
> On Wed, Nov 27, 2024 at 01:57:54PM -0800, Sean Christopherson wrote:
> > The attached patch is compile-tested only (the nested interactions in particular
> > need a bit of scrutiny) and needs to be chunked into multiple patches, but I don't
> > see any obvious blockers, and the diffstats speak volumes:
>
> I'd like to apply this and take a closer look but I don't know what it goes
> against.
It applies cleanly on my tree (github.com/kvm-x86/linux.git next) or Paolo's
(git://git.kernel.org/pub/scm/virt/kvm/kvm.git next).
> Btw, you could point me to some documentation explaining which branches in
> the kvm tree people should use to base work on top of.
For KVM x86, from Documentation/process/maintainer-kvm-x86.rst:
Base Tree/Branch
~~~~~~~~~~~~~~~~
Fixes that target the current release, a.k.a. mainline, should be based on
``git://git.kernel.org/pub/scm/virt/kvm/kvm.git master``. Note, fixes do not
automatically warrant inclusion in the current release. There is no singular
rule, but typically only fixes for bugs that are urgent, critical, and/or were
introduced in the current release should target the current release.
Everything else should be based on ``kvm-x86/next``, i.e. there is no need to
select a specific topic branch as the base. If there are conflicts and/or
dependencies across topic branches, it is the maintainer's job to sort them
out.
The only exception to using ``kvm-x86/next`` as the base is if a patch/series
is a multi-arch series, i.e. has non-trivial modifications to common KVM code
and/or has more than superficial changes to other architectures' code. Multi-
arch patch/series should instead be based on a common, stable point in KVM's
history, e.g. the release candidate upon which ``kvm-x86 next`` is based. If
you're unsure whether a patch/series is truly multi-arch, err on the side of
caution and treat it as multi-arch, i.e. use a common base.
where kvm-x86 is the aforementioned github.com/kvm-x86/linux.git.
* Re: [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" MSR interception bitmaps
2024-11-27 20:42 ` Sean Christopherson
@ 2024-12-03 21:08 ` Tom Lendacky
0 siblings, 0 replies; 32+ messages in thread
From: Tom Lendacky @ 2024-12-03 21:08 UTC (permalink / raw)
To: Sean Christopherson, Aaron Lewis; +Cc: kvm, pbonzini, jmattson
On 11/27/24 14:42, Sean Christopherson wrote:
> On Wed, Nov 27, 2024, Aaron Lewis wrote:
>
> I'll write a changelog for this too.
>
>> Note, a "FIXME" tag was added to svm_msr_filter_changed(). This will
>
> Write changelogs in imperative mood, i.e. state what the patch is doing as a
> command. Don't describe what will have happened after the patch is applied.
> Using imperative mood allows for using indicative mood to describe what was
> already there, and/or what happened in the past.
>
>> be addressed later in the series after the VMX style MSR intercepts
>> are added to SVM.
>>
>> Signed-off-by: Sean Christopherson <seanjc@google.com>
>> Co-developed-by: Aaron Lewis <aaronlewis@google.com>
>
> Your SoB is needed here too. See "When to use Acked-by:, Cc:, and Co-developed-by:"
> in Documentation/process/submitting-patches.rst.
And actually, since the From: is Aaron's name, Sean needs to be listed
as the Co-developed-by: (with his Signed-off-by:) and not Aaron.
Thanks,
Tom
>
* Re: [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES
2024-11-27 20:19 ` [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES Aaron Lewis
@ 2024-12-03 21:21 ` Tom Lendacky
0 siblings, 0 replies; 32+ messages in thread
From: Tom Lendacky @ 2024-12-03 21:21 UTC (permalink / raw)
To: Aaron Lewis, kvm; +Cc: pbonzini, jmattson, seanjc
On 11/27/24 14:19, Aaron Lewis wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/svm/svm.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 2380059727168..25d41709a0eaa 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -108,7 +108,7 @@ static const struct svm_direct_access_msrs {
> { .index = MSR_IA32_XSS, .always = false },
> { .index = MSR_EFER, .always = false },
> { .index = MSR_IA32_CR_PAT, .always = false },
> - { .index = MSR_AMD64_SEV_ES_GHCB, .always = true },
> + { .index = MSR_AMD64_SEV_ES_GHCB, .always = false },
> { .index = MSR_TSC_AUX, .always = false },
> { .index = X2APIC_MSR(APIC_ID), .always = false },
> { .index = X2APIC_MSR(APIC_LVR), .always = false },
> @@ -919,6 +919,9 @@ void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
> svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
> MSR_TYPE_RW);
> }
> +
> + if (sev_es_guest(vcpu->kvm))
> + svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
It would probably be better to put this in sev_es_init_vmcb() with the
other MSRs that are removed from interception.
Thanks,
Tom
> }
>
> void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
* Re: [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs
2024-11-27 20:19 ` [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs Aaron Lewis
@ 2024-12-03 21:26 ` Tom Lendacky
0 siblings, 0 replies; 32+ messages in thread
From: Tom Lendacky @ 2024-12-03 21:26 UTC (permalink / raw)
To: Aaron Lewis, kvm; +Cc: pbonzini, jmattson, seanjc
On 11/27/24 14:19, Aaron Lewis wrote:
> From: Sean Christopherson <seanjc@google.com>
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/kvm/svm/svm.c | 134 ++++++++++++++++++++---------------------
> 1 file changed, 67 insertions(+), 67 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 25d41709a0eaa..3813258497e49 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -81,51 +81,48 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
>
> #define X2APIC_MSR(x) (APIC_BASE_MSR + (x >> 4))
>
> -static const struct svm_direct_access_msrs {
> - u32 index; /* Index of the MSR */
> - bool always; /* True if intercept is initially cleared */
> -} direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
> - { .index = MSR_STAR, .always = true },
> - { .index = MSR_IA32_SYSENTER_CS, .always = true },
> - { .index = MSR_IA32_SYSENTER_EIP, .always = false },
> - { .index = MSR_IA32_SYSENTER_ESP, .always = false },
> +static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
> + MSR_STAR,
> + MSR_IA32_SYSENTER_CS,
> + MSR_IA32_SYSENTER_EIP,
> + MSR_IA32_SYSENTER_ESP,
> #ifdef CONFIG_X86_64
> - { .index = MSR_GS_BASE, .always = true },
> - { .index = MSR_FS_BASE, .always = true },
> - { .index = MSR_KERNEL_GS_BASE, .always = true },
> - { .index = MSR_LSTAR, .always = true },
> - { .index = MSR_CSTAR, .always = true },
> - { .index = MSR_SYSCALL_MASK, .always = true },
> + MSR_GS_BASE,
> + MSR_FS_BASE,
> + MSR_KERNEL_GS_BASE,
> + MSR_LSTAR,
> + MSR_CSTAR,
> + MSR_SYSCALL_MASK,
> #endif
> - { .index = MSR_IA32_SPEC_CTRL, .always = false },
> - { .index = MSR_IA32_PRED_CMD, .always = false },
> - { .index = MSR_IA32_FLUSH_CMD, .always = false },
> - { .index = MSR_IA32_DEBUGCTLMSR, .always = false },
> - { .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
> - { .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
> - { .index = MSR_IA32_LASTINTFROMIP, .always = false },
> - { .index = MSR_IA32_LASTINTTOIP, .always = false },
> - { .index = MSR_IA32_XSS, .always = false },
> - { .index = MSR_EFER, .always = false },
> - { .index = MSR_IA32_CR_PAT, .always = false },
> - { .index = MSR_AMD64_SEV_ES_GHCB, .always = false },
> - { .index = MSR_TSC_AUX, .always = false },
> - { .index = X2APIC_MSR(APIC_ID), .always = false },
> - { .index = X2APIC_MSR(APIC_LVR), .always = false },
> - { .index = X2APIC_MSR(APIC_TASKPRI), .always = false },
> - { .index = X2APIC_MSR(APIC_ARBPRI), .always = false },
> - { .index = X2APIC_MSR(APIC_PROCPRI), .always = false },
> - { .index = X2APIC_MSR(APIC_EOI), .always = false },
> - { .index = X2APIC_MSR(APIC_RRR), .always = false },
> - { .index = X2APIC_MSR(APIC_LDR), .always = false },
> - { .index = X2APIC_MSR(APIC_DFR), .always = false },
> - { .index = X2APIC_MSR(APIC_SPIV), .always = false },
> - { .index = X2APIC_MSR(APIC_ISR), .always = false },
> - { .index = X2APIC_MSR(APIC_TMR), .always = false },
> - { .index = X2APIC_MSR(APIC_IRR), .always = false },
> - { .index = X2APIC_MSR(APIC_ESR), .always = false },
> - { .index = X2APIC_MSR(APIC_ICR), .always = false },
> - { .index = X2APIC_MSR(APIC_ICR2), .always = false },
> + MSR_IA32_SPEC_CTRL,
> + MSR_IA32_PRED_CMD,
> + MSR_IA32_FLUSH_CMD,
> + MSR_IA32_DEBUGCTLMSR,
> + MSR_IA32_LASTBRANCHFROMIP,
> + MSR_IA32_LASTBRANCHTOIP,
> + MSR_IA32_LASTINTFROMIP,
> + MSR_IA32_LASTINTTOIP,
> + MSR_IA32_XSS,
> + MSR_EFER,
> + MSR_IA32_CR_PAT,
> + MSR_AMD64_SEV_ES_GHCB,
> + MSR_TSC_AUX,
> + X2APIC_MSR(APIC_ID),
> + X2APIC_MSR(APIC_LVR),
> + X2APIC_MSR(APIC_TASKPRI),
> + X2APIC_MSR(APIC_ARBPRI),
> + X2APIC_MSR(APIC_PROCPRI),
> + X2APIC_MSR(APIC_EOI),
> + X2APIC_MSR(APIC_RRR),
> + X2APIC_MSR(APIC_LDR),
> + X2APIC_MSR(APIC_DFR),
> + X2APIC_MSR(APIC_SPIV),
> + X2APIC_MSR(APIC_ISR),
> + X2APIC_MSR(APIC_TMR),
> + X2APIC_MSR(APIC_IRR),
> + X2APIC_MSR(APIC_ESR),
> + X2APIC_MSR(APIC_ICR),
> + X2APIC_MSR(APIC_ICR2),
>
> /*
> * Note:
> @@ -134,15 +131,15 @@ static const struct svm_direct_access_msrs {
> * the AVIC hardware would generate GP fault. Therefore, always
> * intercept the MSR 0x832, and do not setup direct_access_msr.
> */
> - { .index = X2APIC_MSR(APIC_LVTTHMR), .always = false },
> - { .index = X2APIC_MSR(APIC_LVTPC), .always = false },
> - { .index = X2APIC_MSR(APIC_LVT0), .always = false },
> - { .index = X2APIC_MSR(APIC_LVT1), .always = false },
> - { .index = X2APIC_MSR(APIC_LVTERR), .always = false },
> - { .index = X2APIC_MSR(APIC_TMICT), .always = false },
> - { .index = X2APIC_MSR(APIC_TMCCT), .always = false },
> - { .index = X2APIC_MSR(APIC_TDCR), .always = false },
> - { .index = MSR_INVALID, .always = false },
> + X2APIC_MSR(APIC_LVTTHMR),
> + X2APIC_MSR(APIC_LVTPC),
> + X2APIC_MSR(APIC_LVT0),
> + X2APIC_MSR(APIC_LVT1),
> + X2APIC_MSR(APIC_LVTERR),
> + X2APIC_MSR(APIC_TMICT),
> + X2APIC_MSR(APIC_TMCCT),
> + X2APIC_MSR(APIC_TDCR),
> + MSR_INVALID,
By adding this, two things are being done in this patch. I think it
would be easier to see the changes related specifically to the "always"
flag being removed if the MSR_INVALID addition were a separate patch.
Thanks,
Tom
> };
>
> /*
> @@ -763,9 +760,10 @@ static int direct_access_msr_slot(u32 msr)
> {
> u32 i;
>
> - for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++)
> - if (direct_access_msrs[i].index == msr)
> + for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> + if (direct_access_msrs[i] == msr)
> return i;
> + }
>
> return -ENOENT;
> }
> @@ -911,15 +909,17 @@ unsigned long *svm_vcpu_alloc_msrpm(void)
>
> void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, unsigned long *msrpm)
> {
> - int i;
> -
> - for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
> - if (!direct_access_msrs[i].always)
> - continue;
> - svm_disable_intercept_for_msr(vcpu, direct_access_msrs[i].index,
> - MSR_TYPE_RW);
> - }
> + svm_disable_intercept_for_msr(vcpu, MSR_STAR, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
>
> +#ifdef CONFIG_X86_64
> + svm_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_LSTAR, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_CSTAR, MSR_TYPE_RW);
> + svm_disable_intercept_for_msr(vcpu, MSR_SYSCALL_MASK, MSR_TYPE_RW);
> +#endif
> if (sev_es_guest(vcpu->kvm))
> svm_disable_intercept_for_msr(vcpu, MSR_AMD64_SEV_ES_GHCB, MSR_TYPE_RW);
> }
> @@ -935,7 +935,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
> return;
>
> for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
> - int index = direct_access_msrs[i].index;
> + int index = direct_access_msrs[i];
>
> if ((index < APIC_BASE_MSR) ||
> (index > APIC_BASE_MSR + 0xff))
> @@ -965,8 +965,8 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
> * refreshed since KVM is going to intercept them regardless of what
> * userspace wants.
> */
> - for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
> - u32 msr = direct_access_msrs[i].index;
> + for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> + u32 msr = direct_access_msrs[i];
>
> if (!test_bit(i, svm->shadow_msr_intercept.read))
> svm_disable_intercept_for_msr(vcpu, msr, MSR_TYPE_R);
> @@ -1009,10 +1009,10 @@ static void init_msrpm_offsets(void)
>
> memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
>
> - for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
> + for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> u32 offset;
>
> - offset = svm_msrpm_offset(direct_access_msrs[i].index);
> + offset = svm_msrpm_offset(direct_access_msrs[i]);
> BUG_ON(offset == MSR_INVALID);
>
> add_msr_offset(offset);
* Re: [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the list of possible passthrough MSRs
2024-11-27 20:19 ` [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the " Aaron Lewis
@ 2024-12-03 21:30 ` Tom Lendacky
0 siblings, 0 replies; 32+ messages in thread
From: Tom Lendacky @ 2024-12-03 21:30 UTC (permalink / raw)
To: Aaron Lewis, kvm; +Cc: pbonzini, jmattson, seanjc
On 11/27/24 14:19, Aaron Lewis wrote:
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Co-developed-by: Aaron Lewis <aaronlewis@google.com>
> ---
> arch/x86/kvm/svm/svm.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 3813258497e49..4e30efe90c541 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -81,7 +81,7 @@ static DEFINE_PER_CPU(u64, current_tsc_ratio);
>
> #define X2APIC_MSR(x) (APIC_BASE_MSR + (x >> 4))
>
> -static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
> +static const u32 direct_access_msrs[] = {
> MSR_STAR,
> MSR_IA32_SYSENTER_CS,
> MSR_IA32_SYSENTER_EIP,
> @@ -139,7 +139,6 @@ static const u32 direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
> X2APIC_MSR(APIC_TMICT),
> X2APIC_MSR(APIC_TMCCT),
> X2APIC_MSR(APIC_TDCR),
> - MSR_INVALID,
Given my comment on the previous patch and then this patch, can't the
MSR_INVALID addition just be removed altogether?
Thanks,
Tom
> };
>
> /*
> @@ -760,7 +759,7 @@ static int direct_access_msr_slot(u32 msr)
> {
> u32 i;
>
> - for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> + for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
> if (direct_access_msrs[i] == msr)
> return i;
> }
> @@ -934,7 +933,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
> if (!x2avic_enabled)
> return;
>
> - for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
> + for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
> int index = direct_access_msrs[i];
>
> if ((index < APIC_BASE_MSR) ||
> @@ -965,7 +964,7 @@ static void svm_msr_filter_changed(struct kvm_vcpu *vcpu)
> * refreshed since KVM is going to intercept them regardless of what
> * userspace wants.
> */
> - for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> + for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
> u32 msr = direct_access_msrs[i];
>
> if (!test_bit(i, svm->shadow_msr_intercept.read))
> @@ -1009,7 +1008,7 @@ static void init_msrpm_offsets(void)
>
> memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
>
> - for (i = 0; direct_access_msrs[i] != MSR_INVALID; i++) {
> + for (i = 0; i < ARRAY_SIZE(direct_access_msrs); i++) {
> u32 offset;
>
> offset = svm_msrpm_offset(direct_access_msrs[i]);
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-12-03 19:47 ` Sean Christopherson
@ 2024-12-05 17:56 ` Borislav Petkov
2024-12-05 18:06 ` Borislav Petkov
1 sibling, 0 replies; 32+ messages in thread
From: Borislav Petkov @ 2024-12-05 17:56 UTC (permalink / raw)
To: Sean Christopherson
Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Tue, Dec 03, 2024 at 11:47:33AM -0800, Sean Christopherson wrote:
> For KVM x86, from Documentation/process/maintainer-kvm-x86.rst:
Thanks, I've been looking for this text! :-)
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-12-03 19:47 ` Sean Christopherson
2024-12-05 17:56 ` Borislav Petkov
@ 2024-12-05 18:06 ` Borislav Petkov
2024-12-06 15:23 ` Sean Christopherson
1 sibling, 1 reply; 32+ messages in thread
From: Borislav Petkov @ 2024-12-05 18:06 UTC (permalink / raw)
To: Sean Christopherson
Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Tue, Dec 03, 2024 at 11:47:33AM -0800, Sean Christopherson wrote:
> It applies cleanly on my tree (github.com/kvm-x86/linux.git next)
Could it be that you changed things in the meantime?
(Very similar result on Paolo's next branch.)
$ git log -1
commit c55f6b8a2441b20ef12e4b35d4888a22299ddc90 (HEAD -> refs/heads/kvm-next, tag: refs/tags/kvm-x86-next-2024.11.04, refs/remotes/kvm-x86/next)
Merge: f29af315c943 bc17fccb37c8
Author: Sean Christopherson <seanjc@google.com>
Date: Tue Nov 5 05:13:01 2024 +0000
Merge branch 'vmx'
* vmx:
KVM: VMX: Remove the unused variable "gpa" in __invept()
$ patch -p1 --dry-run -i /tmp/0001-tmp.patch
checking file arch/x86/include/asm/kvm-x86-ops.h
checking file arch/x86/include/asm/kvm_host.h
Hunk #1 succeeded at 1817 (offset -2 lines).
checking file arch/x86/kvm/lapic.h
checking file arch/x86/kvm/svm/svm.c
Reversed (or previously applied) patch detected! Assume -R? [n] n
Apply anyway? [n] y
Hunk #1 FAILED at 79.
Hunk #2 FAILED at 756.
Hunk #3 FAILED at 831.
Hunk #4 FAILED at 870.
Hunk #5 FAILED at 907.
Hunk #6 succeeded at 894 with fuzz 1 (offset -30 lines).
Hunk #7 FAILED at 1002.
Hunk #8 FAILED at 1020.
Hunk #9 FAILED at 1103.
Hunk #10 FAILED at 1121.
Hunk #11 FAILED at 1309.
Hunk #12 FAILED at 1330.
Hunk #13 FAILED at 1456.
Hunk #14 succeeded at 1455 (offset -35 lines).
Hunk #15 succeeded at 1479 (offset -35 lines).
Hunk #16 FAILED at 1555.
Hunk #17 succeeded at 3220 (offset -40 lines).
Hunk #18 FAILED at 4531.
Hunk #19 succeeded at 5194 (offset -38 lines).
Hunk #20 succeeded at 5352 (offset -38 lines).
14 out of 20 hunks FAILED
checking file arch/x86/kvm/svm/svm.h
checking file arch/x86/kvm/vmx/main.c
checking file arch/x86/kvm/vmx/vmx.c
Hunk #2 succeeded at 642 (offset -2 lines).
Hunk #3 FAILED at 3943.
Hunk #4 FAILED at 3985.
Hunk #5 succeeded at 4086 (offset -1 lines).
Hunk #6 succeeded at 7532 (offset 6 lines).
Hunk #7 succeeded at 7812 (offset 6 lines).
Hunk #8 succeeded at 7827 (offset 6 lines).
2 out of 8 hunks FAILED
checking file arch/x86/kvm/vmx/vmx.h
checking file arch/x86/kvm/vmx/x86_ops.h
checking file arch/x86/kvm/x86.c
Hunk #1 succeeded at 10837 (offset -3 lines).
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-12-05 18:06 ` Borislav Petkov
@ 2024-12-06 15:23 ` Sean Christopherson
2024-12-06 16:01 ` Borislav Petkov
0 siblings, 1 reply; 32+ messages in thread
From: Sean Christopherson @ 2024-12-06 15:23 UTC (permalink / raw)
To: Borislav Petkov; +Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Thu, Dec 05, 2024, Borislav Petkov wrote:
> On Tue, Dec 03, 2024 at 11:47:33AM -0800, Sean Christopherson wrote:
> > It applies cleanly on my tree (github.com/kvm-x86/linux.git next)
>
> Could it be that you changed things in the meantime?
Nope, I double checked that I'm using the same base.
> (Very similar result on Paolo's next branch.)
>
> $ git log -1
> commit c55f6b8a2441b20ef12e4b35d4888a22299ddc90 (HEAD -> refs/heads/kvm-next, tag: refs/tags/kvm-x86-next-2024.11.04, refs/remotes/kvm-x86/next)
> Merge: f29af315c943 bc17fccb37c8
> Author: Sean Christopherson <seanjc@google.com>
> Date: Tue Nov 5 05:13:01 2024 +0000
>
> Merge branch 'vmx'
>
> * vmx:
> KVM: VMX: Remove the unused variable "gpa" in __invept()
>
>
> $ patch -p1 --dry-run -i /tmp/0001-tmp.patch
> checking file arch/x86/include/asm/kvm-x86-ops.h
> checking file arch/x86/include/asm/kvm_host.h
Are you trying to apply this patch directly on kvm/next | kvm-x86/next? This is
patch 12 of 15.
> Hunk #1 succeeded at 1817 (offset -2 lines).
> checking file arch/x86/kvm/lapic.h
> checking file arch/x86/kvm/svm/svm.c
> Reversed (or previously applied) patch detected! Assume -R? [n] n
> Apply anyway? [n] y
> Hunk #1 FAILED at 79.
> Hunk #2 FAILED at 756.
> Hunk #3 FAILED at 831.
> Hunk #4 FAILED at 870.
> Hunk #5 FAILED at 907.
> Hunk #6 succeeded at 894 with fuzz 1 (offset -30 lines).
> Hunk #7 FAILED at 1002.
> Hunk #8 FAILED at 1020.
> Hunk #9 FAILED at 1103.
> Hunk #10 FAILED at 1121.
> Hunk #11 FAILED at 1309.
> Hunk #12 FAILED at 1330.
> Hunk #13 FAILED at 1456.
> Hunk #14 succeeded at 1455 (offset -35 lines).
> Hunk #15 succeeded at 1479 (offset -35 lines).
> Hunk #16 FAILED at 1555.
> Hunk #17 succeeded at 3220 (offset -40 lines).
> Hunk #18 FAILED at 4531.
> Hunk #19 succeeded at 5194 (offset -38 lines).
> Hunk #20 succeeded at 5352 (offset -38 lines).
> 14 out of 20 hunks FAILED
> checking file arch/x86/kvm/svm/svm.h
> checking file arch/x86/kvm/vmx/main.c
> checking file arch/x86/kvm/vmx/vmx.c
> Hunk #2 succeeded at 642 (offset -2 lines).
> Hunk #3 FAILED at 3943.
> Hunk #4 FAILED at 3985.
> Hunk #5 succeeded at 4086 (offset -1 lines).
> Hunk #6 succeeded at 7532 (offset 6 lines).
> Hunk #7 succeeded at 7812 (offset 6 lines).
> Hunk #8 succeeded at 7827 (offset 6 lines).
> 2 out of 8 hunks FAILED
> checking file arch/x86/kvm/vmx/vmx.h
> checking file arch/x86/kvm/vmx/x86_ops.h
> checking file arch/x86/kvm/x86.c
> Hunk #1 succeeded at 10837 (offset -3 lines).
>
> --
> Regards/Gruss,
> Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops
2024-12-06 15:23 ` Sean Christopherson
@ 2024-12-06 16:01 ` Borislav Petkov
0 siblings, 0 replies; 32+ messages in thread
From: Borislav Petkov @ 2024-12-06 16:01 UTC (permalink / raw)
To: Sean Christopherson
Cc: Aaron Lewis, kvm, pbonzini, jmattson, Xin Li, Dapeng Mi
On Fri, Dec 06, 2024 at 07:23:43AM -0800, Sean Christopherson wrote:
> Are you trying to apply this patch directly on kvm/next | kvm-x86/next? This is
> patch 12 of 15.
Oh, that's why. I thought it was a single, standalone patch, being called
0001-tmp. :-)
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
end of thread, other threads: [~2024-12-06 16:02 UTC | newest]
Thread overview: 32+ messages
2024-11-27 20:19 [PATCH 00/15] Unify MSR intercepts in x86 Aaron Lewis
2024-11-27 20:19 ` [PATCH 01/15] KVM: x86: Use non-atomic bit ops to manipulate "shadow" MSR intercepts Aaron Lewis
2024-11-27 20:38 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 02/15] KVM: SVM: Use non-atomic bit ops to manipulate MSR interception bitmaps Aaron Lewis
2024-11-27 20:19 ` [PATCH 03/15] KVM: SVM: Invert the polarity of the "shadow" " Aaron Lewis
2024-11-27 20:42 ` Sean Christopherson
2024-12-03 21:08 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 04/15] KVM: SVM: Track MSRPM as "unsigned long", not "u32" Aaron Lewis
2024-11-27 20:19 ` [PATCH 05/15] KVM: x86: SVM: Adopt VMX style MSR intercepts in SVM Aaron Lewis
2024-11-27 20:43 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 06/15] KVM: SVM: Disable intercepts for all direct access MSRs on MSR filter changes Aaron Lewis
2024-11-27 20:47 ` Sean Christopherson
2024-11-27 20:19 ` [PATCH 07/15] KVM: SVM: Delete old SVM MSR management code Aaron Lewis
2024-11-27 20:19 ` [PATCH 08/15] KVM: SVM: Pass through GHCB MSR if and only if VM is SEV-ES Aaron Lewis
2024-12-03 21:21 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 09/15] KVM: SVM: Drop "always" flag from list of possible passthrough MSRs Aaron Lewis
2024-12-03 21:26 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 10/15] KVM: SVM: Don't "NULL terminate" the " Aaron Lewis
2024-12-03 21:30 ` Tom Lendacky
2024-11-27 20:19 ` [PATCH 11/15] KVM: VMX: Make list of possible passthrough MSRs "const" Aaron Lewis
2024-11-27 20:19 ` [PATCH 12/15] KVM: x86: Track possible passthrough MSRs in kvm_x86_ops Aaron Lewis
2024-11-27 21:57 ` Sean Christopherson
2024-11-28 16:46 ` Borislav Petkov
2024-12-03 19:47 ` Sean Christopherson
2024-12-05 17:56 ` Borislav Petkov
2024-12-05 18:06 ` Borislav Petkov
2024-12-06 15:23 ` Sean Christopherson
2024-12-06 16:01 ` Borislav Petkov
2024-11-27 20:19 ` [PATCH 13/15] KVM: x86: Move ownership of passthrough MSR "shadow" to common x86 Aaron Lewis
2024-11-27 20:19 ` [PATCH 14/15] KVM: x86: Hoist SVM MSR intercepts to common x86 code Aaron Lewis
2024-11-27 20:19 ` [PATCH 15/15] KVM: x86: Hoist VMX " Aaron Lewis
2024-11-27 20:56 ` [PATCH 00/15] Unify MSR intercepts in x86 Sean Christopherson