* [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops
@ 2022-02-14 13:16 Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
` (4 more replies)
0 siblings, 5 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
This series is really two changes:
- patches 1 to 4 clean up optional kvm_x86_ops so that they are marked
in kvm-x86-ops.h and the non-optional ones WARN if used incorrectly.
As an additional outcome of the review, a few more uses of
static_call_cond are introduced.
- patch 5 allows a few kvm_x86_ops that return a value to be left NULL,
by using __static_call_return0.
v1->v2:
- use KVM_X86_OP_OPTIONAL and KVM_X86_OP_OPTIONAL_RET0
- mark load_eoi_exitmap and set_virtual_apic_mode as optional
- fix module compilation of KVM
Paolo Bonzini (5):
KVM: x86: use static_call_cond for optional callbacks
KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops
KVM: x86: warn on incorrectly NULL static calls
KVM: x86: make several APIC virtualization callbacks optional
KVM: x86: allow defining return-0 static calls
arch/x86/include/asm/kvm-x86-ops.h | 104 +++++++++++++++--------------
arch/x86/include/asm/kvm_host.h | 11 ++-
arch/x86/kvm/lapic.c | 24 +++----
arch/x86/kvm/svm/avic.c | 23 -------
arch/x86/kvm/svm/svm.c | 30 ---------
arch/x86/kvm/svm/svm.h | 1 -
arch/x86/kvm/x86.c | 16 ++---
kernel/static_call.c | 1 +
8 files changed, 79 insertions(+), 131 deletions(-)
--
2.31.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
@ 2022-02-14 13:16 ` Paolo Bonzini
2022-02-15 16:55 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops Paolo Bonzini
` (3 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
SVM implements neither update_emulated_instruction nor
set_apic_access_page_addr. Remove an "if" by calling them
with static_call_cond().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/x86.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eaa3b5b89c5e..a48c5004801c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8423,8 +8423,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
kvm_rip_write(vcpu, ctxt->eip);
if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
r = kvm_vcpu_do_singlestep(vcpu);
- if (kvm_x86_ops.update_emulated_instruction)
- static_call(kvm_x86_update_emulated_instruction)(vcpu);
+ static_call_cond(kvm_x86_update_emulated_instruction)(vcpu);
__kvm_set_rflags(vcpu, ctxt->eflags);
}
@@ -9793,10 +9792,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
if (!lapic_in_kernel(vcpu))
return;
- if (!kvm_x86_ops.set_apic_access_page_addr)
- return;
-
- static_call(kvm_x86_set_apic_access_page_addr)(vcpu);
+ static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
}
void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
--
2.31.1
* [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
@ 2022-02-14 13:16 ` Paolo Bonzini
2022-02-15 17:08 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
` (2 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
The original use of KVM_X86_OP_NULL, which was to mark calls
that do not follow a specific naming convention, is not in use
anymore. Instead, let's mark calls that are optional because
they are always invoked within conditionals or with static_call_cond.
Those that are _not_ optional, i.e. those that are defined with
KVM_X86_OP, must be defined by both vendor modules, or a NULL pointer
dereference is bound to happen at runtime.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 84 +++++++++++++++---------------
arch/x86/include/asm/kvm_host.h | 4 +-
arch/x86/kvm/x86.c | 2 +-
3 files changed, 44 insertions(+), 46 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 9e37dc3d8863..9415d9af204c 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -1,25 +1,23 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_NULL)
+#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_OPTIONAL)
BUILD_BUG_ON(1)
#endif
/*
- * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
+ * KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
* "static_call()"s. They are also intended for use when defining
- * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
- * functions that follow the [svm|vmx]_func_name convention.
- * KVM_X86_OP_NULL() can leave a NULL definition for the
- * case where there is no definition or a function name that
- * doesn't match the typical naming convention is supplied.
+ * the vmx/svm kvm_x86_ops. KVM_X86_OP_OPTIONAL() can be used for those
+ * functions that can have a NULL definition, for example if
+ * "static_call_cond()" will be used at the call sites.
*/
-KVM_X86_OP_NULL(hardware_enable)
-KVM_X86_OP_NULL(hardware_disable)
-KVM_X86_OP_NULL(hardware_unsetup)
-KVM_X86_OP_NULL(cpu_has_accelerated_tpr)
+KVM_X86_OP(hardware_enable)
+KVM_X86_OP(hardware_disable)
+KVM_X86_OP(hardware_unsetup)
+KVM_X86_OP(cpu_has_accelerated_tpr)
KVM_X86_OP(has_emulated_msr)
KVM_X86_OP(vcpu_after_set_cpuid)
KVM_X86_OP(vm_init)
-KVM_X86_OP_NULL(vm_destroy)
+KVM_X86_OP_OPTIONAL(vm_destroy)
KVM_X86_OP(vcpu_create)
KVM_X86_OP(vcpu_free)
KVM_X86_OP(vcpu_reset)
@@ -33,9 +31,9 @@ KVM_X86_OP(get_segment_base)
KVM_X86_OP(get_segment)
KVM_X86_OP(get_cpl)
KVM_X86_OP(set_segment)
-KVM_X86_OP_NULL(get_cs_db_l_bits)
+KVM_X86_OP(get_cs_db_l_bits)
KVM_X86_OP(set_cr0)
-KVM_X86_OP_NULL(post_set_cr3)
+KVM_X86_OP_OPTIONAL(post_set_cr3)
KVM_X86_OP(is_valid_cr4)
KVM_X86_OP(set_cr4)
KVM_X86_OP(set_efer)
@@ -51,15 +49,15 @@ KVM_X86_OP(set_rflags)
KVM_X86_OP(get_if_flag)
KVM_X86_OP(flush_tlb_all)
KVM_X86_OP(flush_tlb_current)
-KVM_X86_OP_NULL(tlb_remote_flush)
-KVM_X86_OP_NULL(tlb_remote_flush_with_range)
+KVM_X86_OP_OPTIONAL(tlb_remote_flush)
+KVM_X86_OP_OPTIONAL(tlb_remote_flush_with_range)
KVM_X86_OP(flush_tlb_gva)
KVM_X86_OP(flush_tlb_guest)
KVM_X86_OP(vcpu_pre_run)
KVM_X86_OP(vcpu_run)
-KVM_X86_OP_NULL(handle_exit)
-KVM_X86_OP_NULL(skip_emulated_instruction)
-KVM_X86_OP_NULL(update_emulated_instruction)
+KVM_X86_OP(handle_exit)
+KVM_X86_OP(skip_emulated_instruction)
+KVM_X86_OP_OPTIONAL(update_emulated_instruction)
KVM_X86_OP(set_interrupt_shadow)
KVM_X86_OP(get_interrupt_shadow)
KVM_X86_OP(patch_hypercall)
@@ -73,22 +71,22 @@ KVM_X86_OP(get_nmi_mask)
KVM_X86_OP(set_nmi_mask)
KVM_X86_OP(enable_nmi_window)
KVM_X86_OP(enable_irq_window)
-KVM_X86_OP(update_cr8_intercept)
+KVM_X86_OP_OPTIONAL(update_cr8_intercept)
KVM_X86_OP(check_apicv_inhibit_reasons)
KVM_X86_OP(refresh_apicv_exec_ctrl)
KVM_X86_OP(hwapic_irr_update)
KVM_X86_OP(hwapic_isr_update)
-KVM_X86_OP_NULL(guest_apic_has_interrupt)
+KVM_X86_OP_OPTIONAL(guest_apic_has_interrupt)
KVM_X86_OP(load_eoi_exitmap)
KVM_X86_OP(set_virtual_apic_mode)
-KVM_X86_OP_NULL(set_apic_access_page_addr)
+KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
KVM_X86_OP(deliver_interrupt)
-KVM_X86_OP_NULL(sync_pir_to_irr)
+KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
KVM_X86_OP(set_tss_addr)
KVM_X86_OP(set_identity_map_addr)
KVM_X86_OP(get_mt_mask)
KVM_X86_OP(load_mmu_pgd)
-KVM_X86_OP_NULL(has_wbinvd_exit)
+KVM_X86_OP(has_wbinvd_exit)
KVM_X86_OP(get_l2_tsc_offset)
KVM_X86_OP(get_l2_tsc_multiplier)
KVM_X86_OP(write_tsc_offset)
@@ -96,35 +94,35 @@ KVM_X86_OP(write_tsc_multiplier)
KVM_X86_OP(get_exit_info)
KVM_X86_OP(check_intercept)
KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP_NULL(request_immediate_exit)
+KVM_X86_OP(request_immediate_exit)
KVM_X86_OP(sched_in)
-KVM_X86_OP_NULL(update_cpu_dirty_logging)
-KVM_X86_OP_NULL(vcpu_blocking)
-KVM_X86_OP_NULL(vcpu_unblocking)
-KVM_X86_OP_NULL(pi_update_irte)
-KVM_X86_OP_NULL(pi_start_assignment)
-KVM_X86_OP_NULL(apicv_post_state_restore)
-KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
-KVM_X86_OP_NULL(set_hv_timer)
-KVM_X86_OP_NULL(cancel_hv_timer)
+KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
+KVM_X86_OP_OPTIONAL(vcpu_blocking)
+KVM_X86_OP_OPTIONAL(vcpu_unblocking)
+KVM_X86_OP_OPTIONAL(pi_update_irte)
+KVM_X86_OP_OPTIONAL(pi_start_assignment)
+KVM_X86_OP(apicv_post_state_restore)
+KVM_X86_OP_OPTIONAL(dy_apicv_has_pending_interrupt)
+KVM_X86_OP_OPTIONAL(set_hv_timer)
+KVM_X86_OP_OPTIONAL(cancel_hv_timer)
KVM_X86_OP(setup_mce)
KVM_X86_OP(smi_allowed)
KVM_X86_OP(enter_smm)
KVM_X86_OP(leave_smm)
KVM_X86_OP(enable_smi_window)
-KVM_X86_OP_NULL(mem_enc_ioctl)
-KVM_X86_OP_NULL(mem_enc_register_region)
-KVM_X86_OP_NULL(mem_enc_unregister_region)
-KVM_X86_OP_NULL(vm_copy_enc_context_from)
-KVM_X86_OP_NULL(vm_move_enc_context_from)
+KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
+KVM_X86_OP_OPTIONAL(mem_enc_register_region)
+KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
+KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
+KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
KVM_X86_OP(get_msr_feature)
KVM_X86_OP(can_emulate_instruction)
KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_NULL(enable_direct_tlbflush)
-KVM_X86_OP_NULL(migrate_timers)
+KVM_X86_OP_OPTIONAL(enable_direct_tlbflush)
+KVM_X86_OP_OPTIONAL(migrate_timers)
KVM_X86_OP(msr_filter_changed)
-KVM_X86_OP_NULL(complete_emulated_msr)
+KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
#undef KVM_X86_OP
-#undef KVM_X86_OP_NULL
+#undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 10815b672a26..e3f7d958c150 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1541,14 +1541,14 @@ extern struct kvm_x86_ops kvm_x86_ops;
#define KVM_X86_OP(func) \
DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL KVM_X86_OP
#include <asm/kvm-x86-ops.h>
static inline void kvm_ops_static_call_update(void)
{
#define KVM_X86_OP(func) \
static_call_update(kvm_x86_##func, kvm_x86_ops.func);
-#define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL KVM_X86_OP
#include <asm/kvm-x86-ops.h>
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a48c5004801c..b6f5c6454f11 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -130,7 +130,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
#define KVM_X86_OP(func) \
DEFINE_STATIC_CALL_NULL(kvm_x86_##func, \
*(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL KVM_X86_OP
#include <asm/kvm-x86-ops.h>
EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
--
2.31.1
* [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops Paolo Bonzini
@ 2022-02-14 13:16 ` Paolo Bonzini
2022-02-15 17:10 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
Use the newly corrected KVM_X86_OP annotations to warn about possible
NULL pointer dereferences as soon as the vendor module is loaded.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e3f7d958c150..5dce6fbd9ab6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1546,9 +1546,10 @@ extern struct kvm_x86_ops kvm_x86_ops;
static inline void kvm_ops_static_call_update(void)
{
-#define KVM_X86_OP(func) \
+#define KVM_X86_OP_OPTIONAL(func) \
static_call_update(kvm_x86_##func, kvm_x86_ops.func);
-#define KVM_X86_OP_OPTIONAL KVM_X86_OP
+#define KVM_X86_OP(func) \
+ WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_OPTIONAL(func)
#include <asm/kvm-x86-ops.h>
}
--
2.31.1
* [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
` (2 preceding siblings ...)
2022-02-14 13:16 ` [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
@ 2022-02-14 13:16 ` Paolo Bonzini
2022-02-15 17:12 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
SVM does not need most of them, and KVM does not require every vendor
module to support APIC virtualization, so mark these callbacks as
optional and delete the empty SVM implementations.
For consistency, apicv_post_state_restore is also marked as optional
and called with static_call_cond, even though it is implemented for
both Intel APICv and AMD AVIC.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 10 +++++-----
arch/x86/kvm/lapic.c | 24 ++++++++++--------------
arch/x86/kvm/svm/avic.c | 18 ------------------
arch/x86/kvm/svm/svm.c | 4 ----
arch/x86/kvm/svm/svm.h | 1 -
arch/x86/kvm/x86.c | 4 ++--
6 files changed, 17 insertions(+), 44 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 9415d9af204c..0a074354aaf7 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -74,11 +74,11 @@ KVM_X86_OP(enable_irq_window)
KVM_X86_OP_OPTIONAL(update_cr8_intercept)
KVM_X86_OP(check_apicv_inhibit_reasons)
KVM_X86_OP(refresh_apicv_exec_ctrl)
-KVM_X86_OP(hwapic_irr_update)
-KVM_X86_OP(hwapic_isr_update)
+KVM_X86_OP_OPTIONAL(hwapic_irr_update)
+KVM_X86_OP_OPTIONAL(hwapic_isr_update)
KVM_X86_OP_OPTIONAL(guest_apic_has_interrupt)
-KVM_X86_OP(load_eoi_exitmap)
-KVM_X86_OP(set_virtual_apic_mode)
+KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
+KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
KVM_X86_OP(deliver_interrupt)
KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
@@ -101,7 +101,7 @@ KVM_X86_OP_OPTIONAL(vcpu_blocking)
KVM_X86_OP_OPTIONAL(vcpu_unblocking)
KVM_X86_OP_OPTIONAL(pi_update_irte)
KVM_X86_OP_OPTIONAL(pi_start_assignment)
-KVM_X86_OP(apicv_post_state_restore)
+KVM_X86_OP_OPTIONAL(apicv_post_state_restore)
KVM_X86_OP_OPTIONAL(dy_apicv_has_pending_interrupt)
KVM_X86_OP_OPTIONAL(set_hv_timer)
KVM_X86_OP_OPTIONAL(cancel_hv_timer)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index dd4e2888c244..47f8606559a9 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -492,8 +492,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
if (unlikely(vcpu->arch.apicv_active)) {
/* need to update RVI */
kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
- static_call(kvm_x86_hwapic_irr_update)(vcpu,
- apic_find_highest_irr(apic));
+ static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
} else {
apic->irr_pending = false;
kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -523,7 +522,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
* just set SVI.
*/
if (unlikely(vcpu->arch.apicv_active))
- static_call(kvm_x86_hwapic_isr_update)(vcpu, vec);
+ static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, vec);
else {
++apic->isr_count;
BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -571,8 +570,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
* and must be left alone.
*/
if (unlikely(vcpu->arch.apicv_active))
- static_call(kvm_x86_hwapic_isr_update)(vcpu,
- apic_find_highest_isr(apic));
+ static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
else {
--apic->isr_count;
BUG_ON(apic->isr_count < 0);
@@ -2288,7 +2286,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
- static_call(kvm_x86_set_virtual_apic_mode)(vcpu);
+ static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu);
apic->base_address = apic->vcpu->arch.apic_base &
MSR_IA32_APICBASE_BASE;
@@ -2374,9 +2372,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
vcpu->arch.pv_eoi.msr_val = 0;
apic_update_ppr(apic);
if (vcpu->arch.apicv_active) {
- static_call(kvm_x86_apicv_post_state_restore)(vcpu);
- static_call(kvm_x86_hwapic_irr_update)(vcpu, -1);
- static_call(kvm_x86_hwapic_isr_update)(vcpu, -1);
+ static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
+ static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
+ static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, -1);
}
vcpu->arch.apic_arb_prio = 0;
@@ -2639,11 +2637,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
kvm_apic_update_apicv(vcpu);
apic->highest_isr_cache = -1;
if (vcpu->arch.apicv_active) {
- static_call(kvm_x86_apicv_post_state_restore)(vcpu);
- static_call(kvm_x86_hwapic_irr_update)(vcpu,
- apic_find_highest_irr(apic));
- static_call(kvm_x86_hwapic_isr_update)(vcpu,
- apic_find_highest_isr(apic));
+ static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
+ static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+ static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
}
kvm_make_request(KVM_REQ_EVENT, vcpu);
if (ioapic_in_kernel(vcpu->kvm))
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index abd0e664bf22..4245cb99b497 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -586,19 +586,6 @@ void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu)
avic_handle_ldr_update(vcpu);
}
-void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
-{
- return;
-}
-
-void avic_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
-{
-}
-
-void avic_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
-{
-}
-
static int avic_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
{
int ret = 0;
@@ -663,11 +650,6 @@ void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
avic_set_pi_irte_mode(vcpu, activated);
}
-void avic_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
-{
- return;
-}
-
bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
{
return false;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4243bb355db0..4fa385490f22 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4595,12 +4595,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.enable_nmi_window = svm_enable_nmi_window,
.enable_irq_window = svm_enable_irq_window,
.update_cr8_intercept = svm_update_cr8_intercept,
- .set_virtual_apic_mode = avic_set_virtual_apic_mode,
.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
.check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
- .load_eoi_exitmap = avic_load_eoi_exitmap,
- .hwapic_irr_update = avic_hwapic_irr_update,
- .hwapic_isr_update = avic_hwapic_isr_update,
.apicv_post_state_restore = avic_apicv_post_state_restore,
.set_tss_addr = svm_set_tss_addr,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c1f0cc4a9369..4f90d1f673b5 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -581,7 +581,6 @@ void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu);
void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu);
bool avic_check_apicv_inhibit_reasons(ulong bit);
-void avic_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
void avic_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
void avic_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b6f5c6454f11..c921d68fc244 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9765,11 +9765,11 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
bitmap_or((ulong *)eoi_exit_bitmap,
vcpu->arch.ioapic_handled_vectors,
to_hv_synic(vcpu)->vec_bitmap, 256);
- static_call(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
+ static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
return;
}
- static_call(kvm_x86_load_eoi_exitmap)(
+ static_call_cond(kvm_x86_load_eoi_exitmap)(
vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors);
}
--
2.31.1
* [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
` (3 preceding siblings ...)
2022-02-14 13:16 ` [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional Paolo Bonzini
@ 2022-02-14 13:16 ` Paolo Bonzini
2022-02-15 17:29 ` Sean Christopherson
4 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-14 13:16 UTC (permalink / raw)
To: linux-kernel, kvm; +Cc: seanjc
A few vendor callbacks are only used by VMX, but they return an integer
or bool value. Introduce KVM_X86_OP_RET0 for them: a NULL value in
struct kvm_x86_ops will be changed to __static_call_return0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm-x86-ops.h | 20 +++++++++++++-------
arch/x86/include/asm/kvm_host.h | 4 ++++
arch/x86/kvm/svm/avic.c | 5 -----
arch/x86/kvm/svm/svm.c | 26 --------------------------
arch/x86/kvm/x86.c | 2 +-
kernel/static_call.c | 1 +
6 files changed, 19 insertions(+), 39 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 0a074354aaf7..ad75ff5ac220 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -6,14 +6,19 @@ BUILD_BUG_ON(1)
/*
* KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
* "static_call()"s. They are also intended for use when defining
- * the vmx/svm kvm_x86_ops. KVM_X86_OP_OPTIONAL() can be used for those
+ * the vmx/svm kvm_x86_ops.
+ *
+ * KVM_X86_OP_OPTIONAL() can be used for those
* functions that can have a NULL definition, for example if
* "static_call_cond()" will be used at the call sites.
+ * KVM_X86_OP_OPTIONAL_RET0() can be used likewise to make
+ * a definition optional, but in this case the default will
+ * be __static_call_return0.
*/
KVM_X86_OP(hardware_enable)
KVM_X86_OP(hardware_disable)
KVM_X86_OP(hardware_unsetup)
-KVM_X86_OP(cpu_has_accelerated_tpr)
+KVM_X86_OP_OPTIONAL_RET0(cpu_has_accelerated_tpr)
KVM_X86_OP(has_emulated_msr)
KVM_X86_OP(vcpu_after_set_cpuid)
KVM_X86_OP(vm_init)
@@ -76,15 +81,15 @@ KVM_X86_OP(check_apicv_inhibit_reasons)
KVM_X86_OP(refresh_apicv_exec_ctrl)
KVM_X86_OP_OPTIONAL(hwapic_irr_update)
KVM_X86_OP_OPTIONAL(hwapic_isr_update)
-KVM_X86_OP_OPTIONAL(guest_apic_has_interrupt)
+KVM_X86_OP_OPTIONAL_RET0(guest_apic_has_interrupt)
KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
KVM_X86_OP(deliver_interrupt)
KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
-KVM_X86_OP(set_tss_addr)
-KVM_X86_OP(set_identity_map_addr)
-KVM_X86_OP(get_mt_mask)
+KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
+KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
+KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
KVM_X86_OP(load_mmu_pgd)
KVM_X86_OP(has_wbinvd_exit)
KVM_X86_OP(get_l2_tsc_offset)
@@ -102,7 +107,7 @@ KVM_X86_OP_OPTIONAL(vcpu_unblocking)
KVM_X86_OP_OPTIONAL(pi_update_irte)
KVM_X86_OP_OPTIONAL(pi_start_assignment)
KVM_X86_OP_OPTIONAL(apicv_post_state_restore)
-KVM_X86_OP_OPTIONAL(dy_apicv_has_pending_interrupt)
+KVM_X86_OP_OPTIONAL_RET0(dy_apicv_has_pending_interrupt)
KVM_X86_OP_OPTIONAL(set_hv_timer)
KVM_X86_OP_OPTIONAL(cancel_hv_timer)
KVM_X86_OP(setup_mce)
@@ -126,3 +131,4 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
#undef KVM_X86_OP
#undef KVM_X86_OP_OPTIONAL
+#undef KVM_X86_OP_OPTIONAL_RET0
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5dce6fbd9ab6..a0d2925b6651 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1542,12 +1542,16 @@ extern struct kvm_x86_ops kvm_x86_ops;
#define KVM_X86_OP(func) \
DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
#define KVM_X86_OP_OPTIONAL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
#include <asm/kvm-x86-ops.h>
static inline void kvm_ops_static_call_update(void)
{
#define KVM_X86_OP_OPTIONAL(func) \
static_call_update(kvm_x86_##func, kvm_x86_ops.func);
+#define KVM_X86_OP_OPTIONAL_RET0(func) \
+ static_call_update(kvm_x86_##func, kvm_x86_ops.func ? : \
+ (void *) __static_call_return0);
#define KVM_X86_OP(func) \
WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_OPTIONAL(func)
#include <asm/kvm-x86-ops.h>
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 4245cb99b497..d4fa8c4f3a9a 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -650,11 +650,6 @@ void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
avic_set_pi_irte_mode(vcpu, activated);
}
-bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
-{
- return false;
-}
-
static void svm_ir_list_del(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
{
unsigned long flags;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4fa385490f22..7038c76fa841 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3528,16 +3528,6 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
}
-static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
-{
- return 0;
-}
-
-static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
-{
- return 0;
-}
-
static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3912,11 +3902,6 @@ static int __init svm_check_processor_compat(void)
return 0;
}
-static bool svm_cpu_has_accelerated_tpr(void)
-{
- return false;
-}
-
/*
* The kvm parameter can be NULL (module initialization, or invocation before
* VM creation). Be sure to check the kvm parameter before using it.
@@ -3939,11 +3924,6 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
return true;
}
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
-{
- return 0;
-}
-
static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -4529,7 +4509,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.hardware_unsetup = svm_hardware_unsetup,
.hardware_enable = svm_hardware_enable,
.hardware_disable = svm_hardware_disable,
- .cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
.has_emulated_msr = svm_has_emulated_msr,
.vcpu_create = svm_vcpu_create,
@@ -4599,10 +4578,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
.apicv_post_state_restore = avic_apicv_post_state_restore,
- .set_tss_addr = svm_set_tss_addr,
- .set_identity_map_addr = svm_set_identity_map_addr,
- .get_mt_mask = svm_get_mt_mask,
-
.get_exit_info = svm_get_exit_info,
.vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
@@ -4627,7 +4602,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.nested_ops = &svm_nested_ops,
.deliver_interrupt = svm_deliver_interrupt,
- .dy_apicv_has_pending_interrupt = avic_dy_apicv_has_pending_interrupt,
.pi_update_irte = avic_pi_update_irte,
.setup_mce = svm_setup_mce,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c921d68fc244..9a9006226501 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -131,6 +131,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
DEFINE_STATIC_CALL_NULL(kvm_x86_##func, \
*(((struct kvm_x86_ops *)0)->func));
#define KVM_X86_OP_OPTIONAL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
#include <asm/kvm-x86-ops.h>
EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
@@ -12018,7 +12019,6 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
{
return (is_guest_mode(vcpu) &&
- kvm_x86_ops.guest_apic_has_interrupt &&
static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
}
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 43ba0b1e0edb..76abd46fe6ee 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -503,6 +503,7 @@ long __static_call_return0(void)
{
return 0;
}
+EXPORT_SYMBOL_GPL(__static_call_return0);
#ifdef CONFIG_STATIC_CALL_SELFTEST
--
2.31.1
* Re: [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks
2022-02-14 13:16 ` [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
@ 2022-02-15 16:55 ` Sean Christopherson
0 siblings, 0 replies; 13+ messages in thread
From: Sean Christopherson @ 2022-02-15 16:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm
On Mon, Feb 14, 2022, Paolo Bonzini wrote:
> SVM implements neither update_emulated_instruction nor
> set_apic_access_page_addr. Remove an "if" by calling them
> with static_call_cond().
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
* Re: [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops
2022-02-14 13:16 ` [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops Paolo Bonzini
@ 2022-02-15 17:08 ` Sean Christopherson
2022-02-15 17:09 ` Paolo Bonzini
0 siblings, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2022-02-15 17:08 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm
On Mon, Feb 14, 2022, Paolo Bonzini wrote:
> The original use of KVM_X86_OP_NULL, which was to mark calls
> that do not follow a specific naming convention, is not in use
> anymore. Instead, let's mark calls that are optional because
> they are always invoked within conditionals or with static_call_cond.
> Those that are _not_, i.e. those that are defined with KVM_X86_OP,
> must be defined by both vendor modules or some kind of NULL pointer
> dereference is bound to happen at runtime.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/kvm-x86-ops.h | 84 +++++++++++++++---------------
> arch/x86/include/asm/kvm_host.h | 4 +-
> arch/x86/kvm/x86.c | 2 +-
> 3 files changed, 44 insertions(+), 46 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 9e37dc3d8863..9415d9af204c 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -1,25 +1,23 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> -#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_NULL)
> +#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_OPTIONAL)
> BUILD_BUG_ON(1)
> #endif
>
> /*
> - * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
> + * KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
> * "static_call()"s. They are also intended for use when defining
> - * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
> - * functions that follow the [svm|vmx]_func_name convention.
> - * KVM_X86_OP_NULL() can leave a NULL definition for the
> - * case where there is no definition or a function name that
> - * doesn't match the typical naming convention is supplied.
> + * the vmx/svm kvm_x86_ops.
But assuming your veto of actually using kvm-x86-ops to fill vendor ops isn't
overridden, they're _not_ "intended for use when defining the vmx/svm kvm_x86_ops."
> KVM_X86_OP_OPTIONAL() can be used for those
> + * functions that can have a NULL definition, for example if
> + * "static_call_cond()" will be used at the call sites.
> */
* Re: [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops
2022-02-15 17:08 ` Sean Christopherson
@ 2022-02-15 17:09 ` Paolo Bonzini
0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-15 17:09 UTC (permalink / raw)
To: Sean Christopherson; +Cc: linux-kernel, kvm
On 2/15/22 18:08, Sean Christopherson wrote:
>> - * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
>> + * KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
>> * "static_call()"s. They are also intended for use when defining
>> - * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
>> - * functions that follow the [svm|vmx]_func_name convention.
>> - * KVM_X86_OP_NULL() can leave a NULL definition for the
>> - * case where there is no definition or a function name that
>> - * doesn't match the typical naming convention is supplied.
>> + * the vmx/svm kvm_x86_ops.
> But assuming your veto of actually using kvm-x86-ops to fill vendor ops isn't
> overridden, they're _not_ "intended for use when defining the vmx/svm kvm_x86_ops."
True, and the original veto was actually how KVM_X86_OP_NULL() became
unused.
Paolo
* Re: [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls
2022-02-14 13:16 ` [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
@ 2022-02-15 17:10 ` Sean Christopherson
0 siblings, 0 replies; 13+ messages in thread
From: Sean Christopherson @ 2022-02-15 17:10 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm
For the shortlog, it's not the "static call" that's incorrectly NULL, it's the
kvm_x86_ops.func that's wrong. E.g. yank out static calls and KVM could still
keep the WARN.
On Mon, Feb 14, 2022, Paolo Bonzini wrote:
> Use the newly corrected KVM_X86_OP annotations to warn about possible
> NULL pointer dereferences as soon as the vendor module is loaded.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/kvm_host.h | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e3f7d958c150..5dce6fbd9ab6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1546,9 +1546,10 @@ extern struct kvm_x86_ops kvm_x86_ops;
>
> static inline void kvm_ops_static_call_update(void)
> {
> -#define KVM_X86_OP(func) \
> +#define KVM_X86_OP_OPTIONAL(func) \
> static_call_update(kvm_x86_##func, kvm_x86_ops.func);
> -#define KVM_X86_OP_OPTIONAL KVM_X86_OP
> +#define KVM_X86_OP(func) \
> + WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_OPTIONAL(func)
As before, I'd prefer that we not have a KVM_X86_OP => KVM_X86_OP_OPTIONAL
dependency and instead have a common __KVM_X86_OP for this one case.
> #include <asm/kvm-x86-ops.h>
> }
>
> --
> 2.31.1
>
>
* Re: [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional
2022-02-14 13:16 ` [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional Paolo Bonzini
@ 2022-02-15 17:12 ` Sean Christopherson
0 siblings, 0 replies; 13+ messages in thread
From: Sean Christopherson @ 2022-02-15 17:12 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm
On Mon, Feb 14, 2022, Paolo Bonzini wrote:
> SVM does not need most of them, and a vendor module need not support
> APIC virtualization in order to be supported by KVM, so mark these
> callbacks as optional and delete the SVM implementation.
>
> For consistency, apicv_post_state_restore is also marked as optional
> and called with static_call_cond, even though it is implemented for
> both Intel APICv and AMD AVIC.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
Reviewed-by: Sean Christopherson <seanjc@google.com>
* Re: [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls
2022-02-14 13:16 ` [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
@ 2022-02-15 17:29 ` Sean Christopherson
2022-02-15 18:07 ` Paolo Bonzini
0 siblings, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2022-02-15 17:29 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: linux-kernel, kvm
On Mon, Feb 14, 2022, Paolo Bonzini wrote:
> A few vendor callbacks are only used by VMX, but they return an integer
> or bool value. Introduce KVM_X86_OP_RET0 for them: a NULL value in
s/KVM_X86_OP_RET0/KVM_X86_OP_OPTIONAL_RET0
And maybe "NULL func" instead of "NULL value", since some members of kvm_x86_ops
hold a value, not a func.
> struct kvm_x86_ops will be changed to __static_call_return0.
This implies kvm_x86_ops itself is changed, which is incorrect. "will be patched
to __static_call_return0() when updating static calls" or so.
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/kvm-x86-ops.h | 20 +++++++++++++-------
> arch/x86/include/asm/kvm_host.h | 4 ++++
> arch/x86/kvm/svm/avic.c | 5 -----
> arch/x86/kvm/svm/svm.c | 26 --------------------------
> arch/x86/kvm/x86.c | 2 +-
> kernel/static_call.c | 1 +
> 6 files changed, 19 insertions(+), 39 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 0a074354aaf7..ad75ff5ac220 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -6,14 +6,19 @@ BUILD_BUG_ON(1)
> /*
> * KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
> * "static_call()"s. They are also intended for use when defining
> - * the vmx/svm kvm_x86_ops. KVM_X86_OP_OPTIONAL() can be used for those
> + * the vmx/svm kvm_x86_ops.
> + *
> + * KVM_X86_OP_OPTIONAL() can be used for those
> * functions that can have a NULL definition, for example if
> * "static_call_cond()" will be used at the call sites.
> + * KVM_X86_OP_OPTIONAL_RET0() can be used likewise to make
> + * a definition optional, but in this case the default will
ERROR: trailing whitespace
#35: FILE: arch/x86/include/asm/kvm-x86-ops.h:15:
+ * a definition optional, but in this case the default will $
> + * be __static_call_return0.
Uber nit, __static_call_return0() to make it clear that that's a function, not a
magic return value (though arguably it's that too).
> */
> KVM_X86_OP(hardware_enable)
> KVM_X86_OP(hardware_disable)
> KVM_X86_OP(hardware_unsetup)
> -KVM_X86_OP(cpu_has_accelerated_tpr)
> +KVM_X86_OP_OPTIONAL_RET0(cpu_has_accelerated_tpr)
Can we instead just remove this helper entirely and return '1' unconditionally
from KVM_CAP_VAPIC?
The usage appears to be wrong: this will return '0' for VMX, '1' for SVM.

	case KVM_CAP_VAPIC:
		r = !static_call(kvm_x86_cpu_has_accelerated_tpr)();
		break;
Furthermore, our uapi says:
/* Available with KVM_CAP_VAPIC */
#define KVM_TPR_ACCESS_REPORTING _IOWR(KVMIO, 0x92, struct kvm_tpr_access_ctl)
/* Available with KVM_CAP_VAPIC */
#define KVM_SET_VAPIC_ADDR _IOW(KVMIO, 0x93, struct kvm_vapic_addr)
But neither of those check cpu_has_accelerated_tpr(). QEMU doesn't check the
cap, and AFAICT neither does our VMM.
* Re: [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls
2022-02-15 17:29 ` Sean Christopherson
@ 2022-02-15 18:07 ` Paolo Bonzini
0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2022-02-15 18:07 UTC (permalink / raw)
To: Sean Christopherson; +Cc: linux-kernel, kvm
On 2/15/22 18:29, Sean Christopherson wrote:
> s/KVM_X86_OP_RET0/KVM_X86_OP_OPTIONAL_RET0
>
> And maybe "NULL func" instead of "NULL value", since some members of kvm_x86_ops
> hold a value, not a func.
>
>> struct kvm_x86_ops will be changed to __static_call_return0.
> This implies kvm_x86_ops itself is changed, which is incorrect. "will be patched
> to __static_call_return0() when updating static calls" or so.
>
Very good point, thanks.
Paolo
end of thread, other threads:[~2022-02-15 18:07 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-14 13:16 [PATCH v2 0/5] kvm: x86: better handling of optional kvm_x86_ops Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
2022-02-15 16:55 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 2/5] KVM: x86: remove KVM_X86_OP_NULL and mark optional kvm_x86_ops Paolo Bonzini
2022-02-15 17:08 ` Sean Christopherson
2022-02-15 17:09 ` Paolo Bonzini
2022-02-14 13:16 ` [PATCH v2 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
2022-02-15 17:10 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 4/5] KVM: x86: make several APIC virtualization callbacks optional Paolo Bonzini
2022-02-15 17:12 ` Sean Christopherson
2022-02-14 13:16 ` [PATCH v2 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
2022-02-15 17:29 ` Sean Christopherson
2022-02-15 18:07 ` Paolo Bonzini