kvm.vger.kernel.org archive mirror
* [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops
@ 2022-02-02 18:18 Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

This series is really two changes:

- patches 1 to 4 clean up NULLable kvm_x86_ops so that they are marked
  in kvm-x86-ops.h, and the non-NULLable ones WARN if used incorrectly.
  As an additional outcome of the review, a few more uses of
  static_call_cond are introduced.

- patch 5 allows a few kvm_x86_ops that return a value to be NULL, by
  using __static_call_return0.

Paolo Bonzini (5):
  KVM: x86: use static_call_cond for optional callbacks
  KVM: x86: mark NULL-able kvm_x86_ops
  KVM: x86: warn on incorrectly NULL static calls
  KVM: x86: change hwapic_{irr,isr}_update to NULLable calls
  KVM: x86: allow defining return-0 static calls

 arch/x86/include/asm/kvm-x86-ops.h | 45 +++++++++++++++---------------
 arch/x86/include/asm/kvm_host.h    |  9 ++++--
 arch/x86/kvm/lapic.c               | 22 ++++++---------
 arch/x86/kvm/svm/avic.c            | 13 ---------
 arch/x86/kvm/svm/svm.c             | 28 -------------------
 arch/x86/kvm/x86.c                 | 10 ++-----
 6 files changed, 41 insertions(+), 86 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5] KVM: x86: use static_call_cond for optional callbacks
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
@ 2022-02-02 18:18 ` Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 2/5] KVM: x86: mark NULL-able kvm_x86_ops Paolo Bonzini
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

SVM implements neither update_emulated_instruction nor
set_apic_access_page_addr.  Remove the explicit NULL checks at the
call sites by invoking these callbacks through static_call_cond().
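For readers unfamiliar with static calls, the semantics of this change can be
modeled in plain C.  This is a hypothetical userspace sketch, not the real
kernel macros (which patch the call instruction in place); the struct, the
names, and the static_call_cond_model macro are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* A vendor op that, like SVM's update_emulated_instruction, may be NULL. */
struct x86_ops_model {
	void (*update_emulated_instruction)(int *state);
};

static struct x86_ops_model ops; /* .update_emulated_instruction == NULL */

/* Before the patch: an open-coded NULL check at every call site. */
static void call_site_before(int *state)
{
	if (ops.update_emulated_instruction)
		ops.update_emulated_instruction(state);
}

/* After the patch: the check is folded into the call macro itself,
 * so a NULL op simply degrades to a no-op. */
#define static_call_cond_model(op, arg) \
	do { if (ops.op) ops.op(arg); } while (0)

static void call_site_after(int *state)
{
	static_call_cond_model(update_emulated_instruction, state);
}
```

Both call sites behave identically on a NULL op; the kernel version
additionally avoids the runtime load-and-test by patching the call site
itself.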

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 862d654caedf..a527cffd0a2b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8421,8 +8421,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			kvm_rip_write(vcpu, ctxt->eip);
 			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 				r = kvm_vcpu_do_singlestep(vcpu);
-			if (kvm_x86_ops.update_emulated_instruction)
-				static_call(kvm_x86_update_emulated_instruction)(vcpu);
+			static_call_cond(kvm_x86_update_emulated_instruction)(vcpu);
 			__kvm_set_rflags(vcpu, ctxt->eflags);
 		}
 
@@ -9838,10 +9837,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	if (!lapic_in_kernel(vcpu))
 		return;
 
-	if (!kvm_x86_ops.set_apic_access_page_addr)
-		return;
-
-	static_call(kvm_x86_set_apic_access_page_addr)(vcpu);
+	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
 }
 
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
-- 
2.31.1




* [PATCH 2/5] KVM: x86: mark NULL-able kvm_x86_ops
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
@ 2022-02-02 18:18 ` Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

The original use of KVM_X86_OP_NULL, which was to mark calls
that do not follow a specific naming convention, is no longer
needed.  Repurpose it to identify calls that are invoked within
conditionals or with static_call_cond().  Those that are _not_,
i.e. those that are defined with KVM_X86_OP, must be defined by
both vendor modules, otherwise a NULL pointer dereference is
bound to happen at runtime.

In the case of apicv_post_state_restore, rather than changing
the kvm-x86-ops.h declaration, I decided to use static_call_cond();
there is no absolute requirement for vendor modules to support
APIC virtualization.
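The mechanism behind kvm-x86-ops.h is an X-macro: the header is included
several times with KVM_X86_OP and KVM_X86_OP_NULL redefined by each consumer.
Here is a hypothetical two-op miniature (all names invented for illustration)
showing how the same list can both declare the slots and audit only the
mandatory ones:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for kvm-x86-ops.h: one mandatory op, one optional op. */
#define FOR_EACH_OP(OP, OP_NULL) \
	OP(vm_init)              \
	OP_NULL(post_set_cr3)

/* First expansion: declare the function-pointer slots. */
struct mini_ops {
#define DECLARE_SLOT(f) void (*f)(void);
	FOR_EACH_OP(DECLARE_SLOT, DECLARE_SLOT)
#undef DECLARE_SLOT
};

static int missing_mandatory;

/* Second expansion: only KVM_X86_OP-style entries are checked for NULL,
 * mirroring the distinction this patch establishes. */
static void audit_ops(const struct mini_ops *o)
{
#define CHECK_MANDATORY(f) if (!o->f) missing_mandatory++;
#define CHECK_OPTIONAL(f)
	FOR_EACH_OP(CHECK_MANDATORY, CHECK_OPTIONAL)
#undef CHECK_MANDATORY
#undef CHECK_OPTIONAL
}
```

With an all-NULL ops table, only the KVM_X86_OP entry is flagged; the
KVM_X86_OP_NULL entry is legitimately absent.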

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 30 ++++++++++++++----------------
 arch/x86/kvm/lapic.c               |  4 ++--
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 9e37dc3d8863..a842f10f5778 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -6,16 +6,14 @@ BUILD_BUG_ON(1)
 /*
  * KVM_X86_OP() and KVM_X86_OP_NULL() are used to help generate
  * "static_call()"s. They are also intended for use when defining
- * the vmx/svm kvm_x86_ops. KVM_X86_OP() can be used for those
- * functions that follow the [svm|vmx]_func_name convention.
- * KVM_X86_OP_NULL() can leave a NULL definition for the
- * case where there is no definition or a function name that
- * doesn't match the typical naming convention is supplied.
+ * the vmx/svm kvm_x86_ops. KVM_X86_OP_NULL() can be used for those
+ * functions that can have a NULL definition, for example if
+ * "static_call_cond()" will be used at the call sites.
  */
-KVM_X86_OP_NULL(hardware_enable)
-KVM_X86_OP_NULL(hardware_disable)
-KVM_X86_OP_NULL(hardware_unsetup)
-KVM_X86_OP_NULL(cpu_has_accelerated_tpr)
+KVM_X86_OP(hardware_enable)
+KVM_X86_OP(hardware_disable)
+KVM_X86_OP(hardware_unsetup)
+KVM_X86_OP(cpu_has_accelerated_tpr)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(vm_init)
@@ -33,7 +31,7 @@ KVM_X86_OP(get_segment_base)
 KVM_X86_OP(get_segment)
 KVM_X86_OP(get_cpl)
 KVM_X86_OP(set_segment)
-KVM_X86_OP_NULL(get_cs_db_l_bits)
+KVM_X86_OP(get_cs_db_l_bits)
 KVM_X86_OP(set_cr0)
 KVM_X86_OP_NULL(post_set_cr3)
 KVM_X86_OP(is_valid_cr4)
@@ -57,8 +55,8 @@ KVM_X86_OP(flush_tlb_gva)
 KVM_X86_OP(flush_tlb_guest)
 KVM_X86_OP(vcpu_pre_run)
 KVM_X86_OP(vcpu_run)
-KVM_X86_OP_NULL(handle_exit)
-KVM_X86_OP_NULL(skip_emulated_instruction)
+KVM_X86_OP(handle_exit)
+KVM_X86_OP(skip_emulated_instruction)
 KVM_X86_OP_NULL(update_emulated_instruction)
 KVM_X86_OP(set_interrupt_shadow)
 KVM_X86_OP(get_interrupt_shadow)
@@ -73,7 +71,7 @@ KVM_X86_OP(get_nmi_mask)
 KVM_X86_OP(set_nmi_mask)
 KVM_X86_OP(enable_nmi_window)
 KVM_X86_OP(enable_irq_window)
-KVM_X86_OP(update_cr8_intercept)
+KVM_X86_OP_NULL(update_cr8_intercept)
 KVM_X86_OP(check_apicv_inhibit_reasons)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
 KVM_X86_OP(hwapic_irr_update)
@@ -88,7 +86,7 @@ KVM_X86_OP(set_tss_addr)
 KVM_X86_OP(set_identity_map_addr)
 KVM_X86_OP(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
-KVM_X86_OP_NULL(has_wbinvd_exit)
+KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)
 KVM_X86_OP(write_tsc_offset)
@@ -96,7 +94,7 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP_NULL(request_immediate_exit)
+KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
 KVM_X86_OP_NULL(update_cpu_dirty_logging)
 KVM_X86_OP_NULL(vcpu_blocking)
@@ -123,7 +121,7 @@ KVM_X86_OP(apic_init_signal_blocked)
 KVM_X86_OP_NULL(enable_direct_tlbflush)
 KVM_X86_OP_NULL(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
-KVM_X86_OP_NULL(complete_emulated_msr)
+KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 
 #undef KVM_X86_OP
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 0da7d0960fcb..09bbb6a01c1d 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2369,7 +2369,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.pv_eoi.msr_val = 0;
 	apic_update_ppr(apic);
 	if (vcpu->arch.apicv_active) {
-		static_call(kvm_x86_apicv_post_state_restore)(vcpu);
+		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
 		static_call(kvm_x86_hwapic_irr_update)(vcpu, -1);
 		static_call(kvm_x86_hwapic_isr_update)(vcpu, -1);
 	}
@@ -2634,7 +2634,7 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	kvm_apic_update_apicv(vcpu);
 	apic->highest_isr_cache = -1;
 	if (vcpu->arch.apicv_active) {
-		static_call(kvm_x86_apicv_post_state_restore)(vcpu);
+		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
 		static_call(kvm_x86_hwapic_irr_update)(vcpu,
 				apic_find_highest_irr(apic));
 		static_call(kvm_x86_hwapic_isr_update)(vcpu,
-- 
2.31.1




* [PATCH 3/5] KVM: x86: warn on incorrectly NULL static calls
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 1/5] KVM: x86: use static_call_cond for optional callbacks Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 2/5] KVM: x86: mark NULL-able kvm_x86_ops Paolo Bonzini
@ 2022-02-02 18:18 ` Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 4/5] KVM: x86: change hwapic_{irr,isr}_update to NULLable calls Paolo Bonzini
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

Use the newly corrected KVM_X86_OP annotations to warn about possible
NULL pointer dereferences as soon as the vendor module is loaded.
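A hypothetical userspace model of the effect: with the annotations from the
previous patch in place, a NULL mandatory op can be flagged once, at
ops-update time, instead of crashing at the first call.  WARN_ON is modeled
here with a counter; the struct and names are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

struct vendor_ops_model {
	void (*mandatory_op)(void); /* KVM_X86_OP: must be implemented */
	void (*optional_op)(void);  /* KVM_X86_OP_NULL: may stay NULL */
};

static int warn_count;
#define WARN_ON_MODEL(cond) do { if (cond) warn_count++; } while (0)

/* Mirrors kvm_ops_static_call_update(): complain about NULL mandatory
 * ops as soon as the vendor module installs its ops table. */
static void ops_update_model(const struct vendor_ops_model *o)
{
	WARN_ON_MODEL(!o->mandatory_op);
	/* optional_op is deliberately unchecked */
}
```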

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c371ee7e45f7..61faeb57889c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1544,9 +1544,10 @@ extern struct kvm_x86_ops kvm_x86_ops;
 
 static inline void kvm_ops_static_call_update(void)
 {
-#define KVM_X86_OP(func) \
+#define KVM_X86_OP_NULL(func) \
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
-#define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP(func) \
+	WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_NULL(func)
 #include <asm/kvm-x86-ops.h>
 }
 
-- 
2.31.1




* [PATCH 4/5] KVM: x86: change hwapic_{irr,isr}_update to NULLable calls
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
                   ` (2 preceding siblings ...)
  2022-02-02 18:18 ` [PATCH 3/5] KVM: x86: warn on incorrectly NULL static calls Paolo Bonzini
@ 2022-02-02 18:18 ` Paolo Bonzini
  2022-02-02 18:18 ` [PATCH 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
  2022-02-08  0:41 ` [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Sean Christopherson
  5 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

SVM's implementations of hwapic_irr_update and hwapic_isr_update are
empty stubs; drop them and mark the callbacks as optional.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  4 ++--
 arch/x86/kvm/lapic.c               | 18 +++++++-----------
 arch/x86/kvm/svm/avic.c            |  8 --------
 arch/x86/kvm/svm/svm.c             |  2 --
 4 files changed, 9 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index a842f10f5778..843bd9efd2ae 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -74,8 +74,8 @@ KVM_X86_OP(enable_irq_window)
 KVM_X86_OP_NULL(update_cr8_intercept)
 KVM_X86_OP(check_apicv_inhibit_reasons)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
-KVM_X86_OP(hwapic_irr_update)
-KVM_X86_OP(hwapic_isr_update)
+KVM_X86_OP_NULL(hwapic_irr_update)
+KVM_X86_OP_NULL(hwapic_isr_update)
 KVM_X86_OP_NULL(guest_apic_has_interrupt)
 KVM_X86_OP(load_eoi_exitmap)
 KVM_X86_OP(set_virtual_apic_mode)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 09bbb6a01c1d..fd10dd070d26 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -492,8 +492,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
 	if (unlikely(vcpu->arch.apicv_active)) {
 		/* need to update RVI */
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		static_call(kvm_x86_hwapic_irr_update)(vcpu,
-				apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
 	} else {
 		apic->irr_pending = false;
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -523,7 +522,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
 	 * just set SVI.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call(kvm_x86_hwapic_isr_update)(vcpu, vec);
+		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, vec);
 	else {
 		++apic->isr_count;
 		BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -571,8 +570,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
 	 * and must be left alone.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call(kvm_x86_hwapic_isr_update)(vcpu,
-						apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
 	else {
 		--apic->isr_count;
 		BUG_ON(apic->isr_count < 0);
@@ -2370,8 +2368,8 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 	apic_update_ppr(apic);
 	if (vcpu->arch.apicv_active) {
 		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call(kvm_x86_hwapic_irr_update)(vcpu, -1);
-		static_call(kvm_x86_hwapic_isr_update)(vcpu, -1);
+		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
+		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, -1);
 	}
 
 	vcpu->arch.apic_arb_prio = 0;
@@ -2635,10 +2633,8 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	apic->highest_isr_cache = -1;
 	if (vcpu->arch.apicv_active) {
 		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call(kvm_x86_hwapic_irr_update)(vcpu,
-				apic_find_highest_irr(apic));
-		static_call(kvm_x86_hwapic_isr_update)(vcpu,
-				apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
 	}
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	if (ioapic_in_kernel(vcpu->kvm))
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 99f907ec5aa8..b49ee6f34fe7 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -592,14 +592,6 @@ void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 	return;
 }
 
-void avic_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
-{
-}
-
-void avic_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
-{
-}
-
 static int avic_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
 {
 	int ret = 0;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7f70f456a5a5..ab50d73b1e2e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4540,8 +4540,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
 	.check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
 	.load_eoi_exitmap = avic_load_eoi_exitmap,
-	.hwapic_irr_update = avic_hwapic_irr_update,
-	.hwapic_isr_update = avic_hwapic_isr_update,
 	.apicv_post_state_restore = avic_apicv_post_state_restore,
 
 	.set_tss_addr = svm_set_tss_addr,
-- 
2.31.1




* [PATCH 5/5] KVM: x86: allow defining return-0 static calls
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
                   ` (3 preceding siblings ...)
  2022-02-02 18:18 ` [PATCH 4/5] KVM: x86: change hwapic_{irr,isr}_update to NULLable calls Paolo Bonzini
@ 2022-02-02 18:18 ` Paolo Bonzini
  2022-02-03 18:40   ` Paolo Bonzini
  2022-02-08  0:41 ` [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Sean Christopherson
  5 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-02 18:18 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc

A few vendor callbacks are implemented only by VMX, but they return an
integer or bool value, so static_call_cond() cannot be used for them.
Introduce KVM_X86_OP_RET0 instead: a NULL value in struct kvm_x86_ops
is replaced with __static_call_return0.
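The idea can be modeled with plain function pointers.  This is a hypothetical
sketch with a simplified signature; the real __static_call_return0 is a stub
provided by the static-call machinery, and get_mt_mask actually takes vcpu,
gfn and is_mmio arguments:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for __static_call_return0. */
static long return0_stub(void) { return 0; }

struct ret0_ops_model {
	long (*get_mt_mask)(void); /* KVM_X86_OP_RET0: may be NULL */
};

static long (*resolved_get_mt_mask)(void);

/* Mirrors the KVM_X86_OP_RET0 branch of kvm_ops_static_call_update():
 * a NULL op resolves to a stub returning 0, so the call site needs no
 * NULL check and SVM needs no boilerplate "return 0" implementation. */
static void ops_update_ret0_model(const struct ret0_ops_model *o)
{
	resolved_get_mt_mask = o->get_mt_mask ? o->get_mt_mask : return0_stub;
}
```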

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 13 +++++++------
 arch/x86/include/asm/kvm_host.h    |  4 ++++
 arch/x86/kvm/svm/avic.c            |  5 -----
 arch/x86/kvm/svm/svm.c             | 26 --------------------------
 arch/x86/kvm/x86.c                 |  2 +-
 5 files changed, 12 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 843bd9efd2ae..89fa5dd21f34 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -13,7 +13,7 @@ BUILD_BUG_ON(1)
 KVM_X86_OP(hardware_enable)
 KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)
-KVM_X86_OP(cpu_has_accelerated_tpr)
+KVM_X86_OP_RET0(cpu_has_accelerated_tpr)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(vm_init)
@@ -76,15 +76,15 @@ KVM_X86_OP(check_apicv_inhibit_reasons)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
 KVM_X86_OP_NULL(hwapic_irr_update)
 KVM_X86_OP_NULL(hwapic_isr_update)
-KVM_X86_OP_NULL(guest_apic_has_interrupt)
+KVM_X86_OP_RET0(guest_apic_has_interrupt)
 KVM_X86_OP(load_eoi_exitmap)
 KVM_X86_OP(set_virtual_apic_mode)
 KVM_X86_OP_NULL(set_apic_access_page_addr)
 KVM_X86_OP(deliver_interrupt)
 KVM_X86_OP_NULL(sync_pir_to_irr)
-KVM_X86_OP(set_tss_addr)
-KVM_X86_OP(set_identity_map_addr)
-KVM_X86_OP(get_mt_mask)
+KVM_X86_OP_RET0(set_tss_addr)
+KVM_X86_OP_RET0(set_identity_map_addr)
+KVM_X86_OP_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
@@ -102,7 +102,7 @@ KVM_X86_OP_NULL(vcpu_unblocking)
 KVM_X86_OP_NULL(pi_update_irte)
 KVM_X86_OP_NULL(pi_start_assignment)
 KVM_X86_OP_NULL(apicv_post_state_restore)
-KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
+KVM_X86_OP_RET0(dy_apicv_has_pending_interrupt)
 KVM_X86_OP_NULL(set_hv_timer)
 KVM_X86_OP_NULL(cancel_hv_timer)
 KVM_X86_OP(setup_mce)
@@ -126,3 +126,4 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_NULL
+#undef KVM_X86_OP_RET0
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 61faeb57889c..e7e5bd9a984d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1540,6 +1540,7 @@ extern struct kvm_x86_ops kvm_x86_ops;
 #define KVM_X86_OP(func) \
 	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
 #define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 
 static inline void kvm_ops_static_call_update(void)
@@ -1548,6 +1549,9 @@ static inline void kvm_ops_static_call_update(void)
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
 #define KVM_X86_OP(func) \
 	WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_NULL(func)
+#define KVM_X86_OP_RET0(func) \
+	static_call_update(kvm_x86_##func, kvm_x86_ops.func ? : \
+			   (typeof(kvm_x86_ops.func)) __static_call_return0);
 #include <asm/kvm-x86-ops.h>
 }
 
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index b49ee6f34fe7..c82457793fc8 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -707,11 +707,6 @@ int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 	return 0;
 }
 
-bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
-{
-	return false;
-}
-
 static void svm_ir_list_del(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
 {
 	unsigned long flags;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ab50d73b1e2e..5f75f50b861c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3479,16 +3479,6 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
 	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
 }
 
-static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
-{
-	return 0;
-}
-
-static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
-{
-	return 0;
-}
-
 static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3863,11 +3853,6 @@ static int __init svm_check_processor_compat(void)
 	return 0;
 }
 
-static bool svm_cpu_has_accelerated_tpr(void)
-{
-	return false;
-}
-
 /*
  * The kvm parameter can be NULL (module initialization, or invocation before
  * VM creation). Be sure to check the kvm parameter before using it.
@@ -3890,11 +3875,6 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
 	return true;
 }
 
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
-{
-	return 0;
-}
-
 static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -4470,7 +4450,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.hardware_unsetup = svm_hardware_unsetup,
 	.hardware_enable = svm_hardware_enable,
 	.hardware_disable = svm_hardware_disable,
-	.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
 	.has_emulated_msr = svm_has_emulated_msr,
 
 	.vcpu_create = svm_vcpu_create,
@@ -4542,10 +4521,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.load_eoi_exitmap = avic_load_eoi_exitmap,
 	.apicv_post_state_restore = avic_apicv_post_state_restore,
 
-	.set_tss_addr = svm_set_tss_addr,
-	.set_identity_map_addr = svm_set_identity_map_addr,
-	.get_mt_mask = svm_get_mt_mask,
-
 	.get_exit_info = svm_get_exit_info,
 
 	.vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
@@ -4570,7 +4545,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.nested_ops = &svm_nested_ops,
 
 	.deliver_interrupt = svm_deliver_interrupt,
-	.dy_apicv_has_pending_interrupt = avic_dy_apicv_has_pending_interrupt,
 	.pi_update_irte = avic_pi_update_irte,
 	.setup_mce = svm_setup_mce,
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a527cffd0a2b..2daca3dd128a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -129,6 +129,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
 				*(((struct kvm_x86_ops *)0)->func));
 #define KVM_X86_OP_NULL KVM_X86_OP
+#define KVM_X86_OP_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
@@ -12057,7 +12058,6 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	return (is_guest_mode(vcpu) &&
-			kvm_x86_ops.guest_apic_has_interrupt &&
 			static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
 }
 
-- 
2.31.1



* Re: [PATCH 5/5] KVM: x86: allow defining return-0 static calls
  2022-02-02 18:18 ` [PATCH 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
@ 2022-02-03 18:40   ` Paolo Bonzini
  2022-02-06 14:10     ` Peter Zijlstra
  0 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2022-02-03 18:40 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: seanjc, Peter Zijlstra

On 2/2/22 19:18, Paolo Bonzini wrote:
> A few vendor callbacks are only used by VMX, but they return an integer
> or bool value.  Introduce KVM_X86_OP_RET0 for them: a NULL value in
> struct kvm_x86_ops will be changed to __static_call_return0.

This also needs EXPORT_SYMBOL_GPL(__static_call_ret0).  Peter, any 
objections?

Paolo

> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   arch/x86/include/asm/kvm-x86-ops.h | 13 +++++++------
>   arch/x86/include/asm/kvm_host.h    |  4 ++++
>   arch/x86/kvm/svm/avic.c            |  5 -----
>   arch/x86/kvm/svm/svm.c             | 26 --------------------------
>   arch/x86/kvm/x86.c                 |  2 +-
>   5 files changed, 12 insertions(+), 38 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 843bd9efd2ae..89fa5dd21f34 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -13,7 +13,7 @@ BUILD_BUG_ON(1)
>   KVM_X86_OP(hardware_enable)
>   KVM_X86_OP(hardware_disable)
>   KVM_X86_OP(hardware_unsetup)
> -KVM_X86_OP(cpu_has_accelerated_tpr)
> +KVM_X86_OP_RET0(cpu_has_accelerated_tpr)
>   KVM_X86_OP(has_emulated_msr)
>   KVM_X86_OP(vcpu_after_set_cpuid)
>   KVM_X86_OP(vm_init)
> @@ -76,15 +76,15 @@ KVM_X86_OP(check_apicv_inhibit_reasons)
>   KVM_X86_OP(refresh_apicv_exec_ctrl)
>   KVM_X86_OP_NULL(hwapic_irr_update)
>   KVM_X86_OP_NULL(hwapic_isr_update)
> -KVM_X86_OP_NULL(guest_apic_has_interrupt)
> +KVM_X86_OP_RET0(guest_apic_has_interrupt)
>   KVM_X86_OP(load_eoi_exitmap)
>   KVM_X86_OP(set_virtual_apic_mode)
>   KVM_X86_OP_NULL(set_apic_access_page_addr)
>   KVM_X86_OP(deliver_interrupt)
>   KVM_X86_OP_NULL(sync_pir_to_irr)
> -KVM_X86_OP(set_tss_addr)
> -KVM_X86_OP(set_identity_map_addr)
> -KVM_X86_OP(get_mt_mask)
> +KVM_X86_OP_RET0(set_tss_addr)
> +KVM_X86_OP_RET0(set_identity_map_addr)
> +KVM_X86_OP_RET0(get_mt_mask)
>   KVM_X86_OP(load_mmu_pgd)
>   KVM_X86_OP(has_wbinvd_exit)
>   KVM_X86_OP(get_l2_tsc_offset)
> @@ -102,7 +102,7 @@ KVM_X86_OP_NULL(vcpu_unblocking)
>   KVM_X86_OP_NULL(pi_update_irte)
>   KVM_X86_OP_NULL(pi_start_assignment)
>   KVM_X86_OP_NULL(apicv_post_state_restore)
> -KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
> +KVM_X86_OP_RET0(dy_apicv_has_pending_interrupt)
>   KVM_X86_OP_NULL(set_hv_timer)
>   KVM_X86_OP_NULL(cancel_hv_timer)
>   KVM_X86_OP(setup_mce)
> @@ -126,3 +126,4 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
>   
>   #undef KVM_X86_OP
>   #undef KVM_X86_OP_NULL
> +#undef KVM_X86_OP_RET0
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 61faeb57889c..e7e5bd9a984d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1540,6 +1540,7 @@ extern struct kvm_x86_ops kvm_x86_ops;
>   #define KVM_X86_OP(func) \
>   	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
>   #define KVM_X86_OP_NULL KVM_X86_OP
> +#define KVM_X86_OP_RET0 KVM_X86_OP
>   #include <asm/kvm-x86-ops.h>
>   
>   static inline void kvm_ops_static_call_update(void)
> @@ -1548,6 +1549,9 @@ static inline void kvm_ops_static_call_update(void)
>   	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
>   #define KVM_X86_OP(func) \
>   	WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_NULL(func)
> +#define KVM_X86_OP_RET0(func) \
> +	static_call_update(kvm_x86_##func, kvm_x86_ops.func ? : \
> +			   (typeof(kvm_x86_ops.func)) __static_call_return0);
>   #include <asm/kvm-x86-ops.h>
>   }
>   
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index b49ee6f34fe7..c82457793fc8 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -707,11 +707,6 @@ int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
>   	return 0;
>   }
>   
> -bool avic_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
> -{
> -	return false;
> -}
> -
>   static void svm_ir_list_del(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
>   {
>   	unsigned long flags;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index ab50d73b1e2e..5f75f50b861c 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3479,16 +3479,6 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
>   	svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
>   }
>   
> -static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
> -{
> -	return 0;
> -}
> -
> -static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
> -{
> -	return 0;
> -}
> -
>   static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
>   {
>   	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -3863,11 +3853,6 @@ static int __init svm_check_processor_compat(void)
>   	return 0;
>   }
>   
> -static bool svm_cpu_has_accelerated_tpr(void)
> -{
> -	return false;
> -}
> -
>   /*
>    * The kvm parameter can be NULL (module initialization, or invocation before
>    * VM creation). Be sure to check the kvm parameter before using it.
> @@ -3890,11 +3875,6 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
>   	return true;
>   }
>   
> -static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> -{
> -	return 0;
> -}
> -
>   static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>   {
>   	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -4470,7 +4450,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>   	.hardware_unsetup = svm_hardware_unsetup,
>   	.hardware_enable = svm_hardware_enable,
>   	.hardware_disable = svm_hardware_disable,
> -	.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
>   	.has_emulated_msr = svm_has_emulated_msr,
>   
>   	.vcpu_create = svm_vcpu_create,
> @@ -4542,10 +4521,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>   	.load_eoi_exitmap = avic_load_eoi_exitmap,
>   	.apicv_post_state_restore = avic_apicv_post_state_restore,
>   
> -	.set_tss_addr = svm_set_tss_addr,
> -	.set_identity_map_addr = svm_set_identity_map_addr,
> -	.get_mt_mask = svm_get_mt_mask,
> -
>   	.get_exit_info = svm_get_exit_info,
>   
>   	.vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
> @@ -4570,7 +4545,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>   	.nested_ops = &svm_nested_ops,
>   
>   	.deliver_interrupt = svm_deliver_interrupt,
> -	.dy_apicv_has_pending_interrupt = avic_dy_apicv_has_pending_interrupt,
>   	.pi_update_irte = avic_pi_update_irte,
>   	.setup_mce = svm_setup_mce,
>   
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a527cffd0a2b..2daca3dd128a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -129,6 +129,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
>   	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
>   				*(((struct kvm_x86_ops *)0)->func));
>   #define KVM_X86_OP_NULL KVM_X86_OP
> +#define KVM_X86_OP_RET0 KVM_X86_OP
>   #include <asm/kvm-x86-ops.h>
>   EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
>   EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
> @@ -12057,7 +12058,6 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
>   static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
>   {
>   	return (is_guest_mode(vcpu) &&
> -			kvm_x86_ops.guest_apic_has_interrupt &&
>   			static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
>   }
>   



* Re: [PATCH 5/5] KVM: x86: allow defining return-0 static calls
  2022-02-03 18:40   ` Paolo Bonzini
@ 2022-02-06 14:10     ` Peter Zijlstra
  0 siblings, 0 replies; 9+ messages in thread
From: Peter Zijlstra @ 2022-02-06 14:10 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, seanjc

On Thu, Feb 03, 2022 at 07:40:25PM +0100, Paolo Bonzini wrote:
> On 2/2/22 19:18, Paolo Bonzini wrote:
> > A few vendor callbacks are only used by VMX, but they return an integer
> > or bool value.  Introduce KVM_X86_OP_RET0 for them: a NULL value in
> > struct kvm_x86_ops will be changed to __static_call_return0.
> 
> This also needs EXPORT_SYMBOL_GPL(__static_call_ret0).  Peter, any
> objections?

__static_call_return0 I suppose, but no. Go ahead.


* Re: [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops
  2022-02-02 18:18 [PATCH 0/5] kvm: x86: better handling of NULL-able kvm_x86_ops Paolo Bonzini
                   ` (4 preceding siblings ...)
  2022-02-02 18:18 ` [PATCH 5/5] KVM: x86: allow defining return-0 static calls Paolo Bonzini
@ 2022-02-08  0:41 ` Sean Christopherson
  5 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2022-02-08  0:41 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm

On Wed, Feb 02, 2022, Paolo Bonzini wrote:
> This series is really two changes:
> 
> - patches 1 to 4 clean up NULLable kvm_x86_ops so that they are marked
>   in kvm-x86-ops.h and the non-NULLable ones WARN if used incorrectly.
>   As an additional outcome of the review, a few more uses of
>   static_call_cond are introduced.
> 
> - patch 5 allows NULLing a few kvm_x86_ops that return a value, by
>   using __static_call_ret0.
> 
> Paolo Bonzini (5):
>   KVM: x86: use static_call_cond for optional callbacks
>   KVM: x86: mark NULL-able kvm_x86_ops
>   KVM: x86: warn on incorrectly NULL static calls
>   KVM: x86: change hwapic_{irr,isr}_update to NULLable calls
>   KVM: x86: allow defining return-0 static calls

I belatedly remembered the other thing about "NULL" that I don't like:

        #define KVM_X86_OP(func)                                             \
                DEFINE_STATIC_CALL_NULL(kvm_x86_##func,                      \
                                        *(((struct kvm_x86_ops *)0)->func));
        #define KVM_X86_OP_NULL KVM_X86_OP

That's bound to be confusing for folks that aren't already familiar with the
code, especially if they don't have a good handle on static_call() magic.

Side topic: the above doesn't handle KVM_X86_OP_RET0; no idea how that doesn't
fail at link time.  The BUILD_BUG_ON(1) in kvm-x86-ops.h also needs to be updated,
as does the comment.

Anyways, back to NULL.  KVM_X86_OP_RET0 also doesn't capture that the hook can
be NULL in that case; if the reader is familiar with static_call() then they'll
understand the full meaning, but I doubt that covers the majority of readers.

TL;DR: what about using the more verbose names KVM_X86_OP_OPTIONAL and
KVM_X86_OP_OPTIONAL_RET0[*]?  And also tweak kvm_ops_static_call_update()'s
defines so that KVM_X86_OP never routes through KVM_X86_OP_OPTIONAL (as syntactic
sugar to avoid confusion).

Other than that, I like the WARN on KVM_X86_OPS with a NULL implementation.

[*] The OP_OPTIONAL kills me, but I can't think of a better alternative.


E.g. sans the kvm-x86-ops.h changes...

---
 arch/x86/include/asm/kvm_host.h | 11 ++++++-----
 arch/x86/kvm/x86.c              | 11 +++++++++--
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e7e5bd9a984d..055b3a2419f7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1539,17 +1539,18 @@ extern struct kvm_x86_ops kvm_x86_ops;

 #define KVM_X86_OP(func) \
 	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
-#define KVM_X86_OP_RET0 KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>

 static inline void kvm_ops_static_call_update(void)
 {
-#define KVM_X86_OP_NULL(func) \
+#define __KVM_X86_OP(func) \
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
 #define KVM_X86_OP(func) \
-	WARN_ON(!kvm_x86_ops.func); KVM_X86_OP_NULL(func)
-#define KVM_X86_OP_RET0(func) \
+	WARN_ON(!kvm_x86_ops.func); __KVM_X86_OP(func)
+#define KVM_X86_OP_OPTIONAL __KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL_RET0(func) \
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func ? : \
 			   (typeof(kvm_x86_ops.func)) __static_call_return0);
 #include <asm/kvm-x86-ops.h>
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 657aa646871e..337e39dec3c4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -125,11 +125,18 @@ static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);

 struct kvm_x86_ops kvm_x86_ops __read_mostly;

+/*
+ * All ops are filled by vendor code, thus the default function is NULL for
+ * both mandatory and optional hooks.  The exception are optional RET0 hooks,
+ * which obviously default to __static_call_return0.
+ */
 #define KVM_X86_OP(func)					     \
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
 				*(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_NULL KVM_X86_OP
-#define KVM_X86_OP_RET0 KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL KVM_X86_OP
+#define KVM_X86_OP_OPTIONAL_RET0(func)				     \
+	DEFINE_STATIC_CALL_RET0(kvm_x86_##func,			     \
+				*(((struct kvm_x86_ops *)0)->func));
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);

base-commit: 347f6a965596211726c39eb6bc320e8375f80b52
--



