public inbox for kvm@vger.kernel.org
* [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits
@ 2026-03-26  3:11 Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

v4 of Jim's series adding support for AMD's Host-Only and Guest-Only
performance counter eventsel bits in KVM's mediated PMU passthrough
implementation.

These bits allow an nSVM-enabled guest to configure performance counters
that count only during L1 execution (Host-Only) or only during L2 execution
(Guest-Only).

KVM updates the hardware event selector ENABLE bit at the following state
transitions to ensure counters only count in the appropriate mode:

  - EFER.SVME changes: Enable/disable Guest-Only counters
  - Nested VMRUN: Disable Host-Only, enable Guest-Only counters
  - Nested VMEXIT: Enable Host-Only, disable Guest-Only counters

v3 -> v4:
- Dropped amd_pmu_set_eventsel_hw(), moved handling of
  Host-Only/Guest-Only bits to PMU counter reprogramming and funnelled
  all processing through KVM_REQ_PMU [Sean].
  - For this to work, added a per-vendor callback for reprogramming PMU
    counters.
- Sorta restored the bitmask from v1, except now it's a single bitmask
  tracking all counters that need to be reprogrammed on nested
  transitions. The bitmask is used to avoid unnecessary KVM_REQ_PMU
  requests and only reprogram counters as needed [Jim].
  - This is also needed to avoid directly calling vendor code from
    {enter/leave}_guest_mode() as requested by Sean.
- Added prep patches moving enable_pmu/enable_mediated_pmu and
  guest_mode helpers to facilitate following changes without circular
  dependencies and to avoid including new headers from leaf headers.

v3: https://lore.kernel.org/kvm/20260207012339.2646196-1-jmattson@google.com/

Jim Mattson (2):
  KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated
    PMU
  KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only
    bits

Yosry Ahmed (4):
  KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c
  KVM: x86: Move guest_mode helpers to x86.h
  KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in
    SVM
  KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM
    transitions

 arch/x86/include/asm/kvm-x86-pmu-ops.h        |   1 +
 arch/x86/include/asm/kvm_host.h               |   6 +
 arch/x86/include/asm/perf_event.h             |   2 +
 arch/x86/kvm/kvm_cache_regs.h                 |  23 --
 arch/x86/kvm/pmu.c                            |  12 ++
 arch/x86/kvm/pmu.h                            |  17 ++
 arch/x86/kvm/svm/pmu.c                        |  44 ++++
 arch/x86/kvm/svm/svm.c                        |   1 +
 arch/x86/kvm/x86.c                            |   9 -
 arch/x86/kvm/x86.h                            |  31 ++-
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 tools/testing/selftests/kvm/include/x86/pmu.h |   6 +
 .../selftests/kvm/include/x86/processor.h     |   2 +
 .../kvm/x86/svm_pmu_host_guest_test.c         | 199 ++++++++++++++++++
 14 files changed, 319 insertions(+), 35 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c


base-commit: 3d6cdcc8883b5726513d245eef0e91cabfc397f7
-- 
2.53.0.1018.g2bb0e51243-goog


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

The declarations and definitions of enable_pmu/enable_mediated_pmu
semantically belong in pmu.h and pmu.c. More importantly, pmu.h uses
enable_mediated_pmu and relies on the caller including x86.h.

There is already precedent for other module params defined outside of
x86.c, so move enable_pmu/enable_mediated_pmu to pmu.c.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 arch/x86/kvm/pmu.c | 10 ++++++++++
 arch/x86/kvm/pmu.h |  3 +++
 arch/x86/kvm/x86.c |  9 ---------
 arch/x86/kvm/x86.h |  3 ---
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e218352e34231..d6ac3c55fce55 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -16,6 +16,7 @@
 #include <linux/perf_event.h>
 #include <linux/bsearch.h>
 #include <linux/sort.h>
+#include <linux/moduleparam.h>
 #include <asm/perf_event.h>
 #include <asm/cpu_device_id.h>
 #include "x86.h"
@@ -33,6 +34,15 @@ static struct x86_pmu_capability __read_mostly kvm_host_pmu;
 struct x86_pmu_capability __read_mostly kvm_pmu_cap;
 EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_cap);
 
+/* Enable/disable PMU virtualization */
+bool __read_mostly enable_pmu = true;
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_pmu);
+module_param(enable_pmu, bool, 0444);
+
+/* Enable/disabled mediated PMU virtualization. */
+bool __read_mostly enable_mediated_pmu;
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_mediated_pmu);
+
 struct kvm_pmu_emulated_event_selectors {
 	u64 INSTRUCTIONS_RETIRED;
 	u64 BRANCH_INSTRUCTIONS_RETIRED;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0925246731cb1..b1f2418e960ac 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -53,6 +53,9 @@ struct kvm_pmu_ops {
 	const u32 MSR_STRIDE;
 };
 
+extern bool enable_pmu;
+extern bool enable_mediated_pmu;
+
 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);
 
 void kvm_handle_guest_mediated_pmi(void);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0b5d48e75b657..0a5fd473a24e1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -182,15 +182,6 @@ module_param(force_emulation_prefix, int, 0644);
 int __read_mostly pi_inject_timer = -1;
 module_param(pi_inject_timer, bint, 0644);
 
-/* Enable/disable PMU virtualization */
-bool __read_mostly enable_pmu = true;
-EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_pmu);
-module_param(enable_pmu, bool, 0444);
-
-/* Enable/disabled mediated PMU virtualization. */
-bool __read_mostly enable_mediated_pmu;
-EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_mediated_pmu);
-
 bool __read_mostly eager_page_split = true;
 module_param(eager_page_split, bool, 0644);
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 44a28d343d407..48f3e8c0dc30d 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -480,9 +480,6 @@ fastpath_t handle_fastpath_invd(struct kvm_vcpu *vcpu);
 extern struct kvm_caps kvm_caps;
 extern struct kvm_host_values kvm_host;
 
-extern bool enable_pmu;
-extern bool enable_mediated_pmu;
-
 void kvm_setup_xss_caps(void);
 
 /*
-- 
2.53.0.1018.g2bb0e51243-goog



* [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-03-26 22:48   ` kernel test robot
  2026-03-27  3:15   ` kernel test robot
  2026-03-26  3:11 ` [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

Move enter_guest_mode(), leave_guest_mode(), and is_guest_mode() to
x86.h. Keeping kvm_cache_regs.h limited to register helpers is more
semantically appropriate, and more importantly, the move allows
expanding these helpers without pulling more headers into
kvm_cache_regs.h, keeping it a leaf header as much as possible.

No functional change intended.

Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 arch/x86/kvm/kvm_cache_regs.h | 23 -----------------------
 arch/x86/kvm/x86.h            | 23 +++++++++++++++++++++++
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f6..8682ef54a8c9b 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -223,27 +223,4 @@ static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
 		| ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
 }
 
-static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hflags |= HF_GUEST_MASK;
-	vcpu->stat.guest_mode = 1;
-}
-
-static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hflags &= ~HF_GUEST_MASK;
-
-	if (vcpu->arch.load_eoi_exitmap_pending) {
-		vcpu->arch.load_eoi_exitmap_pending = false;
-		kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu);
-	}
-
-	vcpu->stat.guest_mode = 0;
-}
-
-static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.hflags & HF_GUEST_MASK;
-}
-
 #endif
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 48f3e8c0dc30d..f1c29ac306917 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -148,6 +148,29 @@ static inline unsigned int __shrink_ple_window(unsigned int val,
 void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu);
 int kvm_check_nested_events(struct kvm_vcpu *vcpu);
 
+static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hflags |= HF_GUEST_MASK;
+	vcpu->stat.guest_mode = 1;
+}
+
+static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hflags &= ~HF_GUEST_MASK;
+
+	if (vcpu->arch.load_eoi_exitmap_pending) {
+		vcpu->arch.load_eoi_exitmap_pending = false;
+		kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu);
+	}
+
+	vcpu->stat.guest_mode = 0;
+}
+
+static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hflags & HF_GUEST_MASK;
+}
+
 /* Forcibly leave the nested mode in cases like a vCPU reset */
 static inline void kvm_leave_nested(struct kvm_vcpu *vcpu)
 {
-- 
2.53.0.1018.g2bb0e51243-goog



* [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-04-07  1:30   ` Sean Christopherson
  2026-03-26  3:11 ` [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions Yosry Ahmed
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

Introduce a per-vendor PMU callback for reprogramming counters with a
mediated PMU, and register a callback on AMD that disables a counter
based on the vCPU's setting of the Host-Only or Guest-Only EVENT_SELECT
bits (if EFER.SVME is set). In other words, disable the counter if
Host-Only is set while in guest mode, or Guest-Only is set while not in
guest mode.

kvm_mediated_pmu_refresh_event_filter() ensures that
ARCH_PERFMON_EVENTSEL_ENABLE is set for any enabled counters before the
mediated_reprogram_counter() callback runs, and kvm_mediated_pmu_load()
writes the updated value of eventsel_hw to the appropriate MSR after
the counters are reprogrammed through KVM_REQ_PMU.

Note that the behavior is equivalent if both bits are cleared or both
bits are set: events are counted regardless of host/guest state (from
L1's perspective). Hence, KVM should always keep the counter enabled
unless exactly one of the bits is set.

It's a bit unnatural to check whether both bits are set or cleared,
then check EFER.SVME, then go back to checking which bit is set. This
ordering will be needed by following changes that track counters with
exactly one Host-Only/Guest-Only bit set, regardless of EFER.SVME.

The Host-Only and Guest-Only bits are currently reserved, so this
change is a no-op, but the bits will be allowed with the mediated PMU
in a following change once they are fully supported.

Originally-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 +
 arch/x86/include/asm/perf_event.h      |  2 ++
 arch/x86/kvm/pmu.c                     |  1 +
 arch/x86/kvm/pmu.h                     |  1 +
 arch/x86/kvm/svm/pmu.c                 | 29 ++++++++++++++++++++++++++
 5 files changed, 34 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index d5452b3433b7d..11ce0012b8301 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -27,6 +27,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
 KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
 KVM_X86_PMU_OP(mediated_load)
 KVM_X86_PMU_OP(mediated_put)
+KVM_X86_PMU_OP_OPTIONAL(mediated_reprogram_counter)
 #endif
 
 #undef KVM_X86_PMU_OP
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index ff5acb8b199b0..5961c002b28eb 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -60,6 +60,8 @@
 #define AMD64_EVENTSEL_INT_CORE_ENABLE			(1ULL << 36)
 #define AMD64_EVENTSEL_GUESTONLY			(1ULL << 40)
 #define AMD64_EVENTSEL_HOSTONLY				(1ULL << 41)
+#define AMD64_EVENTSEL_HOST_GUEST_MASK			\
+	(AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
 
 #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT		37
 #define AMD64_EVENTSEL_INT_CORE_SEL_MASK		\
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d6ac3c55fce55..e35d598f809a2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -559,6 +559,7 @@ static int reprogram_counter(struct kvm_pmc *pmc)
 
 	if (kvm_vcpu_has_mediated_pmu(pmu_to_vcpu(pmu))) {
 		kvm_mediated_pmu_refresh_event_filter(pmc);
+		kvm_pmu_call(mediated_reprogram_counter)(pmc);
 		return 0;
 	}
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index b1f2418e960ac..bdbe0456049d0 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -40,6 +40,7 @@ struct kvm_pmu_ops {
 	bool (*is_mediated_pmu_supported)(struct x86_pmu_capability *host_pmu);
 	void (*mediated_load)(struct kvm_vcpu *vcpu);
 	void (*mediated_put)(struct kvm_vcpu *vcpu);
+	void (*mediated_reprogram_counter)(struct kvm_pmc *pmc);
 	void (*write_global_ctrl)(u64 global_ctrl);
 
 	const u64 EVENTSEL_EVENT;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 7aa298eeb0721..60931dfd624b2 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -260,6 +260,34 @@ static void amd_mediated_pmu_put(struct kvm_vcpu *vcpu)
 		wrmsrq(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
 }
 
+static void amd_mediated_pmu_handle_host_guest_bits(struct kvm_pmc *pmc)
+{
+	struct kvm_vcpu *vcpu = pmc->vcpu;
+	u64 host_guest_bits;
+
+	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
+		return;
+
+	/* Count all events if both bits are cleared or both bits are set */
+	host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
+	if (hweight64(host_guest_bits) != 1)
+		return;
+
+	/* Host-Only and Guest-Only are ignored if EFER.SVME == 0 */
+	if (!(vcpu->arch.efer & EFER_SVME))
+		return;
+
+	if (!!(host_guest_bits & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu))
+		return;
+
+	pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
+}
+
+static void amd_mediated_pmu_reprogram_counter(struct kvm_pmc *pmc)
+{
+	amd_mediated_pmu_handle_host_guest_bits(pmc);
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -273,6 +301,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.is_mediated_pmu_supported = amd_pmu_is_mediated_pmu_supported,
 	.mediated_load = amd_mediated_pmu_load,
 	.mediated_put = amd_mediated_pmu_put,
+	.mediated_reprogram_counter = amd_mediated_pmu_reprogram_counter,
 
 	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
 	.MAX_NR_GP_COUNTERS = KVM_MAX_NR_AMD_GP_COUNTERS,
-- 
2.53.0.1018.g2bb0e51243-goog



* [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
                   ` (2 preceding siblings ...)
  2026-03-26  3:11 ` [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-04-07  1:35   ` Sean Christopherson
  2026-03-26  3:11 ` [PATCH v4 5/6] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
  5 siblings, 1 reply; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

Reprogram all counters on nested transitions for the mediated PMU, to
re-evaluate Host-Only and Guest-Only bits and enable/disable the PMU
counters accordingly. For example, if Host-Only is set and Guest-Only is
cleared, a counter should be disabled when entering guest mode and
enabled when exiting guest mode.

Setting exactly one of Host-Only and Guest-Only is only effective when
EFER.SVME is set, so also trigger counter reprogramming when EFER.SVME
is toggled.

Track counters that have exactly one of Host-Only and Guest-Only set,
i.e. counters requiring reprogramming on nested transitions, in a
bitmap. Use the bitmap to only request KVM_REQ_PMU if some counters
need reprogramming, and to only reprogram the counters that actually
need it.

Track such counters even if EFER.SVME is cleared, such that if/when
EFER.SVME is set, KVM can reprogram those counters and enable/disable
them appropriately. Otherwise, toggling EFER.SVME would need to
reprogram all counters and use a different code path than
kvm_pmu_handle_nested_transition().

Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/pmu.c              |  1 +
 arch/x86/kvm/pmu.h              | 13 +++++++++++++
 arch/x86/kvm/svm/pmu.c          | 13 ++++++++++++-
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/x86.h              |  5 +++++
 6 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d3bdc98281339..b2f8710838372 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -594,6 +594,12 @@ struct kvm_pmu {
 	DECLARE_BITMAP(pmc_counting_instructions, X86_PMC_IDX_MAX);
 	DECLARE_BITMAP(pmc_counting_branches, X86_PMC_IDX_MAX);
 
+	/*
+	 * Whether or not PMU counters need to be reprogrammed on transitions
+	 * between L1 and L2 (or when nesting enablement is toggled).
+	 */
+	DECLARE_BITMAP(pmc_needs_nested_reprogram, X86_PMC_IDX_MAX);
+
 	u64 ds_area;
 	u64 pebs_enable;
 	u64 pebs_enable_rsvd;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e35d598f809a2..a7b38c104d067 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -932,6 +932,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	pmu->need_cleanup = false;
 
 	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
+	bitmap_zero(pmu->pmc_needs_nested_reprogram, X86_PMC_IDX_MAX);
 
 	kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index bdbe0456049d0..fb73806d3bfa0 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -248,6 +248,19 @@ static inline bool kvm_pmu_is_fastpath_emulation_allowed(struct kvm_vcpu *vcpu)
 				  X86_PMC_IDX_MAX);
 }
 
+static inline void kvm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (bitmap_empty(pmu->pmc_needs_nested_reprogram, X86_PMC_IDX_MAX))
+		return;
+
+	BUILD_BUG_ON(sizeof(pmu->pmc_needs_nested_reprogram) != sizeof(atomic64_t));
+	atomic64_or(*(s64 *)pmu->pmc_needs_nested_reprogram,
+		    &vcpu_to_pmu(vcpu)->__reprogram_pmi);
+	kvm_make_request(KVM_REQ_PMU, vcpu);
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 60931dfd624b2..cc1eabb0ad15f 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -262,17 +262,28 @@ static void amd_mediated_pmu_put(struct kvm_vcpu *vcpu)
 
 static void amd_mediated_pmu_handle_host_guest_bits(struct kvm_pmc *pmc)
 {
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	struct kvm_vcpu *vcpu = pmc->vcpu;
 	u64 host_guest_bits;
 
+	__clear_bit(pmc->idx, pmu->pmc_needs_nested_reprogram);
+
 	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
 		return;
 
-	/* Count all events if both bits are cleared or both bits are set */
+	/*
+	 * If both bits are cleared or both bits are set, count all events.
+	 * Otherwise, the counter enablement should be re-evaluated on every
+	 * nested transition. Track which counters need to be re-evaluated even
+	 * if EFER.SVME == 0, such that the counters are correctly reprogrammed
+	 * on nested transitions after EFER.SVME is set.
+	 */
 	host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
 	if (hweight64(host_guest_bits) != 1)
 		return;
 
+	__set_bit(pmc->idx, pmu->pmc_needs_nested_reprogram);
+
 	/* Host-Only and Guest-Only are ignored if EFER.SVME == 0 */
 	if (!(vcpu->arch.efer & EFER_SVME))
 		return;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d2ca226871c2f..1ac00d2cba0ab 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -261,6 +261,7 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 				set_exception_intercept(svm, GP_VECTOR);
 		}
 
+		kvm_pmu_handle_nested_transition(vcpu);
 		kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
 	}
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index f1c29ac306917..966e4138308f6 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -9,6 +9,7 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 #include "cpuid.h"
+#include "pmu.h"
 
 #define KVM_MAX_MCE_BANKS 32
 
@@ -152,6 +153,8 @@ static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hflags |= HF_GUEST_MASK;
 	vcpu->stat.guest_mode = 1;
+
+	kvm_pmu_handle_nested_transition(vcpu);
 }
 
 static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
@@ -164,6 +167,8 @@ static inline void leave_guest_mode(struct kvm_vcpu *vcpu)
 	}
 
 	vcpu->stat.guest_mode = 0;
+
+	kvm_pmu_handle_nested_transition(vcpu);
 }
 
 static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
-- 
2.53.0.1018.g2bb0e51243-goog



* [PATCH v4 5/6] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
                   ` (3 preceding siblings ...)
  2026-03-26  3:11 ` [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-03-26  3:11 ` [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
  5 siblings, 0 replies; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

From: Jim Mattson <jmattson@google.com>

Now that KVM correctly handles Host-Only and Guest-Only bits in the
event selector MSRs, allow the guest to set them if the vCPU advertises
SVM and uses the mediated PMU.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 arch/x86/kvm/svm/pmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index cc1eabb0ad15f..3c04e8da24d33 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -207,7 +207,11 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	}
 
 	pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
+
 	pmu->reserved_bits = 0xfffffff000280000ull;
+	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SVM) && kvm_vcpu_has_mediated_pmu(vcpu))
+		pmu->reserved_bits &= ~AMD64_EVENTSEL_HOST_GUEST_MASK;
+
 	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
-- 
2.53.0.1018.g2bb0e51243-goog



* [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits
  2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
                   ` (4 preceding siblings ...)
  2026-03-26  3:11 ` [PATCH v4 5/6] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
@ 2026-03-26  3:11 ` Yosry Ahmed
  2026-04-07  1:39   ` Sean Christopherson
  5 siblings, 1 reply; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26  3:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel, Yosry Ahmed

From: Jim Mattson <jmattson@google.com>

Add a selftest to verify KVM correctly virtualizes the AMD PMU Host-Only
(bit 41) and Guest-Only (bit 40) event selector bits across all relevant
SVM state transitions.

The test programs 4 PMCs simultaneously with all combinations of the
Host-Only and Guest-Only bits, then verifies correct counting behavior:
  1. SVME=0: all counters count (Host-Only/Guest-Only bits ignored)
  2. Set SVME=1: Host-Only and neither/both count; Guest-Only stops
  3. VMRUN to L2: Guest-Only and neither/both count; Host-Only stops
  4. VMEXIT to L1: Host-Only and neither/both count; Guest-Only stops
  5. Clear SVME=0: all counters count (bits ignored again)

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 tools/testing/selftests/kvm/include/x86/pmu.h |   6 +
 .../selftests/kvm/include/x86/processor.h     |   2 +
 .../kvm/x86/svm_pmu_host_guest_test.c         | 199 ++++++++++++++++++
 4 files changed, 208 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 3d372d78a2756..9418c45291231 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -116,6 +116,7 @@ TEST_GEN_PROGS_x86 += x86/svm_nested_invalid_vmcb12_gpa
 TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
 TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
 TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
+TEST_GEN_PROGS_x86 += x86/svm_pmu_host_guest_test
 TEST_GEN_PROGS_x86 += x86/tsc_scaling_sync
 TEST_GEN_PROGS_x86 += x86/sync_regs_test
 TEST_GEN_PROGS_x86 += x86/ucna_injection_test
diff --git a/tools/testing/selftests/kvm/include/x86/pmu.h b/tools/testing/selftests/kvm/include/x86/pmu.h
index 72575eadb63a0..af9b279c78df4 100644
--- a/tools/testing/selftests/kvm/include/x86/pmu.h
+++ b/tools/testing/selftests/kvm/include/x86/pmu.h
@@ -38,6 +38,12 @@
 #define ARCH_PERFMON_EVENTSEL_INV		BIT_ULL(23)
 #define ARCH_PERFMON_EVENTSEL_CMASK		GENMASK_ULL(31, 24)
 
+/*
+ * These are AMD-specific bits.
+ */
+#define AMD64_EVENTSEL_GUESTONLY		BIT_ULL(40)
+#define AMD64_EVENTSEL_HOSTONLY			BIT_ULL(41)
+
 /* RDPMC control flags, Intel only. */
 #define INTEL_RDPMC_METRICS			BIT_ULL(29)
 #define INTEL_RDPMC_FIXED			BIT_ULL(30)
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index d8634a760a609..4cc1ba8752347 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -19,6 +19,8 @@
 #include "kvm_util.h"
 #include "ucall_common.h"
 
+#define __stack_aligned__	__aligned(16)
+
 extern bool host_cpu_is_intel;
 extern bool host_cpu_is_amd;
 extern bool host_cpu_is_hygon;
diff --git a/tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c b/tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c
new file mode 100644
index 0000000000000..a08c03a40d4f6
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM nested SVM PMU Host-Only/Guest-Only test
+ *
+ * Copyright (C) 2026, Google LLC.
+ *
+ * Test that KVM correctly virtualizes the AMD PMU Host-Only (bit 41) and
+ * Guest-Only (bit 40) event selector bits across all SVM state
+ * transitions.
+ *
+ * Programs 4 PMCs simultaneously with all combinations of Host-Only and
+ * Guest-Only bits, then verifies correct counting behavior through:
+ *   1. SVME=0: all counters count (Host-Only/Guest-Only bits ignored)
+ *   2. Set SVME=1: Host-Only and neither/both count; Guest-Only stops
+ *   3. VMRUN to L2: Guest-Only and neither/both count; Host-Only stops
+ *   4. VMEXIT to L1: Host-Only and neither/both count; Guest-Only stops
+ *   5. Clear SVME=0: all counters count (bits ignored again)
+ */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "pmu.h"
+
+#define L2_GUEST_STACK_SIZE	255
+
+#define EVENTSEL_RETIRED_INSNS	(ARCH_PERFMON_EVENTSEL_OS |	\
+				 ARCH_PERFMON_EVENTSEL_USR |	\
+				 ARCH_PERFMON_EVENTSEL_ENABLE |	\
+				 AMD_ZEN_INSTRUCTIONS_RETIRED)
+
+/* PMC configurations: index corresponds to Host-Only | Guest-Only bits */
+#define PMC_NEITHER	0  /* Neither bit set */
+#define PMC_GUESTONLY	1  /* Guest-Only bit set */
+#define PMC_HOSTONLY	2  /* Host-Only bit set */
+#define PMC_BOTH	3  /* Both bits set */
+#define NR_PMCS		4
+
+/* Bitmasks for which PMCs should be counting in each state */
+#define COUNTS_ALL	(BIT(PMC_NEITHER) | BIT(PMC_GUESTONLY) | \
+			 BIT(PMC_HOSTONLY) | BIT(PMC_BOTH))
+#define COUNTS_L1	(BIT(PMC_NEITHER) | BIT(PMC_HOSTONLY) | BIT(PMC_BOTH))
+#define COUNTS_L2	(BIT(PMC_NEITHER) | BIT(PMC_GUESTONLY) | BIT(PMC_BOTH))
+
+#define LOOP_INSNS	1000
+
+static __always_inline void run_instruction_loop(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < LOOP_INSNS; i++)
+		__asm__ __volatile__("nop");
+}
+
+static __always_inline void read_counters(uint64_t *counts)
+{
+	int i;
+
+	for (i = 0; i < NR_PMCS; i++)
+		counts[i] = rdmsr(MSR_F15H_PERF_CTR + 2 * i);
+}
+
+static __always_inline void run_and_measure(uint64_t *deltas)
+{
+	uint64_t before[NR_PMCS], after[NR_PMCS];
+	int i;
+
+	read_counters(before);
+	run_instruction_loop();
+	read_counters(after);
+
+	for (i = 0; i < NR_PMCS; i++)
+		deltas[i] = after[i] - before[i];
+}
+
+static void assert_pmc_counts(uint64_t *deltas, unsigned int expected_counting)
+{
+	int i;
+
+	for (i = 0; i < NR_PMCS; i++) {
+		if (expected_counting & BIT(i))
+			GUEST_ASSERT_NE(deltas[i], 0);
+		else
+			GUEST_ASSERT_EQ(deltas[i], 0);
+	}
+}
+
+struct test_data {
+	uint64_t l2_deltas[NR_PMCS];
+	bool l2_done;
+};
+
+static struct test_data *test_data;
+
+static void l2_guest_code(void)
+{
+	run_and_measure(test_data->l2_deltas);
+	test_data->l2_done = true;
+	vmmcall();
+}
+
+static void l1_guest_code(struct svm_test_data *svm, struct test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE] __stack_aligned__;
+	struct vmcb *vmcb = svm->vmcb;
+	uint64_t deltas[NR_PMCS];
+	uint64_t eventsel;
+	int i;
+
+	test_data = data;
+
+	/* Program 4 PMCs with all combinations of Host-Only/Guest-Only bits */
+	for (i = 0; i < NR_PMCS; i++) {
+		eventsel = EVENTSEL_RETIRED_INSNS;
+		if (i & PMC_GUESTONLY)
+			eventsel |= AMD64_EVENTSEL_GUESTONLY;
+		if (i & PMC_HOSTONLY)
+			eventsel |= AMD64_EVENTSEL_HOSTONLY;
+		wrmsr(MSR_F15H_PERF_CTL + 2 * i, eventsel);
+		wrmsr(MSR_F15H_PERF_CTR + 2 * i, 0);
+	}
+
+	/* Step 1: SVME=0 - Host-Only/Guest-Only bits ignored; all count */
+	wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+	run_and_measure(deltas);
+	assert_pmc_counts(deltas, COUNTS_ALL);
+
+	/* Step 2: Set SVME=1 - In L1 "host mode"; Guest-Only stops */
+	wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
+	run_and_measure(deltas);
+	assert_pmc_counts(deltas, COUNTS_L1);
+
+	/* Step 3: VMRUN to L2 - In "guest mode"; Host-Only stops */
+	generic_svm_setup(svm, l2_guest_code,
+			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	GUEST_ASSERT(data->l2_done);
+	assert_pmc_counts(data->l2_deltas, COUNTS_L2);
+
+	/* Step 4: After VMEXIT to L1 - Back in "host mode"; Guest-Only stops */
+	run_and_measure(deltas);
+	assert_pmc_counts(deltas, COUNTS_L1);
+
+	/* Step 5: Clear SVME - Host-Only/Guest-Only bits ignored; all count */
+	wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+	run_and_measure(deltas);
+	assert_pmc_counts(deltas, COUNTS_ALL);
+
+	GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+	vm_vaddr_t svm_gva, data_gva;
+	struct test_data *data_hva;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+	TEST_REQUIRE(kvm_is_pmu_enabled());
+	TEST_REQUIRE(get_kvm_amd_param_bool("enable_mediated_pmu"));
+	TEST_REQUIRE(host_cpu_is_amd && kvm_cpu_family() >= 0x17);
+
+	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
+
+	vcpu_alloc_svm(vm, &svm_gva);
+
+	data_gva = vm_vaddr_alloc_page(vm);
+	data_hva = addr_gva2hva(vm, data_gva);
+	memset(data_hva, 0, sizeof(*data_hva));
+
+	vcpu_args_set(vcpu, 2, svm_gva, data_gva);
+
+	vcpu_run(vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT(uc);
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+	}
+
+	kvm_vm_free(vm);
+	return 0;
+}
-- 
2.53.0.1018.g2bb0e51243-goog


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
  2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
@ 2026-03-26 22:48   ` kernel test robot
  2026-03-26 23:18     ` Yosry Ahmed
  2026-03-27  3:15   ` kernel test robot
  1 sibling, 1 reply; 14+ messages in thread
From: kernel test robot @ 2026-03-26 22:48 UTC (permalink / raw)
  To: Yosry Ahmed, Sean Christopherson
  Cc: llvm, oe-kbuild-all, Paolo Bonzini, Jim Mattson, kvm,
	linux-kernel, Yosry Ahmed

Hi Yosry,

kernel test robot noticed the following build errors:

[auto build test ERROR on 3d6cdcc8883b5726513d245eef0e91cabfc397f7]

url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/KVM-x86-Move-enable_pmu-enable_mediated_pmu-to-pmu-h-and-pmu-c/20260326-191518
base:   3d6cdcc8883b5726513d245eef0e91cabfc397f7
patch link:    https://lore.kernel.org/r/20260326031150.3774017-3-yosry%40kernel.org
patch subject: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
config: x86_64-buildonly-randconfig-004-20260327 (https://download.01.org/0day-ci/archive/20260327/202603270611.WB2i1rjQ-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260327/202603270611.WB2i1rjQ-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603270611.WB2i1rjQ-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from arch/x86/kvm/svm/svm_onhyperv.c:11:
>> arch/x86/kvm/svm/svm.h:520:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^
   arch/x86/kvm/svm/svm.h:578:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     578 |         if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm))
         |             ^
   arch/x86/kvm/svm/svm.h:639:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     639 |         if (is_guest_mode(&svm->vcpu))
         |             ^
   arch/x86/kvm/svm/svm.h:802:9: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     802 |         return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK);
         |                ^
   In file included from arch/x86/kvm/svm/svm_onhyperv.c:12:
   In file included from arch/x86/kvm/svm/svm_ops.h:7:
>> arch/x86/kvm/x86.h:169:20: error: conflicting types for 'is_guest_mode'
     169 | static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
         |                    ^
   arch/x86/kvm/svm/svm.h:520:6: note: previous implicit declaration is here
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^
   5 errors generated.


vim +/is_guest_mode +520 arch/x86/kvm/svm/svm.h

0b97f929831a70 Sean Christopherson 2026-02-18  509  
0b97f929831a70 Sean Christopherson 2026-02-18  510  static inline void svm_mark_intercepts_dirty(struct vcpu_svm *svm)
0b97f929831a70 Sean Christopherson 2026-02-18  511  {
0b97f929831a70 Sean Christopherson 2026-02-18  512  	vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_INTERCEPTS);
0b97f929831a70 Sean Christopherson 2026-02-18  513  
0b97f929831a70 Sean Christopherson 2026-02-18  514  	/*
0b97f929831a70 Sean Christopherson 2026-02-18  515  	 * If L2 is active, recalculate the intercepts for vmcb02 to account
0b97f929831a70 Sean Christopherson 2026-02-18  516  	 * for the changes made to vmcb01.  All intercept configuration is done
0b97f929831a70 Sean Christopherson 2026-02-18  517  	 * for vmcb01 and then propagated to vmcb02 to combine KVM's intercepts
0b97f929831a70 Sean Christopherson 2026-02-18  518  	 * with L1's intercepts (from the vmcb12 snapshot).
0b97f929831a70 Sean Christopherson 2026-02-18  519  	 */
0b97f929831a70 Sean Christopherson 2026-02-18 @520  	if (is_guest_mode(&svm->vcpu))
0b97f929831a70 Sean Christopherson 2026-02-18  521  		nested_vmcb02_recalc_intercepts(svm);
0b97f929831a70 Sean Christopherson 2026-02-18  522  }
0b97f929831a70 Sean Christopherson 2026-02-18  523  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
  2026-03-26 22:48   ` kernel test robot
@ 2026-03-26 23:18     ` Yosry Ahmed
  0 siblings, 0 replies; 14+ messages in thread
From: Yosry Ahmed @ 2026-03-26 23:18 UTC (permalink / raw)
  To: kernel test robot
  Cc: Sean Christopherson, llvm, oe-kbuild-all, Paolo Bonzini,
	Jim Mattson, kvm, linux-kernel

On Thu, Mar 26, 2026 at 3:49 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi Yosry,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on 3d6cdcc8883b5726513d245eef0e91cabfc397f7]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/KVM-x86-Move-enable_pmu-enable_mediated_pmu-to-pmu-h-and-pmu-c/20260326-191518
> base:   3d6cdcc8883b5726513d245eef0e91cabfc397f7
> patch link:    https://lore.kernel.org/r/20260326031150.3774017-3-yosry%40kernel.org
> patch subject: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
> config: x86_64-buildonly-randconfig-004-20260327 (https://download.01.org/0day-ci/archive/20260327/202603270611.WB2i1rjQ-lkp@intel.com/config)
> compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260327/202603270611.WB2i1rjQ-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202603270611.WB2i1rjQ-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
>    In file included from arch/x86/kvm/svm/svm_onhyperv.c:11:
> >> arch/x86/kvm/svm/svm.h:520:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>      520 |         if (is_guest_mode(&svm->vcpu))
>          |             ^
>    arch/x86/kvm/svm/svm.h:578:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>      578 |         if (is_guest_mode(&svm->vcpu) && !nested_vgif_enabled(svm))
>          |             ^
>    arch/x86/kvm/svm/svm.h:639:6: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>      639 |         if (is_guest_mode(&svm->vcpu))
>          |             ^
>    arch/x86/kvm/svm/svm.h:802:9: error: call to undeclared function 'is_guest_mode'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
>      802 |         return is_guest_mode(vcpu) && (svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK);
>          |                ^
>    In file included from arch/x86/kvm/svm/svm_onhyperv.c:12:
>    In file included from arch/x86/kvm/svm/svm_ops.h:7:
> >> arch/x86/kvm/x86.h:169:20: error: conflicting types for 'is_guest_mode'
>      169 | static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
>          |                    ^
>    arch/x86/kvm/svm/svm.h:520:6: note: previous implicit declaration is here
>      520 |         if (is_guest_mode(&svm->vcpu))
>          |             ^
>    5 errors generated.

Sashiko caught this first :)
https://sashiko.dev/#/patchset/20260326031150.3774017-1-yosry%40kernel.org

I think we just need to fold in including x86.h in svm.h, this fixes it for me:

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ff1e4b4dc9986..2d55c6bf15b32 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -24,6 +24,7 @@

 #include "cpuid.h"
 #include "kvm_cache_regs.h"
+#include "x86.h"

 /*
  * Helpers to convert to/from physical addresses for pages whose address is

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
  2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
  2026-03-26 22:48   ` kernel test robot
@ 2026-03-27  3:15   ` kernel test robot
  1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2026-03-27  3:15 UTC (permalink / raw)
  To: Yosry Ahmed, Sean Christopherson
  Cc: oe-kbuild-all, Paolo Bonzini, Jim Mattson, kvm, linux-kernel,
	Yosry Ahmed

Hi Yosry,

kernel test robot noticed the following build errors:

[auto build test ERROR on 3d6cdcc8883b5726513d245eef0e91cabfc397f7]

url:    https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/KVM-x86-Move-enable_pmu-enable_mediated_pmu-to-pmu-h-and-pmu-c/20260326-191518
base:   3d6cdcc8883b5726513d245eef0e91cabfc397f7
patch link:    https://lore.kernel.org/r/20260326031150.3774017-3-yosry%40kernel.org
patch subject: [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20260327/202603271150.0WNYfBuF-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260327/202603271150.0WNYfBuF-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603271150.0WNYfBuF-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from arch/x86/kvm/svm/svm_onhyperv.c:11:
   arch/x86/kvm/svm/svm.h: In function 'svm_mark_intercepts_dirty':
>> arch/x86/kvm/svm/svm.h:520:13: error: implicit declaration of function 'is_guest_mode' [-Wimplicit-function-declaration]
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^~~~~~~~~~~~~
   In file included from arch/x86/kvm/svm/svm_ops.h:7,
                    from arch/x86/kvm/svm/svm_onhyperv.c:12:
   arch/x86/kvm/x86.h: At top level:
>> arch/x86/kvm/x86.h:169:20: error: conflicting types for 'is_guest_mode'; have 'bool(struct kvm_vcpu *)' {aka '_Bool(struct kvm_vcpu *)'}
     169 | static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
         |                    ^~~~~~~~~~~~~
   arch/x86/kvm/svm/svm.h:520:13: note: previous implicit declaration of 'is_guest_mode' with type 'int()'
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^~~~~~~~~~~~~
--
   In file included from kvm/svm/svm_onhyperv.c:11:
   kvm/svm/svm.h: In function 'svm_mark_intercepts_dirty':
   kvm/svm/svm.h:520:13: error: implicit declaration of function 'is_guest_mode' [-Wimplicit-function-declaration]
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^~~~~~~~~~~~~
   In file included from kvm/svm/svm_ops.h:7,
                    from kvm/svm/svm_onhyperv.c:12:
   arch/x86/kvm/x86.h: At top level:
>> arch/x86/kvm/x86.h:169:20: error: conflicting types for 'is_guest_mode'; have 'bool(struct kvm_vcpu *)' {aka '_Bool(struct kvm_vcpu *)'}
     169 | static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
         |                    ^~~~~~~~~~~~~
   kvm/svm/svm.h:520:13: note: previous implicit declaration of 'is_guest_mode' with type 'int()'
     520 |         if (is_guest_mode(&svm->vcpu))
         |             ^~~~~~~~~~~~~


vim +/is_guest_mode +520 arch/x86/kvm/svm/svm.h

0b97f929831a70 Sean Christopherson 2026-02-18  509  
0b97f929831a70 Sean Christopherson 2026-02-18  510  static inline void svm_mark_intercepts_dirty(struct vcpu_svm *svm)
0b97f929831a70 Sean Christopherson 2026-02-18  511  {
0b97f929831a70 Sean Christopherson 2026-02-18  512  	vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_INTERCEPTS);
0b97f929831a70 Sean Christopherson 2026-02-18  513  
0b97f929831a70 Sean Christopherson 2026-02-18  514  	/*
0b97f929831a70 Sean Christopherson 2026-02-18  515  	 * If L2 is active, recalculate the intercepts for vmcb02 to account
0b97f929831a70 Sean Christopherson 2026-02-18  516  	 * for the changes made to vmcb01.  All intercept configuration is done
0b97f929831a70 Sean Christopherson 2026-02-18  517  	 * for vmcb01 and then propagated to vmcb02 to combine KVM's intercepts
0b97f929831a70 Sean Christopherson 2026-02-18  518  	 * with L1's intercepts (from the vmcb12 snapshot).
0b97f929831a70 Sean Christopherson 2026-02-18  519  	 */
0b97f929831a70 Sean Christopherson 2026-02-18 @520  	if (is_guest_mode(&svm->vcpu))
0b97f929831a70 Sean Christopherson 2026-02-18  521  		nested_vmcb02_recalc_intercepts(svm);
0b97f929831a70 Sean Christopherson 2026-02-18  522  }
0b97f929831a70 Sean Christopherson 2026-02-18  523  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM
  2026-03-26  3:11 ` [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
@ 2026-04-07  1:30   ` Sean Christopherson
  0 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2026-04-07  1:30 UTC (permalink / raw)
  To: Yosry Ahmed; +Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel

On Thu, Mar 26, 2026, Yosry Ahmed wrote:
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index ff5acb8b199b0..5961c002b28eb 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -60,6 +60,8 @@
>  #define AMD64_EVENTSEL_INT_CORE_ENABLE			(1ULL << 36)
>  #define AMD64_EVENTSEL_GUESTONLY			(1ULL << 40)
>  #define AMD64_EVENTSEL_HOSTONLY				(1ULL << 41)
> +#define AMD64_EVENTSEL_HOST_GUEST_MASK			\
> +	(AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
>  
>  #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT		37
>  #define AMD64_EVENTSEL_INT_CORE_SEL_MASK		\
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index d6ac3c55fce55..e35d598f809a2 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -559,6 +559,7 @@ static int reprogram_counter(struct kvm_pmc *pmc)
>  
>  	if (kvm_vcpu_has_mediated_pmu(pmu_to_vcpu(pmu))) {
>  		kvm_mediated_pmu_refresh_event_filter(pmc);
> +		kvm_pmu_call(mediated_reprogram_counter)(pmc);

I would rather make a single call from kvm_pmu_handle_event(), and let the vendor
deal with mediated vs. legacy.  I want to avoid mediated-specific ops as much as
possible, and I think kvm_x86_ops.reprogram_counters() would be easier to
understand overall.

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a7b38c104d06..7da0077ae24c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -670,6 +670,8 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
                        set_bit(pmc->idx, pmu->reprogram_pmi);
        }
 
+       kvm_pmu_call(reprogram_counters)(vcpu, bitmap);
+
        /*
         * Release unused perf_events if the corresponding guest MSRs weren't
         * accessed during the last vCPU time slice (need_cleanup is set when


> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 7aa298eeb0721..60931dfd624b2 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -260,6 +260,34 @@ static void amd_mediated_pmu_put(struct kvm_vcpu *vcpu)
>  		wrmsrq(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
>  }
>  
> +static void amd_mediated_pmu_handle_host_guest_bits(struct kvm_pmc *pmc)
> +{
> +	struct kvm_vcpu *vcpu = pmc->vcpu;
> +	u64 host_guest_bits;
> +
> +	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
> +		return;
> +
> +	/* Count all events if both bits are cleared or both bits are set */
> +	host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
> +	if (hweight64(host_guest_bits) != 1)
> +		return;
> +
> +	/* Host-Only and Guest-Only are ignored if EFER.SVME == 0 */
> +	if (!(vcpu->arch.efer & EFER_SVME))
> +		return;
> +
> +	if (!!(host_guest_bits & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu))
> +		return;
> +
> +	pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> +}
> +
> +static void amd_mediated_pmu_reprogram_counter(struct kvm_pmc *pmc)
> +{
> +	amd_mediated_pmu_handle_host_guest_bits(pmc);

And then this doesn't need to be such a wonky wrapper, and the "reprogram on
nested transition" logic can also clear the entire bitmap instead of doing things
piecemeal, e.g. it can be something like so in the end:

	if (!kvm_vcpu_has_mediated_pmu(vcpu))
		return;

	bitmap_zero(pmu->pmc_reprogram_on_nested_transition, X86_PMC_IDX_MAX);

	kvm_for_each_pmc(pmu, pmc, bit, bitmap)
		amd_mediated_pmu_handle_host_guest_bits(pmc);

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions
  2026-03-26  3:11 ` [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions Yosry Ahmed
@ 2026-04-07  1:35   ` Sean Christopherson
  0 siblings, 0 replies; 14+ messages in thread
From: Sean Christopherson @ 2026-04-07  1:35 UTC (permalink / raw)
  To: Yosry Ahmed; +Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel

On Thu, Mar 26, 2026, Yosry Ahmed wrote:
> Reprogram all counters on nested transitions for the mediated PMU, to
> re-evaluate Host-Only and Guest-Only bits and enable/disable the PMU
> counters accordingly. For example, if Host-Only is set and Guest-Only is
> cleared, a counter should be disabled when entering guest mode and
> enabled when exiting guest mode.
> 
> Having one of Host-Only and Guest-Only set is only effective when
> EFER.SVME is set, so also trigger counter reprogramming when EFER.SVME
> is toggled.
> 
> Track counters with one of Host-Only and Guest-Only set as counters
> requiring reprogramming on nested transitions in a bitmap. Use the
> bitmap to only request KVM_PMU_REQ if some counters need reprogramming,
> and only reprogram the counters that actually need it.
> 
> Track such counters even if EFER.SVME is cleared, such that if/when
> EFER.SVME is set, KVM can reprogram those counters and enable/disable
> them appropriately. Otherwise, toggling EFER.SVME would need to
> reprogram all counters and use a different code path than
> kvm_pmu_handle_nested_transition().
> 
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> ---
>  arch/x86/include/asm/kvm_host.h |  6 ++++++
>  arch/x86/kvm/pmu.c              |  1 +
>  arch/x86/kvm/pmu.h              | 13 +++++++++++++
>  arch/x86/kvm/svm/pmu.c          | 13 ++++++++++++-
>  arch/x86/kvm/svm/svm.c          |  1 +
>  arch/x86/kvm/x86.h              |  5 +++++
>  6 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d3bdc98281339..b2f8710838372 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -594,6 +594,12 @@ struct kvm_pmu {
>  	DECLARE_BITMAP(pmc_counting_instructions, X86_PMC_IDX_MAX);
>  	DECLARE_BITMAP(pmc_counting_branches, X86_PMC_IDX_MAX);
>  
> +	/*
> +	 * Whether or not PMU counters need to be reprogrammed on transitions
> +	 * between L1 and L2 (or when nesting enablement is toggled).
> +	 */
> +	DECLARE_BITMAP(pmc_needs_nested_reprogram, X86_PMC_IDX_MAX);

Hmm, I think this should be reprogram_pmc_on_nested_transition, or something like
that.  I don't like using "needs" because KVM tends to use "needs" for one-shot
things, e.g. "xyz needs to be sync'd on the next whatever".  And while the code
kinda-sorta takes that approach, in practice it doesn't; clearing the bits and
then setting them again is really just an implementation quirk to keep things
simple.

Conceptually, these flags are much "stickier" in that they are kept set across
all nested transitions, and only cleared when the event selector itself is
changed in some way.

> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index bdbe0456049d0..fb73806d3bfa0 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -248,6 +248,19 @@ static inline bool kvm_pmu_is_fastpath_emulation_allowed(struct kvm_vcpu *vcpu)
>  				  X86_PMC_IDX_MAX);
>  }
>  
> +static inline void kvm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> +
> +	if (bitmap_empty(pmu->pmc_needs_nested_reprogram, X86_PMC_IDX_MAX))
> +		return;
> +
> +	BUILD_BUG_ON(sizeof(pmu->pmc_needs_nested_reprogram) != sizeof(atomic64_t));
> +	atomic64_or(*(s64 *)pmu->pmc_needs_nested_reprogram,
> +		    &vcpu_to_pmu(vcpu)->__reprogram_pmi);

And here especially I think reprogram_pmc_on_nested_transition works better,
e.g. it's a bit more obvious that leaving reprogram_pmc_on_nested_transition
as-is is intentional.

> +	kvm_make_request(KVM_REQ_PMU, vcpu);

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits
  2026-03-26  3:11 ` [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
@ 2026-04-07  1:39   ` Sean Christopherson
  2026-04-07  3:23     ` Jim Mattson
  0 siblings, 1 reply; 14+ messages in thread
From: Sean Christopherson @ 2026-04-07  1:39 UTC (permalink / raw)
  To: Yosry Ahmed; +Cc: Paolo Bonzini, Jim Mattson, kvm, linux-kernel

On Thu, Mar 26, 2026, Yosry Ahmed wrote:
> From: Jim Mattson <jmattson@google.com>
> 
> Add a selftest to verify KVM correctly virtualizes the AMD PMU Host-Only
> (bit 41) and Guest-Only (bit 40) event selector bits across all relevant
> SVM state transitions.
> 
> The test programs 4 PMCs simultaneously with all combinations of the
> Host-Only and Guest-Only bits, then verifies correct counting behavior:
>   1. SVME=0: all counters count (Host-Only/Guest-Only bits ignored)
>   2. Set SVME=1: Host-Only and neither/both count; Guest-Only stops
>   3. VMRUN to L2: Guest-Only and neither/both count; Host-Only stops
>   4. VMEXIT to L1: Host-Only and neither/both count; Guest-Only stops
>   5. Clear SVME=0: all counters count (bits ignored again)
> 
> Signed-off-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> ---
>  tools/testing/selftests/kvm/Makefile.kvm      |   1 +
>  tools/testing/selftests/kvm/include/x86/pmu.h |   6 +
>  .../selftests/kvm/include/x86/processor.h     |   2 +
>  .../kvm/x86/svm_pmu_host_guest_test.c         | 199 ++++++++++++++++++
>  4 files changed, 208 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_host_guest_test.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> index 3d372d78a2756..9418c45291231 100644
> --- a/tools/testing/selftests/kvm/Makefile.kvm
> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> @@ -116,6 +116,7 @@ TEST_GEN_PROGS_x86 += x86/svm_nested_invalid_vmcb12_gpa
>  TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
>  TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
>  TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
> +TEST_GEN_PROGS_x86 += x86/svm_pmu_host_guest_test
>  TEST_GEN_PROGS_x86 += x86/tsc_scaling_sync
>  TEST_GEN_PROGS_x86 += x86/sync_regs_test
>  TEST_GEN_PROGS_x86 += x86/ucna_injection_test
> diff --git a/tools/testing/selftests/kvm/include/x86/pmu.h b/tools/testing/selftests/kvm/include/x86/pmu.h
> index 72575eadb63a0..af9b279c78df4 100644
> --- a/tools/testing/selftests/kvm/include/x86/pmu.h
> +++ b/tools/testing/selftests/kvm/include/x86/pmu.h
> @@ -38,6 +38,12 @@
>  #define ARCH_PERFMON_EVENTSEL_INV		BIT_ULL(23)
>  #define ARCH_PERFMON_EVENTSEL_CMASK		GENMASK_ULL(31, 24)
>  
> +/*
> + * These are AMD-specific bits.
> + */
> +#define AMD64_EVENTSEL_GUESTONLY		BIT_ULL(40)
> +#define AMD64_EVENTSEL_HOSTONLY			BIT_ULL(41)
> +
>  /* RDPMC control flags, Intel only. */
>  #define INTEL_RDPMC_METRICS			BIT_ULL(29)
>  #define INTEL_RDPMC_FIXED			BIT_ULL(30)
> diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
> index d8634a760a609..4cc1ba8752347 100644
> --- a/tools/testing/selftests/kvm/include/x86/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86/processor.h
> @@ -19,6 +19,8 @@
>  #include "kvm_util.h"
>  #include "ucall_common.h"
>  
> +#define __stack_aligned__	__aligned(16)

I would much prefer to provide a macro helper to declare the stack in a prep patch,
and update the bajillion instances of "unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]"
through KVM selftests.

A blurb in the changelog explaining why _this_ test needs to honor alignment
while we've managed to squeak by without problems in other tests would also be
helpful.

> +
>  extern bool host_cpu_is_intel;
>  extern bool host_cpu_is_amd;
>  extern bool host_cpu_is_hygon;

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits
  2026-04-07  1:39   ` Sean Christopherson
@ 2026-04-07  3:23     ` Jim Mattson
  0 siblings, 0 replies; 14+ messages in thread
From: Jim Mattson @ 2026-04-07  3:23 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Yosry Ahmed, Paolo Bonzini, kvm, linux-kernel

On Mon, Apr 6, 2026 at 6:39 PM Sean Christopherson <seanjc@google.com> wrote:

> A blurb in the changelog explaining why _this_ test needs to honor alignment
> while we've managed to squeak by without problems in other tests would also be
> helpful

read_counters() induces the compiler to generate a movdqa instruction
referencing L2's stack, so the stack pointer at entry to
l2_guest_code() must not be 16-byte aligned. (Odd 8-byte alignment?)

Presumably, we've squeaked by without problems in other tests, because
no L2 instructions have required 16-byte (or greater) stack alignment.

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2026-04-07  3:23 UTC | newest]

Thread overview: 14+ messages
2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
2026-03-26 22:48   ` kernel test robot
2026-03-26 23:18     ` Yosry Ahmed
2026-03-27  3:15   ` kernel test robot
2026-03-26  3:11 ` [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
2026-04-07  1:30   ` Sean Christopherson
2026-03-26  3:11 ` [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions Yosry Ahmed
2026-04-07  1:35   ` Sean Christopherson
2026-03-26  3:11 ` [PATCH v4 5/6] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
2026-04-07  1:39   ` Sean Christopherson
2026-04-07  3:23     ` Jim Mattson
