* [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw()
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
@ 2026-01-21 22:53 ` Jim Mattson
2026-01-22 16:04 ` Sean Christopherson
2026-01-21 22:54 ` [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state Jim Mattson
` (4 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:53 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
Extract the computation of eventsel_hw from amd_pmu_set_msr() into a
separate helper function, amd_pmu_set_eventsel_hw().
No functional change intended.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/svm/pmu.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 7aa298eeb072..33c139b23a9e 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -147,6 +147,12 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return 1;
}
+static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
+{
+ pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
+ AMD64_EVENTSEL_GUESTONLY;
+}
+
static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
{
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -166,8 +172,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
data &= ~pmu->reserved_bits;
if (data != pmc->eventsel) {
pmc->eventsel = data;
- pmc->eventsel_hw = (data & ~AMD64_EVENTSEL_HOSTONLY) |
- AMD64_EVENTSEL_GUESTONLY;
+ amd_pmu_set_eventsel_hw(pmc);
kvm_pmu_request_counter_reprogram(pmc);
}
return 0;
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread

* Re: [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw()
2026-01-21 22:53 ` [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw() Jim Mattson
@ 2026-01-22 16:04 ` Sean Christopherson
2026-01-22 21:57 ` Jim Mattson
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 16:04 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> Extract the computation of eventsel_hw from amd_pmu_set_msr() into a
> separate helper function, amd_pmu_set_eventsel_hw().
>
> No functional change intended.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> arch/x86/kvm/svm/pmu.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 7aa298eeb072..33c139b23a9e 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -147,6 +147,12 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> return 1;
> }
>
> +static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
> +{
> + pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> + AMD64_EVENTSEL_GUESTONLY;
Align indentation.
pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
AMD64_EVENTSEL_GUESTONLY;
> +}
> +
> static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> {
> struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> @@ -166,8 +172,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> data &= ~pmu->reserved_bits;
> if (data != pmc->eventsel) {
> pmc->eventsel = data;
> - pmc->eventsel_hw = (data & ~AMD64_EVENTSEL_HOSTONLY) |
> - AMD64_EVENTSEL_GUESTONLY;
> + amd_pmu_set_eventsel_hw(pmc);
> kvm_pmu_request_counter_reprogram(pmc);
> }
> return 0;
> --
> 2.52.0.457.g6b5491de43-goog
>
^ permalink raw reply [flat|nested] 21+ messages in thread

* Re: [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw()
2026-01-22 16:04 ` Sean Christopherson
@ 2026-01-22 21:57 ` Jim Mattson
0 siblings, 0 replies; 21+ messages in thread
From: Jim Mattson @ 2026-01-22 21:57 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026 at 8:04 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > Extract the computation of eventsel_hw from amd_pmu_set_msr() into a
> > separate helper function, amd_pmu_set_eventsel_hw().
> >
> > No functional change intended.
> >
> > Signed-off-by: Jim Mattson <jmattson@google.com>
> > ---
> > arch/x86/kvm/svm/pmu.c | 9 +++++++--
> > 1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> > index 7aa298eeb072..33c139b23a9e 100644
> > --- a/arch/x86/kvm/svm/pmu.c
> > +++ b/arch/x86/kvm/svm/pmu.c
> > @@ -147,6 +147,12 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > return 1;
> > }
> >
> > +static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
> > +{
> > + pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> > + AMD64_EVENTSEL_GUESTONLY;
>
> Align indentation.
Sure.
After wondering about how to configure emacs to do this for many
years, you have spurred me to action. In case anyone is wondering:
(add-hook 'c-mode-common-hook
(lambda ()
(c-set-offset 'statement-cont 'c-lineup-assignments)))
At least, this seems to work so far.
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
2026-01-21 22:53 ` [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw() Jim Mattson
@ 2026-01-21 22:54 ` Jim Mattson
2026-01-22 16:33 ` Sean Christopherson
2026-01-21 22:54 ` [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set Jim Mattson
` (3 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:54 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
Introduce amd_pmu_dormant_hg_event(), which determines whether an AMD PMC
should be dormant (i.e. not count) based on the guest's Host-Only and
Guest-Only event selector bits and the current vCPU state.
Update amd_pmu_set_eventsel_hw() to clear the event selector's enable bit
when the event is dormant.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/include/asm/perf_event.h | 2 ++
arch/x86/kvm/svm/pmu.c | 23 +++++++++++++++++++++++
2 files changed, 25 insertions(+)
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 0d9af4135e0a..7649d79d91a6 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -58,6 +58,8 @@
#define AMD64_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
#define AMD64_EVENTSEL_GUESTONLY (1ULL << 40)
#define AMD64_EVENTSEL_HOSTONLY (1ULL << 41)
+#define AMD64_EVENTSEL_HG_ONLY \
+ (AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
#define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT 37
#define AMD64_EVENTSEL_INT_CORE_SEL_MASK \
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 33c139b23a9e..f619417557f9 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -147,10 +147,33 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return 1;
}
+static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
+{
+ u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
+ struct kvm_vcpu *vcpu = pmc->vcpu;
+
+ if (hg_only == 0)
+ /* Not an HG_ONLY event */
+ return false;
+
+ if (!(vcpu->arch.efer & EFER_SVME))
+ /* HG_ONLY bits are ignored when SVME is clear */
+ return false;
+
+ /* Always active if both HG_ONLY bits are set */
+ if (hg_only == AMD64_EVENTSEL_HG_ONLY)
+ return false;
+
+ return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
+}
+
static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
{
pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
AMD64_EVENTSEL_GUESTONLY;
+
+ if (amd_pmu_dormant_hg_event(pmc))
+ pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
}
static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread

* Re: [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state
2026-01-21 22:54 ` [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state Jim Mattson
@ 2026-01-22 16:33 ` Sean Christopherson
2026-01-22 22:47 ` Jim Mattson
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 16:33 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> Introduce amd_pmu_dormant_hg_event(), which determines whether an AMD PMC
> should be dormant (i.e. not count) based on the guest's Host-Only and
> Guest-Only event selector bits and the current vCPU state.
>
> Update amd_pmu_set_eventsel_hw() to clear the event selector's enable bit
> when the event is dormant.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> arch/x86/include/asm/perf_event.h | 2 ++
> arch/x86/kvm/svm/pmu.c | 23 +++++++++++++++++++++++
> 2 files changed, 25 insertions(+)
>
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 0d9af4135e0a..7649d79d91a6 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -58,6 +58,8 @@
> #define AMD64_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
> #define AMD64_EVENTSEL_GUESTONLY (1ULL << 40)
> #define AMD64_EVENTSEL_HOSTONLY (1ULL << 41)
> +#define AMD64_EVENTSEL_HG_ONLY \
I would strongly prefer to avoid the HG acronym, as it's not immediately obvious
that it's HOST_GUEST, and avoiding long lines even with the full HOST_GUEST is
pretty easy.
The name should also have "MASK" at the end to make it more obvious this is a
multi-flag macro, i.e. not a single-flag value. Otherwise the intent and thus
correctness of code like this isn't obvious:
if (eventsel & AMD64_EVENTSEL_HG_ONLY)
How about AMD64_EVENTSEL_HOST_GUEST_MASK?
> + (AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
>
> #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT 37
> #define AMD64_EVENTSEL_INT_CORE_SEL_MASK \
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 33c139b23a9e..f619417557f9 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -147,10 +147,33 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> return 1;
> }
>
> +static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
I think I would prefer to flip the polarity, even though the only caller would
then need to invert the return value. Partly because I think we can come up with
a more intuitive name, partly because it'll make the last check in particular
more intuitive, i.e. IMO, checking "guest == guest"
return !!(hg_only & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu);
is more obvious than checking "host == guest":
return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
Maybe amd_pmc_is_active() or amd_pmc_counts_in_current_mode()?
> +{
> + u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> + struct kvm_vcpu *vcpu = pmc->vcpu;
> +
> + if (hg_only == 0)
!hg_only
In the spirit of avoiding the "hg" acronym, what if we do something like this?
const u64 HOST_GUEST_MASK = AMD64_EVENTSEL_HOST_GUEST_MASK;
struct kvm_vcpu *vcpu = pmc->vcpu;
u64 eventsel = pmc->eventsel;
/*
* PMCs count in both host and guest if neither {HOST,GUEST}_ONLY flags
* are set, or if both flags are set.
*/
if (!(eventsel & HOST_GUEST_MASK) ||
((eventsel & HOST_GUEST_MASK) == HOST_GUEST_MASK))
return true;
/* {HOST,GUEST}_ONLY bits are ignored when SVME is clear. */
if (!(vcpu->arch.efer & EFER_SVME))
return true;
return !!(eventsel & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu);
> + /* Not an HG_ONLY event */
Please don't put comments inside single-line if-statements. 99% of the time
it's easy to put the comment outside of the if-statement, and doing so encourages
a more verbose comment and avoids a "does this if-statement need curly-braces"
debate.
> + return false;
> +
> + if (!(vcpu->arch.efer & EFER_SVME))
> + /* HG_ONLY bits are ignored when SVME is clear */
> + return false;
> +
> + /* Always active if both HG_ONLY bits are set */
> + if (hg_only == AMD64_EVENTSEL_HG_ONLY)
I vote to check this condition at the same time !hg_only is checked. From a
*very* pedantic perspective, one could argue it's "wrong" to check the bits when
SVME=0, but the purpose of the helper is to detect if the PMC is active or not.
Precisely following the architectural behavior is unnecessary.
> + return false;
> +
> + return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
> +}
> +
> static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
> {
> pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> AMD64_EVENTSEL_GUESTONLY;
> +
> + if (amd_pmu_dormant_hg_event(pmc))
> + pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> }
>
> static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> --
> 2.52.0.457.g6b5491de43-goog
>
^ permalink raw reply [flat|nested] 21+ messages in thread

* Re: [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state
2026-01-22 16:33 ` Sean Christopherson
@ 2026-01-22 22:47 ` Jim Mattson
2026-01-22 23:51 ` Sean Christopherson
0 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-22 22:47 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026 at 8:33 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > Introduce amd_pmu_dormant_hg_event(), which determines whether an AMD PMC
> > should be dormant (i.e. not count) based on the guest's Host-Only and
> > Guest-Only event selector bits and the current vCPU state.
> >
> > Update amd_pmu_set_eventsel_hw() to clear the event selector's enable bit
> > when the event is dormant.
> >
> > Signed-off-by: Jim Mattson <jmattson@google.com>
> > ---
> > arch/x86/include/asm/perf_event.h | 2 ++
> > arch/x86/kvm/svm/pmu.c | 23 +++++++++++++++++++++++
> > 2 files changed, 25 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> > index 0d9af4135e0a..7649d79d91a6 100644
> > --- a/arch/x86/include/asm/perf_event.h
> > +++ b/arch/x86/include/asm/perf_event.h
> > @@ -58,6 +58,8 @@
> > #define AMD64_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
> > #define AMD64_EVENTSEL_GUESTONLY (1ULL << 40)
> > #define AMD64_EVENTSEL_HOSTONLY (1ULL << 41)
> > +#define AMD64_EVENTSEL_HG_ONLY \
>
> I would strongly prefer to avoid the HG acronym, as it's not immediately obvious
> that it's HOST_GUEST, and avoiding long lines even with the full HOST_GUEST is
> pretty easy.
In this instance, I'm happy to make the suggested change, but I think
your overall distaste for HG_ONLY is unwarranted.
These bits are documented in the APM as:
> HG_ONLY (Host/Guest Only)—Bits 41:40, read/write
> The name should also have "MASK" at the end to make it more obvious this is a
> multi-flag macro, i.e. not a single-flag value. Otherwise the intent and thus
> correctness of code like this isn't obvious:
>
> if (eventsel & AMD64_EVENTSEL_HG_ONLY)
>
> How about AMD64_EVENTSEL_HOST_GUEST_MASK?
Sure.
> > + (AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
> >
> > #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT 37
> > #define AMD64_EVENTSEL_INT_CORE_SEL_MASK \
> > diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> > index 33c139b23a9e..f619417557f9 100644
> > --- a/arch/x86/kvm/svm/pmu.c
> > +++ b/arch/x86/kvm/svm/pmu.c
> > @@ -147,10 +147,33 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > return 1;
> > }
> >
> > +static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
>
> I think I would prefer to flip the polarity, even though the only caller would
> then need to invert the return value. Partly because I think we can come up with
> a more intuitive name, partly because it'll make the last check in particular
> more intuitive, i.e. IMO, checking "guest == guest"
>
> return !!(hg_only & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu);
>
> is more obvious than checking "host == guest":
>
> return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
>
> Maybe amd_pmc_is_active() or amd_pmc_counts_in_current_mode()?
I think amd_pmc_is_active() is a much stronger statement, implying
that both enable bits are also set.
Similarly, amd_pmc_counts_in_current_mode() sounds like it looks at
OS/USR bits as well.
I'll see if I can think of a better name that isn't misleading. I
actually went with this polarity because of the naming problem. But, I
agree that the reverse polarity is marginally better.
> > +{
> > + u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> > + struct kvm_vcpu *vcpu = pmc->vcpu;
> > +
> > + if (hg_only == 0)
>
> !hg_only
Now, you're just being petty. But, okay.
> In the spirit of avoiding the "hg" acronym, what if we do something like this?
>
> const u64 HOST_GUEST_MASK = AMD64_EVENTSEL_HOST_GUEST_MASK;
Ugh. No. You can't both prefer the longer name and yet avoid it like
the plague. If you need to introduce a shorter alias, the longer name
is a bad choice.
> struct kvm_vcpu *vcpu = pmc->vcpu;
> u64 eventsel = pmc->eventsel;
>
> /*
> * PMCs count in both host and guest if neither {HOST,GUEST}_ONLY flags
> * are set, or if both flags are set.
> */
> if (!(eventsel & HOST_GUEST_MASK) ||
> ((eventsel & HOST_GUEST_MASK) == HOST_GUEST_MASK))
> return true;
>
> /* {HOST,GUEST}_ONLY bits are ignored when SVME is clear. */
> if (!(vcpu->arch.efer & EFER_SVME))
> return true;
>
> return !!(eventsel & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu);
>
> > + /* Not an HG_ONLY event */
>
> Please don't put comments inside single-line if-statements. 99% of the time
> it's easy to put the comment outside of the if-statement, and doing so encourages
> a more verbose comment and avoids a "does this if-statement need curly-braces"
> debate.
There is no debate. A comment is not a statement. But, okay.
> > + return false;
> > +
> > + if (!(vcpu->arch.efer & EFER_SVME))
> > + /* HG_ONLY bits are ignored when SVME is clear */
> > + return false;
> > +
> > + /* Always active if both HG_ONLY bits are set */
> > + if (hg_only == AMD64_EVENTSEL_HG_ONLY)
>
> I vote to check this condition at the same time !hg_only is checked. From a
> *very* pedantic perspective, one could argue it's "wrong" to check the bits when
> SVME=0, but the purpose of the helper is to detect if the PMC is active or not.
> Precisely following the architectural behavior is unnecessary.
Even I am not that pedantic.
> > + return false;
> > +
> > + return !!(hg_only & AMD64_EVENTSEL_HOSTONLY) == is_guest_mode(vcpu);
> > +}
> > +
> > static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc)
> > {
> > pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> > AMD64_EVENTSEL_GUESTONLY;
> > +
> > + if (amd_pmu_dormant_hg_event(pmc))
> > + pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> > }
> >
> > static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > --
> > 2.52.0.457.g6b5491de43-goog
> >
^ permalink raw reply [flat|nested] 21+ messages in thread

* Re: [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state
2026-01-22 22:47 ` Jim Mattson
@ 2026-01-22 23:51 ` Sean Christopherson
0 siblings, 0 replies; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 23:51 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026, Jim Mattson wrote:
> On Thu, Jan 22, 2026 at 8:33 AM Sean Christopherson <seanjc@google.com> wrote:
> > On Wed, Jan 21, 2026, Jim Mattson wrote:
> > > Introduce amd_pmu_dormant_hg_event(), which determines whether an AMD PMC
> > > should be dormant (i.e. not count) based on the guest's Host-Only and
> > > Guest-Only event selector bits and the current vCPU state.
> > >
> > > Update amd_pmu_set_eventsel_hw() to clear the event selector's enable bit
> > > when the event is dormant.
> > >
> > > Signed-off-by: Jim Mattson <jmattson@google.com>
> > > ---
> > > arch/x86/include/asm/perf_event.h | 2 ++
> > > arch/x86/kvm/svm/pmu.c | 23 +++++++++++++++++++++++
> > > 2 files changed, 25 insertions(+)
> > >
> > > diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> > > index 0d9af4135e0a..7649d79d91a6 100644
> > > --- a/arch/x86/include/asm/perf_event.h
> > > +++ b/arch/x86/include/asm/perf_event.h
> > > @@ -58,6 +58,8 @@
> > > #define AMD64_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
> > > #define AMD64_EVENTSEL_GUESTONLY (1ULL << 40)
> > > #define AMD64_EVENTSEL_HOSTONLY (1ULL << 41)
> > > +#define AMD64_EVENTSEL_HG_ONLY \
> >
> > I would strongly prefer to avoid the HG acronym, as it's not immediately obvious
> > that it's HOST_GUEST, and avoiding long lines even with the full HOST_GUEST is
> > pretty easy.
>
> In this instance, I'm happy to make the suggested change, but I think
> your overall distaste for HG_ONLY is unwarranted.
> These bits are documented in the APM as:
>
> > HG_ONLY (Host/Guest Only)—Bits 41:40, read/write
Ugh, stupid APM. That makes me hate it a little less, but still, ugh.
> > Maybe amd_pmc_is_active() or amd_pmc_counts_in_current_mode()?
>
> I think amd_pmc_is_active() is a much stronger statement, implying
> that both enable bits are also set.
Ooh, good point.
> Similarly, amd_pmc_counts_in_current_mode() sounds like it looks at
> OS/USR bits as well.
Yeah, I didn't like that collision either. :-/
> I'll see if I can think of a better name that isn't misleading. I
> actually went with this polarity because of the naming problem. But, I
> agree that the reverse polarity is marginally better.
>
> > > +{
> > > + u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> > > + struct kvm_vcpu *vcpu = pmc->vcpu;
> > > +
> > > + if (hg_only == 0)
> >
> > !hg_only
>
> Now, you're just being petty. But, okay.
Eh, that's a very standard kernel style thing.
> > In the spirit of avoiding the "hg" acronym, what if we do something like this?
> >
> > const u64 HOST_GUEST_MASK = AMD64_EVENTSEL_HOST_GUEST_MASK;
>
> Ugh. No. You can't both prefer the longer name and yet avoid it like
> the plague. If you need to introduce a shorter alias, the longer name
> is a bad choice.
IMO, there's a big difference between a global macro that may be read in a variety
of contexts, and a variable that's scoped to a function and consumed within a few
lines of its definition.
That said, I'm definitely open to other ways to write this code that don't require
a local const, it's HG_ONLY that I really dislike.
> > struct kvm_vcpu *vcpu = pmc->vcpu;
> > u64 eventsel = pmc->eventsel;
> >
> > /*
> > * PMCs count in both host and guest if neither {HOST,GUEST}_ONLY flags
> > * are set, or if both flags are set.
> > */
> > if (!(eventsel & HOST_GUEST_MASK) ||
> > ((eventsel & HOST_GUEST_MASK) == HOST_GUEST_MASK))
> > return true;
> >
> > /* {HOST,GUEST}_ONLY bits are ignored when SVME is clear. */
> > if (!(vcpu->arch.efer & EFER_SVME))
> > return true;
> >
> > return !!(eventsel & AMD64_EVENTSEL_GUESTONLY) == is_guest_mode(vcpu);
> >
> > > + /* Not an HG_ONLY event */
> >
> > Please don't put comments inside single-line if-statements. 99% of the time
> > it's easy to put the comment outside of the if-statement, and doing so encourages
> > a more verbose comment and avoids a "does this if-statement need curly-braces"
> > debate.
>
> There is no debate. A comment is not a statement. But, okay.
LOL, dollars to donuts says I can find someone to debate you on the "correct"
style. :-D
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
2026-01-21 22:53 ` [PATCH 1/6] KVM: x86/pmu: Introduce amd_pmu_set_eventsel_hw() Jim Mattson
2026-01-21 22:54 ` [PATCH 2/6] KVM: x86/pmu: Disable HG_ONLY events as appropriate for current vCPU state Jim Mattson
@ 2026-01-21 22:54 ` Jim Mattson
2026-01-22 16:49 ` Sean Christopherson
2026-01-21 22:54 ` [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions Jim Mattson
` (2 subsequent siblings)
5 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:54 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
Add pmc_hostonly and pmc_guestonly bitmaps to struct kvm_pmu to track which
guest-enabled performance counters have just one of the Host-Only and
Guest-Only event selector bits set. PMCs that are disabled, have neither
HG_ONLY bit set, or have both HG_ONLY bits set are not tracked, because
they don't require special handling at vCPU state transitions.
Update the bitmaps when the guest writes to an event selector MSR.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/include/asm/kvm_host.h | 4 ++++
arch/x86/kvm/pmu.c | 2 ++
arch/x86/kvm/svm/pmu.c | 28 ++++++++++++++++++++++++++++
3 files changed, 34 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ecd4019b84b7..92050f76f84b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -593,6 +593,10 @@ struct kvm_pmu {
DECLARE_BITMAP(pmc_counting_instructions, X86_PMC_IDX_MAX);
DECLARE_BITMAP(pmc_counting_branches, X86_PMC_IDX_MAX);
+ /* AMD only: track PMCs with Host-Only or Guest-Only bits set */
+ DECLARE_BITMAP(pmc_hostonly, X86_PMC_IDX_MAX);
+ DECLARE_BITMAP(pmc_guestonly, X86_PMC_IDX_MAX);
+
u64 ds_area;
u64 pebs_enable;
u64 pebs_enable_rsvd;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index bd6b785cf261..833ee2ecd43f 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -921,6 +921,8 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
pmu->need_cleanup = false;
bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
+ bitmap_zero(pmu->pmc_hostonly, X86_PMC_IDX_MAX);
+ bitmap_zero(pmu->pmc_guestonly, X86_PMC_IDX_MAX);
kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index f619417557f9..c06013e2b4b1 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -147,6 +147,33 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
return 1;
}
+static void amd_pmu_update_hg_bitmaps(struct kvm_pmc *pmc)
+{
+ struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+ u64 eventsel = pmc->eventsel;
+
+ if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
+ bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
+ bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
+ return;
+ }
+
+ switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
+ case AMD64_EVENTSEL_HOSTONLY:
+ bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
+ bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
+ break;
+ case AMD64_EVENTSEL_GUESTONLY:
+ bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
+ bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
+ break;
+ default:
+ bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
+ bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
+ break;
+ }
+}
+
static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
{
u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
@@ -196,6 +223,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (data != pmc->eventsel) {
pmc->eventsel = data;
amd_pmu_set_eventsel_hw(pmc);
+ amd_pmu_update_hg_bitmaps(pmc);
kvm_pmu_request_counter_reprogram(pmc);
}
return 0;
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread

* Re: [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set
2026-01-21 22:54 ` [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set Jim Mattson
@ 2026-01-22 16:49 ` Sean Christopherson
2026-01-24 1:09 ` Jim Mattson
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 16:49 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> Add pmc_hostonly and pmc_guestonly bitmaps to struct kvm_pmu to track which
> guest-enabled performance counters have just one of the Host-Only and
> Guest-Only event selector bits set. PMCs that are disabled, have neither
> HG_ONLY bit set, or have both HG_ONLY bits set are not tracked, because
> they don't require special handling at vCPU state transitions.
Why bother with bitmaps? The bitmaps are basically just eliding the checks in
amd_pmc_is_active() (my name), and those checks are super fast compared to
emulating transitions between L1 and L2.
Can't we simply do:
void amd_pmu_refresh_host_guest_eventsels(struct kvm_vcpu *vcpu)
{
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
struct kvm_pmc *pmc;
int i;
kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx)
amd_pmu_set_eventsel_hw(pmc);
}
And then call that helper on all transitions?
> +static void amd_pmu_update_hg_bitmaps(struct kvm_pmc *pmc)
> +{
> + struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> + u64 eventsel = pmc->eventsel;
> +
> + if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
> + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> + return;
> + }
> +
> + switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
> + case AMD64_EVENTSEL_HOSTONLY:
> + bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
> + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> + break;
> + case AMD64_EVENTSEL_GUESTONLY:
> + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> + bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
> + break;
> + default:
> + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> + break;
> + }
> +}
> +
> static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
> {
> u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> @@ -196,6 +223,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> if (data != pmc->eventsel) {
> pmc->eventsel = data;
> amd_pmu_set_eventsel_hw(pmc);
> + amd_pmu_update_hg_bitmaps(pmc);
If we're going to bother adding amd_pmu_set_eventsel_hw(), and not reuse it as
suggested above, then amd_pmu_set_eventsel_hw() should be renamed to just
amd_pmu_set_eventsel() and it should be the one configuring the bitmaps. Because
KVM should never write to an eventsel without updating the bitmaps. That would
also better capture the relationship between the bitmaps and eventsel_hw, e.g.
pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
AMD64_EVENTSEL_GUESTONLY;
if (!amd_pmc_is_active(pmc))
pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
/*
* Update the host/guest bitmaps used to reconfigure eventsel_hw on
* transitions to/from an L2 guest, so that KVM can quickly refresh
* the event selectors programmed into hardware, e.g. without having
* to re-evaluate every eventsel.
*/
if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
return;
}
switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
case AMD64_EVENTSEL_HOSTONLY:
bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
break;
case AMD64_EVENTSEL_GUESTONLY:
bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
break;
default:
bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
break;
}
But I still don't see any point in the bitmaps.
> kvm_pmu_request_counter_reprogram(pmc);
> }
> return 0;
> --
> 2.52.0.457.g6b5491de43-goog
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set
2026-01-22 16:49 ` Sean Christopherson
@ 2026-01-24 1:09 ` Jim Mattson
0 siblings, 0 replies; 21+ messages in thread
From: Jim Mattson @ 2026-01-24 1:09 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026 at 8:49 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > Add pmc_hostonly and pmc_guestonly bitmaps to struct kvm_pmu to track which
> > guest-enabled performance counters have just one of the Host-Only and
> > Guest-Only event selector bits set. PMCs that are disabled, have neither
> > HG_ONLY bit set, or have both HG_ONLY bits set are not tracked, because
> > they don't require special handling at vCPU state transitions.
>
> Why bother with bitmaps? The bitmaps are basically just eliding the checks in
> amd_pmc_is_active() (my name), and those checks are super fast compared to
> emulating transitions between L1 and L2.
>
> Can't we simply do:
>
> void amd_pmu_refresh_host_guest_eventsels(struct kvm_vcpu *vcpu)
> {
> struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> struct kvm_pmc *pmc;
> int i;
>
> kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx)
> amd_pmu_set_eventsel_hw(pmc);
>
> }
>
> And then call that helper on all transitions?
>
> > +static void amd_pmu_update_hg_bitmaps(struct kvm_pmc *pmc)
> > +{
> > + struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> > + u64 eventsel = pmc->eventsel;
> > +
> > + if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
> > + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > + return;
> > + }
> > +
> > + switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
> > + case AMD64_EVENTSEL_HOSTONLY:
> > + bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
> > + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > + break;
> > + case AMD64_EVENTSEL_GUESTONLY:
> > + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > + bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
> > + break;
> > + default:
> > + bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> > + bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> > + break;
> > + }
> > +}
> > +
> > static bool amd_pmu_dormant_hg_event(struct kvm_pmc *pmc)
> > {
> > u64 hg_only = pmc->eventsel & AMD64_EVENTSEL_HG_ONLY;
> > @@ -196,6 +223,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > if (data != pmc->eventsel) {
> > pmc->eventsel = data;
> > amd_pmu_set_eventsel_hw(pmc);
> > + amd_pmu_update_hg_bitmaps(pmc);
>
> If we're going to bother adding amd_pmu_set_eventsel_hw(), and not reuse it as
> suggested above, then amd_pmu_set_eventsel_hw() should be renamed to just
> amd_pmu_set_eventsel() and it should be the one configuring the bitmaps. Because
> KVM should never write to an eventsel without updating the bitmaps. That would
> also better capture the relationship between the bitmaps and eventsel_hw, e.g.
>
> pmc->eventsel_hw = (pmc->eventsel & ~AMD64_EVENTSEL_HOSTONLY) |
> AMD64_EVENTSEL_GUESTONLY;
>
> if (!amd_pmc_is_active(pmc))
> pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
>
> /*
> * Update the host/guest bitmaps used to reconfigure eventsel_hw on
> * transitions to/from an L2 guest, so that KVM can quickly refresh
> * the event selectors programmed into hardware, e.g. without having
> * to re-evaluate every eventsel.
> */
> if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)) {
> bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> return;
> }
>
> switch (eventsel & AMD64_EVENTSEL_HG_ONLY) {
> case AMD64_EVENTSEL_HOSTONLY:
> bitmap_set(pmu->pmc_hostonly, pmc->idx, 1);
> bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> break;
> case AMD64_EVENTSEL_GUESTONLY:
> bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> bitmap_set(pmu->pmc_guestonly, pmc->idx, 1);
> break;
> default:
> bitmap_clear(pmu->pmc_hostonly, pmc->idx, 1);
> bitmap_clear(pmu->pmc_guestonly, pmc->idx, 1);
> break;
> }
>
> But I still don't see any point in the bitmaps.
No problem. I will drop them in the next version.
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
` (2 preceding siblings ...)
2026-01-21 22:54 ` [PATCH 3/6] KVM: x86/pmu: Track enabled AMD PMCs with Host-Only xor Guest-Only bits set Jim Mattson
@ 2026-01-21 22:54 ` Jim Mattson
2026-01-22 16:55 ` Sean Christopherson
2026-01-21 22:54 ` [PATCH 5/6] KVM: x86/pmu: Allow HG_ONLY bits with nSVM and mediated PMU Jim Mattson
2026-01-21 22:54 ` [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits Jim Mattson
5 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:54 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
Add a new function, kvm_pmu_set_pmc_eventsel_hw_enable(), to set or clear
the enable bit in eventsel_hw for PMCs identified by a bitmap.
Use this function to update Host-Only and Guest-Only counters at the
following transitions:
- svm_set_efer(): When SVME changes, enable Guest-Only counters if SVME
is being cleared (HG_ONLY bits become ignored), or disable them if SVME
is being set (L1 is active).
- nested_svm_vmrun(): Disable Host-Only counters and enable Guest-Only
counters.
- nested_svm_vmexit(): Disable Guest-Only counters and enable Host-Only
counters.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/include/asm/kvm-x86-pmu-ops.h | 1 +
arch/x86/kvm/pmu.c | 7 +++++++
arch/x86/kvm/pmu.h | 4 ++++
arch/x86/kvm/svm/nested.c | 10 ++++++++++
arch/x86/kvm/svm/pmu.c | 17 +++++++++++++++++
arch/x86/kvm/svm/svm.c | 3 +++
6 files changed, 42 insertions(+)
diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index f0aa6996811f..7b32796213a0 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
KVM_X86_PMU_OP(mediated_load)
KVM_X86_PMU_OP(mediated_put)
+KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
#undef KVM_X86_PMU_OP
#undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 833ee2ecd43f..1541c201285b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
+void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+ unsigned long *bitmap, bool enable)
+{
+ kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
+}
+EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
+
static bool is_masked_filter_valid(const struct kvm_x86_pmu_event_filter *filter)
{
u64 mask = kvm_pmu_ops.EVENTSEL_EVENT |
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0925246731cb..b8be8b6e40d8 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -41,6 +41,8 @@ struct kvm_pmu_ops {
void (*mediated_load)(struct kvm_vcpu *vcpu);
void (*mediated_put)(struct kvm_vcpu *vcpu);
void (*write_global_ctrl)(u64 global_ctrl);
+ void (*set_pmc_eventsel_hw_enable)(struct kvm_vcpu *vcpu,
+ unsigned long *bitmap, bool enable);
const u64 EVENTSEL_EVENT;
const int MAX_NR_GP_COUNTERS;
@@ -258,6 +260,8 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
void kvm_pmu_instruction_retired(struct kvm_vcpu *vcpu);
void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu);
+void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+ unsigned long *bitmap, bool enable);
void kvm_mediated_pmu_load(struct kvm_vcpu *vcpu);
void kvm_mediated_pmu_put(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index de90b104a0dd..edaa76e38417 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -28,6 +28,7 @@
#include "smm.h"
#include "cpuid.h"
#include "lapic.h"
+#include "pmu.h"
#include "svm.h"
#include "hyperv.h"
@@ -1054,6 +1055,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
goto out_exit_err;
+ kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+ vcpu_to_pmu(vcpu)->pmc_hostonly, false);
+ kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+ vcpu_to_pmu(vcpu)->pmc_guestonly, true);
+
if (nested_svm_merge_msrpm(vcpu))
goto out;
@@ -1137,6 +1143,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
/* Exit Guest-Mode */
leave_guest_mode(vcpu);
+ kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+ vcpu_to_pmu(vcpu)->pmc_hostonly, true);
+ kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+ vcpu_to_pmu(vcpu)->pmc_guestonly, false);
svm->nested.vmcb12_gpa = 0;
WARN_ON_ONCE(svm->nested.nested_run_pending);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index c06013e2b4b1..85155d65fa38 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -316,6 +316,22 @@ static void amd_mediated_pmu_put(struct kvm_vcpu *vcpu)
wrmsrq(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, pmu->global_status);
}
+static void amd_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
+ unsigned long *bitmap,
+ bool enable)
+{
+ struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+ struct kvm_pmc *pmc;
+ int i;
+
+ kvm_for_each_pmc(pmu, pmc, i, bitmap) {
+ if (enable)
+ pmc->eventsel_hw |= ARCH_PERFMON_EVENTSEL_ENABLE;
+ else
+ pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
+ }
+}
+
struct kvm_pmu_ops amd_pmu_ops __initdata = {
.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
.msr_idx_to_pmc = amd_msr_idx_to_pmc,
@@ -329,6 +345,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
.is_mediated_pmu_supported = amd_pmu_is_mediated_pmu_supported,
.mediated_load = amd_mediated_pmu_load,
.mediated_put = amd_mediated_pmu_put,
+ .set_pmc_eventsel_hw_enable = amd_pmu_set_pmc_eventsel_hw_enable,
.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
.MAX_NR_GP_COUNTERS = KVM_MAX_NR_AMD_GP_COUNTERS,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7803d2781144..953089b38921 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -244,6 +244,9 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
set_exception_intercept(svm, GP_VECTOR);
}
+
+ kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
+ vcpu_to_pmu(vcpu)->pmc_guestonly, !(efer & EFER_SVME));
}
svm->vmcb->save.efer = efer | EFER_SVME;
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions
2026-01-21 22:54 ` [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions Jim Mattson
@ 2026-01-22 16:55 ` Sean Christopherson
2026-01-28 23:43 ` Jim Mattson
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 16:55 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> index f0aa6996811f..7b32796213a0 100644
> --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> @@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
> KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
> KVM_X86_PMU_OP(mediated_load)
> KVM_X86_PMU_OP(mediated_put)
> +KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
>
> #undef KVM_X86_PMU_OP
> #undef KVM_X86_PMU_OP_OPTIONAL
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 833ee2ecd43f..1541c201285b 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
>
> +void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
> + unsigned long *bitmap, bool enable)
> +{
> + kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
> +}
> +EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
Why bounce through a PMU op just to go from nested.c to pmu.c? AFAICT, common
x86 code never calls kvm_pmu_set_pmc_eventsel_hw_enable(), just wire up calls
directly to amd_pmu_refresh_host_guest_eventsels().
> @@ -1054,6 +1055,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
> goto out_exit_err;
>
> + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> + vcpu_to_pmu(vcpu)->pmc_hostonly, false);
> + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> + vcpu_to_pmu(vcpu)->pmc_guestonly, true);
> +
> if (nested_svm_merge_msrpm(vcpu))
> goto out;
>
> @@ -1137,6 +1143,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
>
> /* Exit Guest-Mode */
> leave_guest_mode(vcpu);
> + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> + vcpu_to_pmu(vcpu)->pmc_hostonly, true);
> + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> + vcpu_to_pmu(vcpu)->pmc_guestonly, false);
> svm->nested.vmcb12_gpa = 0;
> WARN_ON_ONCE(svm->nested.nested_run_pending);
I don't think these are the right places to hook. Shouldn't KVM update the
event selectors on _all_ transitions, whether they're architectural or not? E.g.
by wrapping {enter,leave}_guest_mode()?
static void svm_enter_guest_mode(struct kvm_vcpu *vcpu)
{
enter_guest_mode(vcpu);
amd_pmu_refresh_host_guest_eventsels(vcpu);
}
static void svm_leave_guest_mode(struct kvm_vcpu *vcpu)
{
leave_guest_mode(vcpu);
amd_pmu_refresh_host_guest_eventsels(vcpu);
}
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions
2026-01-22 16:55 ` Sean Christopherson
@ 2026-01-28 23:43 ` Jim Mattson
2026-01-29 22:34 ` Sean Christopherson
0 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-28 23:43 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026 at 8:55 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > index f0aa6996811f..7b32796213a0 100644
> > --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > @@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
> > KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
> > KVM_X86_PMU_OP(mediated_load)
> > KVM_X86_PMU_OP(mediated_put)
> > +KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
> >
> > #undef KVM_X86_PMU_OP
> > #undef KVM_X86_PMU_OP_OPTIONAL
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 833ee2ecd43f..1541c201285b 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
> > }
> > EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
> >
> > +void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
> > + unsigned long *bitmap, bool enable)
> > +{
> > + kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
> > +}
> > +EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
>
> Why bounce through a PMU op just to go from nested.c to pmu.c? AFAICT, common
> x86 code never calls kvm_pmu_set_pmc_eventsel_hw_enable(), just wire up calls
> directly to amd_pmu_refresh_host_guest_eventsels().
It seemed that pmu.c deliberately didn't export anything. All accesses
were via virtual function table. But maybe that was just happenstance.
Should I create a separate pmu.h, or just throw the prototype into
svm.h?
> > @@ -1054,6 +1055,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> > if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
> > goto out_exit_err;
> >
> > + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > + vcpu_to_pmu(vcpu)->pmc_hostonly, false);
> > + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > + vcpu_to_pmu(vcpu)->pmc_guestonly, true);
> > +
> > if (nested_svm_merge_msrpm(vcpu))
> > goto out;
> >
> > @@ -1137,6 +1143,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
> >
> > /* Exit Guest-Mode */
> > leave_guest_mode(vcpu);
> > + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > + vcpu_to_pmu(vcpu)->pmc_hostonly, true);
> > + kvm_pmu_set_pmc_eventsel_hw_enable(vcpu,
> > + vcpu_to_pmu(vcpu)->pmc_guestonly, false);
> > svm->nested.vmcb12_gpa = 0;
> > WARN_ON_ONCE(svm->nested.nested_run_pending);
>
> I don't think these are the right places to hook. Shouldn't KVM update the
> event selectors on _all_ transitions, whether they're architectural or not? E.g.
> by wrapping {enter,leave}_guest_mode()?
You are so right! I will fix this in the next version.
> static void svm_enter_guest_mode(struct kvm_vcpu *vcpu)
> {
> enter_guest_mode(vcpu);
> amd_pmu_refresh_host_guest_eventsels(vcpu);
> }
>
> static void svm_leave_guest_mode(struct kvm_vcpu *vcpu)
> {
> leave_guest_mode(vcpu);
> amd_pmu_refresh_host_guest_eventsels(vcpu);
> }
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions
2026-01-28 23:43 ` Jim Mattson
@ 2026-01-29 22:34 ` Sean Christopherson
0 siblings, 0 replies; 21+ messages in thread
From: Sean Christopherson @ 2026-01-29 22:34 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 28, 2026, Jim Mattson wrote:
> On Thu, Jan 22, 2026 at 8:55 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Wed, Jan 21, 2026, Jim Mattson wrote:
> > > diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > index f0aa6996811f..7b32796213a0 100644
> > > --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
> > > @@ -26,6 +26,7 @@ KVM_X86_PMU_OP_OPTIONAL(cleanup)
> > > KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
> > > KVM_X86_PMU_OP(mediated_load)
> > > KVM_X86_PMU_OP(mediated_put)
> > > +KVM_X86_PMU_OP_OPTIONAL(set_pmc_eventsel_hw_enable)
> > >
> > > #undef KVM_X86_PMU_OP
> > > #undef KVM_X86_PMU_OP_OPTIONAL
> > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > index 833ee2ecd43f..1541c201285b 100644
> > > --- a/arch/x86/kvm/pmu.c
> > > +++ b/arch/x86/kvm/pmu.c
> > > @@ -1142,6 +1142,13 @@ void kvm_pmu_branch_retired(struct kvm_vcpu *vcpu)
> > > }
> > > EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_branch_retired);
> > >
> > > +void kvm_pmu_set_pmc_eventsel_hw_enable(struct kvm_vcpu *vcpu,
> > > + unsigned long *bitmap, bool enable)
> > > +{
> > > + kvm_pmu_call(set_pmc_eventsel_hw_enable)(vcpu, bitmap, enable);
> > > +}
> > > +EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_pmu_set_pmc_eventsel_hw_enable);
> >
> > Why bounce through a PMU op just to go from nested.c to pmu.c? AFAICT, common
> > x86 code never calls kvm_pmu_set_pmc_eventsel_hw_enable(), just wire up calls
> > directly to amd_pmu_refresh_host_guest_eventsels().
>
> It seemed that pmu.c deliberately didn't export anything. All accesses
> were via virtual function table. But maybe that was just happenstance.
Probably just happenstance?
> Should I create a separate pmu.h, or just throw the prototype into
> svm.h?
I say just throw it in svm.h. We've had pmu_intel.h for a long time, and there's
hardly anything in there. And somewhat surprisingly, only two things in vmx.h
that obviously could go in pmu_intel.h.
void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 5/6] KVM: x86/pmu: Allow HG_ONLY bits with nSVM and mediated PMU
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
` (3 preceding siblings ...)
2026-01-21 22:54 ` [PATCH 4/6] KVM: x86/pmu: [De]activate HG_ONLY PMCs at SVME changes and nested transitions Jim Mattson
@ 2026-01-21 22:54 ` Jim Mattson
2026-01-22 16:56 ` Sean Christopherson
2026-01-21 22:54 ` [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits Jim Mattson
5 siblings, 1 reply; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:54 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
If the vCPU advertises SVM and uses the mediated PMU, allow the guest to
set the Host-Only and Guest-Only bits in the event selector MSRs.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
arch/x86/kvm/svm/pmu.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 85155d65fa38..a1eeb7b38219 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -262,8 +262,13 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
}
- pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
pmu->reserved_bits = 0xfffffff000280000ull;
+ if (guest_cpu_cap_has(vcpu, X86_FEATURE_SVM) &&
+ kvm_vcpu_has_mediated_pmu(vcpu))
+ /* Allow Host-Only and Guest-Only bits */
+ pmu->reserved_bits &= ~AMD64_EVENTSEL_HG_ONLY;
+
+ pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
/* not applicable to AMD; but clean them to prevent any fall out */
pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH 5/6] KVM: x86/pmu: Allow HG_ONLY bits with nSVM and mediated PMU
2026-01-21 22:54 ` [PATCH 5/6] KVM: x86/pmu: Allow HG_ONLY bits with nSVM and mediated PMU Jim Mattson
@ 2026-01-22 16:56 ` Sean Christopherson
0 siblings, 0 replies; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 16:56 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> If the vCPU advertises SVM and uses the mediated PMU, allow the guest to
> set the Host-Only and Guest-Only bits in the event selector MSRs.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> arch/x86/kvm/svm/pmu.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 85155d65fa38..a1eeb7b38219 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -262,8 +262,13 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
> pmu->global_status_rsvd = pmu->global_ctrl_rsvd;
> }
>
> - pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
> pmu->reserved_bits = 0xfffffff000280000ull;
> + if (guest_cpu_cap_has(vcpu, X86_FEATURE_SVM) &&
> + kvm_vcpu_has_mediated_pmu(vcpu))
> + /* Allow Host-Only and Guest-Only bits */
Meh, no comment needed if the macro is more descriptive.
> + pmu->reserved_bits &= ~AMD64_EVENTSEL_HG_ONLY;
> +
> + pmu->counter_bitmask[KVM_PMC_GP] = BIT_ULL(48) - 1;
Spurious code movement?
> pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
> /* not applicable to AMD; but clean them to prevent any fall out */
> pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
> --
> 2.52.0.457.g6b5491de43-goog
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits
2026-01-21 22:53 [PATCH 0/6] KVM: x86/pmu: Add support for AMD HG_ONLY bits Jim Mattson
` (4 preceding siblings ...)
2026-01-21 22:54 ` [PATCH 5/6] KVM: x86/pmu: Allow HG_ONLY bits with nSVM and mediated PMU Jim Mattson
@ 2026-01-21 22:54 ` Jim Mattson
2026-01-22 17:12 ` Sean Christopherson
2026-01-22 18:56 ` kernel test robot
5 siblings, 2 replies; 21+ messages in thread
From: Jim Mattson @ 2026-01-21 22:54 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
Cc: Jim Mattson
Add a selftest to verify KVM correctly virtualizes the AMD PMU Host-Only
(bit 41) and Guest-Only (bit 40) event selector bits across all relevant
SVM state transitions.
For both Guest-Only and Host-Only counters, verify that:
1. SVME=0: counter counts (HG_ONLY bits ignored)
2. Set SVME=1: counter behavior changes based on HG_ONLY bit
3. VMRUN to L2: counter behavior switches (guest vs host mode)
4. VMEXIT to L1: counter behavior switches back
5. Clear SVME=0: counter counts (HG_ONLY bits ignored again)
Also confirm that setting both bits is the same as setting neither bit.
Signed-off-by: Jim Mattson <jmattson@google.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/x86/svm_pmu_hg_test.c | 297 ++++++++++++++++++
2 files changed, 298 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index e88699e227dd..06ba85d97618 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -112,6 +112,7 @@ TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
+TEST_GEN_PROGS_x86 += x86/svm_pmu_hg_test
TEST_GEN_PROGS_x86 += x86/tsc_scaling_sync
TEST_GEN_PROGS_x86 += x86/sync_regs_test
TEST_GEN_PROGS_x86 += x86/ucna_injection_test
diff --git a/tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c b/tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c
new file mode 100644
index 000000000000..e811b4c1a818
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM nested SVM PMU Host-Only/Guest-Only test
+ *
+ * Copyright (C) 2026, Google LLC.
+ *
+ * Test that KVM correctly virtualizes the AMD PMU Host-Only (bit 41) and
+ * Guest-Only (bit 40) event selector bits across all SVM state transitions.
+ *
+ * For Guest-Only counters:
+ * 1. SVME=0: counter counts (HG_ONLY bits ignored)
+ * 2. Set SVME=1: counter stops (in host mode)
+ * 3. VMRUN to L2: counter counts (in guest mode)
+ * 4. VMEXIT to L1: counter stops (back to host mode)
+ * 5. Clear SVME=0: counter counts (HG_ONLY bits ignored)
+ *
+ * For Host-Only counters:
+ * 1. SVME=0: counter counts (HG_ONLY bits ignored)
+ * 2. Set SVME=1: counter counts (in host mode)
+ * 3. VMRUN to L2: counter stops (in guest mode)
+ * 4. VMEXIT to L1: counter counts (back to host mode)
+ * 5. Clear SVME=0: counter counts (HG_ONLY bits ignored)
+ */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "pmu.h"
+
+#define L2_GUEST_STACK_SIZE 256
+
+#define MSR_F15H_PERF_CTL0 0xc0010200
+#define MSR_F15H_PERF_CTR0 0xc0010201
+
+#define AMD64_EVENTSEL_GUESTONLY BIT_ULL(40)
+#define AMD64_EVENTSEL_HOSTONLY BIT_ULL(41)
+
+#define EVENTSEL_RETIRED_INSNS (ARCH_PERFMON_EVENTSEL_OS | \
+ ARCH_PERFMON_EVENTSEL_USR | \
+ ARCH_PERFMON_EVENTSEL_ENABLE | \
+ AMD_ZEN_INSTRUCTIONS_RETIRED)
+
+#define LOOP_INSNS 1000
+
+static __always_inline void run_instruction_loop(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < LOOP_INSNS; i++)
+ __asm__ __volatile__("nop");
+}
+
+static __always_inline uint64_t run_and_measure(void)
+{
+ uint64_t count_before, count_after;
+
+ count_before = rdmsr(MSR_F15H_PERF_CTR0);
+ run_instruction_loop();
+ count_after = rdmsr(MSR_F15H_PERF_CTR0);
+
+ return count_after - count_before;
+}
+
+struct hg_test_data {
+ uint64_t l2_delta;
+ bool l2_done;
+};
+
+static struct hg_test_data *hg_data;
+
+static void l2_guest_code(void)
+{
+ hg_data->l2_delta = run_and_measure();
+ hg_data->l2_done = true;
+ vmmcall();
+}
+
+/*
+ * Test Guest-Only counter across all relevant state transitions.
+ */
+static void l1_guest_code_guestonly(struct svm_test_data *svm,
+ struct hg_test_data *data)
+{
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ struct vmcb *vmcb = svm->vmcb;
+ uint64_t eventsel, delta;
+
+ hg_data = data;
+
+ eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_GUESTONLY;
+ wrmsr(MSR_F15H_PERF_CTL0, eventsel);
+ wrmsr(MSR_F15H_PERF_CTR0, 0);
+
+ /* Step 1: SVME=0; HG_ONLY ignored */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 2: Set SVME=1; Guest-Only counter stops */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_EQ(delta, 0);
+
+ /* Step 3: VMRUN to L2; Guest-Only counter counts */
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+ vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+ run_guest(vmcb, svm->vmcb_gpa);
+
+ GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+ GUEST_ASSERT(data->l2_done);
+ GUEST_ASSERT_NE(data->l2_delta, 0);
+
+ /* Step 4: After VMEXIT to L1; Guest-Only counter stops */
+ delta = run_and_measure();
+ GUEST_ASSERT_EQ(delta, 0);
+
+ /* Step 5: Clear SVME; HG_ONLY ignored */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ GUEST_DONE();
+}
+
+/*
+ * Test Host-Only counter across all relevant state transitions.
+ */
+static void l1_guest_code_hostonly(struct svm_test_data *svm,
+ struct hg_test_data *data)
+{
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ struct vmcb *vmcb = svm->vmcb;
+ uint64_t eventsel, delta;
+
+ hg_data = data;
+
+ eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_HOSTONLY;
+ wrmsr(MSR_F15H_PERF_CTL0, eventsel);
+ wrmsr(MSR_F15H_PERF_CTR0, 0);
+
+
+ /* Step 1: SVME=0; HG_ONLY ignored */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 2: Set SVME=1; Host-Only counter still counts */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 3: VMRUN to L2; Host-Only counter stops */
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+ vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+ run_guest(vmcb, svm->vmcb_gpa);
+
+ GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+ GUEST_ASSERT(data->l2_done);
+ GUEST_ASSERT_EQ(data->l2_delta, 0);
+
+ /* Step 4: After VMEXIT to L1; Host-Only counter counts */
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 5: Clear SVME; HG_ONLY ignored */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ GUEST_DONE();
+}
+
+/*
+ * Test that both bits set is the same as neither bit set (always counts).
+ */
+static void l1_guest_code_both_bits(struct svm_test_data *svm,
+ struct hg_test_data *data)
+{
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ struct vmcb *vmcb = svm->vmcb;
+ uint64_t eventsel, delta;
+
+ hg_data = data;
+
+ eventsel = EVENTSEL_RETIRED_INSNS |
+ AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY;
+ wrmsr(MSR_F15H_PERF_CTL0, eventsel);
+ wrmsr(MSR_F15H_PERF_CTR0, 0);
+
+ /* Step 1: SVME=0 */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 2: Set SVME=1 */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 3: VMRUN to L2 */
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+ vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+ run_guest(vmcb, svm->vmcb_gpa);
+
+ GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+ GUEST_ASSERT(data->l2_done);
+ GUEST_ASSERT_NE(data->l2_delta, 0);
+
+ /* Step 4: After VMEXIT to L1 */
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ /* Step 5: Clear SVME */
+ wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
+ delta = run_and_measure();
+ GUEST_ASSERT_NE(delta, 0);
+
+ GUEST_DONE();
+}
+
+static void l1_guest_code(struct svm_test_data *svm, struct hg_test_data *data,
+ int test_num)
+{
+ switch (test_num) {
+ case 0:
+ l1_guest_code_guestonly(svm, data);
+ break;
+ case 1:
+ l1_guest_code_hostonly(svm, data);
+ break;
+ case 2:
+ l1_guest_code_both_bits(svm, data);
+ break;
+ }
+}
+
+static void run_test(int test_number, const char *test_name)
+{
+ struct hg_test_data *data_hva;
+ vm_vaddr_t svm_gva, data_gva;
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ struct ucall uc;
+
+ pr_info("Testing: %s\n", test_name);
+
+ vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
+
+ vcpu_alloc_svm(vm, &svm_gva);
+
+ data_gva = vm_vaddr_alloc_page(vm);
+ data_hva = addr_gva2hva(vm, data_gva);
+ memset(data_hva, 0, sizeof(*data_hva));
+
+ vcpu_args_set(vcpu, 3, svm_gva, data_gva, test_number);
+
+ for (;;) {
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT(uc);
+ /* NOT REACHED */
+ case UCALL_DONE:
+ pr_info(" PASSED\n");
+ kvm_vm_free(vm);
+ return;
+ default:
+ TEST_FAIL("Unknown ucall %lu", uc.cmd);
+ }
+ }
+}
+
+int main(int argc, char *argv[])
+{
+ TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+ TEST_REQUIRE(kvm_is_pmu_enabled());
+ TEST_REQUIRE(get_kvm_amd_param_bool("enable_mediated_pmu"));
+
+ run_test(0, "Guest-Only counter across all transitions");
+ run_test(1, "Host-Only counter across all transitions");
+ run_test(2, "Both HG_ONLY bits set (always count)");
+
+ return 0;
+}
--
2.52.0.457.g6b5491de43-goog
^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits
2026-01-21 22:54 ` [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits Jim Mattson
@ 2026-01-22 17:12 ` Sean Christopherson
2026-01-28 23:47 ` Jim Mattson
2026-01-22 18:56 ` kernel test robot
1 sibling, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2026-01-22 17:12 UTC (permalink / raw)
To: Jim Mattson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Wed, Jan 21, 2026, Jim Mattson wrote:
> Add a selftest to verify KVM correctly virtualizes the AMD PMU Host-Only
> (bit 41) and Guest-Only (bit 40) event selector bits across all relevant
> SVM state transitions.
>
> For both Guest-Only and Host-Only counters, verify that:
> 1. SVME=0: counter counts (HG_ONLY bits ignored)
> 2. Set SVME=1: counter behavior changes based on HG_ONLY bit
> 3. VMRUN to L2: counter behavior switches (guest vs host mode)
> 4. VMEXIT to L1: counter behavior switches back
> 5. Clear SVME=0: counter counts (HG_ONLY bits ignored again)
>
> Also confirm that setting both bits is the same as setting neither bit.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
> tools/testing/selftests/kvm/Makefile.kvm | 1 +
> .../selftests/kvm/x86/svm_pmu_hg_test.c | 297 ++++++++++++++++++
> 2 files changed, 298 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> index e88699e227dd..06ba85d97618 100644
> --- a/tools/testing/selftests/kvm/Makefile.kvm
> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> @@ -112,6 +112,7 @@ TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
> TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
> TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
> TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
> +TEST_GEN_PROGS_x86 += x86/svm_pmu_hg_test
Maybe svm_nested_pmu_test? Hmm, that makes it sound like "nested PMU" though.
svm_pmu_host_guest_test?
> +#define MSR_F15H_PERF_CTL0 0xc0010200
> +#define MSR_F15H_PERF_CTR0 0xc0010201
> +
> +#define AMD64_EVENTSEL_GUESTONLY BIT_ULL(40)
> +#define AMD64_EVENTSEL_HOSTONLY BIT_ULL(41)
Please put architectural definitions in pmu.h (or whatever library header we
have).
> +struct hg_test_data {
Please drop "hg" (I keep reading it as "mercury").
> + uint64_t l2_delta;
> + bool l2_done;
> +};
> +
> +static struct hg_test_data *hg_data;
> +
> +static void l2_guest_code(void)
> +{
> + hg_data->l2_delta = run_and_measure();
> + hg_data->l2_done = true;
> + vmmcall();
> +}
> +
> +/*
> + * Test Guest-Only counter across all relevant state transitions.
> + */
> +static void l1_guest_code_guestonly(struct svm_test_data *svm,
> + struct hg_test_data *data)
> +{
> + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> + struct vmcb *vmcb = svm->vmcb;
> + uint64_t eventsel, delta;
> +
> + hg_data = data;
> +
> + eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_GUESTONLY;
> + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> + wrmsr(MSR_F15H_PERF_CTR0, 0);
> +
> + /* Step 1: SVME=0; HG_ONLY ignored */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 2: Set SVME=1; Guest-Only counter stops */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_EQ(delta, 0);
> +
> + /* Step 3: VMRUN to L2; Guest-Only counter counts */
> + generic_svm_setup(svm, l2_guest_code,
> + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> +
> + run_guest(vmcb, svm->vmcb_gpa);
> +
> + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> + GUEST_ASSERT(data->l2_done);
> + GUEST_ASSERT_NE(data->l2_delta, 0);
> +
> + /* Step 4: After VMEXIT to L1; Guest-Only counter stops */
> + delta = run_and_measure();
> + GUEST_ASSERT_EQ(delta, 0);
> +
> + /* Step 5: Clear SVME; HG_ONLY ignored */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + GUEST_DONE();
> +}
> +
> +/*
> + * Test Host-Only counter across all relevant state transitions.
> + */
> +static void l1_guest_code_hostonly(struct svm_test_data *svm,
> + struct hg_test_data *data)
> +{
> + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> + struct vmcb *vmcb = svm->vmcb;
> + uint64_t eventsel, delta;
> +
> + hg_data = data;
> +
> + eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_HOSTONLY;
> + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> + wrmsr(MSR_F15H_PERF_CTR0, 0);
> +
> +
> + /* Step 1: SVME=0; HG_ONLY ignored */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 2: Set SVME=1; Host-Only counter still counts */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 3: VMRUN to L2; Host-Only counter stops */
> + generic_svm_setup(svm, l2_guest_code,
> + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> +
> + run_guest(vmcb, svm->vmcb_gpa);
> +
> + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> + GUEST_ASSERT(data->l2_done);
> + GUEST_ASSERT_EQ(data->l2_delta, 0);
> +
> + /* Step 4: After VMEXIT to L1; Host-Only counter counts */
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 5: Clear SVME; HG_ONLY ignored */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + GUEST_DONE();
> +}
> +
> +/*
> + * Test that both bits set is the same as neither bit set (always counts).
> + */
> +static void l1_guest_code_both_bits(struct svm_test_data *svm,
l1_guest_code gets somewhat redundant. What about these to be more descriptive
about the salient points, without creating monstrous names?
l1_test_no_filtering // very open to suggestions for a better name
l1_test_guestonly
l1_test_hostonly
l1_test_host_and_guest
Actually, why are there even separate helpers? Very off the cuff, but this seems
trivial to dedup:
static void l1_guest_code(struct svm_test_data *svm, struct hg_test_data *data,
                          u64 host_guest_mask)
{
        const bool count_in_host = !host_guest_mask ||
                                   (host_guest_mask & AMD64_EVENTSEL_HOSTONLY);
        const bool count_in_guest = !host_guest_mask ||
                                    (host_guest_mask & AMD64_EVENTSEL_GUESTONLY);
        unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
        struct vmcb *vmcb = svm->vmcb;
        uint64_t delta;

        hg_data = data;

        wrmsr(MSR_F15H_PERF_CTL0, EVENTSEL_RETIRED_INSNS | host_guest_mask);
        wrmsr(MSR_F15H_PERF_CTR0, 0);

        /* Step 1: SVME=0; host always counts */
        wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
        delta = run_and_measure();
        GUEST_ASSERT_NE(delta, 0);

        /* Step 2: Set SVME=1; count iff counting in host mode is allowed */
        wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
        delta = run_and_measure();
        GUEST_ASSERT(!!delta == count_in_host);

        /* Step 3: VMRUN to L2; count iff counting in guest mode is allowed */
        generic_svm_setup(svm, l2_guest_code,
                          &l2_guest_stack[L2_GUEST_STACK_SIZE]);
        vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);

        run_guest(vmcb, svm->vmcb_gpa);

        GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
        GUEST_ASSERT(data->l2_done);
        GUEST_ASSERT(!!data->l2_delta == count_in_guest);

        /* Step 4: After VMEXIT to L1; back to host-mode behavior */
        delta = run_and_measure();
        GUEST_ASSERT(!!delta == count_in_host);

        /* Step 5: Clear SVME; HG_ONLY ignored */
        wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
        delta = run_and_measure();
        GUEST_ASSERT_NE(delta, 0);

        GUEST_DONE();
}
> + struct hg_test_data *data)
> +{
> + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> + struct vmcb *vmcb = svm->vmcb;
> + uint64_t eventsel, delta;
> +
> + hg_data = data;
> +
> + eventsel = EVENTSEL_RETIRED_INSNS |
> + AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY;
> + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> + wrmsr(MSR_F15H_PERF_CTR0, 0);
> +
> + /* Step 1: SVME=0 */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 2: Set SVME=1 */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 3: VMRUN to L2 */
> + generic_svm_setup(svm, l2_guest_code,
> + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> +
> + run_guest(vmcb, svm->vmcb_gpa);
> +
> + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> + GUEST_ASSERT(data->l2_done);
> + GUEST_ASSERT_NE(data->l2_delta, 0);
> +
> + /* Step 4: After VMEXIT to L1 */
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + /* Step 5: Clear SVME */
> + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> + delta = run_and_measure();
> + GUEST_ASSERT_NE(delta, 0);
> +
> + GUEST_DONE();
> +}
> +
> +static void l1_guest_code(struct svm_test_data *svm, struct hg_test_data *data,
> + int test_num)
> +{
> + switch (test_num) {
> + case 0:
As above, I would much rather pass in the mask of GUEST_HOST bits to set, and
then react accordingly, as opposed to passing in a magic/arbitrary @test_num.
Then I'm pretty sure we don't need a dispatch function, just run the testcase
using the passed in mask.
> + l1_guest_code_guestonly(svm, data);
> + break;
> + case 1:
> + l1_guest_code_hostonly(svm, data);
> + break;
> + case 2:
> + l1_guest_code_both_bits(svm, data);
> + break;
> + }
> +}
...
> +int main(int argc, char *argv[])
> +{
> + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
> + TEST_REQUIRE(kvm_is_pmu_enabled());
> + TEST_REQUIRE(get_kvm_amd_param_bool("enable_mediated_pmu"));
> +
> + run_test(0, "Guest-Only counter across all transitions");
> + run_test(1, "Host-Only counter across all transitions");
> + run_test(2, "Both HG_ONLY bits set (always count)");
As alluded to above, shouldn't we also test "no bits set"?
> +
> + return 0;
> +}
> --
> 2.52.0.457.g6b5491de43-goog
>
* Re: [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits
2026-01-22 17:12 ` Sean Christopherson
@ 2026-01-28 23:47 ` Jim Mattson
0 siblings, 0 replies; 21+ messages in thread
From: Jim Mattson @ 2026-01-28 23:47 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Shuah Khan, kvm, linux-kernel, linux-perf-users,
linux-kselftest
On Thu, Jan 22, 2026 at 9:12 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 21, 2026, Jim Mattson wrote:
> > Add a selftest to verify KVM correctly virtualizes the AMD PMU Host-Only
> > (bit 41) and Guest-Only (bit 40) event selector bits across all relevant
> > SVM state transitions.
> >
> > For both Guest-Only and Host-Only counters, verify that:
> > 1. SVME=0: counter counts (HG_ONLY bits ignored)
> > 2. Set SVME=1: counter behavior changes based on HG_ONLY bit
> > 3. VMRUN to L2: counter behavior switches (guest vs host mode)
> > 4. VMEXIT to L1: counter behavior switches back
> > 5. Clear SVME=0: counter counts (HG_ONLY bits ignored again)
> >
> > Also confirm that setting both bits is the same as setting neither bit.
> >
> > Signed-off-by: Jim Mattson <jmattson@google.com>
> > ---
> > tools/testing/selftests/kvm/Makefile.kvm | 1 +
> > .../selftests/kvm/x86/svm_pmu_hg_test.c | 297 ++++++++++++++++++
> > 2 files changed, 298 insertions(+)
> > create mode 100644 tools/testing/selftests/kvm/x86/svm_pmu_hg_test.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> > index e88699e227dd..06ba85d97618 100644
> > --- a/tools/testing/selftests/kvm/Makefile.kvm
> > +++ b/tools/testing/selftests/kvm/Makefile.kvm
> > @@ -112,6 +112,7 @@ TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
> > TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
> > TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
> > TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
> > +TEST_GEN_PROGS_x86 += x86/svm_pmu_hg_test
>
> Maybe svm_nested_pmu_test? Hmm, that makes it sound like "nested PMU" though.
>
> svm_pmu_host_guest_test?
Sounds good.
> > +#define MSR_F15H_PERF_CTL0 0xc0010200
> > +#define MSR_F15H_PERF_CTR0 0xc0010201
> > +
> > +#define AMD64_EVENTSEL_GUESTONLY BIT_ULL(40)
> > +#define AMD64_EVENTSEL_HOSTONLY BIT_ULL(41)
>
> Please put architectural definitions in pmu.h (or whatever library header we
> have).
These should be redundant. I will confirm.
> > +struct hg_test_data {
>
> Please drop "hg" (I keep reading it as "mercury").
>
> > + uint64_t l2_delta;
> > + bool l2_done;
> > +};
> > +
> > +static struct hg_test_data *hg_data;
> > +
> > +static void l2_guest_code(void)
> > +{
> > + hg_data->l2_delta = run_and_measure();
> > + hg_data->l2_done = true;
> > + vmmcall();
> > +}
> > +
> > +/*
> > + * Test Guest-Only counter across all relevant state transitions.
> > + */
> > +static void l1_guest_code_guestonly(struct svm_test_data *svm,
> > + struct hg_test_data *data)
> > +{
> > + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> > + struct vmcb *vmcb = svm->vmcb;
> > + uint64_t eventsel, delta;
> > +
> > + hg_data = data;
> > +
> > + eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_GUESTONLY;
> > + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> > + wrmsr(MSR_F15H_PERF_CTR0, 0);
> > +
> > + /* Step 1: SVME=0; HG_ONLY ignored */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 2: Set SVME=1; Guest-Only counter stops */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_EQ(delta, 0);
> > +
> > + /* Step 3: VMRUN to L2; Guest-Only counter counts */
> > + generic_svm_setup(svm, l2_guest_code,
> > + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> > + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> > +
> > + run_guest(vmcb, svm->vmcb_gpa);
> > +
> > + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> > + GUEST_ASSERT(data->l2_done);
> > + GUEST_ASSERT_NE(data->l2_delta, 0);
> > +
> > + /* Step 4: After VMEXIT to L1; Guest-Only counter stops */
> > + delta = run_and_measure();
> > + GUEST_ASSERT_EQ(delta, 0);
> > +
> > + /* Step 5: Clear SVME; HG_ONLY ignored */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + GUEST_DONE();
> > +}
> > +
> > +/*
> > + * Test Host-Only counter across all relevant state transitions.
> > + */
> > +static void l1_guest_code_hostonly(struct svm_test_data *svm,
> > + struct hg_test_data *data)
> > +{
> > + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> > + struct vmcb *vmcb = svm->vmcb;
> > + uint64_t eventsel, delta;
> > +
> > + hg_data = data;
> > +
> > + eventsel = EVENTSEL_RETIRED_INSNS | AMD64_EVENTSEL_HOSTONLY;
> > + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> > + wrmsr(MSR_F15H_PERF_CTR0, 0);
> > +
> > +
> > + /* Step 1: SVME=0; HG_ONLY ignored */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 2: Set SVME=1; Host-Only counter still counts */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 3: VMRUN to L2; Host-Only counter stops */
> > + generic_svm_setup(svm, l2_guest_code,
> > + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> > + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> > +
> > + run_guest(vmcb, svm->vmcb_gpa);
> > +
> > + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> > + GUEST_ASSERT(data->l2_done);
> > + GUEST_ASSERT_EQ(data->l2_delta, 0);
> > +
> > + /* Step 4: After VMEXIT to L1; Host-Only counter counts */
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 5: Clear SVME; HG_ONLY ignored */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + GUEST_DONE();
> > +}
> > +
> > +/*
> > + * Test that both bits set is the same as neither bit set (always counts).
> > + */
> > +static void l1_guest_code_both_bits(struct svm_test_data *svm,
>
> l1_guest_code gets somewhat redundant. What about these to be more descriptive
> about the salient points, without creating monstrous names?
>
> l1_test_no_filtering // very open to suggestions for a better name
> l1_test_guestonly
> l1_test_hostonly
> l1_test_host_and_guest
>
> Actually, why are there even separate helpers? Very off the cuff, but this seems
> trivial to dedup:
>
> static void l1_guest_code(struct svm_test_data *svm, struct hg_test_data *data,
>                           u64 host_guest_mask)
> {
>         const bool count_in_host = !host_guest_mask ||
>                                    (host_guest_mask & AMD64_EVENTSEL_HOSTONLY);
>         const bool count_in_guest = !host_guest_mask ||
>                                     (host_guest_mask & AMD64_EVENTSEL_GUESTONLY);
>         unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
>         struct vmcb *vmcb = svm->vmcb;
>         uint64_t delta;
>
>         hg_data = data;
>
>         wrmsr(MSR_F15H_PERF_CTL0, EVENTSEL_RETIRED_INSNS | host_guest_mask);
>         wrmsr(MSR_F15H_PERF_CTR0, 0);
>
>         /* Step 1: SVME=0; host always counts */
>         wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
>         delta = run_and_measure();
>         GUEST_ASSERT_NE(delta, 0);
>
>         /* Step 2: Set SVME=1; count iff counting in host mode is allowed */
>         wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
>         delta = run_and_measure();
>         GUEST_ASSERT(!!delta == count_in_host);
>
>         /* Step 3: VMRUN to L2; count iff counting in guest mode is allowed */
>         generic_svm_setup(svm, l2_guest_code,
>                           &l2_guest_stack[L2_GUEST_STACK_SIZE]);
>         vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
>
>         run_guest(vmcb, svm->vmcb_gpa);
>
>         GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
>         GUEST_ASSERT(data->l2_done);
>         GUEST_ASSERT(!!data->l2_delta == count_in_guest);
>
>         /* Step 4: After VMEXIT to L1; back to host-mode behavior */
>         delta = run_and_measure();
>         GUEST_ASSERT(!!delta == count_in_host);
>
>         /* Step 5: Clear SVME; HG_ONLY ignored */
>         wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
>         delta = run_and_measure();
>         GUEST_ASSERT_NE(delta, 0);
>
>         GUEST_DONE();
> }
Even better, I will fold this all into one test flow with 4 PMCs
covering the bit permutations.
> > + struct hg_test_data *data)
> > +{
> > + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
> > + struct vmcb *vmcb = svm->vmcb;
> > + uint64_t eventsel, delta;
> > +
> > + hg_data = data;
> > +
> > + eventsel = EVENTSEL_RETIRED_INSNS |
> > + AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY;
> > + wrmsr(MSR_F15H_PERF_CTL0, eventsel);
> > + wrmsr(MSR_F15H_PERF_CTR0, 0);
> > +
> > + /* Step 1: SVME=0 */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 2: Set SVME=1 */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) | EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 3: VMRUN to L2 */
> > + generic_svm_setup(svm, l2_guest_code,
> > + &l2_guest_stack[L2_GUEST_STACK_SIZE]);
> > + vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
> > +
> > + run_guest(vmcb, svm->vmcb_gpa);
> > +
> > + GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
> > + GUEST_ASSERT(data->l2_done);
> > + GUEST_ASSERT_NE(data->l2_delta, 0);
> > +
> > + /* Step 4: After VMEXIT to L1 */
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + /* Step 5: Clear SVME */
> > + wrmsr(MSR_EFER, rdmsr(MSR_EFER) & ~EFER_SVME);
> > + delta = run_and_measure();
> > + GUEST_ASSERT_NE(delta, 0);
> > +
> > + GUEST_DONE();
> > +}
> > +
> > +static void l1_guest_code(struct svm_test_data *svm, struct hg_test_data *data,
> > + int test_num)
> > +{
> > + switch (test_num) {
> > + case 0:
>
> As above, I would much rather pass in the mask of GUEST_HOST bits to set, and
> then react accordingly, as opposed to passing in a magic/arbitrary @test_num.
> Then I'm pretty sure we don't need a dispatch function, just run the testcase
> using the passed in mask.
>
> > + l1_guest_code_guestonly(svm, data);
> > + break;
> > + case 1:
> > + l1_guest_code_hostonly(svm, data);
> > + break;
> > + case 2:
> > + l1_guest_code_both_bits(svm, data);
> > + break;
> > + }
> > +}
>
> ...
>
> > +int main(int argc, char *argv[])
> > +{
> > + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
> > + TEST_REQUIRE(kvm_is_pmu_enabled());
> > + TEST_REQUIRE(get_kvm_amd_param_bool("enable_mediated_pmu"));
> > +
> > + run_test(0, "Guest-Only counter across all transitions");
> > + run_test(1, "Host-Only counter across all transitions");
> > + run_test(2, "Both HG_ONLY bits set (always count)");
>
> As alluded to above, shouldn't we also test "no bits set"?
> > +
> > + return 0;
> > +}
> > --
> > 2.52.0.457.g6b5491de43-goog
> >
* Re: [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits
2026-01-21 22:54 ` [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits Jim Mattson
2026-01-22 17:12 ` Sean Christopherson
@ 2026-01-22 18:56 ` kernel test robot
1 sibling, 0 replies; 21+ messages in thread
From: kernel test robot @ 2026-01-22 18:56 UTC (permalink / raw)
To: Jim Mattson, Sean Christopherson, Paolo Bonzini, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Arnaldo Carvalho de Melo, Namhyung Kim,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers,
Adrian Hunter, James Clark, Shuah Khan, kvm, linux-kernel,
linux-perf-users, linux-kselftest
Cc: oe-kbuild-all, Jim Mattson
Hi Jim,
kernel test robot noticed the following build warnings:
[auto build test WARNING on next-20260121]
[cannot apply to kvm/queue kvm/next perf-tools-next/perf-tools-next tip/perf/core perf-tools/perf-tools kvm/linux-next acme/perf/core v6.19-rc6 v6.19-rc5 v6.19-rc4 linus/master v6.19-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Jim-Mattson/KVM-x86-pmu-Introduce-amd_pmu_set_eventsel_hw/20260122-070031
base: next-20260121
patch link: https://lore.kernel.org/r/20260121225438.3908422-7-jmattson%40google.com
patch subject: [PATCH 6/6] KVM: selftests: x86: Add svm_pmu_hg_test for HG_ONLY bits
config: x86_64-allnoconfig-bpf (https://download.01.org/0day-ci/archive/20260122/202601221906.D038BKHz-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260122/202601221906.D038BKHz-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601221906.D038BKHz-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> x86/svm_pmu_hg_test.c:37:9: warning: "MSR_F15H_PERF_CTL0" redefined
37 | #define MSR_F15H_PERF_CTL0 0xc0010200
| ^~~~~~~~~~~~~~~~~~
In file included from include/x86/processor.h:13,
from x86/svm_pmu_hg_test.c:31:
tools/testing/selftests/../../../tools/arch/x86/include/asm/msr-index.h:815:9: note: this is the location of the previous definition
815 | #define MSR_F15H_PERF_CTL0 MSR_F15H_PERF_CTL
| ^~~~~~~~~~~~~~~~~~
>> x86/svm_pmu_hg_test.c:38:9: warning: "MSR_F15H_PERF_CTR0" redefined
38 | #define MSR_F15H_PERF_CTR0 0xc0010201
| ^~~~~~~~~~~~~~~~~~
tools/testing/selftests/../../../tools/arch/x86/include/asm/msr-index.h:823:9: note: this is the location of the previous definition
823 | #define MSR_F15H_PERF_CTR0 MSR_F15H_PERF_CTR
| ^~~~~~~~~~~~~~~~~~
cc1: note: unrecognized command-line option '-Wno-gnu-variable-sized-type-not-at-end' may have been intended to silence earlier diagnostics
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki