public inbox for linux-kernel@vger.kernel.org
From: Sean Christopherson <seanjc@google.com>
To: Yosry Ahmed <yosry@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Jim Mattson <jmattson@google.com>,
	 Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	 Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	 Mark Rutland <mark.rutland@arm.com>,
	 Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	kvm@vger.kernel.org,  linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 07/13] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM
Date: Tue, 5 May 2026 11:49:12 -0700
Message-ID: <afo7qB3hyrD_9Jme@google.com>
In-Reply-To: <CAO9r8zPqvBY0=O2L6DterLyN3nZrtOCs-mv_4hvZ+gt=bxRbDg@mail.gmail.com>

On Tue, May 05, 2026, Yosry Ahmed wrote:
> On Tue, May 5, 2026 at 11:11 AM Sean Christopherson <seanjc@google.com> wrote:
> > So I think we actually want to handle this in pmc_is_locally_enabled(), because
> > the host/guest bits are "local" controls.  One option would be to add the guest/host
> > masks as constants in kvm_pmu_ops, and bleed the logic into pmc_is_locally_enabled(),
> > e.g. to avoid the CALL+RET overhead.  But if we make the callback a "negative", then
> > we can make the static call OPTIONAL_RET0, which will turn the call into a glorified
> > nop for everything except AMD with a mediated PMU.  E.g.
> >
> > diff --git arch/x86/kvm/pmu.h arch/x86/kvm/pmu.h
> > index 0925246731cb..d8ce0938fcbe 100644
> > --- arch/x86/kvm/pmu.h
> > +++ arch/x86/kvm/pmu.h
> > @@ -190,7 +190,8 @@ static inline bool pmc_is_locally_enabled(struct kvm_pmc *pmc)
> >                                         pmc->idx - KVM_FIXED_PMC_BASE_IDX) &
> >                                         (INTEL_FIXED_0_KERNEL | INTEL_FIXED_0_USER);
> >
> > -       return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
> > +       return (pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) &&
> > +              !kvm_pmu_call(pmc_is_locally_disabled)(pmc);
> 
> We still get the overhead on AMD with mediated PMU enabled, but more
> importantly, I am not sure what pmc_is_locally_disabled() would test
> for here? Do we re-check EFER, guest mode, etc to figure it out? I
> don't think this is what you mean as it would be redundant, but I am
> not sure what else.

Yep, that's exactly what I mean.

> Did you see my other replies and code snippet tracking disabling
> reasons? I think the code snippet would still work, I just need to
> move the pmc_is_nested_disabled() check into pmc_is_locally_enabled().

I did.  IMO, all of what you proposed is an optimization to avoid the "costly"
checks at the time of pmc_is_locally_enabled().  In quotes because I don't think
the _overall_ cost is actually all that high.  pmc_is_locally_enabled() is only
called in relatively slow paths, and my guess is the CALL+RET (or untrained RET,
ugh) is probably more expensive than the logic itself.
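
For reference, a sketch of the wiring this assumes; the KVM_X86_PMU_OP_OPTIONAL_RET0()
name is made up here, mirroring KVM_X86_OP_OPTIONAL_RET0() from kvm-x86-ops.h:

  /* arch/x86/kvm/kvm-x86-pmu-ops.h */
  KVM_X86_PMU_OP_OPTIONAL_RET0(pmc_is_locally_disabled)

  /* arch/x86/kvm/pmu.h */
  struct kvm_pmu_ops {
	...
	bool (*pmc_is_locally_disabled)(struct kvm_pmc *pmc);
  };

  /* arch/x86/kvm/svm/pmu.c, only set when the mediated PMU is in use */
  .pmc_is_locally_disabled = amd_pmc_is_locally_disabled,

Leaving the op NULL everywhere else patches the call site to return 0, i.e.
"not locally disabled", which is the glorified nop.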

The very nice side effect of incorporating the logic into pmc_is_locally_enabled()
is that I _think_ we can drop kvm_pmu_ops.reprogram_counters(), because
amd_mediated_pmu_handle_host_guest_bits() will instead be:

  static bool amd_pmc_is_locally_disabled(struct kvm_pmc *pmc)
  {
	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
	u64 host_guest_bits;

	/* Common code is supposed to check the common enable bit. */
	if (WARN_ON_ONCE(!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)))
		return true;

	/*
	 * If both bits are clear, the counter is always enabled.  Otherwise,
	 * counter enablement needs to be re-evaluated on every nested
	 * transition (and EFER.SVME change).
	 */
	host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
	if (!host_guest_bits)
		return false;

	/* If either bit is set and EFER.SVME=0, the counter is disabled. */
	if (!(vcpu->arch.efer & EFER_SVME))
		return true;

	/* If both bits are set, the counter counts in host and guest mode. */
	if (host_guest_bits == AMD64_EVENTSEL_HOST_GUEST_MASK)
		return false;

	/* Disabled iff the Guest-Only bit doesn't match the current mode. */
	return !!(host_guest_bits & AMD64_EVENTSEL_GUESTONLY) != is_guest_mode(vcpu);
  }
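
I.e. a Guest-Only counter counts iff the vCPU is in guest mode, and a Host-Only
counter counts iff it isn't.  And the negative "disabled" polarity is what makes
OPTIONAL_RET0 viable: the RET0 default means "not locally disabled", so Intel
(and AMD without a mediated PMU) never needs to implement the hook.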


reprogram_pmcs_on_nested_transitions would need to be handled somewhere else, but
(a) that's probably the correct approach anyways (hook writes to the eventsel)
and (b) is _also_ an optimization, because KVM can start with the naive approach
of always reprogramming counters on nested transitions (if guest/host bits are
supported).
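
E.g. the naive version could simply flag every counter with host/guest bits for
reprogramming on every transition; a sketch (svm_pmu_nested_transition() is a
made-up name, kvm_pmu_request_counter_reprogram() is the existing helper in pmu.h):

  static void svm_pmu_nested_transition(struct kvm_vcpu *vcpu)
  {
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	int i;

	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
		struct kvm_pmc *pmc = &pmu->gp_counters[i];

		if (pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK)
			kvm_pmu_request_counter_reprogram(pmc);
	}
  }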

And if we're clever, we can optimize pmc_is_locally_enabled() by putting
reprogram_pmcs_on_nested_transitions in kvm_pmu, e.g. as something like
pmc_has_mode_specific_enables, and then doing:

	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
		return false;

	if (!test_bit(pmc->idx, &pmu->pmc_has_mode_specific_enables))
		return true;

	return !kvm_pmu_call(pmc_is_locally_disabled)(pmc);
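
with the bitmap kept up-to-date at eventsel write time, e.g. (again a sketch,
assuming pmc_has_mode_specific_enables is an unsigned long in struct kvm_pmu):

	if (pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK)
		__set_bit(pmc->idx, &pmu->pmc_has_mode_specific_enables);
	else
		__clear_bit(pmc->idx, &pmu->pmc_has_mode_specific_enables);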

Thread overview: 24+ messages
2026-04-30 20:27 [PATCH v5 00/13] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 01/13] KVM: nSVM: Stop leaking single-stepping on VMRUN into L2 Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 02/13] KVM: nSVM: Bail early out of VMRUN emulation if advancing RIP fails Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 03/13] KVM: nSVM: Move VMRUN instruction retirement after entering guest mode Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 04/13] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 05/13] KVM: x86/pmu: Rename reprogram_counters() to clarify usage Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 06/13] KVM: x86/pmu: Do a single atomic OR when reprogramming counters Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 07/13] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
2026-04-30 23:24   ` Yosry Ahmed
2026-05-01  3:34     ` Yosry Ahmed
2026-05-01 17:50       ` Yosry Ahmed
2026-05-05 18:11         ` Sean Christopherson
2026-05-05 18:23           ` Yosry Ahmed
2026-05-05 18:49             ` Sean Christopherson [this message]
2026-05-05 19:32               ` Yosry Ahmed
2026-05-05 19:58                 ` Sean Christopherson
2026-05-05 20:24                   ` Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 08/13] KVM: x86/pmu: Reprogram Host/Guest-Only counters on nested transitions Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 09/13] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 10/13] KVM: selftests: Refactor allocating guest stack into a helper Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 11/13] KVM: selftests: Allocate a dedicated guest page for x86 L2 guest stack Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 12/13] KVM: selftests: Drop L1-provided stacks for L2 guests on x86 Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 13/13] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
2026-04-30 20:38 ` [PATCH v5 00/13] KVM: x86/pmu: Add support for AMD " Yosry Ahmed
