From: Yosry Ahmed <yosry@kernel.org>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Jim Mattson <jmattson@google.com>,
	 Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	 Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	 Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 07/13] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM
Date: Tue, 5 May 2026 19:32:36 +0000	[thread overview]
Message-ID: <afpAw3UkbVMbXptv@google.com> (raw)
In-Reply-To: <afo7qB3hyrD_9Jme@google.com>

> > Did you see my other replies and code snippet tracking disabling
> > reasons? I think the code snippet would still work, I just need to
> > move the pmc_is_nested_disabled() check into pmc_is_locally_enabled().
> 
> I did.  IMO, all of what you proposed is an optimization to avoid the "costly"
> checks at the time of pmc_is_locally_enabled().  In quotes because I don't think
> the _overall_ cost is actually all that high.  pmc_is_locally_enabled() is only
> called in relatively slow paths, and my guess is the CALL+RET (or untrained RET,
> ugh) is probably more expensive than the logic itself.
> 
> The very nice side effect of incorporating the logic into pmc_is_locally_enabled()
> is that I _think_ we can drop kvm_pmu_ops.reprogram_counters(), because
> amd_mediated_pmu_handle_host_guest_bits() will instead be:
> 
>   static bool amd_pmc_is_locally_disabled(struct kvm_pmc *pmc)
>   {
> 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
> 	u64 host_guest_bits;
> 
> 	/* Common code is supposed to check the common enable bit. */
> 	if (WARN_ON_ONCE(!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE)))
> 		return true;
> 
> 	/*
> 	 * If both bits are cleared, always keep the counter enabled. Otherwise,
> 	 * counter enablement needs to be re-evaluated on every nested
> 	 * transition (and EFER.SVME change).
> 	 */
> 	host_guest_bits = pmc->eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK;
> 	if (!host_guest_bits)
> 		return false;
> 
> 	/* If either bit is set and EFER.SVME=0, the counter is disabled. */
> 	if (!(vcpu->arch.efer & EFER_SVME))
> 		return true;
> 
> 	if (host_guest_bits == AMD64_EVENTSEL_HOST_GUEST_MASK)
> 		return false;
> 
> 	return !!(host_guest_bits & AMD64_EVENTSEL_GUESTONLY) != is_guest_mode(vcpu);
>   }

If we do this and drop kvm_pmu_ops.reprogram_counters(), we still need
somewhere to actually clear ARCH_PERFMON_EVENTSEL_ENABLE in eventsel_hw.

What if we call kvm_pmu_ops.pmc_is_locally_disabled() at the top of
reprogram_counter(), cache the result, and use that for eventsel_hw
modification and in pmc_is_locally_enabled()?

We'd also probably want to rename it. I would honestly just use 'nested'
instead of 'locally_disabled' and 'mode_specific_enables', as nested
transitions are the only current user.

Something like the following, using your proposed amd_pmc_is_locally_disabled()
above. This is similar to the kvm_pmu_ops.mediated_reprogram_counter()
implementation in v4, except that the vendor-specific callback is more
targeted:

static void pmc_check_nested_disabled(struct kvm_pmc *pmc)
{
	struct kvm_pmu *pmu = pmc_to_pmu(pmc);

	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
		return;

	if (!test_bit(pmc->idx, &pmu->pmc_has_nested_enables))
		return;

	pmc->is_nested_disabled = kvm_pmu_call(pmc_is_nested_disabled)(pmc);
	if (pmc->is_nested_disabled)
		pmc->eventsel_hw &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
}

static int reprogram_counter(struct kvm_pmc *pmc)
{
	...

	if (kvm_vcpu_has_mediated_pmu(pmu_to_vcpu(pmu))) {
		kvm_mediated_pmu_refresh_event_filter(pmc);
		pmc_check_nested_disabled(pmc);
		return 0;
	}
	...
}

Then pmc_is_locally_enabled() also checks pmc->is_nested_disabled. This
completely avoids the extra calls in pmc_is_locally_enabled() and
provides a place for us to actually clear ARCH_PERFMON_EVENTSEL_ENABLE.
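
For concreteness, a minimal sketch of what pmc_is_locally_enabled() could
look like with the cached flag (illustrative only; the real function
obviously has more context around it):

static bool pmc_is_locally_enabled(struct kvm_pmc *pmc)
{
	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
		return false;

	/* Cached by pmc_check_nested_disabled() at reprogram time. */
	return !pmc->is_nested_disabled;
}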

> 
> reprogram_pmcs_on_nested_transitions would need to be handled somewhere else, but
> (a) that's probably the correct approach anyways (hook writes to the eventsel)
> and (b) is _also_ an optimization, because KVM can start with the naive approach
> of always reprogramming counters on nested transitions (if guest/host bits are
> supported).
> 
> And if we're clever, we can optimize pmc_is_locally_enabled() by putting
> reprogram_pmcs_on_nested_transitions in kvm_pmu, e.g. as something like
> pmc_has_mode_specific_enables, and then doing:

Yeah, I more-or-less incorporated this above. pmu->pmc_has_nested_enables
would be set for a counter in amd_pmu_set_msr() if either bit is set.
Then, svm_pmu_handle_nested_transition() (in the next patch) will use
pmu->pmc_has_nested_enables instead of
svm->nested.reprogram_pmcs_on_nested_transitions.
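
For illustration, the bitmap maintenance in amd_pmu_set_msr() could be as
simple as something like this (the helper name is made up; the real change
would just sit in the eventsel write path):

static void amd_pmu_update_nested_enables(struct kvm_pmc *pmc, u64 eventsel)
{
	struct kvm_pmu *pmu = pmc_to_pmu(pmc);

	if (eventsel & AMD64_EVENTSEL_HOST_GUEST_MASK)
		__set_bit(pmc->idx, &pmu->pmc_has_nested_enables);
	else
		__clear_bit(pmc->idx, &pmu->pmc_has_nested_enables);
}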

WDYT? This is essentially the v4 approach except that the per-counter
callback now returns a boolean that we cache and reuse in
pmc_is_locally_enabled(). We have come full circle with the per-counter
callback vs. single callback for all :)
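
And a rough sketch of svm_pmu_handle_nested_transition() consuming the
bitmap (the iteration helpers here are just my assumption of what's
available, not taken from the series):

static void svm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	int i;

	if (!pmu->pmc_has_nested_enables)
		return;

	/* Only counters with Host-Only/Guest-Only set need re-evaluation. */
	for_each_set_bit(i, &pmu->pmc_has_nested_enables, X86_PMC_IDX_MAX) {
		struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, i);

		if (pmc)
			kvm_pmu_request_counter_reprogram(pmc);
	}
}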

Also, would you rather I send a new version with everything, or do you
want to pick up some of the patches in this version independently?

> 
> 	if (!(pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE))
> 		return false;
> 
> 	if (!test_bit(pmc->idx, &pmu->pmc_has_mode_specific_enables))
> 		return true;
> 
> 	return !kvm_pmu_call(pmc_is_locally_disabled)(pmc);
> 


Thread overview: 24+ messages
2026-04-30 20:27 [PATCH v5 00/13] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 01/13] KVM: nSVM: Stop leaking single-stepping on VMRUN into L2 Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 02/13] KVM: nSVM: Bail early out of VMRUN emulation if advancing RIP fails Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 03/13] KVM: nSVM: Move VMRUN instruction retirement after entering guest mode Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 04/13] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 05/13] KVM: x86/pmu: Rename reprogram_counters() to clarify usage Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 06/13] KVM: x86/pmu: Do a single atomic OR when reprogramming counters Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 07/13] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
2026-04-30 23:24   ` Yosry Ahmed
2026-05-01  3:34     ` Yosry Ahmed
2026-05-01 17:50       ` Yosry Ahmed
2026-05-05 18:11         ` Sean Christopherson
2026-05-05 18:23           ` Yosry Ahmed
2026-05-05 18:49             ` Sean Christopherson
2026-05-05 19:32               ` Yosry Ahmed [this message]
2026-05-05 19:58                 ` Sean Christopherson
2026-05-05 20:24                   ` Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 08/13] KVM: x86/pmu: Reprogram Host/Guest-Only counters on nested transitions Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 09/13] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 10/13] KVM: selftests: Refactor allocating guest stack into a helper Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 11/13] KVM: selftests: Allocate a dedicated guest page for x86 L2 guest stack Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 12/13] KVM: selftests: Drop L1-provided stacks for L2 guests on x86 Yosry Ahmed
2026-04-30 20:27 ` [PATCH v5 13/13] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
2026-04-30 20:38 ` [PATCH v5 00/13] KVM: x86/pmu: Add support for AMD " Yosry Ahmed
