From: Yosry Ahmed <yosry@kernel.org>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Jim Mattson <jmattson@google.com>,
	 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM
Date: Fri, 24 Apr 2026 06:55:52 +0000	[thread overview]
Message-ID: <aesRPt5Vpco29vAt@google.com> (raw)
In-Reply-To: <adReHbkVRDGOihb4@google.com>

On Mon, Apr 06, 2026 at 06:30:05PM -0700, Sean Christopherson wrote:
> On Thu, Mar 26, 2026, Yosry Ahmed wrote:
> > diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> > index ff5acb8b199b0..5961c002b28eb 100644
> > --- a/arch/x86/include/asm/perf_event.h
> > +++ b/arch/x86/include/asm/perf_event.h
> > @@ -60,6 +60,8 @@
> >  #define AMD64_EVENTSEL_INT_CORE_ENABLE			(1ULL << 36)
> >  #define AMD64_EVENTSEL_GUESTONLY			(1ULL << 40)
> >  #define AMD64_EVENTSEL_HOSTONLY				(1ULL << 41)
> > +#define AMD64_EVENTSEL_HOST_GUEST_MASK			\
> > +	(AMD64_EVENTSEL_HOSTONLY | AMD64_EVENTSEL_GUESTONLY)
> >  
> >  #define AMD64_EVENTSEL_INT_CORE_SEL_SHIFT		37
> >  #define AMD64_EVENTSEL_INT_CORE_SEL_MASK		\
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index d6ac3c55fce55..e35d598f809a2 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -559,6 +559,7 @@ static int reprogram_counter(struct kvm_pmc *pmc)
> >  
> >  	if (kvm_vcpu_has_mediated_pmu(pmu_to_vcpu(pmu))) {
> >  		kvm_mediated_pmu_refresh_event_filter(pmc);
> > +		kvm_pmu_call(mediated_reprogram_counter)(pmc);
> 
> I would rather make a single call from kvm_pmu_handle_event(), and let the vendor
> deal with mediated vs. legacy.  I want to avoid mediated-specific ops as much as
> possible, and I think kvm_x86_ops.reprogram_counters() would be easier to
> understand overall.

I think this doesn't apply anymore now that most nested transitions
won't be handled through kvm_pmu_handle_event(). Also, we need
kvm_mediated_pmu_refresh_event_filter() to still be called before
re-evaluating the H/G bits and EFER.SVME.

I think we should leave this callback as-is and handle everything
through reprogram_counter(): export reprogram_counter(), rename it to
kvm_pmu_reprogram_counter(), and end up with something like this:

void __kvm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu, bool defer)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
	struct kvm_pmc *pmc;
	int bit;

	if (bitmap_empty(pmu->reprogram_on_nested_transition, X86_PMC_IDX_MAX))
		return;

	bitmap_copy(bitmap, pmu->reprogram_on_nested_transition, X86_PMC_IDX_MAX);
	bitmap_zero(pmu->reprogram_on_nested_transition, X86_PMC_IDX_MAX);

	BUILD_BUG_ON(sizeof(pmu->reprogram_on_nested_transition) != sizeof(atomic64_t));
	if (defer) {
		atomic64_or(*(s64 *)bitmap, &pmu->__reprogram_pmi);
		kvm_make_request(KVM_REQ_PMU, vcpu);
		return;
	}

	kvm_for_each_pmc(pmu, pmc, bit, bitmap)
		kvm_pmu_reprogram_counter(pmc);
}

void kvm_pmu_handle_nested_transition(struct kvm_vcpu *vcpu)
{
	__kvm_pmu_handle_nested_transition(vcpu, false);
}

Actually, if that's desirable, we can move this logic into SVM code
now. We won't be calling kvm_pmu_handle_nested_transition() from inside
enter_guest_mode() and leave_guest_mode() anyway, since only the
svm_leave_nested() path needs the deferred variant.

So we can move:
- kvm_pmu_handle_nested_transition() to svm_pmu_handle_nested_transition()
- pmu->reprogram_on_nested_transition to
  svm->nested.reprogram_on_nested_transition

I'm not sure whether we want to keep SVM-specific logic in SVM code, or
keep the code as generic as possible. I can see good arguments for both
stances.

> > +
> > +static void amd_mediated_pmu_reprogram_counter(struct kvm_pmc *pmc)
> > +{
> > +	amd_mediated_pmu_handle_host_guest_bits(pmc);
> 
> And then this doesn't need to be such a wonky wrapper, and the "reprogram on
> nested transition" logic can also clear the entire bitmap instead of doing things
> piecemeal, e.g. it can be something like so in the end:

We can still clear the entire bitmap in one go with the above
suggestion, but we'll keep the wonky wrapper :)

> 
> 	if (!kvm_vcpu_has_mediated_pmu(vcpu))
> 		return;
> 
> 	bitmap_zero(pmu->pmc_reprogram_on_nested_transition, X86_PMC_IDX_MAX);
> 
> 	kvm_for_each_pmc(pmu, pmc, bit, bitmap)
> 		amd_mediated_pmu_handle_host_guest_bits(pmc);


Thread overview: 40+ messages
2026-03-26  3:11 [PATCH v4 0/6] KVM: x86/pmu: Add support for AMD Host-Only/Guest-Only bits Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 1/6] KVM: x86: Move enable_pmu/enable_mediated_pmu to pmu.h and pmu.c Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 2/6] KVM: x86: Move guest_mode helpers to x86.h Yosry Ahmed
2026-03-26 22:48   ` kernel test robot
2026-03-26 23:18     ` Yosry Ahmed
2026-03-27  3:15   ` kernel test robot
2026-03-26  3:11 ` [PATCH v4 3/6] KVM: x86/pmu: Disable counters based on Host-Only/Guest-Only bits in SVM Yosry Ahmed
2026-04-07  1:30   ` Sean Christopherson
2026-04-24  6:55     ` Yosry Ahmed [this message]
2026-04-27 18:50       ` Sean Christopherson
2026-04-27 19:11         ` Yosry Ahmed
2026-04-27 19:54           ` Sean Christopherson
2026-04-27 20:02             ` Yosry Ahmed
2026-04-27 20:06               ` Sean Christopherson
2026-04-27 23:20         ` Yosry Ahmed
2026-04-27 23:53           ` Sean Christopherson
2026-04-28  0:34             ` Yosry Ahmed
2026-04-28  0:35               ` Yosry Ahmed
2026-04-28  0:37                 ` Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 4/6] KVM: x86/pmu: Re-evaluate Host-Only/Guest-Only on nested SVM transitions Yosry Ahmed
2026-04-07  1:35   ` Sean Christopherson
2026-04-09  4:59   ` Jim Mattson
2026-04-09 17:22     ` Sean Christopherson
2026-04-09 17:29       ` Jim Mattson
2026-04-09 17:48         ` Sean Christopherson
2026-04-09 18:35           ` Jim Mattson
2026-04-09 18:38             ` Sean Christopherson
2026-04-09 21:21               ` Sean Christopherson
2026-04-10  3:50                 ` Jim Mattson
2026-04-15 21:26                   ` Sean Christopherson
2026-04-15 23:07                     ` Jim Mattson
2026-04-16  0:29                       ` Sean Christopherson
2026-04-17 22:51                         ` Jim Mattson
2026-04-21 20:01                 ` Yosry Ahmed
2026-04-22 22:42                   ` Sean Christopherson
2026-04-24  6:57                     ` Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 5/6] KVM: x86/pmu: Allow Host-Only/Guest-Only bits with nSVM and mediated PMU Yosry Ahmed
2026-03-26  3:11 ` [PATCH v4 6/6] KVM: selftests: Add svm_pmu_host_guest_test for Host-Only/Guest-Only bits Yosry Ahmed
2026-04-07  1:39   ` Sean Christopherson
2026-04-07  3:23     ` Jim Mattson
