From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	 Russell King <linux@armlinux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	 Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
	Oliver Upton <oliver.upton@linux.dev>,
	 Mingwei Zhang <mizhang@google.com>,
	Joey Gouly <joey.gouly@arm.com>,
	 Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	 Mark Rutland <mark.rutland@arm.com>,
	Shuah Khan <shuah@kernel.org>,
	linux-doc@vger.kernel.org,  linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,  kvmarm@lists.linux.dev,
	linux-perf-users@vger.kernel.org,
	 linux-kselftest@vger.kernel.org,
	Colton Lewis <coltonlewis@google.com>
Subject: [PATCH v4 19/23] KVM: arm64: Implement lazy PMU context swaps
Date: Mon, 14 Jul 2025 22:59:13 +0000
Message-ID: <20250714225917.1396543-20-coltonlewis@google.com>
In-Reply-To: <20250714225917.1396543-1-coltonlewis@google.com>

Many guests will never touch the PMU, so they should not have to pay
the cost of context swapping those registers.

Use the ownership enum from the previous commit to implement a simple
state machine for PMU ownership. The PMU is always in one of three
states: host owned, guest owned, or free.

A host owned state means all PMU registers are trapped coarsely by
MDCR_EL2.TPM. In the host owned state, PMU partitioning is disabled
and the PMU may not transition to a different state without
intervention from the host.

A guest owned state means some PMU registers are untrapped under FGT
controls. This is the only state in which context swaps take place.

A free state is the default state when partitioning is enabled: no
context swaps take place and KVM keeps the registers trapped. If a
guest accesses the PMU registers while in the free state, the PMU
transitions to the guest owned state and KVM recalculates MDCR_EL2 to
clear TPM.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_pmu.h  | 18 ++++++++++++++++++
 arch/arm64/kvm/debug.c            |  2 +-
 arch/arm64/kvm/pmu-direct.c       |  4 +++-
 arch/arm64/kvm/pmu.c              | 24 ++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         | 24 ++++++++++++++++++++++--
 6 files changed, 69 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21e32d7fa19b..f6803b57b648 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1453,6 +1453,7 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
 	return cpus_have_final_cap(ARM64_SPECTRE_V3A);
 }
 
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu);
 void kvm_init_host_debug_data(void);
 void kvm_vcpu_load_debug(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
index 58c1219adf54..47cfff7ebc26 100644
--- a/arch/arm64/include/asm/kvm_pmu.h
+++ b/arch/arm64/include/asm/kvm_pmu.h
@@ -97,6 +97,11 @@ u64 kvm_pmu_guest_counter_mask(struct arm_pmu *pmu);
 void kvm_pmu_host_counters_enable(void);
 void kvm_pmu_host_counters_disable(void);
 
+bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu);
+bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu);
+bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu);
+void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu);
+
 u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu);
 u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu);
 void kvm_pmu_load(struct kvm_vcpu *vcpu);
@@ -168,6 +173,19 @@ static inline u8 kvm_pmu_guest_num_counters(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+static inline bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+static inline void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu) {}
 static inline u8 kvm_pmu_hpmn(struct kvm_vcpu *vcpu)
 {
 	return 0;
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index fa8b4f846b68..128fa17b7a35 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -28,7 +28,7 @@
  *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
  *  - Self-hosted Trace (MDCR_EL2_TTRF/MDCR_EL2_E2TB)
  */
-static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
+void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
 	preempt_disable();
 
diff --git a/arch/arm64/kvm/pmu-direct.c b/arch/arm64/kvm/pmu-direct.c
index e21fdd274c2e..28d8540c5ed2 100644
--- a/arch/arm64/kvm/pmu-direct.c
+++ b/arch/arm64/kvm/pmu-direct.c
@@ -52,7 +52,8 @@ bool kvm_pmu_is_partitioned(struct arm_pmu *pmu)
  */
 bool kvm_vcpu_pmu_is_partitioned(struct kvm_vcpu *vcpu)
 {
-	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu);
+	return kvm_pmu_is_partitioned(vcpu->kvm->arch.arm_pmu) &&
+		!kvm_pmu_regs_host_owned(vcpu);
 }
 
 /**
@@ -69,6 +70,7 @@ bool kvm_vcpu_pmu_use_fgt(struct kvm_vcpu *vcpu)
 	u8 hpmn = vcpu->kvm->arch.nr_pmu_counters;
 
 	return kvm_vcpu_pmu_is_partitioned(vcpu) &&
+		kvm_pmu_regs_guest_owned(vcpu) &&
 		cpus_have_final_cap(ARM64_HAS_FGT) &&
 		(hpmn != 0 || cpus_have_final_cap(ARM64_HAS_HPMN0));
 }
diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 1e5f46c1346c..db11a3e9c4b7 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -496,6 +496,7 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 	init_irq_work(&vcpu->arch.pmu.overflow_work,
 		      kvm_pmu_perf_overflow_notify_vcpu);
 
+	vcpu->arch.pmu.owner = VCPU_REGISTER_HOST_OWNED;
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
@@ -906,3 +907,26 @@ bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
 {
 	return check_pmu_access_disabled(vcpu, ARMV8_PMU_USERENR_EN);
 }
+
+bool kvm_pmu_regs_free(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_FREE;
+}
+
+bool kvm_pmu_regs_host_owned(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_HOST_OWNED;
+}
+
+bool kvm_pmu_regs_guest_owned(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pmu.owner == VCPU_REGISTER_GUEST_OWNED;
+}
+
+void kvm_pmu_regs_set_guest_owned(struct kvm_vcpu *vcpu)
+{
+	if (kvm_pmu_regs_free(vcpu)) {
+		vcpu->arch.pmu.owner = VCPU_REGISTER_GUEST_OWNED;
+		kvm_arm_setup_mdcr_el2(vcpu);
+	}
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e3d4ca167881..7d4b194bfa0a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -860,6 +860,8 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 val;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -887,6 +889,8 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_event_counter_el0_disabled(vcpu))
 		return false;
 
@@ -905,6 +909,8 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 pmceid, mask, shift;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	BUG_ON(p->is_write);
 
 	if (pmu_access_el0_disabled(vcpu))
@@ -973,6 +979,8 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 {
 	u64 idx = ~0UL;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (r->CRn == 9 && r->CRm == 13) {
 		if (r->Op2 == 2) {
 			/* PMXEVCNTR_EL0 */
@@ -1049,6 +1057,8 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 idx, reg, pmselr;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -1110,6 +1120,8 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	u64 val, mask;
 
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
 
@@ -1134,7 +1146,10 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			   const struct sys_reg_desc *r)
 {
-	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+	u64 mask;
+
+	kvm_pmu_regs_set_guest_owned(vcpu);
+	mask = kvm_pmu_accessible_counter_mask(vcpu);
 
 	if (check_pmu_access_disabled(vcpu, 0))
 		return false;
@@ -1171,7 +1186,10 @@ static void writethrough_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
 {
-	u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+	u64 mask;
+
+	kvm_pmu_regs_set_guest_owned(vcpu);
+	mask = kvm_pmu_accessible_counter_mask(vcpu);
 
 	if (pmu_access_el0_disabled(vcpu))
 		return false;
@@ -1211,6 +1229,8 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			     const struct sys_reg_desc *r)
 {
+	kvm_pmu_regs_set_guest_owned(vcpu);
+
 	if (p->is_write) {
 		if (!vcpu_mode_priv(vcpu))
 			return undef_access(vcpu, p, r);
-- 
2.50.0.727.gbf7dc18ff4-goog


