Date: Fri, 11 Aug 2023 07:10:19 +0100
Message-ID: <87r0oap0s4.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Shijie Huang <shijie@amperemail.onmicrosoft.com>
Cc: Huang Shijie <shijie@os.amperecomputing.com>,
	oliver.upton@linux.dev, james.morse@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, patches@amperecomputing.com,
	zwang@amperecomputing.com, Mark Rutland <mark.rutland@arm.com>
Subject: Re: [PATCH v2] KVM/arm64: reconfigurate the event filters for guest context
In-Reply-To: <95726705-765d-020b-8c85-62fb917f2c14@amperemail.onmicrosoft.com>
References: <20230810072906.4007-1-shijie@os.amperecomputing.com>
	<87sf8qq5o0.wl-maz@kernel.org>
	<95726705-765d-020b-8c85-62fb917f2c14@amperemail.onmicrosoft.com>

On Fri, 11 Aug 2023 02:46:49 +0100,
Shijie Huang <shijie@amperemail.onmicrosoft.com> wrote:
>
> Hi Marc,
>
> On 2023/8/10 23:27, Marc Zyngier wrote:
> > Huang,
> >
> > Please
> > make sure you add everyone who commented on v1 (I've Cc'd Mark
> > so that he can chime in as needed).
>
> thanks.
>
> > On Thu, 10 Aug 2023 08:29:06 +0100,
> > Huang Shijie <shijie@os.amperecomputing.com> wrote:
> >> 1.) Background.
> >> 1.1) In arm64, start a guest with Qemu which is running as a VMM of KVM,
> >>      bind the guest to core 33, and run program "a" in the guest.
> >>      The code of "a" is shown below:
> >> ----------------------------------------------------------
> >>  #include <stdio.h>
> >>
> >>  int main()
> >>  {
> >> 	unsigned long i = 0;
> >>
> >> 	for (;;) {
> >> 		i++;
> >> 	}
> >>
> >> 	printf("i:%ld\n", i);
> >> 	return 0;
> >>  }
> >> ----------------------------------------------------------
> >>
> >> 1.2) Use the following perf command in the host:
> >>     #perf stat -e cycles:G,cycles:H -C 33 -I 1000 sleep 1
> >>     #           time             counts unit events
> >>      1.000817400      3,299,471,572      cycles:G
> >>      1.000817400          3,240,586      cycles:H
> >>
> >> This result is correct; my CPU's frequency is 3.3GHz.
> >>
> >> 1.3) Use the following perf command in the host:
> >>     #perf stat -e cycles:G,cycles:H -C 33 -d -d -I 1000 sleep 1
> >>                time             counts unit events
> >>      1.000831480        153,634,097      cycles:G                 (70.03%)
> >>      1.000831480      3,147,940,599      cycles:H                 (70.03%)
> >>      1.000831480      1,143,598,527      L1-dcache-loads          (70.03%)
> >>      1.000831480              9,986      L1-dcache-load-misses    #    0.00% of all L1-dcache accesses  (70.03%)
> >>      1.000831480                         LLC-loads
> >>      1.000831480                         LLC-load-misses
> >>      1.000831480        580,887,696      L1-icache-loads          (70.03%)
> >>      1.000831480             77,855      L1-icache-load-misses    #    0.01% of all L1-icache accesses  (70.03%)
> >>      1.000831480      6,112,224,612      dTLB-loads               (70.03%)
> >>      1.000831480             16,222      dTLB-load-misses         #    0.00% of all dTLB cache accesses  (69.94%)
> >>      1.000831480        590,015,996      iTLB-loads               (59.95%)
> >>      1.000831480                505      iTLB-load-misses         #    0.00% of all iTLB cache accesses  (59.95%)
> >>
> >> This result is wrong. The "cycles:G" should be nearly 3.3G.
> >>
> >> 2.) Root cause.
> >> There are only 7 counters on my arm64 platform:
> >>     (one cycle counter) + (6 normal counters)
> >>
> >> In 1.3 above, we use 10 event counters.
> >> Since we only have 7 counters, the perf core triggers
> >> multiplexing from an hrtimer:
> >>     perf_mux_hrtimer_restart() --> perf_rotate_context()
> >>
> >> If the hrtimer fires while the host is running, it's fine.
> >> If the hrtimer fires while the guest is running,
> >> perf_rotate_context() programs the PMU with the filters for the
> >> host context, and KVM does not get a chance to restore the
> >> PMU registers with kvm_vcpu_pmu_restore_guest().
> >> The PMU does not count correctly, so we get a wrong result.
> >>
> >> 3.) About this patch.
> >> Make a KVM_REQ_RELOAD_PMU request before reentering the
> >> guest. The request calls kvm_vcpu_pmu_restore_guest()
> >> to reconfigure the event filters for the guest context.
> >>
> >> 4.) Test result with this patch:
> >>     #perf stat -e cycles:G,cycles:H -C 33 -d -d -I 1000 sleep 1
> >>                time             counts unit events
> >>      1.001006400      3,298,348,656      cycles:G                 (70.03%)
> >>      1.001006400          3,144,532      cycles:H                 (70.03%)
> >>      1.001006400            941,149      L1-dcache-loads          (70.03%)
> >>      1.001006400             17,937      L1-dcache-load-misses    #    1.91% of all L1-dcache accesses  (70.03%)
> >>      1.001006400                         LLC-loads
> >>      1.001006400                         LLC-load-misses
> >>      1.001006400          1,101,889      L1-icache-loads          (70.03%)
> >>      1.001006400            121,638      L1-icache-load-misses    #   11.04% of all L1-icache accesses  (70.03%)
> >>      1.001006400          1,031,228      dTLB-loads               (70.03%)
> >>      1.001006400             26,952      dTLB-load-misses         #    2.61% of all dTLB cache accesses  (69.93%)
> >>      1.001006400          1,030,678      iTLB-loads               (59.94%)
> >>      1.001006400                338      iTLB-load-misses         #    0.03% of all iTLB cache accesses  (59.94%)
> >>
> >> The result is correct. The "cycles:G" is nearly 3.3G now.
> >>
> >> Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
> >> ---
> >> v1 --> v2:
> >> 	Do not change perf/core code; only change the arm64 KVM code.
> >> v1: https://lkml.org/lkml/2023/8/8/1465
> >>
> >> ---
> >>  arch/arm64/kvm/arm.c | 11 ++++++++++-
> >>  1 file changed, 10 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> >> index c2c14059f6a8..475a2f0e0e40 100644
> >> --- a/arch/arm64/kvm/arm.c
> >> +++ b/arch/arm64/kvm/arm.c
> >> @@ -919,8 +919,17 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >>  		if (!ret)
> >>  			ret = 1;
> >>
> >> -		if (ret > 0)
> >> +		if (ret > 0) {
> >> +			/*
> >> +			 * perf_rotate_context() may rotate the events and
> >> +			 * reprogram the PMU with filters for the host context.
> >> +			 * So make a request before reentering the guest to
> >> +			 * reconfigure the event filters for the guest context.
> >> +			 */
> >> +			kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
> >> +
> >>  			ret = check_vcpu_requests(vcpu);
> >> +		}
> >
> > This looks extremely heavy handed. You're performing the reload on
> > *every* entry, and I don't think this is right (exit-heavy workloads
> > will suffer from it).
> >
> > Furthermore, you're also reloading the virtual state of the PMU
> > (recreating guest events and other things), all of which looks pretty
> > pointless, as all we're interested in is what is being counted on the
> > *host*.
>
> okay. What about adding a _new_ request, such as KVM_REQ_RESTORE_PMU_GUEST?
>
> > Instead, we can restrict the reload of the host state (and only that)
> > to situations where:
> >
> > - we're running on a VHE system
> >
> > - we have a host PMUv3 (not everybody does), as that's the only way we
> >   can profile a guest
>
> okay. No problem.
>
> > and ideally we would have a way to detect that a rotation happened
> > (which may require some help from the low-level PMU code).
>
> I will check it; hopefully we can find a better way.

I came up with the following patch, completely untested. Let me know
how that fares for you.

Thanks,

	M.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 93c541111dea..fb875c5c0347 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -49,6 +49,7 @@
 #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
 #define KVM_REQ_RELOAD_PMU	KVM_ARCH_REQ(5)
 #define KVM_REQ_SUSPEND		KVM_ARCH_REQ(6)
+#define KVM_REQ_RELOAD_GUEST_PMU_EVENTS	KVM_ARCH_REQ(7)
 
 #define KVM_DIRTY_LOG_MANUAL_CAPS	(KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
					 KVM_DIRTY_LOG_INITIALLY_SET)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8b51570a76f8..b40db24f1f0b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -804,6 +804,9 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 			kvm_pmu_handle_pmcr(vcpu, __vcpu_sys_reg(vcpu, PMCR_EL0));
 
+		if (kvm_check_request(KVM_REQ_RELOAD_GUEST_PMU_EVENTS, vcpu))
+			kvm_vcpu_pmu_restore_guest(vcpu);
+
 		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
 			return kvm_vcpu_suspend(vcpu);
 
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 08b3a1bf0ef6..7012de417092 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -772,6 +772,9 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 
 	/* Enable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+
+	if (in_interrupt())
+		kvm_resync_guest_context();
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 847da6fc2713..d66f7216b5a9 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -74,6 +74,7 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
 struct kvm_pmu_events *kvm_get_pmu_events(void);
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
 void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
+void kvm_resync_guest_context(void);
 
 #define kvm_vcpu_has_pmu(vcpu)	\
	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
@@ -171,6 +172,7 @@ static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
 {
	return 0;
 }
+static inline void kvm_resync_guest_context(void) {}
 
 #endif
 
-- 
Without deviation from the norm, progress is not possible.