Date: Mon, 18 Aug 2025 08:25:34 -0700
From: Sean Christopherson <seanjc@google.com>
To: Peter Zijlstra
Cc: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
 "H. Peter Anvin", Andy Lutomirski, Ingo Molnar,
 Arnaldo Carvalho de Melo, Namhyung Kim, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvm@vger.kernel.org, loongarch@lists.linux.dev,
 kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
 Kan Liang, Yongwei Ma, Mingwei Zhang, Xiong Zhang, Sandipan Das,
 Dapeng Mi
Subject: Re: [PATCH v5 09/44] perf/x86: Switch LVTPC to/from mediated PMI
 vector on guest load/put context
In-Reply-To: <20250818143204.GH3289052@noisy.programming.kicks-ass.net>
References: <20250806195706.1650976-1-seanjc@google.com>
 <20250806195706.1650976-10-seanjc@google.com>
 <20250815113951.GC4067720@noisy.programming.kicks-ass.net>
 <20250818143204.GH3289052@noisy.programming.kicks-ass.net>

On Mon, Aug 18, 2025, Peter Zijlstra wrote:
> On Fri, Aug 15, 2025 at 08:55:25AM -0700, Sean Christopherson wrote:
> > On Fri, Aug 15, 2025, Sean Christopherson wrote:
> > > On Fri, Aug 15, 2025, Peter Zijlstra wrote:
> > > > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > > > index e1df3c3bfc0d..ad22b182762e 100644
> > > > > --- a/kernel/events/core.c
> > > > > +++ b/kernel/events/core.c
> > > > > @@ -6408,6 +6408,8 @@ void perf_load_guest_context(unsigned long data)
> > > > >  		task_ctx_sched_out(cpuctx->task_ctx, NULL, EVENT_GUEST);
> > > > >  	}
> > > > >
> > > > > +	arch_perf_load_guest_context(data);
> > > >
> > > > So I still don't understand why this ever needs to reach the generic
> > > > code. x86 pmu driver and x86 kvm can surely sort this out inside of
> > > > x86, no?
> > >
> > > It's definitely possible to handle this entirely within x86, I just
> > > don't love switching the LVTPC without the protection of perf_ctx_lock
> > > and perf_ctx_disable().  It's not a sticking point for me if you
> > > strongly prefer something like this:
> > >
> > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > index 0e5048ae86fa..86b81c217b97 100644
> > > --- a/arch/x86/kvm/pmu.c
> > > +++ b/arch/x86/kvm/pmu.c
> > > @@ -1319,7 +1319,9 @@ void kvm_mediated_pmu_load(struct kvm_vcpu *vcpu)
> > >
> > >  	lockdep_assert_irqs_disabled();
> > >
> > > -	perf_load_guest_context(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));
> > > +	perf_load_guest_context();
> > > +
> > > +	perf_load_guest_lvtpc(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));
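For reference, a minimal sketch of what the perf side of that split could
look like.  This is illustrative only: KVM_GUEST_PMI_VECTOR is a stand-in
name for whatever vector the mediated PMU ends up using, and the mask-bit
handling is my assumption about the intent, not code from this series.

	void perf_load_guest_lvtpc(u32 guest_lvtpc)
	{
		/* The LVTPC must only be switched with PMIs quiesced. */
		lockdep_assert_irqs_disabled();

		/*
		 * Route PMIs to the dedicated mediated vector so that a
		 * counter that fires with guest context loaded lands in
		 * KVM's handler, and preserve only the guest's mask bit.
		 */
		apic_write(APIC_LVTPC, KVM_GUEST_PMI_VECTOR |
				       (guest_lvtpc & APIC_LVT_MASKED));
	}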
> > Hmm, an argument for providing a dedicated perf_load_guest_lvtpc() API
> > is that it would allow KVM to handle LVTPC writes in KVM's VM-Exit
> > fastpath, i.e. without having to do a full put+reload of the guest
> > context.
> >
> > So if we're confident that switching the host LVTPC outside of
> > perf_{load,put}_guest_context() is functionally safe, I'm a-ok with it.
>
> Let me see.  So the hardware sets Masked when it raises the interrupt.
>
> The interrupt handler clears it from software -- depending on uarch, in
> 3 different places:
>
>  1) right at the start of the PMI
>  2) in the middle, right before enabling the PMU (writing global control)
>  3) at the end of the PMI
>
> The various changelogs adding that code mention spurious PMIs and
> malformed PEBS records.
>
> So the fun all happens when the guest is doing a PMI and gets a VM-exit
> while still Masked.
>
> At that point, we can come in and completely rewrite the PMU state,
> reroute the PMI and enable things again.  Then later, we 'restore' the
> PMU state, re-set LVTPC masked to the guest interrupt and 'resume'.
>
> What could possibly go wrong :/  Kan, I'm assuming, but not knowing, that
> writing all the PMU MSRs is somehow serializing state sufficiently to not
> cause the above-mentioned fails?  Specifically, clearing PEBS_ENABLE
> should inhibit those malformed PEBS records or something?  What if the
> host also has PEBS and we don't actually clear the bit?
>
> The current order ensures we rewrite the LVTPC when global control is
> unset; I think we want to keep that.

Yes, for sure.

> While staring at this, I note that perf_load_guest_context() will clear
> global ctrl, clear all the counter programming, and re-enable an empty
> PMU.  Now, an empty PMU should result in global control being zero --
> there is nothing to run, after all.
>
> But then kvm_mediated_pmu_load() writes an explicit 0 again.  Perhaps
> replace this with asserting that it is 0 instead?

Yeah, I like that idea, a lot.  This?

	perf_load_guest_context();

	/*
	 * Sanity check that "loading" guest context disabled all counters, as
	 * modifying the LVTPC while host perf is active will cause explosions,
	 * as will loading event selectors and PMCs with guest values.
	 *
	 * VMX will enable/disable counters at VM-Enter/VM-Exit by atomically
	 * loading PERF_GLOBAL_CONTROL.  SVM effectively performs the switch
	 * by configuring all events to be GUEST_ONLY.
	 */
	WARN_ON_ONCE(rdmsrq(kvm_pmu_ops.PERF_GLOBAL_CTRL));

	perf_load_guest_lvtpc(kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVTPC));

> Anyway, this means that moving the LVTPC write into
> kvm_mediated_pmu_load() as you suggest is identical:
> perf_load_guest_context() results in global control being 0, we then
> assert it is 0, and write the LVTPC while it is still 0.
> kvm_pmu_load_guest_pmcs() will then frob the MSRs.
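To spell that ordering out as one condensed, illustrative sketch (the shape
of kvm_mediated_pmu_load() follows the diffs above; kvm_pmu_load_guest_pmcs()
internals are assumed):

	void kvm_mediated_pmu_load(struct kvm_vcpu *vcpu)
	{
		lockdep_assert_irqs_disabled();

		/* Quiesce host perf; leaves global control at 0. */
		perf_load_guest_context();

		/* Nothing may be counting while the LVTPC is switched. */
		WARN_ON_ONCE(rdmsrq(kvm_pmu_ops.PERF_GLOBAL_CTRL));

		/* Reroute PMIs while global control is still 0. */
		perf_load_guest_lvtpc(kvm_lapic_get_reg(vcpu->arch.apic,
							APIC_LVTPC));

		/* Only now load guest event selectors and PMCs. */
		kvm_pmu_load_guest_pmcs(vcpu);
	}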
> OK, so *IF* doing the VM-exit during a PMI is sound, this is something
> that needs a comment somewhere.  I'm a bit lost here.

Are you essentially asking if it's ok to take a VM-Exit while the guest is
handling a PMI?  If so, that _has_ to work, because there are myriad things
that can/will trigger a VM-Exit at any point while the guest is active.

> Then going back again is the easy part, since on the host side we can
> never transition into KVM during a PMI.
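As for the "needs a comment somewhere" part, a strawman for what could sit
above kvm_mediated_pmu_load() -- all wording is mine, and the PEBS claim
still needs Kan's confirmation:

	/*
	 * It is safe to context switch the PMU while the guest is handling
	 * a PMI, i.e. with the LVTPC still Masked: "loading" guest context
	 * zeros global control and tears down all counter (and PEBS)
	 * programming before the LVTPC is rerouted, so no host PMI and no
	 * malformed PEBS record can be generated while the LVTPC and the
	 * counters are in flux.  On put, the guest's LVTPC value, including
	 * its mask bit, is restored verbatim, so the guest's PMI handler
	 * resumes with exactly the state it had at VM-Exit.
	 */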