Reply-To: Sean Christopherson
Date: Fri, 5 Dec 2025 16:16:43 -0800
In-Reply-To:
<20251206001720.468579-1-seanjc@google.com>
References: <20251206001720.468579-1-seanjc@google.com>
Message-ID: <20251206001720.468579-8-seanjc@google.com>
Subject: [PATCH v6 07/44] perf: Add APIs to load/put guest mediated PMU context
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson,
	Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Mingwei Zhang, Xudong Hao, Sandipan Das, Dapeng Mi, Xiong Zhang,
	Manali Shukla, Jim Mattson

From: Kan Liang

Add exported APIs to load/put a guest mediated PMU context.  KVM will load
the guest PMU shortly before VM-Enter, and put the guest PMU shortly after
VM-Exit.

On the perf side of things, schedule out all exclude_guest events when the
guest context is loaded, and schedule them back in when the guest context
is put.  I.e. yield the hardware PMU resources to the guest, by way of KVM.

Note, perf is only responsible for managing host context.  KVM is
responsible for loading/storing guest state to/from hardware.
Suggested-by: Sean Christopherson
Signed-off-by: Kan Liang
Signed-off-by: Mingwei Zhang
[sean: shuffle patches around, write changelog]
Tested-by: Xudong Hao
Signed-off-by: Sean Christopherson
---
 include/linux/perf_event.h |  2 ++
 kernel/events/core.c       | 61 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index eaab830c9bf5..cfc8cd86c409 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1925,6 +1925,8 @@ extern u64 perf_event_pause(struct perf_event *event, bool reset);
 #ifdef CONFIG_PERF_GUEST_MEDIATED_PMU
 int perf_create_mediated_pmu(void);
 void perf_release_mediated_pmu(void);
+void perf_load_guest_context(void);
+void perf_put_guest_context(void);
 #endif
 
 #else /* !CONFIG_PERF_EVENTS: */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index f72d4844b05e..81c35859e6ea 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -469,10 +469,19 @@ static cpumask_var_t perf_online_pkg_mask;
 static cpumask_var_t perf_online_sys_mask;
 static struct kmem_cache *perf_event_cache;
 
+#ifdef CONFIG_PERF_GUEST_MEDIATED_PMU
+static DEFINE_PER_CPU(bool, guest_ctx_loaded);
+
+static __always_inline bool is_guest_mediated_pmu_loaded(void)
+{
+	return __this_cpu_read(guest_ctx_loaded);
+}
+#else
 static __always_inline bool is_guest_mediated_pmu_loaded(void)
 {
 	return false;
 }
+#endif
 
 /*
  * perf event paranoia level:
@@ -6385,6 +6394,58 @@ void perf_release_mediated_pmu(void)
 	atomic_dec(&nr_mediated_pmu_vms);
 }
 EXPORT_SYMBOL_GPL(perf_release_mediated_pmu);
+
+/* When loading a guest's mediated PMU, schedule out all exclude_guest events. */
+void perf_load_guest_context(void)
+{
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+
+	lockdep_assert_irqs_disabled();
+
+	guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx);
+
+	if (WARN_ON_ONCE(__this_cpu_read(guest_ctx_loaded)))
+		return;
+
+	perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST);
+	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_GUEST);
+	if (cpuctx->task_ctx) {
+		perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST);
+		task_ctx_sched_out(cpuctx->task_ctx, NULL, EVENT_GUEST);
+	}
+
+	perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST);
+	if (cpuctx->task_ctx)
+		perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST);
+
+	__this_cpu_write(guest_ctx_loaded, true);
+}
+EXPORT_SYMBOL_GPL(perf_load_guest_context);
+
+void perf_put_guest_context(void)
+{
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+
+	lockdep_assert_irqs_disabled();
+
+	guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx);
+
+	if (WARN_ON_ONCE(!__this_cpu_read(guest_ctx_loaded)))
+		return;
+
+	perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST);
+	if (cpuctx->task_ctx)
+		perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST);
+
+	perf_event_sched_in(cpuctx, cpuctx->task_ctx, NULL, EVENT_GUEST);
+
+	if (cpuctx->task_ctx)
+		perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST);
+	perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST);
+
+	__this_cpu_write(guest_ctx_loaded, false);
+}
+EXPORT_SYMBOL_GPL(perf_put_guest_context);
 #else
 static int mediated_pmu_account_event(struct perf_event *event) { return 0; }
 static void mediated_pmu_unaccount_event(struct perf_event *event) {}
-- 
2.52.0.223.gf5cc29aaa4-goog