From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 4 May 2026 21:18:05 +0000
In-Reply-To: <20260504211813.1804997-1-coltonlewis@google.com>
X-Mailing-List: kvm@vger.kernel.org
Mime-Version: 1.0
References:
 <20260504211813.1804997-1-coltonlewis@google.com>
X-Mailer: git-send-email 2.54.0.545.g6539524ca2-goog
Message-ID: <20260504211813.1804997-13-coltonlewis@google.com>
Subject: [PATCH v7 12/20] perf: Add perf_pmu_resched_update()
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Alexandru Elisei, Paolo Bonzini, Jonathan Corbet, Russell King,
 Catalin Marinas, Will Deacon, Marc Zyngier, Oliver Upton,
 Mingwei Zhang, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Mark Rutland, Shuah Khan, Ganapatrao Kulkarni, James Clark,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, linux-kselftest@vger.kernel.org,
 Colton Lewis
Content-Type: text/plain; charset="UTF-8"

To modify PMU guest counter reservations dynamically, we need to update
the available counters safely. Introduce perf_pmu_resched_update() to
allow updating the PMU struct between scheduling perf events out and
scheduling them back in again. It takes a callback to invoke between
schedule out and schedule in. This accomplishes the goal with minimal
perf API expansion.

Refactor ctx_resched() to call the callback in the right place.
Signed-off-by: Colton Lewis
---
 include/linux/perf_event.h |  3 +++
 kernel/events/core.c       | 28 +++++++++++++++++++++++++---
 2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 48d851fbd8ea5..a08db3ee38b10 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1242,6 +1242,9 @@ extern int perf_event_task_disable(void);
 extern int perf_event_task_enable(void);
 
 extern void perf_pmu_resched(struct pmu *pmu);
+extern void perf_pmu_resched_update(struct pmu *pmu,
+				    void (*update)(struct pmu *, void *),
+				    void *data);
 
 extern int perf_event_refresh(struct perf_event *event, int refresh);
 extern void perf_event_update_userpage(struct perf_event *event);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 89b40e4397177..62fec73caabad 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2983,9 +2983,10 @@ static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
  * event_type is a bit mask of the types of events involved. For CPU events,
  * event_type is only either EVENT_PINNED or EVENT_FLEXIBLE.
 */
-static void ctx_resched(struct perf_cpu_context *cpuctx,
-			struct perf_event_context *task_ctx,
-			struct pmu *pmu, enum event_type_t event_type)
+static void __ctx_resched(struct perf_cpu_context *cpuctx,
+			  struct perf_event_context *task_ctx,
+			  struct pmu *pmu, enum event_type_t event_type,
+			  void (*update)(struct pmu *, void *), void *data)
 {
 	bool cpu_event = !!(event_type & EVENT_CPU);
 	struct perf_event_pmu_context *epc;
@@ -3021,6 +3022,9 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 	else if (event_type & EVENT_PINNED)
 		ctx_sched_out(&cpuctx->ctx, pmu, EVENT_FLEXIBLE);
 
+	if (update)
+		update(pmu, data);
+
 	perf_event_sched_in(cpuctx, task_ctx, pmu, 0);
 
 	for_each_epc(epc, &cpuctx->ctx, pmu, 0)
@@ -3032,6 +3036,24 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
 	}
 }
 
+static void ctx_resched(struct perf_cpu_context *cpuctx,
+			struct perf_event_context *task_ctx,
+			struct pmu *pmu, enum event_type_t event_type)
+{
+	__ctx_resched(cpuctx, task_ctx, pmu, event_type, NULL, NULL);
+}
+
+void perf_pmu_resched_update(struct pmu *pmu, void (*update)(struct pmu *, void *), void *data)
+{
+	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+	struct perf_event_context *task_ctx = cpuctx->task_ctx;
+
+	perf_ctx_lock(cpuctx, task_ctx);
+	__ctx_resched(cpuctx, task_ctx, pmu, EVENT_ALL|EVENT_CPU, update, data);
+	perf_ctx_unlock(cpuctx, task_ctx);
+}
+EXPORT_SYMBOL_GPL(perf_pmu_resched_update);
+
 void perf_pmu_resched(struct pmu *pmu)
 {
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
-- 
2.54.0.545.g6539524ca2-goog