From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: "Peter Zijlstra (Intel)", Kan Liang, Sasha Levin, mingo@redhat.com,
	acme@kernel.org, namhyung@kernel.org, linux-perf-users@vger.kernel.org
Subject: [PATCH AUTOSEL 6.6 265/294] perf: Avoid the read if the count is already updated
Date: Mon, 5 May 2025 18:56:05 -0400
Message-Id: <20250505225634.2688578-265-sashal@kernel.org>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250505225634.2688578-1-sashal@kernel.org>
References: <20250505225634.2688578-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: linux-perf-users@vger.kernel.org
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
X-stable-base: Linux 6.6.89
Content-Transfer-Encoding: 8bit

From: "Peter Zijlstra (Intel)"

[ Upstream commit 8ce939a0fa194939cc1f92dbd8bc1a7806e7d40a ]

The event may have been updated in the PMU-specific implementation,
e.g., Intel PEBS counters snapshotting. The common code should not
read and overwrite the value.

The PERF_SAMPLE_READ in the data->sample_type can be used to detect
whether the PMU-specific value is available. If yes, avoid the
pmu->read() in the common code. Add a new flag, skip_read, to track the
case.

Factor out a perf_pmu_read() to clean up the code.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20250121152303.3128733-3-kan.liang@linux.intel.com
Signed-off-by: Sasha Levin
---
 include/linux/perf_event.h  |  8 +++++++-
 kernel/events/core.c        | 33 ++++++++++++++++-----------------
 kernel/events/ring_buffer.c |  1 +
 3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fcb834dd75c24..90c782749b055 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1016,7 +1016,13 @@ struct perf_output_handle {
 	struct perf_buffer		*rb;
 	unsigned long			wakeup;
 	unsigned long			size;
-	u64				aux_flags;
+	union {
+		u64			flags;		/* perf_output*() */
+		u64			aux_flags;	/* perf_aux_output*() */
+		struct {
+			u64		skip_read : 1;
+		};
+	};
 	union {
 		void			*addr;
 		unsigned long		head;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 987807b1040ae..5dd6424e62fa8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1163,6 +1163,12 @@ static void perf_assert_pmu_disabled(struct pmu *pmu)
 	WARN_ON_ONCE(*this_cpu_ptr(pmu->pmu_disable_count) == 0);
 }
 
+static inline void perf_pmu_read(struct perf_event *event)
+{
+	if (event->state == PERF_EVENT_STATE_ACTIVE)
+		event->pmu->read(event);
+}
+
 static void get_ctx(struct perf_event_context *ctx)
 {
 	refcount_inc(&ctx->refcount);
@@ -3397,8 +3403,7 @@ static void __perf_event_sync_stat(struct perf_event *event,
 	 * we know the event must be on the current CPU, therefore we
 	 * don't need to use it.
 	 */
-	if (event->state == PERF_EVENT_STATE_ACTIVE)
-		event->pmu->read(event);
+	perf_pmu_read(event);
 
 	perf_event_update_time(event);
 
@@ -4524,15 +4529,8 @@ static void __perf_event_read(void *info)
 
 	pmu->read(event);
 
-	for_each_sibling_event(sub, event) {
-		if (sub->state == PERF_EVENT_STATE_ACTIVE) {
-			/*
-			 * Use sibling's PMU rather than @event's since
-			 * sibling could be on different (eg: software) PMU.
-			 */
-			sub->pmu->read(sub);
-		}
-	}
+	for_each_sibling_event(sub, event)
+		perf_pmu_read(sub);
 
 	data->ret = pmu->commit_txn(pmu);
 
@@ -7297,9 +7295,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
 		values[n++] = running;
 
-	if ((leader != event) &&
-	    (leader->state == PERF_EVENT_STATE_ACTIVE))
-		leader->pmu->read(leader);
+	if ((leader != event) && !handle->skip_read)
+		perf_pmu_read(leader);
 
 	values[n++] = perf_event_count(leader);
 	if (read_format & PERF_FORMAT_ID)
@@ -7312,9 +7309,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	for_each_sibling_event(sub, leader) {
 		n = 0;
 
-		if ((sub != event) &&
-		    (sub->state == PERF_EVENT_STATE_ACTIVE))
-			sub->pmu->read(sub);
+		if ((sub != event) && !handle->skip_read)
+			perf_pmu_read(sub);
 
 		values[n++] = perf_event_count(sub);
 		if (read_format & PERF_FORMAT_ID)
@@ -7369,6 +7365,9 @@ void perf_output_sample(struct perf_output_handle *handle,
 {
 	u64 sample_type = data->type;
 
+	if (data->sample_flags & PERF_SAMPLE_READ)
+		handle->skip_read = 1;
+
 	perf_output_put(handle, *header);
 
 	if (sample_type & PERF_SAMPLE_IDENTIFIER)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 52de76ef8723b..dc1193b779c08 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -181,6 +181,7 @@ __perf_output_begin(struct perf_output_handle *handle,
 
 	handle->rb    = rb;
 	handle->event = event;
+	handle->flags = 0;
 
 	have_lost = local_read(&rb->lost);
 	if (unlikely(have_lost)) {
-- 
2.39.5
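
For context on when the new skip_read path is exercised: it only matters for
events that carry counter values inside their samples, i.e. events created
with PERF_SAMPLE_READ in perf_event_attr.sample_type, where a PMU feature
such as Intel PEBS counters snapshotting may hand the generic output code an
already-updated count. Below is a minimal userspace sketch (not part of the
patch) of opening such a sampling event; the event type, period and error
handling are illustrative assumptions only.

/*
 * Illustrative sketch, not part of the patch: open a sampling event that
 * requests counter values in every PERF_RECORD_SAMPLE via PERF_SAMPLE_READ.
 * The cycles event and period below are arbitrary choices for demonstration.
 */
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.type          = PERF_TYPE_HARDWARE;
	attr.config        = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.disabled      = 1;
	/* Ask for the (group) counter values inside each sample record. */
	attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_READ;
	attr.read_format   = PERF_FORMAT_GROUP | PERF_FORMAT_ID;

	/* Current task, any CPU, event is its own group leader. */
	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... mmap() the ring buffer and consume PERF_RECORD_SAMPLE here ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	close(fd);
	return 0;
}

For an event set up like this, each sample already contains the counts; the
patch lets perf_output_read_group() skip the redundant pmu->read() whenever
the sample data indicates the count has already been updated by the PMU code.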