From mboxrd@z Thu Jan  1 00:00:00 1970
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
	namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Cc: ak@linux.intel.com, eranian@google.com, dapeng1.mi@linux.intel.com,
	Kan Liang
Subject: [PATCH V8 1/2] perf: Avoid the read if the count is already updated
Date: Mon, 6 Jan 2025 06:21:02 -0800
Message-Id: <20250106142103.1735729-1-kan.liang@linux.intel.com>
X-Mailer: git-send-email 2.38.1
Precedence: bulk
X-Mailing-List: linux-perf-users@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Peter Zijlstra (Intel)"

The event may have been updated in the PMU-specific implementation,
e.g., Intel PEBS counters snapshotting. The common code should not
read and overwrite the value.

The PERF_SAMPLE_READ in data->sample_flags can be used to detect
whether the PMU-specific value is available. If yes, avoid the
pmu->read() in the common code.

Add a new flag, skip_read, to track the case. Factor out a
perf_pmu_read() to clean up the code.

Signed-off-by: Kan Liang
Signed-off-by: Peter Zijlstra (Intel)
---

New patch from Peter Zijlstra.
- Use a flag to avoid the read.

 include/linux/perf_event.h  |  8 +++++++-
 kernel/events/core.c        | 33 ++++++++++++++++-----------------
 kernel/events/ring_buffer.c |  1 +
 3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 8333f132f4a9..2d07bc1193f3 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1062,7 +1062,13 @@ struct perf_output_handle {
 	struct perf_buffer		*rb;
 	unsigned long			wakeup;
 	unsigned long			size;
-	u64				aux_flags;
+	union {
+		u64			flags;		/* perf_output*() */
+		u64			aux_flags;	/* perf_aux_output*() */
+		struct {
+			u64		skip_read : 1;
+		};
+	};
 	union {
 		void			*addr;
 		unsigned long		head;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b2bc67791f84..f91ba29048ce 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1191,6 +1191,12 @@ static void perf_assert_pmu_disabled(struct pmu *pmu)
 	WARN_ON_ONCE(*this_cpu_ptr(pmu->pmu_disable_count) == 0);
 }
 
+static inline void perf_pmu_read(struct perf_event *event)
+{
+	if (event->state == PERF_EVENT_STATE_ACTIVE)
+		event->pmu->read(event);
+}
+
 static void get_ctx(struct perf_event_context *ctx)
 {
 	refcount_inc(&ctx->refcount);
@@ -3473,8 +3479,7 @@ static void __perf_event_sync_stat(struct perf_event *event,
 	 * we know the event must be on the current CPU, therefore we
 	 * don't need to use it.
 	 */
-	if (event->state == PERF_EVENT_STATE_ACTIVE)
-		event->pmu->read(event);
+	perf_pmu_read(event);
 
 	perf_event_update_time(event);
 
@@ -4618,15 +4623,8 @@ static void __perf_event_read(void *info)
 
 	pmu->read(event);
 
-	for_each_sibling_event(sub, event) {
-		if (sub->state == PERF_EVENT_STATE_ACTIVE) {
-			/*
-			 * Use sibling's PMU rather than @event's since
-			 * sibling could be on different (eg: software) PMU.
-			 */
-			sub->pmu->read(sub);
-		}
-	}
+	for_each_sibling_event(sub, event)
+		perf_pmu_read(sub);
 
 	data->ret = pmu->commit_txn(pmu);
 
@@ -7400,9 +7398,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
 		values[n++] = running;
 
-	if ((leader != event) &&
-	    (leader->state == PERF_EVENT_STATE_ACTIVE))
-		leader->pmu->read(leader);
+	if ((leader != event) && !handle->skip_read)
+		perf_pmu_read(leader);
 
 	values[n++] = perf_event_count(leader, self);
 	if (read_format & PERF_FORMAT_ID)
@@ -7415,9 +7412,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
 	for_each_sibling_event(sub, leader) {
 		n = 0;
 
-		if ((sub != event) &&
-		    (sub->state == PERF_EVENT_STATE_ACTIVE))
-			sub->pmu->read(sub);
+		if ((sub != event) && !handle->skip_read)
+			perf_pmu_read(sub);
 
 		values[n++] = perf_event_count(sub, self);
 		if (read_format & PERF_FORMAT_ID)
@@ -7476,6 +7472,9 @@ void perf_output_sample(struct perf_output_handle *handle,
 {
 	u64 sample_type = data->type;
 
+	if (data->sample_flags & PERF_SAMPLE_READ)
+		handle->skip_read = 1;
+
 	perf_output_put(handle, *header);
 
 	if (sample_type & PERF_SAMPLE_IDENTIFIER)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 4f46f688d0d4..9b49ecca693e 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -185,6 +185,7 @@ __perf_output_begin(struct perf_output_handle *handle,
 
 	handle->rb = rb;
 	handle->event = event;
+	handle->flags = 0;
 
 	have_lost = local_read(&rb->lost);
 	if (unlikely(have_lost)) {
-- 
2.38.1
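
As a usage sketch only, not part of the patch: a driver that has already
recovered the counter value from a hardware snapshot would fold it into
event->count and tag the sample with PERF_SAMPLE_READ, so the generic
output path sets handle->skip_read and does not overwrite the value.
The names example_pmu_fold_snapshot(), example_pmu_drain_sample() and
the snapshot parameter below are hypothetical illustrations; only
perf_sample_data_init(), perf_event_overflow(), PERF_SAMPLE_READ and the
local64_*() helpers are existing kernel interfaces.

#include <linux/perf_event.h>

/*
 * Hypothetical, simplified sketch of a driver-side drain path.  The raw
 * counter value was captured by hardware (snapshot), so it is folded
 * into event->count here; tagging the sample with PERF_SAMPLE_READ then
 * makes perf_output_sample() set handle->skip_read, and
 * perf_output_read_group() skips perf_pmu_read() instead of overwriting
 * the snapshot with a later read.
 */
static void example_pmu_fold_snapshot(struct perf_event *event, u64 snapshot)
{
	struct hw_perf_event *hwc = &event->hw;
	u64 prev = local64_read(&hwc->prev_count);

	/* Simplified: a real driver also masks to the counter width. */
	local64_set(&hwc->prev_count, snapshot);
	local64_add(snapshot - prev, &event->count);
}

static void example_pmu_drain_sample(struct perf_event *event,
				     struct pt_regs *regs, u64 snapshot)
{
	struct perf_sample_data data;

	perf_sample_data_init(&data, 0, event->hw.last_period);

	example_pmu_fold_snapshot(event, snapshot);

	/* Tell the generic output code the count is already up to date. */
	data.sample_flags |= PERF_SAMPLE_READ;

	perf_event_overflow(event, &data, regs);
}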