From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Mark Rutland, Rob Herring, Anshuman Khandual, James Clark,
	Will Deacon, Sasha Levin, linux-arm-kernel@lists.infradead.org,
	linux-perf-users@vger.kernel.org
Subject: [PATCH AUTOSEL 6.1 3/6] perf: arm_pmu: Don't disable counter in armpmu_add()
Date: Mon, 31 Mar 2025 10:36:29 -0400
Message-Id: <20250331143634.1686409-3-sashal@kernel.org>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250331143634.1686409-1-sashal@kernel.org>
References: <20250331143634.1686409-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
X-stable-base: Linux 6.1.132
Content-Transfer-Encoding: 8bit

From: Mark Rutland

[ Upstream commit dcca27bc1eccb9abc2552aab950b18a9742fb8e7 ]

Currently armpmu_add() tries to handle a newly-allocated counter having
a stale associated event, but this should not be possible, and if this
were to happen the current mitigation is insufficient and potentially
expensive. It would be better to warn if we encounter the impossible
case.

Calls to pmu::add() and pmu::del() are serialized by the core perf code,
and armpmu_del() clears the relevant slot in pmu_hw_events::events[]
before clearing the bit in pmu_hw_events::used_mask such that the
counter can be reallocated.
Thus when armpmu_add() allocates a counter index from
pmu_hw_events::used_mask, it should not be possible to observe a stale
event in pmu_hw_events::events[] unless either pmu_hw_events::used_mask
or pmu_hw_events::events[] have been corrupted.

If this were to happen, we'd end up with two events with the same
event->hw.idx, which would clash with each other during reprogramming,
deletion, etc, and produce bogus results.

Add a WARN_ON_ONCE() for this case so that we can detect if this ever
occurs in practice.

That possibility aside, there's no need to call arm_pmu::disable(event)
for the new event. The PMU reset code initialises the counter in a
disabled state, and armpmu_del() will disable the counter before it can
be reused. Remove the redundant disable.

Signed-off-by: Mark Rutland
Signed-off-by: Rob Herring (Arm)
Reviewed-by: Anshuman Khandual
Tested-by: James Clark
Link: https://lore.kernel.org/r/20250218-arm-brbe-v19-v20-2-4e9922fc2e8e@kernel.org
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 drivers/perf/arm_pmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 3f07df5a7e950..d351d6ce750bf 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -340,12 +340,10 @@ armpmu_add(struct perf_event *event, int flags)
 	if (idx < 0)
 		return idx;
 
-	/*
-	 * If there is an event in the counter we are going to use then make
-	 * sure it is disabled.
-	 */
+	/* The newly-allocated counter should be empty */
+	WARN_ON_ONCE(hw_events->events[idx]);
+
 	event->hw.idx = idx;
-	armpmu->disable(event);
 	hw_events->events[idx] = event;
 
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
-- 
2.39.5
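
For context, below is a minimal, illustrative sketch (not part of the
patch) of the del-side ordering the commit message relies on: the
pmu_hw_events::events[] slot is cleared before the counter index is
released back to pmu_hw_events::used_mask, so a freshly allocated index
should always refer to an empty slot. The function name
armpmu_del_sketch and the exact body are assumptions modelled on
drivers/perf/arm_pmu.c, not a verbatim copy of the kernel code.

/*
 * Sketch of the del-side ordering (modelled on drivers/perf/arm_pmu.c,
 * simplified): clear the events[] slot first, then let the PMU release
 * the counter index in used_mask. armpmu_add() therefore only ever
 * allocates indices whose events[] slot is already NULL, which is what
 * the new WARN_ON_ONCE() asserts.
 */
static void armpmu_del_sketch(struct perf_event *event, int flags)
{
	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
	int idx = event->hw.idx;

	armpmu_stop(event, PERF_EF_UPDATE);		/* stop and disable the counter */
	hw_events->events[idx] = NULL;			/* 1) drop the event pointer... */
	armpmu->clear_event_idx(hw_events, event);	/* 2) ...then free the bit in used_mask */
	perf_event_update_userpage(event);

	event->hw.idx = -1;				/* the event no longer owns a counter */
}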