Date: Wed, 27 Aug 2025 09:18:38 +0100
From: Mark Rutland
To: Ian Rogers
Cc: Robin Murphy, Peter Zijlstra, mingo@redhat.com, will@kernel.org,
    acme@kernel.org, namhyung@kernel.org, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, adrian.hunter@intel.com, kan.liang@linux.intel.com,
    linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, imx@lists.linux.dev,
    linux-csky@vger.kernel.org, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-pm@vger.kernel.org,
    linux-rockchip@lists.infradead.org, dmaengine@vger.kernel.org,
    linux-fpga@vger.kernel.org, amd-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    intel-xe@lists.freedesktop.org, coresight@lists.linaro.org,
    iommu@lists.linux.dev, linux-amlogic@lists.infradead.org,
    linux-cxl@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    linux-riscv@lists.infradead.org
Subject: Re: [PATCH 12/19] perf: Ignore event state for group validation
References: <20250826130329.GX4067720@noisy.programming.kicks-ass.net>
 <6080e45d-032e-48c2-8efc-3d7e5734d705@arm.com>

On Tue, Aug 26, 2025 at 11:48:48AM -0700, Ian Rogers wrote:
> On Tue, Aug 26, 2025 at 8:32 AM Robin Murphy wrote:
> >
> > On 2025-08-26 2:03 pm, Peter Zijlstra wrote:
> > > On Wed, Aug 13, 2025 at 06:01:04PM +0100, Robin Murphy wrote:
> > >> It may have been different long ago, but today it seems wrong for
> > >> these drivers to skip counting disabled sibling events in group
> > >> validation, given that perf_event_enable() could make them
> > >> schedulable again, and thus increase the effective size of the group
> > >> later. Conversely, if a sibling event is truly dead then it stands to
> > >> reason that the whole group is dead, so it's not worth going to any
> > >> special effort to try to squeeze in a new event that's never going to
> > >> run anyway. Thus, we can simply remove all these checks.
> > >
> > > So currently you can do sort of a manual event rotation inside an
> > > over-sized group and have it work.
> > >
> > > I'm not sure if anybody actually does this, but it's possible.
> > >
> > > Eg. on a PMU that supports only 4 counters, create a group of 5 and
> > > periodically cycle which of the 5 events is off.
>
> I'm not sure this is true; I thought this would fail in perf_event_open()
> when adding the 5th event, there being insufficient counters for the
> group.

We're talking specifically about cases where the logic in a pmu's
pmu::event_init() callback doesn't count events in specific states, and
hence the 5th event doesn't get rejected when it is initialised.

For example, in arch/x86/events/core.c, validate_group() uses
collect_events(), which has:

	for_each_sibling_event(event, leader) {
		if (!is_x86_event(event) || event->state <= PERF_EVENT_STATE_OFF)
			continue;

		if (collect_event(cpuc, event, max_count, n))
			return -EINVAL;

		n++;
	}

... and so where an event's state is <= PERF_EVENT_STATE_OFF at init
time, that event is not counted when checking whether the group fits
into the HW counters.

Mark.
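
[Editorial illustration, not part of the thread: a minimal userspace sketch
of the "manual rotation" Peter describes. It opens an over-sized group with
one sibling created disabled, then cycles which sibling is off via
PERF_EVENT_IOC_DISABLE/ENABLE. The specific PERF_COUNT_HW_* events, the
assumption of a 4-counter PMU, the rotation interval, and the helper name
are arbitrary, and error checking is omitted.]

	/*
	 * Illustrative only: open a group of five hardware events with one
	 * sibling created disabled, then rotate which sibling is off via
	 * ioctl().  Event choices and timing are arbitrary.
	 */
	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int open_hw_event(__u64 config, int group_fd, int disabled)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = config;
		attr.disabled = disabled;

		/* pid == 0: this task; cpu == -1: any CPU; flags == 0 */
		return syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
	}

	int main(void)
	{
		int fd[5], i, off = 4;

		/* Leader plus four siblings; the last sibling starts disabled. */
		fd[0] = open_hw_event(PERF_COUNT_HW_CPU_CYCLES, -1, 0);
		fd[1] = open_hw_event(PERF_COUNT_HW_INSTRUCTIONS, fd[0], 0);
		fd[2] = open_hw_event(PERF_COUNT_HW_CACHE_REFERENCES, fd[0], 0);
		fd[3] = open_hw_event(PERF_COUNT_HW_CACHE_MISSES, fd[0], 0);
		fd[4] = open_hw_event(PERF_COUNT_HW_BRANCH_INSTRUCTIONS, fd[0], 1);

		for (i = 0; i < 20; i++) {
			int next = 1 + (i % 4);	/* never rotate out the leader */

			/* Turn one sibling off before turning the idle one back on. */
			ioctl(fd[next], PERF_EVENT_IOC_DISABLE, 0);
			ioctl(fd[off], PERF_EVENT_IOC_ENABLE, 0);
			off = next;

			usleep(100000);		/* stand-in for real work */
		}

		return 0;
	}

[The point of the sketch is that if pmu::event_init() skips the disabled
sibling when counting, such a group can be created even though all five
events can never be scheduled at once; whether a given set of events
actually over-subscribes a particular PMU depends on the hardware.]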