From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 Aug 2025 11:30:33 +0200
From: Beata Michalska
To: Jie Zhan
Cc: Prashant Malani, Viresh Kumar, Bowen Yu, rafael@kernel.org,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxarm@huawei.com, jonathan.cameron@huawei.com,
	lihuisong@huawei.com, zhenglifeng1@huawei.com, Ionela Voinescu
Subject: Re: [PATCH 2/2] cpufreq: CPPC: Fix error handling in cppc_scale_freq_workfn()
Message-ID:
References: <20250730032312.167062-1-yubowen8@huawei.com>
 <20250730032312.167062-3-yubowen8@huawei.com>
 <20250730063930.cercfcpjwnfbnskj@vireshk-i7>
 <9041c44e-b81a-879d-90cd-3ad0e8992c6c@hisilicon.com>
 <7a9030d0-e758-4d11-11aa-d694edaa79a0@hisilicon.com>
 <8aa1efad-8f30-9548-259a-09fccb9da48a@hisilicon.com>
In-Reply-To: <8aa1efad-8f30-9548-259a-09fccb9da48a@hisilicon.com>

On Wed, Aug 13, 2025 at 03:15:12PM +0800, Jie Zhan wrote:
>
>
> On 05/08/2025 12:58, Prashant Malani wrote:
> > On Mon, 4 Aug 2025 at 18:12, Prashant Malani wrote:
> >>
> >> On Sun, 3 Aug 2025 at 23:21, Jie Zhan wrote
> >>> On 01/08/2025 16:58, Prashant Malani wrote:
> >>>> This begs the question: why is this work function being scheduled
> >>>> for CPUs that are in reset or offline/powered-down at all?
> >>>> IANAE but it sounds like it would be better to add logic to ensure this
> >>>> work function doesn't get scheduled/executed for CPUs that
> >>>> are truly offline/powered-down or in reset.
> >>> Yeah good question. We may discuss that on your thread.
> >>
> >> OK.
> >>
> >> Quickly looking around, it sounds like having it in the CPPC tick
> >> function [1] might be a better option (one probably doesn't want to
> >> lift it beyond the CPPC layer, since other drivers might have
> >> different behaviour).
> >> One can add a cpu_online/cpu_enabled check there.
> >
> > Fixed link:
> > [1] https://elixir.bootlin.com/linux/v6.13/source/drivers/cpufreq/cppc_cpufreq.c#L125
>
> I don't think a cpu_online/cpu_enabled check there would help.
>
> Offlined CPUs don't make cppc_scale_freq_workfn() fail because they won't
> have FIE triggered. It fails from accessing perf counters on powered-down
> CPUs.
>
> Perhaps the CPPC FIE needs a bit of rework. AFAICS, FIE is meant to run in
> ticks, but currently the CPPC FIE eventually runs in a thread due to the
> possible PCC path when reading CPC regs, I guess.

Just for my benefit: the tick fires on a given CPU, at which point an
irq_work is queued. Then, before this makes it through the kworker and
finally ends up in cppc_scale_freq_workfn(), that CPU enters a deeper
idle state?

Could the cppc driver register for PM notifications and cancel any pending
work for a CPU that is actually going down, either directly or by setting
some flag or something, so that the final worker function is either not
triggered or knows it has to bail out early?
(Note this is a rough idea and needs verification.)

---
BR
Beata
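For reference, the chain being described looks roughly like the sketch below. It is a simplified paraphrase of the FIE path in drivers/cpufreq/cppc_cpufreq.c, not the exact upstream code: the per-CPU structure is trimmed down, the counter read is only hinted at, and the comment in the tick handler marks where the cpu_online()/cpu_enabled check discussed above would sit and why it alone does not cover the case being debated.

#include <linux/container_of.h>
#include <linux/irq_work.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/smp.h>

/* Trimmed-down per-CPU FIE state, standing in for the driver's real one. */
struct cppc_freq_invariance {
	int cpu;
	struct irq_work irq_work;
	struct kthread_work work;
};

static DEFINE_PER_CPU(struct cppc_freq_invariance, cppc_freq_inv);
static struct kthread_worker *kworker_fie;

/* Step 3: kthread worker context, allowed to sleep (PCC), reads counters. */
static void cppc_scale_freq_workfn(struct kthread_work *work)
{
	struct cppc_freq_invariance *cppc_fi =
		container_of(work, struct cppc_freq_invariance, work);

	/*
	 * The perf counter read (cppc_get_perf_ctrs() in the real driver)
	 * sits here; this is the access that fails once cppc_fi->cpu has
	 * been powered down between the tick and this point.
	 */
	pr_debug("CPPC FIE update for CPU%d\n", cppc_fi->cpu);
}

/* Step 2: irq_work handler, still on the same CPU, only hands off. */
static void cppc_irq_work(struct irq_work *irq_work)
{
	struct cppc_freq_invariance *cppc_fi =
		container_of(irq_work, struct cppc_freq_invariance, irq_work);

	kthread_queue_work(kworker_fie, &cppc_fi->work);
}

/* Step 1: called from the scheduler tick on the CPU in question. */
static void cppc_scale_freq_tick(void)
{
	struct cppc_freq_invariance *cppc_fi =
		&per_cpu(cppc_freq_inv, smp_processor_id());

	/*
	 * A cpu_online()/cpu_active() check would go here, but the tick only
	 * fires on an online CPU anyway, so it cannot catch a CPU that goes
	 * down after the work below has been queued.
	 */
	irq_work_queue(&cppc_fi->irq_work);
}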
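And a rough sketch of the cancel-on-offline idea, under the same caveat that it is unverified: it uses the generic CPU hotplug state machine (cpuhp_setup_state() with a dynamic AP state) rather than PM notifiers as such, reuses the per-CPU state from the sketch above, and all function names in it are made up for illustration.

#include <linux/cpuhotplug.h>
#include <linux/irq_work.h>
#include <linux/kthread.h>
#include <linux/percpu.h>

static int cppc_fie_hp_state;

/* Teardown callback: runs in the offline path, while the CPU is still up. */
static int cppc_fie_cpu_offline(unsigned int cpu)
{
	struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, cpu);

	/*
	 * Flush any irq_work queued by the last tick on this CPU, then wait
	 * for (or cancel) the kthread work, so cppc_scale_freq_workfn() no
	 * longer runs against a CPU that is about to be powered down.
	 * Ticks between this point and the CPU actually dying could still
	 * queue new work, which is where the "set a flag so the workfn bails
	 * out early" half of the idea would come in.
	 */
	irq_work_sync(&cppc_fi->irq_work);
	kthread_cancel_work_sync(&cppc_fi->work);

	return 0;
}

static int cppc_fie_register_hotplug(void)
{
	int ret;

	/* No startup callback needed; only act when a CPU goes down. */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpufreq/cppc_fie:online",
				NULL, cppc_fie_cpu_offline);
	if (ret < 0)
		return ret;

	cppc_fie_hp_state = ret;
	return 0;
}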