Date: Mon, 1 Sep 2025 08:29:26 +0100
From: Ionela Voinescu
To: Lifeng Zheng
Cc: catalin.marinas@arm.com, will@kernel.org, rafael@kernel.org,
    viresh.kumar@linaro.org, beata.michalska@arm.com, sudeep.holla@arm.com,
    linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org,
    linux-kernel@vger.kernel.org, linuxarm@huawei.com,
    jonathan.cameron@huawei.com, vincent.guittot@linaro.org,
    yangyicong@hisilicon.com, zhanjie9@hisilicon.com, lihuisong@huawei.com,
    yubowen8@huawei.com, zhangpengjie2@huawei.com, linhongye@h-partners.com
Subject: Re: [PATCH v5 3/3] arm64: topology: Setup AMU FIE for online CPUs only
Message-ID:
References: <20250819072931.1647431-1-zhenglifeng1@huawei.com>
 <20250819072931.1647431-4-zhenglifeng1@huawei.com>
In-Reply-To: <20250819072931.1647431-4-zhenglifeng1@huawei.com>

Hi,

On Tuesday 19 Aug 2025 at 15:29:31 (+0800), Lifeng Zheng wrote:
> When booting with maxcpus=1 and LPI (Low Power Idle States) enabled, only
> CPU0 goes online. The AMU support flag of CPU0 will be set, but the flags
> of the other CPUs will not. This causes the AMU FIE setup to fail for CPU0
> when it shares a cpufreq policy with other CPU(s). After that, when the
> other CPUs finally come online and their AMU support flags are set, they
> never get a chance to set up AMU FIE, even though they are eligible.
>
> To solve this problem, the process of setting up AMU FIE is modified as
> follows:
>
> 1. Set up AMU FIE only for the online CPUs.
>
> 2. Try to set up AMU FIE each time a CPU goes online and do the
> freq_counters_valid() check. If this check fails, clear the scale freq
> source of all the CPUs related to the same policy, in case they use
> different sources for the freq scale.
>
> At the same time, this change is also applied to cpufreq when calling
> arch_set_freq_scale().
>
> Signed-off-by: Lifeng Zheng
> ---
>  arch/arm64/kernel/topology.c | 54 ++++++++++++++++++++++++++++++++++--
>  drivers/cpufreq/cpufreq.c    |  4 +--
>  2 files changed, 54 insertions(+), 4 deletions(-)
>
[..]
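The two steps described in the quoted commit message can be modelled in a
small userspace sketch. All names here (amu_supported, source,
try_setup_amu_fie) and the bitmask representation of cpumasks are
illustrative stand-ins, not the kernel's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace model of the setup flow in the quoted commit message.
 * Names and the bitmask cpumasks are illustrative, not the kernel's.
 */
#define MAX_CPUS 8

enum fie_source { SOURCE_NONE, SOURCE_AMU };

static bool amu_supported[MAX_CPUS];     /* stands in for freq_counters_valid() */
static enum fie_source source[MAX_CPUS]; /* per-CPU scale-freq source */

/*
 * Step 1: consider only the online CPUs of the policy.
 * Step 2: retried on every CPU-online event; if any online CPU fails the
 * validity check, clear the source for *all* CPUs of the policy so the
 * policy never mixes AMU- and cpufreq-based scale factors.
 */
static bool try_setup_amu_fie(unsigned int policy_mask, unsigned int online_mask)
{
	unsigned int cpus = policy_mask & online_mask;
	unsigned int cpu;

	for (cpu = 0; cpu < MAX_CPUS; cpu++) {
		if ((cpus & (1u << cpu)) && !amu_supported[cpu]) {
			for (unsigned int c = 0; c < MAX_CPUS; c++)
				if (policy_mask & (1u << c))
					source[c] = SOURCE_NONE;
			return false;
		}
	}

	for (cpu = 0; cpu < MAX_CPUS; cpu++)
		if (cpus & (1u << cpu))
			source[cpu] = SOURCE_AMU;

	return true;
}
```

With maxcpus=1, only CPU0 of a four-CPU policy is online: the first call
sets up CPU0 alone, and each later CPU-online event simply retries the
same function with a larger online mask.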
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index 78ca68ea754d..d1890a2af1af 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -417,7 +417,7 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
>
>  	cpufreq_notify_post_transition(policy, freqs, transition_failed);
>
> -	arch_set_freq_scale(policy->related_cpus,
> +	arch_set_freq_scale(policy->cpus,
>  			    policy->cur,
>  			    arch_scale_freq_ref(policy->cpu));
>
> @@ -2219,7 +2219,7 @@ unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
>  		return 0;
>
>  	policy->cur = freq;
> -	arch_set_freq_scale(policy->related_cpus, freq,
> +	arch_set_freq_scale(policy->cpus, freq,
>  			    arch_scale_freq_ref(policy->cpu));

I think it might be good to keep these calls to arch_set_freq_scale() for
all related CPUs, not only the online ones. Passing only the online CPUs
can leave a CPU coming out of hotplug with a stale scale factor: while it
was offline, frequency transitions of the policy only updated the scale
factor of the online CPUs, and arch_set_freq_scale() will not be called
for it again until the next frequency transition.

I understand that if this is not changed to pass only the online CPUs,
supports_scale_freq_counters() will now fail when called from
topology_set_freq_scale() whenever only some CPUs in a policy are online -
e.g. the scenario in your commit message. But I think a simpler fix, which
does not break cpufreq-based FIE, is to change
supports_scale_freq_counters() to check that at least one CPU in the
policy supports AMU-based FIE, instead of all of them. If at least one CPU
is marked as supporting AMUs for FIE, we know the AMU setup path is in
progress and we should bail out of
topology_set_freq_scale()/arch_set_freq_scale().

Hope it helps,
Ionela.

>  	cpufreq_stats_record_transition(policy, freq);
>
> --
> 2.33.0
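The relaxation suggested in the review above - "at least one CPU" instead
of "all CPUs" - can be illustrated with a userspace sketch where cpumasks
are modelled as bitmasks and subset()/intersects() mirror the semantics
of the kernel's cpumask_subset()/cpumask_intersects(). This is a sketch
of the idea only, not a tested kernel patch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Bitmask model of the two possible checks; names are illustrative.
 * counters_mask is the set of CPUs already using counter-based FIE.
 */

/* current behaviour: every CPU of the policy must have valid counters
 * (cpumask_subset() semantics) */
static bool supports_all(unsigned int cpus, unsigned int counters_mask)
{
	return (cpus & ~counters_mask) == 0;
}

/* suggested behaviour: one CPU is enough to know the AMU setup path is
 * in progress (cpumask_intersects() semantics) */
static bool supports_any(unsigned int cpus, unsigned int counters_mask)
{
	return (cpus & counters_mask) != 0;
}
```

With a four-CPU policy (mask 0xF) where only CPU0 has been set up so far
(counters mask 0x1), supports_all() is false but supports_any() is true,
so topology_set_freq_scale() would still bail out and not overwrite the
AMU scale factor; a policy with no AMU CPUs (counters mask 0x0) keeps the
cpufreq-based path.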