From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2B56C433F5 for ; Thu, 17 Feb 2022 08:52:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231969AbiBQIwx (ORCPT ); Thu, 17 Feb 2022 03:52:53 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:56660 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231430AbiBQIww (ORCPT ); Thu, 17 Feb 2022 03:52:52 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6064EB17A; Thu, 17 Feb 2022 00:52:38 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3991661C99; Thu, 17 Feb 2022 08:52:38 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 933E5C340E8; Thu, 17 Feb 2022 08:52:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1645087957; bh=PSbweUlw1nN730hNaW1DogqbsfIdoaUabLVM6oVy13g=; h=Date:From:To:Cc:Subject:In-Reply-To:References:From; b=Gt66sdQ6PyaSE29aZ12Nmx6ncaomrt4+oHHLSlDRCokWTxoKoSszmQFyG9cWuBZLP WN8we9cJ2/RLBYJXxzQZ8tmDXk2kKhFQwyq8t+m+fRhZgULNyqg7Hx6zaJp35wCrT2 rxATefBP5MyLjc+qAvkwe/KQJUXx4gcYfS53qdLDFWM08om9A8rQbl5Rz8Wza5Mj5r Vs9PPM9FL1On+IUT7D1rdSvXmO5a8CSc+pO/pFC/JRjRlKpr7N3PLOF0UmUri6L2O6 vq3HBXwq+HTBcHgKY1kjHgENRtKhb8HWW80W2b+wUgoiHSzWF27uV68wbR3pVEfhxZ bfbYjmqx+6ITg== Received: from sofa.misterjones.org ([185.219.108.64] helo=why.misterjones.org) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2) (envelope-from ) id 1nKcWh-008Vy6-Gd; Thu, 17 Feb 2022 08:52:35 +0000 Date: Thu, 17 Feb 2022 08:52:35 +0000 Message-ID: <875ypd50z0.wl-maz@kernel.org> From: Marc Zyngier To: Shawn Guo Cc: Sudeep Holla , Thomas Gleixner , Maulik Shah , Ulf Hansson , Bjorn Andersson , Lorenzo Pieralisi , "Rafael J . Wysocki" , Daniel Lezcano , Rob Herring , devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH v5 1/3] cpuidle: psci: Call cpu_cluster_pm_enter() on the last CPU In-Reply-To: <20220217073130.GD31965@dragon> References: <20220216132830.32490-1-shawn.guo@linaro.org> <20220216132830.32490-2-shawn.guo@linaro.org> <20220216144937.znsba7zbdenl7427@bogus> <9bda65e5bb85b00eaca71d95ad78e93b@kernel.org> <20220217073130.GD31965@dragon> User-Agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM-LB/1.14.9 (=?UTF-8?B?R29qxY0=?=) APEL-LB/10.8 EasyPG/1.0.0 Emacs/27.1 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO) MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue") Content-Type: text/plain; charset=US-ASCII X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: shawn.guo@linaro.org, sudeep.holla@arm.com, tglx@linutronix.de, quic_mkshah@quicinc.com, ulf.hansson@linaro.org, bjorn.andersson@linaro.org, lorenzo.pieralisi@arm.com, rafael@kernel.org, daniel.lezcano@linaro.org, robh+dt@kernel.org, devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org On Thu, 17 Feb 2022 07:31:32 +0000, Shawn Guo wrote: > > On Wed, Feb 16, 2022 at 03:58:41PM +0000, Marc Zyngier wrote: > > On 2022-02-16 14:49, Sudeep Holla wrote: > > > +Ulf (as you he is the author of cpuidle-psci-domains.c and can help you > > > with that if you require) > > Thanks, Sudeep! > > > > > > > On Wed, Feb 16, 2022 at 09:28:28PM +0800, Shawn Guo wrote: > > > > Make a call to cpu_cluster_pm_enter() on the last CPU going to low > > > > power > > > > state (and cpu_cluster_pm_exit() on the firt CPU coming back), so that > > > > platforms can be notified to set up hardware for getting into the > > > > cluster > > > > low power state. > > > > > > > > > > NACK. We are not getting the notion of CPU cluster back to cpuidle > > > again. > > > That must die. Remember the cluster doesn't map to idle states > > > especially > > > in the DSU systems where HMP CPUs are in the same cluster but can be in > > > different power domains. > > The 'cluster' in cpu_cluster_pm_enter() doesn't necessarily means > a physical CPU cluster. I think the documentation of the function has a > better description. > > * Notifies listeners that all cpus in a power domain are entering a low power > * state that may cause some blocks in the same power domain to reset. > > So cpu_domain_pm_enter() might be a better name? Anyways ... > > > > > > > You need to decide which PSCI CPU_SUSPEND mode you want to use first. If > > > it is > > > Platform Co-ordinated(PC), then you need not notify anything to the > > > platform. > > > Just request the desired idle state on each CPU and platform will take > > > care > > > from there. > > > > > > If for whatever reason you have chosen OS initiated mode(OSI), then > > > specify > > > the PSCI power domains correctly in the DT which will make use of the > > > cpuidle-psci-domains and handle the so called "cluster" state correctly. > > Yes, I'm running a Qualcomm platform that has OSI supported in PSCI. > > > > > My understanding is that what Shawn is after is a way to detect the "last > > man standing" on the system to kick off some funky wake-up controller that > > really should be handled by the power controller (because only that guy > > knows for sure who is the last CPU on the block). > > > > There was previously some really funky stuff (copy pasted from the existing > > rpmh_rsc_cpu_pm_callback()), which I totally objected to having hidden in > > an irqchip driver. > > > > My ask was that if we needed such information, and assuming that it is > > possible to obtain it in a reliable way, this should come from the core > > code, and not be invented by random drivers. > > Thanks Marc for explain my problem! > > Right, all I need is a notification in MPM irqchip driver when the CPU > domain/cluster is about to enter low power state. As cpu_pm - > kernel/cpu_pm.c, already has helper cpu_cluster_pm_enter() sending > CPU_CLUSTER_PM_ENTER event, I just need to find a caller to this cpu_pm > helper. > > Is .power_off hook of generic_pm_domain a better place for calling the > helper? I really don't understand why you want a cluster PM event generated by the idle driver. Specially considering that you are not after a *cluster* PM event, but after some sort of system-wide event (last man standing). It looks to me that having a predicate that can be called from a PM notifier event to find out whether you're the last in line would be better suited, and could further be used to remove the crap from the rpmh-rsc driver. M. -- Without deviation from the norm, progress is not possible.