Date: Fri, 8 Dec 2023 08:51:33 -0700
From: Mathieu Poirier
To: Tanmay Shah
Cc: andersson@kernel.org, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
 conor+dt@kernel.org, michal.simek@amd.com, ben.levinsky@amd.com,
 linux-remoteproc@vger.kernel.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 3/4] remoteproc: zynqmp: add pm domains support
References: <20231117174238.1876655-1-tanmay.shah@amd.com>
 <20231117174238.1876655-4-tanmay.shah@amd.com>
 <93487d3c-c324-4b9b-8b25-0b4ea52237b4@amd.com>
 <2e24d9be-be10-4f80-b3af-c3e0ed003e7b@amd.com>
In-Reply-To: <2e24d9be-be10-4f80-b3af-c3e0ed003e7b@amd.com>

On Wed, Dec 06, 2023 at 12:06:45PM -0600, Tanmay Shah wrote:
>
> On 12/6/23 9:43 AM, Mathieu Poirier wrote:
> > On Fri, 1 Dec 2023 at 11:10, Tanmay Shah wrote:
> > >
> > >
> > > On 11/29/23 11:10 AM, Mathieu Poirier wrote:
> > > > On Mon, Nov 27, 2023 at 10:33:05AM -0600, Tanmay Shah wrote:
> > > > >
> > > > > On 11/23/23 12:11 PM, Mathieu Poirier wrote:
> > > > > > On Wed, Nov 22, 2023 at 03:00:36PM -0600, Tanmay Shah wrote:
> > > > > > > Hi Mathieu,
> > > > > > >
> > > > > > > Please find my comments below.
> > > > > > >
> > > > > > > On 11/21/23 4:59 PM, Mathieu Poirier wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > On Fri, Nov 17, 2023 at 09:42:37AM -0800, Tanmay Shah wrote:
> > > > > > > > > Use TCM pm domains extracted from device-tree
> > > > > > > > > to power on/off TCM using general pm domain framework.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Tanmay Shah
> > > > > > > > > ---
> > > > > > > > >
> > > > > > > > > Changes in v7:
> > > > > > > > >   - %s/pm_dev1/pm_dev_core0/r
> > > > > > > > >   - %s/pm_dev_link1/pm_dev_core0_link/r
> > > > > > > > >   - %s/pm_dev2/pm_dev_core1/r
> > > > > > > > >   - %s/pm_dev_link2/pm_dev_core1_link/r
> > > > > > > > >   - remove pm_domain_id check to move next patch
> > > > > > > > >   - add comment about how 1st entry in pm domain list is used
> > > > > > > > >   - fix loop when jump to fail_add_pm_domains loop
> > > > > > > > >
> > > > > > > > >  drivers/remoteproc/xlnx_r5_remoteproc.c | 215 +++++++++++++++++++++++-
> > > > > > > > >  1 file changed, 212 insertions(+), 3 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > > > > > index 4395edea9a64..22bccc5075a0 100644
> > > > > > > > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > > > > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > > > > > @@ -16,6 +16,7 @@
> > > > > > > > >  #include
> > > > > > > > >  #include
> > > > > > > > >  #include
> > > > > > > > > +#include
> > > > > > > > >
> > > > > > > > >  #include "remoteproc_internal.h"
> > > > > > > > >
> > > > > > > > > @@ -102,6 +103,12 @@ static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > > > > > > >   * @rproc: rproc handle
> > > > > > > > >   * @pm_domain_id: RPU CPU power domain id
> > > > > > > > >   * @ipi: pointer to mailbox information
> > > > > > > > > + * @num_pm_dev: number of tcm pm domain devices for this core
> > > > > > > > > + * @pm_dev_core0: pm domain virtual devices for power domain framework
> > > > > > > > > + * @pm_dev_core0_link: pm domain device links after registration
> > > > > > > > > + * @pm_dev_core1: used only in lockstep mode. second core's pm domain virtual devices
> > > > > > > > > + * @pm_dev_core1_link: used only in lockstep mode. second core's pm device links after
> > > > > > > > > + * registration
> > > > > > > > >   */
> > > > > > > > >  struct zynqmp_r5_core {
> > > > > > > > >  	struct device *dev;
> > > > > > > > > @@ -111,6 +118,11 @@ struct zynqmp_r5_core {
> > > > > > > > >  	struct rproc *rproc;
> > > > > > > > >  	u32 pm_domain_id;
> > > > > > > > >  	struct mbox_info *ipi;
> > > > > > > > > +	int num_pm_dev;
> > > > > > > > > +	struct device **pm_dev_core0;
> > > > > > > > > +	struct device_link **pm_dev_core0_link;
> > > > > > > > > +	struct device **pm_dev_core1;
> > > > > > > > > +	struct device_link **pm_dev_core1_link;
> > > > > > > > >  };
> > > > > > > > >
> > > > > > > > >  /**
> > > > > > > > > @@ -651,7 +663,8 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > > > > > >  					 ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > > > > > >  					 ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > > > > > > >  		if (ret < 0) {
> > > > > > > > > -			dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > > > > > > > +			dev_err(dev, "failed to turn on TCM 0x%x",
> > > > > > > > > +				pm_domain_id);
> > > > > > > >
> > > > > > > > Spurious change, you should have caught that.
> > > > > > >
> > > > > > > Ack, need to observe changes more closely before sending them.
> > > > > > > > >
> > > > > > > > >  			goto release_tcm_lockstep;
> > > > > > > > >  		}
> > > > > > > > >
> > > > > > > > > @@ -758,6 +771,189 @@ static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> > > > > > > > >  	return ret;
> > > > > > > > >  }
> > > > > > > > >
> > > > > > > > > +static void zynqmp_r5_remove_pm_domains(struct rproc *rproc)
> > > > > > > > > +{
> > > > > > > > > +	struct zynqmp_r5_core *r5_core = rproc->priv;
> > > > > > > > > +	struct device *dev = r5_core->dev;
> > > > > > > > > +	struct zynqmp_r5_cluster *cluster;
> > > > > > > > > +	int i;
> > > > > > > > > +
> > > > > > > > > +	cluster = platform_get_drvdata(to_platform_device(dev->parent));
> > > > > > > > > +
> > > > > > > > > +	for (i = 1; i < r5_core->num_pm_dev; i++) {
> > > > > > > > > +		device_link_del(r5_core->pm_dev_core0_link[i]);
> > > > > > > > > +		dev_pm_domain_detach(r5_core->pm_dev_core0[i], false);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	kfree(r5_core->pm_dev_core0);
> > > > > > > > > +	r5_core->pm_dev_core0 = NULL;
> > > > > > > > > +	kfree(r5_core->pm_dev_core0_link);
> > > > > > > > > +	r5_core->pm_dev_core0_link = NULL;
> > > > > > > > > +
> > > > > > > > > +	if (cluster->mode == SPLIT_MODE) {
> > > > > > > > > +		r5_core->num_pm_dev = 0;
> > > > > > > > > +		return;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	for (i = 1; i < r5_core->num_pm_dev; i++) {
> > > > > > > > > +		device_link_del(r5_core->pm_dev_core1_link[i]);
> > > > > > > > > +		dev_pm_domain_detach(r5_core->pm_dev_core1[i], false);
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	kfree(r5_core->pm_dev_core1);
> > > > > > > > > +	r5_core->pm_dev_core1 = NULL;
> > > > > > > > > +	kfree(r5_core->pm_dev_core1_link);
> > > > > > > > > +	r5_core->pm_dev_core1_link = NULL;
> > > > > > > > > +	r5_core->num_pm_dev = 0;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +static int zynqmp_r5_add_pm_domains(struct rproc *rproc)
> > > > > > > > > +{
> > > > > > > > > +	struct zynqmp_r5_core *r5_core = rproc->priv;
> > > > > > > > > +	struct device *dev = r5_core->dev, *dev2;
> > > > > > > > > +	struct zynqmp_r5_cluster *cluster;
> > > > > > > > > +	struct platform_device *pdev;
> > > > > > > > > +	struct device_node *np;
> > > > > > > > > +	int i, j, num_pm_dev, ret;
> > > > > > > > > +
> > > > > > > > > +	cluster = dev_get_drvdata(dev->parent);
> > > > > > > > > +
> > > > > > > > > +	/* get number of power-domains */
> > > > > > > > > +	num_pm_dev = of_count_phandle_with_args(r5_core->np, "power-domains",
> > > > > > > > > +						 "#power-domain-cells");
> > > > > > > > > +
> > > > > > > > > +	if (num_pm_dev <= 0)
> > > > > > > > > +		return -EINVAL;
> > > > > > > > > +
> > > > > > > > > +	r5_core->pm_dev_core0 = kcalloc(num_pm_dev,
> > > > > > > > > +					sizeof(struct device *),
> > > > > > > > > +					GFP_KERNEL);
> > > > > > > > > +	if (!r5_core->pm_dev_core0)
> > > > > > > > > +		ret = -ENOMEM;
> > > > > > > > > +
> > > > > > > > > +	r5_core->pm_dev_core0_link = kcalloc(num_pm_dev,
> > > > > > > > > +					     sizeof(struct device_link *),
> > > > > > > > > +					     GFP_KERNEL);
> > > > > > > > > +	if (!r5_core->pm_dev_core0_link) {
> > > > > > > > > +		kfree(r5_core->pm_dev_core0);
> > > > > > > > > +		r5_core->pm_dev_core0 = NULL;
> > > > > > > > > +		return -ENOMEM;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	r5_core->num_pm_dev = num_pm_dev;
> > > > > > > > > +
> > > > > > > > > +	/*
> > > > > > > > > +	 * start from 2nd entry in power-domains property list as
> > > > > > > > > +	 * for zynqmp we only add TCM power domains and not core's power domain.
> > > > > > > > > +	 * 1st entry is used to configure r5 operation mode.
> > > > > > > >
> > > > > > > > You are still not saying _where_ ->pm_dev_core0[0] gets added.
> > > > > > >
> > > > > > > So, pm_dev_core0[0] doesn't really need to be added for the zynqmp platform, as firmware starts it with the call
> > > > > > >
> > > > > > > zynqmp_pm_request_wake during the rproc_start callback. I will document this in the next
> > > > > >
> > > > > > That is exactly what I am looking for.  That way people don't have to go through
> > > > > > the entire driver trying to figure out what is happening with pm_dev_core[0].
> > > > > >
> > > > > > I'm also not sure about the power-up order.  Logically the TCMs should be
> > > > > > powered up before the R5 in order to put code in them.  The R5s are powered in
> > > > > > zynqmp_r5_rproc_start() but I am unclear as to where in the boot sequence the
> > > > > > TCMs are powered - can you expand on that?
> > > > >
> > > > > Sure. Following is the call sequence:
> > > > >
> > > > > zynqmp_r5_rproc_prepare
> > > > >
> > > > > zynqmp_r5_add_pm_domains -> Here TCM is powered on when device_link_add is called via the zynqmp-pm-domains.c driver.
> > > > >
> > > > > . . .
> > > > >
> > > > > zynqmp_r5_rproc_start -> load firmware and start RPU
> > > > >
> > > > > So what you mentioned is correct, TCM is being powered on before we load firmware and start the RPU.
> > > > >
> > > > > > > revision. For new platforms pm_dev_core0[0] will be added in future.
> > > > > >
> > > > > > Now I'm really confused - what do you mean by "pm_dev_core0[0] will be added in
> > > > > > future"?
> > > > >
> > > > > ZynqMP platform has platform management firmware running on microblaze.
> > > > >
> > > > > This firmware design does not expect R5 pm domains to be requested explicitly.
> > > > >
> > > > > This means, during zynqmp_r5_rproc_start when "zynqmp_pm_request_wake" is called,
> > > > >
> > > > > firmware powers on R5. So, pm_dev_core[0] is not really used for ZynqMP.
> > > > >
> > > > > However, this design was changed for new platforms, i.e. "versal" and onwards.
> > > > >
> > > > > Firmware of the new platforms expects pm domains to be requested explicitly for R5 cores before
> > > > >
> > > > > waking them up.
> > > > >
> > > > > That means, pm_dev_core[0] for R5 cores on new platforms (Versal) needs to be used the same way as for TCM.
> > > > >
> > > > > Then, we should wake it up on r5_core.
> > > > >
> > > > > To summarize:
> > > > >
> > > > > For zynqmp only the following call is needed to start R5:
> > > > >
> > > > > zynqmp_pm_request_wake
> > > > >
> > > > > For "versal" and onwards we need two calls to start R5:
> > > > >
> > > > > "device_link_add" and zynqmp_pm_request_wake
> > > > >
> > > > > So, in future pm_dev_core0[0] will be used.
> > > >
> > > > Thanks for the clarification on both fronts.  The problem here is that we are
> > > > keeping R5 power domain information in two different places, i.e.
> > > > zynqmp_r5_core::pm_domain_id and zynqmp_r5_core::pm_dev_core0[0].
> > > >
> > > > Please see if you can retrieve the power domain ID from
> > > > zynqmp_r5_core::pm_dev_core0[0].  That way you can get the power domain ID when
> > > > calling zynqmp_pm_request_wake() and zynqmp_pm_force_pwrdwn() and get rid of
> > > > zynqmp_r5_core::pm_domain_id.
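
For illustration, that kind of lookup might look roughly like the sketch
below.  The helper name is made up and is not part of the patch; it assumes
the first cell of the core's "power-domains" specifier is the firmware node
ID (as with the TCM entries) and needs <linux/of.h>:

static int zynqmp_r5_core_pm_domain_id(struct device *dev, u32 *node_id)
{
	struct of_phandle_args pd_args;
	int ret;

	/* The first "power-domains" entry describes the R5 core itself. */
	ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
					 "#power-domain-cells", 0, &pd_args);
	if (ret)
		return ret;

	*node_id = pd_args.args[0];
	of_node_put(pd_args.np);

	return 0;
}

zynqmp_pm_request_wake() and zynqmp_pm_force_pwrdwn() could then be passed
that value directly, which is what would let zynqmp_r5_core::pm_domain_id
go away.
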
> > >
> > > Hi Mathieu,
> > >
> > > I looked into this. Probably I can't retrieve pm_domain_id from pm_dev_core0[0],
> > >
> > > However, I can retrieve it from device-tree when calling zynqmp_pm_request_wake
> > >
> > > and zynqmp_pm_force_pwrdwn.
> > >
> > > zynqmp_r5_core::pm_domain_id is caching that value during probe, and uses it during
> > > the rest of the driver lifecycle.
> > >
> > > I am okay either way (keeping it as it is, or getting it from device-tree). Let me know your
> > >
> > > preference.
> > >
> >
> > Humm...  Then I suggest to simply get rid of the device linking to
> > deal with the TCMs' power management.  From where I stand it provides
> > more confusion than benefits, and that is without considering the
> > extra complexity.
>
>
> Do you mean to get rid of pm_dev_core0[1], and pm_dev_core0[2] as well ?
>

Yes

> If yes, it's preferred to use the pm_domain framework to power-on/off TCM.
>

That is when the pm runtime subsystem is used extensively but it isn't the
case here.  In this scenario using device linking is adding a lot of
complexity for something that could be done in a single line of code.

> If we want to get rid of zynqmp_r5_core::pm_domain_id, I will do what's done in the
>
> __genpd_dev_pm_attach API, where pm_domain_id is retrieved using the of_node of the pm_dev_core0[*] device.
>
>     ret = of_parse_phandle_with_args(dev->of_node, "power-domains",
>                                      "#power-domain-cells", index, &pd_args);

That is what I asked for in my previous email, i.e. use ->pm_dev_core0[0] to
retrieve the domain ID.

>
> However, it's preferred to use the pm_domain framework when power-domains are available in device-tree.
>
> Let me know.
>
> Thanks,
>
> Tanmay
>
>
> > > Thanks,
> > >
> > > Tanmay
> > >
> > > > >
> > > > > I hope this meets expectations.
> > > > > >
> > > > > > > > > +	 */
> > > > > > > > > +	for (i = 1; i < r5_core->num_pm_dev; i++) {
> > > > > > > > > +		r5_core->pm_dev_core0[i] = dev_pm_domain_attach_by_id(dev, i);
> > > > > > > > > +		if (IS_ERR_OR_NULL(r5_core->pm_dev_core0[i])) {
> > > > > > > >
> > > > > > > > Here IS_ERR_OR_NULL() is used while two if conditions for NULL and an error
> > > > > > > > code are used in the loop for the lockstep mode.  Please pick one heuristic and
> > > > > > > > stick with it.  I have no preference on which one.
> > > > > > >
> > > > > > > Ok, I think IS_ERR_OR_NULL is cleaner, I will address it in the next revision.
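
As a sketch of that cleanup (not part of the patch), the lockstep attach
check quoted further below could use the same IS_ERR_OR_NULL() heuristic as
the core0 loop above, e.g.:

		r5_core->pm_dev_core1[j] = dev_pm_domain_attach_by_id(dev2, j);
		if (IS_ERR_OR_NULL(r5_core->pm_dev_core1[j])) {
			dev_dbg(dev, "can't attach to pm domain %d\n", j);
			ret = r5_core->pm_dev_core1[j] ?
				PTR_ERR(r5_core->pm_dev_core1[j]) : -EINVAL;
			goto fail_add_pm_domains_lockstep;
		}
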
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > +			dev_dbg(dev, "failed to attach pm domain %d, ret=%ld\n", i,
> > > > > > > > > +				PTR_ERR(r5_core->pm_dev_core0[i]));
> > > > > > > > > +			ret = -EINVAL;
> > > > > > > > > +			goto fail_add_pm_domains;
> > > > > > > > > +		}
> > > > > > > > > +		r5_core->pm_dev_core0_link[i] = device_link_add(dev,
> > > > > > > > > +								 r5_core->pm_dev_core0[i],
> > > > > > > > > +								 DL_FLAG_STATELESS |
> > > > > > > > > +								 DL_FLAG_RPM_ACTIVE |
> > > > > > > > > +								 DL_FLAG_PM_RUNTIME);
> > > > > > > > > +		if (!r5_core->pm_dev_core0_link[i]) {
> > > > > > > > > +			dev_pm_domain_detach(r5_core->pm_dev_core0[i], true);
> > > > > > > > > +			r5_core->pm_dev_core0[i] = NULL;
> > > > > > > > > +			ret = -EINVAL;
> > > > > > > > > +			goto fail_add_pm_domains;
> > > > > > > > > +		}
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	if (cluster->mode == SPLIT_MODE)
> > > > > > > > > +		return 0;
> > > > > > > > > +
> > > > > > > > > +	r5_core->pm_dev_core1 = kcalloc(num_pm_dev,
> > > > > > > > > +					sizeof(struct device *),
> > > > > > > > > +					GFP_KERNEL);
> > > > > > > > > +	if (!r5_core->pm_dev_core1) {
> > > > > > > > > +		ret = -ENOMEM;
> > > > > > > > > +		goto fail_add_pm_domains;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	r5_core->pm_dev_core1_link = kcalloc(num_pm_dev,
> > > > > > > > > +					     sizeof(struct device_link *),
> > > > > > > > > +					     GFP_KERNEL);
> > > > > > > > > +	if (!r5_core->pm_dev_core1_link) {
> > > > > > > > > +		kfree(r5_core->pm_dev_core1);
> > > > > > > > > +		r5_core->pm_dev_core1 = NULL;
> > > > > > > > > +		ret = -ENOMEM;
> > > > > > > > > +		goto fail_add_pm_domains;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	/* get second core's device to detach its power-domains */
> > > > > > > > > +	np = of_get_next_child(cluster->dev->of_node, of_node_get(dev->of_node));
> > > > > > > > > +
> > > > > > > > > +	pdev = of_find_device_by_node(np);
> > > > > > > > > +	if (!pdev) {
> > > > > > > > > +		dev_err(cluster->dev, "core1 platform device not available\n");
> > > > > > > > > +		kfree(r5_core->pm_dev_core1);
> > > > > > > > > +		kfree(r5_core->pm_dev_core1_link);
> > > > > > > > > +		r5_core->pm_dev_core1 = NULL;
> > > > > > > > > +		r5_core->pm_dev_core1_link = NULL;
> > > > > > > > > +		ret = -EINVAL;
> > > > > > > > > +		goto fail_add_pm_domains;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	dev2 = &pdev->dev;
> > > > > > > > > +
> > > > > > > > > +	/* for zynqmp we only add TCM power domains and not core's power domain */
> > > > > > > > > +	for (j = 1; j < r5_core->num_pm_dev; j++) {
> > > > > > > > > +		r5_core->pm_dev_core1[j] = dev_pm_domain_attach_by_id(dev2, j);
> > > > > > > > > +		if (!r5_core->pm_dev_core1[j]) {
> > > > > > > > > +			dev_dbg(dev, "can't attach to pm domain %d\n", j);
> > > > > > > > > +			ret = -EINVAL;
> > > > > > > > > +			goto fail_add_pm_domains_lockstep;
> > > > > > > > > +		} else if (IS_ERR(r5_core->pm_dev_core1[j])) {
> > > > > > > > > +			dev_dbg(dev, "can't attach to pm domain %d\n", j);
> > > > > > > > > +			ret = PTR_ERR(r5_core->pm_dev_core1[j]);
> > > > > > > > > +			goto fail_add_pm_domains_lockstep;
> > > > > > > > > +		}
> > > > > > > > > +
> > > > > > > > > +		r5_core->pm_dev_core1_link[j] = device_link_add(dev,
> > > > > > > > > +								 r5_core->pm_dev_core1[j],
> > > > > > > > > +								 DL_FLAG_STATELESS |
> > > > > > > > > +								 DL_FLAG_RPM_ACTIVE |
> > > > > > > > > +								 DL_FLAG_PM_RUNTIME);
> > > > > > > > > +		if (!r5_core->pm_dev_core1_link[j]) {
> > > > > > > > > +			dev_pm_domain_detach(r5_core->pm_dev_core1[j], true);
> > > > > > > > > +			r5_core->pm_dev_core1[j] = NULL;
> > > > > > > > > +			ret = -ENODEV;
> > > > > > > > > +			goto fail_add_pm_domains_lockstep;
> > > > > > > > > +		}
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +fail_add_pm_domains_lockstep:
> > > > > > > > > +	while (--j >= 0) {
> > > > > > > > > +		device_link_del(r5_core->pm_dev_core1_link[j]);
> > > > > > > > > +		dev_pm_domain_detach(r5_core->pm_dev_core1[j], true);
> > > > > > > > > +	}
> > > > > > > > > +	kfree(r5_core->pm_dev_core1);
> > > > > > > > > +	r5_core->pm_dev_core1 = NULL;
> > > > > > > > > +	kfree(r5_core->pm_dev_core1_link);
> > > > > > > > > +	r5_core->pm_dev_core1_link = NULL;
> > > > > > > > > +
> > > > > > > > > +fail_add_pm_domains:
> > > > > > > > > +	while (--i >= 0) {
> > > > > > > > > +		device_link_del(r5_core->pm_dev_core0_link[i]);
> > > > > > > > > +		dev_pm_domain_detach(r5_core->pm_dev_core0[i], true);
> > > > > > > > > +	}
> > > > > > > > > +	kfree(r5_core->pm_dev_core0);
> > > > > > > > > +	r5_core->pm_dev_core0 = NULL;
> > > > > > > > > +	kfree(r5_core->pm_dev_core0_link);
> > > > > > > > > +	r5_core->pm_dev_core0_link = NULL;
> > > > > > > > > +
> > > > > > > >
> > > > > > > > The error path is much cleaner and readable now.
> > > > > > > >
> > > > > > > > I will continue tomorrow.
> > > > > > > >
> > > > > > > > Mathieu
> > > > > > > >
> > > > > > > > > +	return ret;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > >  /**
> > > > > > > > >   * zynqmp_r5_rproc_prepare()
> > > > > > > > >   * adds carveouts for TCM bank and reserved memory regions
> > > > > > > > > @@ -770,19 +966,30 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > > > > > > >  {
> > > > > > > > >  	int ret;
> > > > > > > > >
> > > > > > > > > +	ret = zynqmp_r5_add_pm_domains(rproc);
> > > > > > > > > +	if (ret) {
> > > > > > > > > +		dev_err(&rproc->dev, "failed to add pm domains\n");
> > > > > > > > > +		return ret;
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > >  	ret = add_tcm_banks(rproc);
> > > > > > > > >  	if (ret) {
> > > > > > > > >  		dev_err(&rproc->dev, "failed to get TCM banks, err %d\n", ret);
> > > > > > > > > -		return ret;
> > > > > > > > > +		goto fail_prepare;
> > > > > > > > >  	}
> > > > > > > > >
> > > > > > > > >  	ret = add_mem_regions_carveout(rproc);
> > > > > > > > >  	if (ret) {
> > > > > > > > >  		dev_err(&rproc->dev, "failed to get reserve mem regions %d\n", ret);
> > > > > > > > > -		return ret;
> > > > > > > > > +		goto fail_prepare;
> > > > > > > > >  	}
> > > > > > > > >
> > > > > > > > >  	return 0;
> > > > > > > > > +
> > > > > > > > > +fail_prepare:
> > > > > > > > > +	zynqmp_r5_remove_pm_domains(rproc);
> > > > > > > > > +
> > > > > > > > > +	return ret;
> > > > > > > > >  }
> > > > > > > > >
> > > > > > > > >  /**
> > > > > > > > > @@ -801,6 +1008,8 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > > > > > > >
> > > > > > > > >  	r5_core = rproc->priv;
> > > > > > > > >
> > > > > > > > > +	zynqmp_r5_remove_pm_domains(rproc);
> > > > > > > > > +
> > > > > > > > >  	for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > > > > > > >  		pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > > > > > >  		if (zynqmp_pm_release_node(pm_domain_id))
> > > > > > > > > --
> > > > > > > > > 2.25.1
> > > > > > > > >
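
For reference, the "single line of code" alternative to the device links
discussed earlier in this thread would presumably be the same firmware call
the TCM carveout functions already use (sketch only, reusing identifiers
from the hunks quoted above), paired with the zynqmp_pm_release_node() call
that zynqmp_r5_rproc_unprepare() already makes:

	/* Power the TCM bank directly through the platform firmware. */
	ret = zynqmp_pm_request_node(pm_domain_id, ZYNQMP_PM_CAPABILITY_ACCESS,
				     0, ZYNQMP_PM_REQUEST_ACK_BLOCKING);
	if (ret < 0)
		dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);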