Date: Wed, 22 Feb 2023 11:41:35 -0700
From: Mathieu Poirier
To: Tanmay Shah
Cc: michal.simek@amd.com, andersson@kernel.org, jaswinder.singh@linaro.org,
    ben.levinsky@amd.com, shubhrajyoti.datta@amd.com,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-remoteproc@vger.kernel.org
Subject: Re: [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names
Message-ID: <20230222184135.GB909075@p14s>
References: <20230213211825.3507034-1-tanmay.shah@amd.com>
 <20230213211825.3507034-3-tanmay.shah@amd.com>
In-Reply-To: <20230213211825.3507034-3-tanmay.shah@amd.com>

On Mon, Feb 13, 2023 at 01:18:25PM -0800, Tanmay Shah wrote:
> If the unit address is appended to the node name of a memory-region,
> then adding rproc carveouts fails because both the node name and the
> unit address are passed as the carveout name (i.e. vdev0vring0@xxxxxxxx),
> whereas the remoteproc framework expects only the node name. This patch
> moves memory-region node parsing from driver probe to prepare and passes
> only the node name, not the unit address.
>
> Fixes: 6b291e8020a8 ("drivers: remoteproc: Add Xilinx r5 remoteproc driver")
> Signed-off-by: Tanmay Shah
> ---
>
> Changelog:
>   - This is the first version of this change, posted as part of a series
>     that is at v3. The v2 of the series can be found at the following link.
>
> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
>
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 87 ++++++-------------------
>  1 file changed, 20 insertions(+), 67 deletions(-)
>
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index 2db57d394155..81af2dea56c2 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -61,8 +61,6 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
>   * @np: device node of RPU instance
>   * @tcm_bank_count: number TCM banks accessible to this RPU
>   * @tcm_banks: array of each TCM bank data
> - * @rmem_count: Number of reserved mem regions
> - * @rmem: reserved memory region nodes from device tree
>   * @rproc: rproc handle
>   * @pm_domain_id: RPU CPU power domain id
>   */
> @@ -71,8 +69,6 @@ struct zynqmp_r5_core {
>         struct device_node *np;
>         int tcm_bank_count;
>         struct mem_bank_data **tcm_banks;
> -       int rmem_count;
> -       struct reserved_mem **rmem;
>         struct rproc *rproc;
>         u32 pm_domain_id;
>  };
> @@ -239,21 +235,31 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>  {
>         struct rproc_mem_entry *rproc_mem;
>         struct zynqmp_r5_core *r5_core;
> +       struct device_node *rmem_np;
>         struct reserved_mem *rmem;
>         int i, num_mem_regions;
>
>         r5_core = (struct zynqmp_r5_core *)rproc->priv;
> -       num_mem_regions = r5_core->rmem_count;
> +
> +       num_mem_regions = of_property_count_elems_of_size(r5_core->np, "memory-region",
> +                                                         sizeof(phandle));
>
>         for (i = 0; i < num_mem_regions; i++) {
> -               rmem = r5_core->rmem[i];
>

Extra line

Everyone else in the remoteproc subsystem is using of_phandle_iterator_next(),
please do the same.  It is easier to maintain and you don't have to call
of_node_put() after each iteration.
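
Something along these lines - a completely untested sketch, only to show the
pattern.  The rproc_of_resm_mem_entry_init()/rproc_mem_entry_init() arguments
and the map/unmap callbacks are taken from your patch, the local declarations
mirror the ones already in add_mem_regions_carveout(), and the counter 'i' is
only kept for the vdev buffer index:

        struct zynqmp_r5_core *r5_core = rproc->priv;
        struct rproc_mem_entry *rproc_mem;
        struct of_phandle_iterator it;
        struct reserved_mem *rmem;
        int i = 0;

        of_phandle_iterator_init(&it, r5_core->np, "memory-region", NULL, 0);
        while (of_phandle_iterator_next(&it) == 0) {
                /*
                 * The iterator drops its reference to it.node on the next
                 * of_phandle_iterator_next() call, so of_node_put() is only
                 * needed when bailing out of the loop early.
                 */
                rmem = of_reserved_mem_lookup(it.node);
                if (!rmem) {
                        of_node_put(it.node);
                        return -EINVAL;
                }

                if (!strcmp(it.node->name, "vdev0buffer")) {
                        /* Init reserved memory for vdev buffer */
                        rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
                                                                 rmem->size,
                                                                 rmem->base,
                                                                 it.node->name);
                } else {
                        /* Register associated reserved memory regions */
                        rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
                                                         (dma_addr_t)rmem->base,
                                                         rmem->size, rmem->base,
                                                         zynqmp_r5_mem_region_map,
                                                         zynqmp_r5_mem_region_unmap,
                                                         it.node->name);
                }

                if (!rproc_mem) {
                        of_node_put(it.node);
                        return -ENOMEM;
                }

                rproc_add_carveout(rproc, rproc_mem);
                i++;
        }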
> -               if (!strncmp(rmem->name, "vdev0buffer", strlen("vdev0buffer"))) {
> +               rmem_np = of_parse_phandle(r5_core->np, "memory-region", i);
> +
> +               rmem = of_reserved_mem_lookup(rmem_np);
> +               if (!rmem) {
> +                       of_node_put(rmem_np);
> +                       return -EINVAL;
> +               }
> +
> +               if (!strcmp(rmem_np->name, "vdev0buffer")) {
>                         /* Init reserved memory for vdev buffer */
>                         rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
>                                                                  rmem->size,
>                                                                  rmem->base,
> -                                                                rmem->name);
> +                                                                rmem_np->name);
>                 } else {
>                         /* Register associated reserved memory regions */
>                         rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
> @@ -261,16 +267,20 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>                                                          rmem->size, rmem->base,
>                                                          zynqmp_r5_mem_region_map,
>                                                          zynqmp_r5_mem_region_unmap,
> -                                                        rmem->name);
> +                                                        rmem_np->name);
>                 }
>
> -               if (!rproc_mem)
> +               if (!rproc_mem) {
> +                       of_node_put(rmem_np);

When moving to of_phandle_iterator_next(), of_node_put(it.node) has to be
called on error conditions.  Other drivers don't do it, something I will fix
in the next cycle.

>                         return -ENOMEM;
> +               }
>
>                 rproc_add_carveout(rproc, rproc_mem);
>
>                 dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
>                         rmem->name, rmem->base, rmem->size);
> +
> +               of_node_put(rmem_np);
>         }
>
>         return 0;
> @@ -726,59 +736,6 @@ static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
>         return 0;
>  }
>
> -/**
> - * zynqmp_r5_get_mem_region_node()
> - * parse memory-region property and get reserved mem regions
> - *
> - * @r5_core: pointer to zynqmp_r5_core type object
> - *
> - * Return: 0 for success and error code for failure.
> - */
> -static int zynqmp_r5_get_mem_region_node(struct zynqmp_r5_core *r5_core)
> -{
> -       struct device_node *np, *rmem_np;
> -       struct reserved_mem **rmem;
> -       int res_mem_count, i;
> -       struct device *dev;
> -
> -       dev = r5_core->dev;
> -       np = r5_core->np;
> -
> -       res_mem_count = of_property_count_elems_of_size(np, "memory-region",
> -                                                       sizeof(phandle));
> -       if (res_mem_count <= 0) {
> -               dev_warn(dev, "failed to get memory-region property %d\n",
> -                        res_mem_count);
> -               return 0;
> -       }
> -
> -       rmem = devm_kcalloc(dev, res_mem_count,
> -                           sizeof(struct reserved_mem *), GFP_KERNEL);
> -       if (!rmem)
> -               return -ENOMEM;
> -
> -       for (i = 0; i < res_mem_count; i++) {
> -               rmem_np = of_parse_phandle(np, "memory-region", i);
> -               if (!rmem_np)
> -                       goto release_rmem;
> -
> -               rmem[i] = of_reserved_mem_lookup(rmem_np);
> -               if (!rmem[i]) {
> -                       of_node_put(rmem_np);
> -                       goto release_rmem;
> -               }
> -
> -               of_node_put(rmem_np);
> -       }
> -
> -       r5_core->rmem_count = res_mem_count;
> -       r5_core->rmem = rmem;
> -       return 0;
> -
> -release_rmem:
> -       return -EINVAL;
> -}
> -
>  /*
>   * zynqmp_r5_core_init()
>   * Create and initialize zynqmp_r5_core type object
> @@ -806,10 +763,6 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
>         for (i = 0; i < cluster->core_count; i++) {
>                 r5_core = cluster->r5_cores[i];
>
> -               ret = zynqmp_r5_get_mem_region_node(r5_core);
> -               if (ret)
> -                       dev_warn(dev, "memory-region prop failed %d\n", ret);
> -
>                 /* Initialize r5 cores with power-domains parsed from dts */
>                 ret = of_property_read_u32_index(r5_core->np, "power-domains",
>                                                  1, &r5_core->pm_domain_id);
> --
> 2.25.1
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel