* [PATCH v4 0/3] add zynqmp TCM bindings
@ 2023-08-29 18:18 Tanmay Shah
2023-08-29 18:18 ` [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings Tanmay Shah
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-08-29 18:18 UTC (permalink / raw)
To: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky, Tanmay Shah
Tightly-Coupled Memories (TCMs) are low-latency memories that provide
predictable instruction execution and predictable data load/store
timing. Each Cortex-R5F processor contains two exclusive 64 KB memory
banks on the ATCM and BTCM ports, for a total of 128 KB of memory.
In lockstep mode, the banks of both cores are combined, so the cluster
sees a 128 KB ATCM and a 128 KB BTCM (256 KB in total).
As per the Zynq UltraScale+ Technical Reference Manual (UG1085), the
TCM address space is laid out as follows. The bindings in this patch
series introduce properties to accommodate this address space, with
address translation between the Linux and Cortex-R5 views.
| *Mode*             | *R5 View*    | *Linux view* | Notes                |
| ------------------ | ------------ | ------------ | -------------------- |
| *Split Mode*       | *start addr* | *start addr* |                      |
| R5_0 ATCM (64 KB)  | 0x0000_0000  | 0xFFE0_0000  |                      |
| R5_0 BTCM (64 KB)  | 0x0002_0000  | 0xFFE2_0000  |                      |
| R5_1 ATCM (64 KB)  | 0x0000_0000  | 0xFFE9_0000  | alias of 0xFFE1_0000 |
| R5_1 BTCM (64 KB)  | 0x0002_0000  | 0xFFEB_0000  | alias of 0xFFE3_0000 |
| *Lockstep Mode*    |              |              |                      |
| R5_0 ATCM (128 KB) | 0x0000_0000  | 0xFFE0_0000  |                      |
| R5_0 BTCM (128 KB) | 0x0002_0000  | 0xFFE2_0000  |                      |
References:
UG1085 TCM address space: https://docs.xilinx.com/r/en-US/ug1085-zynq-ultrascale-trm/Tightly-Coupled-Memory-Address-Map
This patch series continues previous effort to upstream ZynqMP
TCM bindings:
Previous v3 version link:
https://lore.kernel.org/all/1689964908-22371-1-git-send-email-radhey.shyam.pandey@amd.com/
Changes in v4:
- Use address-cells and size-cells value 2
- Modify ranges property as per new value of address-cells
and size-cells
- Modify child node "reg" property accordingly
- Remove previous ack for further review
- Introduce device-tree change with split mode representation of
r5 cluster and each corresponding TCM
- Introduce corresponding driver change to use TCM
Radhey Shyam Pandey (1):
dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings
Tanmay Shah (2):
dts: zynqmp: add properties for TCM in remoteproc
remoteproc: zynqmp: get TCM from device-tree
.../remoteproc/xlnx,zynqmp-r5fss.yaml | 131 ++++++--
arch/arm64/boot/dts/xilinx/zynqmp.dtsi | 28 +-
drivers/remoteproc/xlnx_r5_remoteproc.c | 279 ++++++++++++++----
3 files changed, 356 insertions(+), 82 deletions(-)
base-commit: 6995e2de6891c724bfeb2db33d7b87775f913ad1
--
2.25.1
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings
2023-08-29 18:18 [PATCH v4 0/3] add zynqmp TCM bindings Tanmay Shah
@ 2023-08-29 18:18 ` Tanmay Shah
2023-08-30 18:16 ` Rob Herring
2023-08-29 18:18 ` [PATCH v4 2/3] dts: zynqmp: add properties for TCM in remoteproc Tanmay Shah
2023-08-29 18:19 ` [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree Tanmay Shah
2 siblings, 1 reply; 16+ messages in thread
From: Tanmay Shah @ 2023-08-29 18:18 UTC (permalink / raw)
To: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky, Tanmay Shah
From: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Introduce bindings for the TCM memory address space on the AMD-Xilinx
Zynq UltraScale+ platform. They will help in defining TCM in the
device tree and make its access platform-agnostic and data-driven.
Tightly-Coupled Memories (TCMs) are low-latency memories that provide
predictable instruction execution and predictable data load/store
timing. Each Cortex-R5F processor contains two 64-bit wide 64 KB memory
banks on the ATCM and BTCM ports, for a total of 128 KB of memory.
The TCM resources (reg, reg-names and power-domains) are documented for
each TCM in the R5 node. reg and reg-names are made required
properties because we don't want to hardcode TCM addresses for future
platforms; for the legacy ZynqMP implementation, the driver ensures
that old dts files without reg/reg-names keep working, so a stable ABI
is maintained.
It also extends the examples for TCM split and lockstep modes.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
Changes in v4:
- Use address-cells and size-cells value 2
- Modify ranges property as per new value of address-cells
and size-cells
- Modify child node "reg" property accordingly
- Remove previous ack for further review
.../remoteproc/xlnx,zynqmp-r5fss.yaml | 131 +++++++++++++++---
1 file changed, 113 insertions(+), 18 deletions(-)
diff --git a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
index 9f677367dd9f..5622767305fa 100644
--- a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
+++ b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
@@ -20,6 +20,17 @@ properties:
compatible:
const: xlnx,zynqmp-r5fss
+ "#address-cells":
+ const: 2
+
+ "#size-cells":
+ const: 2
+
+ ranges:
+ description: |
+ Standard ranges definition providing address translations for
+ local R5F TCM address spaces to bus addresses.
+
xlnx,cluster-mode:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [0, 1, 2]
@@ -37,7 +48,7 @@ properties:
2: single cpu mode
patternProperties:
- "^r5f-[a-f0-9]+$":
+ "^r5f@[0-9a-f]+$":
type: object
description: |
The RPU is located in the Low Power Domain of the Processor Subsystem.
@@ -54,8 +65,19 @@ patternProperties:
compatible:
const: xlnx,zynqmp-r5f
+ reg:
+ items:
+ - description: ATCM internal memory region
+ - description: BTCM internal memory region
+
+ reg-names:
+ items:
+ - const: atcm
+ - const: btcm
+
power-domains:
- maxItems: 1
+ minItems: 1
+ maxItems: 3
mboxes:
minItems: 1
@@ -102,34 +124,107 @@ patternProperties:
required:
- compatible
- power-domains
+ - reg
+ - reg-names
unevaluatedProperties: false
required:
- compatible
+ - "#address-cells"
+ - "#size-cells"
+ - ranges
additionalProperties: false
examples:
- |
- remoteproc {
- compatible = "xlnx,zynqmp-r5fss";
- xlnx,cluster-mode = <1>;
-
- r5f-0 {
- compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware 0x7>;
- memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
- mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
- mbox-names = "tx", "rx";
+ #include <dt-bindings/power/xlnx-zynqmp-power.h>
+
+ //Split mode configuration
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ remoteproc@ffe00000 {
+ compatible = "xlnx,zynqmp-r5fss";
+ xlnx,cluster-mode = <0>;
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+ <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+ <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+ r5f@0 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x0 0x0 0x0 0x10000>, <0x0 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
+ memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+ <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+ mbox-names = "tx", "rx";
+ };
+
+ r5f@1 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
+ memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+ <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+ mbox-names = "tx", "rx";
+ };
};
+ };
- r5f-1 {
- compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware 0x8>;
- memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>, <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
- mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
- mbox-names = "tx", "rx";
+ - |
+ //Lockstep configuration
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ remoteproc@ffe00000 {
+ compatible = "xlnx,zynqmp-r5fss";
+ xlnx,cluster-mode = <1>;
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x20000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x20000>;
+
+ r5f@0 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x0 0x0 0x0 0x20000>, <0x0 0x20000 0x0 0x20000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
+ memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+ <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+ mbox-names = "tx", "rx";
+ };
+
+ r5f@1 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
+ memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+ <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+ mbox-names = "tx", "rx";
+ };
};
};
...
--
2.25.1
* [PATCH v4 2/3] dts: zynqmp: add properties for TCM in remoteproc
2023-08-29 18:18 [PATCH v4 0/3] add zynqmp TCM bindings Tanmay Shah
2023-08-29 18:18 ` [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings Tanmay Shah
@ 2023-08-29 18:18 ` Tanmay Shah
2023-08-29 18:19 ` [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree Tanmay Shah
2 siblings, 0 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-08-29 18:18 UTC (permalink / raw)
To: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky, Tanmay Shah
Add properties as per the new bindings in the zynqmp remoteproc node
to represent TCM address and size.
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
arch/arm64/boot/dts/xilinx/zynqmp.dtsi | 28 ++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
index 153db59dc4b3..4a3110dc074b 100644
--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
@@ -226,19 +226,35 @@ fpga_full: fpga-full {
ranges;
};
- remoteproc {
+ remoteproc@ffe00000 {
compatible = "xlnx,zynqmp-r5fss";
- xlnx,cluster-mode = <1>;
+ xlnx,cluster-mode = <0>;
- r5f-0 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+ <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+ <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+ r5f@0 {
compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware PD_RPU_0>;
+ reg = <0x0 0x0 0x0 0x10000>, <0x0 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
memory-region = <&rproc_0_fw_image>;
};
- r5f-1 {
+ r5f@1 {
compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware PD_RPU_1>;
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
memory-region = <&rproc_1_fw_image>;
};
};
--
2.25.1
* [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-08-29 18:18 [PATCH v4 0/3] add zynqmp TCM bindings Tanmay Shah
2023-08-29 18:18 ` [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings Tanmay Shah
2023-08-29 18:18 ` [PATCH v4 2/3] dts: zynqmp: add properties for TCM in remoteproc Tanmay Shah
@ 2023-08-29 18:19 ` Tanmay Shah
2023-09-04 7:50 ` Philippe Mathieu-Daudé
2023-09-06 19:47 ` Mathieu Poirier
2 siblings, 2 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-08-29 18:19 UTC (permalink / raw)
To: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky, Tanmay Shah
Use the new dt bindings to get TCM address and size information, and
make sure the driver stays compatible with the previous device-tree
bindings: if TCM information isn't available in the device tree on the
zynqmp platform, hard-coded TCM addresses will be used.
New platforms that are compatible with this driver must describe TCM
in the device tree as per the new bindings.
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
1 file changed, 221 insertions(+), 58 deletions(-)
diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index feca6de68da2..4eb62eb545c2 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
* struct mem_bank_data - Memory Bank description
*
* @addr: Start address of memory bank
+ * @da: device address for this tcm bank
* @size: Size of Memory bank
* @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
+ * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
* @bank_name: name of the bank for remoteproc framework
*/
struct mem_bank_data {
- phys_addr_t addr;
- size_t size;
+ u32 addr;
+ u32 da;
+ u32 size;
u32 pm_domain_id;
- char *bank_name;
+ u32 pm_domain_id2;
+ char bank_name[32];
};
/**
@@ -75,11 +79,17 @@ struct mbox_info {
* Hardcoded TCM bank values. This will be removed once TCM bindings are
* accepted for system-dt specifications and upstreamed in linux kernel
*/
-static const struct mem_bank_data zynqmp_tcm_banks[] = {
- {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
- {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
- {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
- {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
+static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
+ {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
+ {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
+ {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
+ {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
+};
+
+/* TCM 128KB each */
+static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
+ {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
+ {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
};
/**
@@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
struct rproc_mem_entry *mem)
{
iounmap((void __iomem *)mem->va);
+
return 0;
}
@@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
/* clear TCMs */
memset_io(va, 0, mem->len);
- /*
- * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
- * while on the Linux side they are at 0xffexxxxx.
- *
- * Zero out the high 12 bits of the address. This will give
- * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
- */
- mem->da &= 0x000fffff;
-
- /*
- * TCM Banks 1A and 1B still have to be translated.
- *
- * Below handle these two banks' absolute addresses (0xffe90000 and
- * 0xffeb0000) and convert to the expected relative addresses
- * (0x0 and 0x20000).
- */
- if (mem->da == 0x90000 || mem->da == 0xB0000)
- mem->da -= 0x90000;
-
- /* if translated TCM bank address is not valid report error */
- if (mem->da != 0x0 && mem->da != 0x20000) {
- dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
- return -EINVAL;
- }
return 0;
}
@@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
u32 pm_domain_id;
size_t bank_size;
char *bank_name;
+ u32 da;
r5_core = rproc->priv;
dev = r5_core->dev;
@@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
bank_name = r5_core->tcm_banks[i]->bank_name;
bank_size = r5_core->tcm_banks[i]->size;
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
+ da = r5_core->tcm_banks[i]->da;
ret = zynqmp_pm_request_node(pm_domain_id,
ZYNQMP_PM_CAPABILITY_ACCESS, 0,
@@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
bank_name, bank_addr, bank_size);
rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
- bank_size, bank_addr,
+ bank_size, da,
tcm_mem_map, tcm_mem_unmap,
bank_name);
if (!rproc_mem) {
@@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
*/
static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
{
+ u32 pm_domain_id, da, pm_domain_id2;
struct rproc_mem_entry *rproc_mem;
struct zynqmp_r5_core *r5_core;
int i, num_banks, ret;
- phys_addr_t bank_addr;
- size_t bank_size = 0;
+ u32 bank_size = 0;
struct device *dev;
- u32 pm_domain_id;
char *bank_name;
+ u32 bank_addr;
r5_core = rproc->priv;
dev = r5_core->dev;
@@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
* So, Enable each TCM block individually, but add their size
* to create contiguous memory region.
*/
- bank_addr = r5_core->tcm_banks[0]->addr;
- bank_name = r5_core->tcm_banks[0]->bank_name;
-
for (i = 0; i < num_banks; i++) {
- bank_size += r5_core->tcm_banks[i]->size;
+ bank_addr = r5_core->tcm_banks[i]->addr;
+ bank_name = r5_core->tcm_banks[i]->bank_name;
+ bank_size = r5_core->tcm_banks[i]->size;
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
+ pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
+ da = r5_core->tcm_banks[i]->da;
+
+ dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
+ bank_name, bank_addr, bank_size);
/* Turn on each TCM bank individually */
ret = zynqmp_pm_request_node(pm_domain_id,
@@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
goto release_tcm_lockstep;
}
- }
- dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
- bank_name, bank_addr, bank_size);
-
- /* Register TCM address range, TCM map and unmap functions */
- rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
- bank_size, bank_addr,
- tcm_mem_map, tcm_mem_unmap,
- bank_name);
- if (!rproc_mem) {
- ret = -ENOMEM;
- goto release_tcm_lockstep;
- }
+ /* Turn on each TCM bank individually */
+ ret = zynqmp_pm_request_node(pm_domain_id2,
+ ZYNQMP_PM_CAPABILITY_ACCESS, 0,
+ ZYNQMP_PM_REQUEST_ACK_BLOCKING);
+ if (ret < 0) {
+ dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
+ goto release_tcm_lockstep;
+ }
- /* If registration is success, add carveouts */
- rproc_add_carveout(rproc, rproc_mem);
+ /* Register TCM address range, TCM map and unmap functions */
+ rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
+ bank_size, da,
+ tcm_mem_map, tcm_mem_unmap,
+ bank_name);
+ if (!rproc_mem) {
+ ret = -ENOMEM;
+ goto release_tcm_lockstep;
+ }
+
+ rproc_add_carveout(rproc, rproc_mem);
+ }
return 0;
@@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
for (i--; i >= 0; i--) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
zynqmp_pm_release_node(pm_domain_id);
+ pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
+ if (pm_domain_id2)
+ zynqmp_pm_release_node(pm_domain_id2);
}
+
return ret;
}
@@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
*/
static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
{
+ u32 pm_domain_id, pm_domain_id2;
struct zynqmp_r5_core *r5_core;
- u32 pm_domain_id;
int i;
r5_core = rproc->priv;
for (i = 0; i < r5_core->tcm_bank_count; i++) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
+ pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
if (zynqmp_pm_release_node(pm_domain_id))
dev_warn(r5_core->dev,
"can't turn off TCM bank 0x%x", pm_domain_id);
+ if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
+ dev_warn(r5_core->dev,
+ "can't turn off TCM bank 0x%x", pm_domain_id2);
+ dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
+ pm_domain_id, pm_domain_id2);
}
return 0;
@@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
return ERR_PTR(ret);
}
+static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
+{
+ int i, j, tcm_bank_count, ret = -EINVAL;
+ struct zynqmp_r5_core *r5_core;
+ struct of_phandle_args out_arg;
+ struct platform_device *cpdev;
+ struct resource *res = NULL;
+ u64 abs_addr = 0, size = 0;
+ struct mem_bank_data *tcm;
+ struct device_node *np, *np1 = NULL;
+ struct device *dev;
+
+ for (i = 0; i < cluster->core_count; i++) {
+ r5_core = cluster->r5_cores[i];
+ dev = r5_core->dev;
+ np = dev_of_node(dev);
+
+ /* #address-cells and #size-cells are both 2, so each "reg" entry is four u32s */
+ ret = of_property_count_elems_of_size(np, "reg",
+ 4 * sizeof(u32));
+ if (ret <= 0) {
+ ret = -EINVAL;
+ goto fail_tcm;
+ }
+
+ tcm_bank_count = ret;
+
+ r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
+ sizeof(struct mem_bank_data *),
+ GFP_KERNEL);
+ if (!r5_core->tcm_banks) {
+ ret = -ENOMEM;
+ goto fail_tcm;
+ }
+
+ r5_core->tcm_bank_count = tcm_bank_count;
+ for (j = 0; j < tcm_bank_count; j++) {
+ tcm = kzalloc(sizeof(struct mem_bank_data), GFP_KERNEL);
+ if (!tcm) {
+ ret = -ENOMEM;
+ goto fail_tcm;
+ }
+
+ r5_core->tcm_banks[j] = tcm;
+ /* get tcm address without translation */
+ ret = of_property_read_reg(np, j, &abs_addr, &size);
+ if (ret) {
+ dev_err(dev, "failed to get reg property\n");
+ goto fail_tcm;
+ }
+
+ /*
+ * remote processor can address only 32 bits
+ * so convert 64-bits into 32-bits. This will discard
+ * any unwanted upper 32-bits.
+ */
+ tcm->da = (u32)abs_addr;
+ tcm->size = (u32)size;
+
+ cpdev = to_platform_device(dev);
+ res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
+ if (!res) {
+ dev_err(dev, "failed to get tcm resource\n");
+ ret = -EINVAL;
+ goto fail_tcm;
+ }
+
+ tcm->addr = (u32)res->start;
+ res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
+ if (!res) {
+ dev_err(dev, "failed to request tcm resource\n");
+ ret = -EINVAL;
+ goto fail_tcm;
+ }
+
+ strscpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
+ np = of_node_get(dev_of_node(dev));
+ /*
+ * In dt power-domains are described in this order:
+ * <RPU core>, <atcm>, <btcm>
+ * parse power domains for tcm accordingly
+ */
+ of_parse_phandle_with_args(np, "power-domains",
+ "#power-domain-cells",
+ j + 1, &out_arg);
+ tcm->pm_domain_id = out_arg.args[0];
+ of_node_put(out_arg.np);
+
+ dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
+ tcm->bank_name, tcm->addr, tcm->da, tcm->size);
+ dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
+
+ if (cluster->mode == SPLIT_MODE)
+ continue;
+
+ /* Turn on core-1's TCM as well */
+ np1 = of_get_next_child(dev_of_node(cluster->dev),
+ r5_core->np);
+ if (!np1) {
+ dev_err(dev, "failed to get sibling core node\n");
+ ret = -EINVAL;
+ goto fail_tcm;
+ }
+
+ of_parse_phandle_with_args(np1, "power-domains",
+ "#power-domain-cells",
+ j + 1, &out_arg);
+ tcm->pm_domain_id2 = out_arg.args[0];
+ of_node_put(out_arg.np);
+ dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
+ }
+ }
+
+ return 0;
+
+fail_tcm:
+ while (i >= 0) {
+ r5_core = cluster->r5_cores[i];
+ for (j = 0; j < r5_core->tcm_bank_count; j++) {
+ if (!r5_core->tcm_banks)
+ continue;
+ tcm = r5_core->tcm_banks[j];
+ kfree(tcm);
+ }
+ devm_kfree(r5_core->dev, r5_core->tcm_banks);
+ i--;
+ }
+
+ return ret;
+}
+
/**
* zynqmp_r5_get_tcm_node()
* Ideally this function should parse tcm node and store information
@@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
*/
static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
{
+ const struct mem_bank_data *zynqmp_tcm_banks;
struct device *dev = cluster->dev;
struct zynqmp_r5_core *r5_core;
int tcm_bank_count, tcm_node;
int i, j;
- tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
+ if (cluster->mode == SPLIT_MODE) {
+ zynqmp_tcm_banks = zynqmp_tcm_banks_split;
+ tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
+ } else {
+ zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
+ tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
+ }
+
/* count per core tcm banks */
tcm_bank_count = tcm_bank_count / cluster->core_count;
@@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
enum rpu_tcm_comb tcm_mode)
{
struct device *dev = cluster->dev;
+ struct device_node *np;
struct zynqmp_r5_core *r5_core;
int ret, i;
- ret = zynqmp_r5_get_tcm_node(cluster);
+ /*
+ * Try to get TCM nodes from the dt; if that fails, fall back to the
+ * hardcoded addresses (zynqmp platform only). New platforms must
+ * describe TCM in the dt as per the new bindings.
+ */
+ ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
+ if (ret) {
+ np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
+ if (np) {
+ ret = zynqmp_r5_get_tcm_node(cluster);
+ } else {
+ dev_err(dev, "tcm not found\n");
+ return -EINVAL;
+ }
+ }
+
if (ret < 0) {
dev_err(dev, "can't get tcm node, err %d\n", ret);
return ret;
--
2.25.1
* Re: [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings
2023-08-29 18:18 ` [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings Tanmay Shah
@ 2023-08-30 18:16 ` Rob Herring
0 siblings, 0 replies; 16+ messages in thread
From: Rob Herring @ 2023-08-30 18:16 UTC (permalink / raw)
To: Tanmay Shah
Cc: Ben Levinsky, Krzysztof Kozlowski, Rob Herring, linux-arm-kernel,
Conor Dooley, devicetree, Mathieu Poirier, linux-remoteproc,
linux-kernel, Michal Simek, Radhey Shyam Pandey, Bjorn Andersson
On Tue, 29 Aug 2023 11:18:58 -0700, Tanmay Shah wrote:
> From: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
>
> Introduce bindings for TCM memory address space on AMD-xilinx Zynq
> UltraScale+ platform. It will help in defining TCM in device-tree
> and make it's access platform agnostic and data-driven.
>
> Tightly-coupled memories(TCMs) are low-latency memory that provides
> predictable instruction execution and predictable data load/store
> timing. Each Cortex-R5F processor contains two 64-bit wide 64 KB memory
> banks on the ATCM and BTCM ports, for a total of 128 KB of memory.
>
> The TCM resources(reg, reg-names and power-domain) are documented for
> each TCM in the R5 node. The reg and reg-names are made as required
> properties as we don't want to hardcode TCM addresses for future
> platforms and for zu+ legacy implementation will ensure that the
> old dts w/o reg/reg-names works and stable ABI is maintained.
>
> It also extends the examples for TCM split and lockstep modes.
>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
>
> Changes in v4:
> - Use address-cells and size-cells value 2
> - Modify ranges property as per new value of address-cells
> and size-cells
> - Modify child node "reg" property accordingly
> - Remove previous ack for further review
>
>
> .../remoteproc/xlnx,zynqmp-r5fss.yaml | 131 +++++++++++++++---
> 1 file changed, 113 insertions(+), 18 deletions(-)
>
Acked-by: Rob Herring <robh@kernel.org>
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-08-29 18:19 ` [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree Tanmay Shah
@ 2023-09-04 7:50 ` Philippe Mathieu-Daudé
2023-09-05 21:48 ` Tanmay Shah
2023-09-06 19:47 ` Mathieu Poirier
1 sibling, 1 reply; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-09-04 7:50 UTC (permalink / raw)
To: Tanmay Shah, linux-remoteproc, devicetree, linux-arm-kernel,
linux-kernel, Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
Hi,
On 29/8/23 20:19, Tanmay Shah wrote:
> Use new dt bindings to get TCM address and size
> information. Also make sure that driver stays
> compatible with previous device-tree bindings.
> So, if TCM information isn't available in device-tree
> for zynqmp platform, hard-coded address of TCM will
> be used.
>
> New platforms that are compatible with this
> driver must add TCM support in device-tree as per new
> bindings.
>
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
> drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> 1 file changed, 221 insertions(+), 58 deletions(-)
> /**
> @@ -75,11 +79,17 @@ struct mbox_info {
> * Hardcoded TCM bank values. This will be removed once TCM bindings are
> * accepted for system-dt specifications and upstreamed in linux kernel
Just curious, for how long this fall back code has to be maintained?
(When/how will we know we can remove it?)
> */
> -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> +};
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-04 7:50 ` Philippe Mathieu-Daudé
@ 2023-09-05 21:48 ` Tanmay Shah
2023-09-06 6:20 ` Philippe Mathieu-Daudé
0 siblings, 1 reply; 16+ messages in thread
From: Tanmay Shah @ 2023-09-05 21:48 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, linux-remoteproc, devicetree,
linux-arm-kernel, linux-kernel, Bjorn Andersson, Mathieu Poirier,
Rob Herring, Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On 9/4/23 2:50 AM, Philippe Mathieu-Daudé wrote:
> Hi,
>
> On 29/8/23 20:19, Tanmay Shah wrote:
> > Use new dt bindings to get TCM address and size
> > information. Also make sure that driver stays
> > compatible with previous device-tree bindings.
> > So, if TCM information isn't available in device-tree
> > for zynqmp platform, hard-coded address of TCM will
> > be used.
> >
> > New platforms that are compatible with this
> > driver must add TCM support in device-tree as per new
> > bindings.
> >
> > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > ---
> > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > 1 file changed, 221 insertions(+), 58 deletions(-)
>
>
> > /**
> > @@ -75,11 +79,17 @@ struct mbox_info {
> > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > * accepted for system-dt specifications and upstreamed in linux kernel
>
> Just curious, for how long this fall back code has to be maintained?
> (When/how will we know we can remove it?)
I believe we should never remove it. It's important that the driver works with the old bindings as well.
>
> > */
> > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> > +};
>
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-05 21:48 ` Tanmay Shah
@ 2023-09-06 6:20 ` Philippe Mathieu-Daudé
2023-09-06 14:21 ` Tanmay Shah
0 siblings, 1 reply; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-09-06 6:20 UTC (permalink / raw)
To: Tanmay Shah, linux-remoteproc, devicetree, linux-arm-kernel,
linux-kernel, Bjorn Andersson, Mathieu Poirier, Rob Herring,
Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On 5/9/23 23:48, Tanmay Shah wrote:
>
> On 9/4/23 2:50 AM, Philippe Mathieu-Daudé wrote:
>> Hi,
>>
>> On 29/8/23 20:19, Tanmay Shah wrote:
>>> Use new dt bindings to get TCM address and size
>>> information. Also make sure that driver stays
>>> compatible with previous device-tree bindings.
>>> So, if TCM information isn't available in device-tree
>>> for zynqmp platform, hard-coded address of TCM will
>>> be used.
>>>
>>> New platforms that are compatible with this
>>> driver must add TCM support in device-tree as per new
>>> bindings.
>>>
>>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>>> ---
>>> drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
>>> 1 file changed, 221 insertions(+), 58 deletions(-)
>>
>>
>>> /**
>>> @@ -75,11 +79,17 @@ struct mbox_info {
>>> * Hardcoded TCM bank values. This will be removed once TCM bindings are
>>> * accepted for system-dt specifications and upstreamed in linux kernel
>>
>> Just curious, for how long this fall back code has to be maintained?
>> (When/how will we know we can remove it?)
>
>
> I believe we should never remove it. It's important that the driver works with the old bindings as well.
Do you mind posting a followup patch updating the comment,
to clarify?
Thanks,
Phil.
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-06 6:20 ` Philippe Mathieu-Daudé
@ 2023-09-06 14:21 ` Tanmay Shah
0 siblings, 0 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-09-06 14:21 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, linux-remoteproc, devicetree,
linux-arm-kernel, linux-kernel, Bjorn Andersson, Mathieu Poirier,
Rob Herring, Krzysztof Kozlowski, Michal Simek
Cc: Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On 9/6/23 1:20 AM, Philippe Mathieu-Daudé wrote:
> On 5/9/23 23:48, Tanmay Shah wrote:
> >
> > On 9/4/23 2:50 AM, Philippe Mathieu-Daudé wrote:
> >> Hi,
> >>
> >> On 29/8/23 20:19, Tanmay Shah wrote:
> >>> Use new dt bindings to get TCM address and size
> >>> information. Also make sure that driver stays
> >>> compatible with previous device-tree bindings.
> >>> So, if TCM information isn't available in device-tree
> >>> for zynqmp platform, hard-coded address of TCM will
> >>> be used.
> >>>
> >>> New platforms that are compatible with this
> >>> driver must add TCM support in device-tree as per new
> >>> bindings.
> >>>
> >>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> >>> ---
> >>> drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> >>> 1 file changed, 221 insertions(+), 58 deletions(-)
> >>
> >>
> >>> /**
> >>> @@ -75,11 +79,17 @@ struct mbox_info {
> >>> * Hardcoded TCM bank values. This will be removed once TCM bindings are
> >>> * accepted for system-dt specifications and upstreamed in linux kernel
> >>
> >> Just curious, for how long this fall back code has to be maintained?
> >> (When/how will we know we can remove it?)
> >
> >
> > I believe we should never remove it. It's important that the driver works with the old bindings as well.
>
> Do you mind posting a followup patch updating the comment,
> to clarify?
Sure, I will post a follow-up patch with the comment updated.
I will wait for Mathieu's review of the driver patch, then address all the comments in v5.
>
> Thanks,
>
> Phil.
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-08-29 18:19 ` [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree Tanmay Shah
2023-09-04 7:50 ` Philippe Mathieu-Daudé
@ 2023-09-06 19:47 ` Mathieu Poirier
2023-09-06 22:02 ` Tanmay Shah
1 sibling, 1 reply; 16+ messages in thread
From: Mathieu Poirier @ 2023-09-06 19:47 UTC (permalink / raw)
To: Tanmay Shah
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
Hi Tanmay,
On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> Use new dt bindings to get TCM address and size
> information. Also make sure that driver stays
> compatible with previous device-tree bindings.
> So, if TCM information isn't available in device-tree
> for zynqmp platform, hard-coded address of TCM will
> be used.
>
> New platforms that are compatible with this
> driver must add TCM support in device-tree as per new
> bindings.
>
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
> drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> 1 file changed, 221 insertions(+), 58 deletions(-)
>
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index feca6de68da2..4eb62eb545c2 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> * struct mem_bank_data - Memory Bank description
> *
> * @addr: Start address of memory bank
> + * @da: device address for this tcm bank
> * @size: Size of Memory bank
> * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> * @bank_name: name of the bank for remoteproc framework
> */
> struct mem_bank_data {
> - phys_addr_t addr;
> - size_t size;
> + u32 addr;
> + u32 da;
> + u32 size;
Why are the types of @addr and @size changed?
> u32 pm_domain_id;
> - char *bank_name;
> + u32 pm_domain_id2;
> + char bank_name[32];
Same
> };
>
> /**
> @@ -75,11 +79,17 @@ struct mbox_info {
> * Hardcoded TCM bank values. This will be removed once TCM bindings are
> * accepted for system-dt specifications and upstreamed in linux kernel
> */
> -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
Here the device address for btcm0 is 0x20000 while in the cover letter it is
0x2000.
> + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
Same
> +};
> +
> +/* TCM 128KB each */
> +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> };
>
> /**
> @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> struct rproc_mem_entry *mem)
> {
> iounmap((void __iomem *)mem->va);
> +
Spurious change
> return 0;
> }
>
> @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> /* clear TCMs */
> memset_io(va, 0, mem->len);
>
> - /*
> - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> - * while on the Linux side they are at 0xffexxxxx.
> - *
> - * Zero out the high 12 bits of the address. This will give
> - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> - */
> - mem->da &= 0x000fffff;
> -
> - /*
> - * TCM Banks 1A and 1B still have to be translated.
> - *
> - * Below handle these two banks' absolute addresses (0xffe90000 and
> - * 0xffeb0000) and convert to the expected relative addresses
> - * (0x0 and 0x20000).
> - */
> - if (mem->da == 0x90000 || mem->da == 0xB0000)
> - mem->da -= 0x90000;
> -
> - /* if translated TCM bank address is not valid report error */
> - if (mem->da != 0x0 && mem->da != 0x20000) {
> - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> - return -EINVAL;
> - }
> return 0;
> }
>
> @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> u32 pm_domain_id;
> size_t bank_size;
> char *bank_name;
> + u32 da;
>
> r5_core = rproc->priv;
> dev = r5_core->dev;
> @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> bank_name = r5_core->tcm_banks[i]->bank_name;
> bank_size = r5_core->tcm_banks[i]->size;
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> + da = r5_core->tcm_banks[i]->da;
>
> ret = zynqmp_pm_request_node(pm_domain_id,
> ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> bank_name, bank_addr, bank_size);
>
> rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> - bank_size, bank_addr,
> + bank_size, da,
> tcm_mem_map, tcm_mem_unmap,
> bank_name);
> if (!rproc_mem) {
> @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> */
> static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> {
> + u32 pm_domain_id, da, pm_domain_id2;
> struct rproc_mem_entry *rproc_mem;
> struct zynqmp_r5_core *r5_core;
> int i, num_banks, ret;
> - phys_addr_t bank_addr;
> - size_t bank_size = 0;
> + u32 bank_size = 0;
> struct device *dev;
> - u32 pm_domain_id;
> char *bank_name;
> + u32 bank_addr;
>
> r5_core = rproc->priv;
> dev = r5_core->dev;
> @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> * So, Enable each TCM block individually, but add their size
> * to create contiguous memory region.
> */
> - bank_addr = r5_core->tcm_banks[0]->addr;
> - bank_name = r5_core->tcm_banks[0]->bank_name;
> -
> for (i = 0; i < num_banks; i++) {
> - bank_size += r5_core->tcm_banks[i]->size;
> + bank_addr = r5_core->tcm_banks[i]->addr;
> + bank_name = r5_core->tcm_banks[i]->bank_name;
> + bank_size = r5_core->tcm_banks[i]->size;
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> + da = r5_core->tcm_banks[i]->da;
> +
> + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> + bank_name, bank_addr, bank_size);
>
> /* Turn on each TCM bank individually */
> ret = zynqmp_pm_request_node(pm_domain_id,
> @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> goto release_tcm_lockstep;
> }
> - }
>
> - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> - bank_name, bank_addr, bank_size);
> -
> - /* Register TCM address range, TCM map and unmap functions */
> - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> - bank_size, bank_addr,
> - tcm_mem_map, tcm_mem_unmap,
> - bank_name);
> - if (!rproc_mem) {
> - ret = -ENOMEM;
> - goto release_tcm_lockstep;
> - }
> + /* Turn on each TCM bank individually */
> + ret = zynqmp_pm_request_node(pm_domain_id2,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> + goto release_tcm_lockstep;
> + }
>
> - /* If registration is success, add carveouts */
> - rproc_add_carveout(rproc, rproc_mem);
> + /* Register TCM address range, TCM map and unmap functions */
> + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> + bank_size, da,
> + tcm_mem_map, tcm_mem_unmap,
> + bank_name);
> + if (!rproc_mem) {
> + ret = -ENOMEM;
> + goto release_tcm_lockstep;
> + }
> +
> + rproc_add_carveout(rproc, rproc_mem);
> + }
>
> return 0;
>
> @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> for (i--; i >= 0; i--) {
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> zynqmp_pm_release_node(pm_domain_id);
> + if (pm_domain_id2) {
> + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> + zynqmp_pm_release_node(pm_domain_id2);
> + }
> }
> +
> return ret;
> }
>
> @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> */
> static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> {
> + u32 pm_domain_id, pm_domain_id2;
> struct zynqmp_r5_core *r5_core;
> - u32 pm_domain_id;
> int i;
>
> r5_core = rproc->priv;
>
> for (i = 0; i < r5_core->tcm_bank_count; i++) {
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> if (zynqmp_pm_release_node(pm_domain_id))
> dev_warn(r5_core->dev,
> "can't turn off TCM bank 0x%x", pm_domain_id);
> + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> + dev_warn(r5_core->dev,
> + "can't turn off TCM bank 0x%x", pm_domain_id2);
> + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> + pm_domain_id, pm_domain_id2);
> }
>
> return 0;
> @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> return ERR_PTR(ret);
> }
>
> +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> +{
> + int i, j, tcm_bank_count, ret = -EINVAL;
> + struct zynqmp_r5_core *r5_core;
> + struct of_phandle_args out_arg;
> + struct platform_device *cpdev;
> + struct resource *res = NULL;
> + u64 abs_addr = 0, size = 0;
> + struct mem_bank_data *tcm;
> + struct device_node *np, *np1 = NULL;
> + struct device *dev;
> +
> + for (i = 0; i < cluster->core_count; i++) {
> + r5_core = cluster->r5_cores[i];
> + dev = r5_core->dev;
> + np = dev_of_node(dev);
> +
> + /* we have address cell 2 and size cell as 2 */
> + ret = of_property_count_elems_of_size(np, "reg",
> + 4 * sizeof(u32));
> + if (ret <= 0) {
> + ret = -EINVAL;
> + goto fail_tcm;
> + }
> +
> + tcm_bank_count = ret;
> +
> + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> + sizeof(struct mem_bank_data *),
> + GFP_KERNEL);
> + if (!r5_core->tcm_banks) {
> + ret = -ENOMEM;
> + goto fail_tcm;
> + }
> +
> + r5_core->tcm_bank_count = tcm_bank_count;
> + for (j = 0; j < tcm_bank_count; j++) {
> + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> + if (!tcm) {
> + ret = -ENOMEM;
> + goto fail_tcm;
> + }
> +
> + r5_core->tcm_banks[j] = tcm;
> + /* get tcm address without translation */
> + ret = of_property_read_reg(np, j, &abs_addr, &size);
> + if (ret) {
> + dev_err(dev, "failed to get reg property\n");
> + goto fail_tcm;
> + }
> +
> + /*
> + * remote processor can address only 32 bits
> + * so convert 64-bits into 32-bits. This will discard
> + * any unwanted upper 32-bits.
> + */
> + tcm->da = (u32)abs_addr;
> + tcm->size = (u32)size;
> +
> + cpdev = to_platform_device(dev);
> + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> + if (!res) {
> + dev_err(dev, "failed to get tcm resource\n");
> + ret = -EINVAL;
> + goto fail_tcm;
> + }
> +
> + tcm->addr = (u32)res->start;
> + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> + if (!res) {
> + dev_err(dev, "failed to request tcm resource\n");
> + ret = -EINVAL;
> + goto fail_tcm;
> + }
> +
> + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> + np = of_node_get(dev_of_node(dev));
> + /*
> + * In dt power-domains are described in this order:
> + * <RPU core>, <atcm>, <btcm>
> + * parse power domains for tcm accordingly
> + */
> + of_parse_phandle_with_args(np, "power-domains",
> + "#power-domain-cells",
> + j + 1, &out_arg);
> + tcm->pm_domain_id = out_arg.args[0];
> + of_node_put(out_arg.np);
> +
> + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> +
> + if (cluster->mode == SPLIT_MODE)
> + continue;
> +
> + /* Turn on core-1's TCM as well */
> + np1 = of_get_next_child(dev_of_node(cluster->dev),
> + r5_core->np);
> + if (!np1) {
> + of_node_put(np1);
> + np1 = NULL;
> + goto fail_tcm;
> + }
> +
> + of_parse_phandle_with_args(np1, "power-domains",
> + "#power-domain-cells",
> + j + 1, &out_arg);
> + tcm->pm_domain_id2 = out_arg.args[0];
> + of_node_put(out_arg.np);
> + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> + }
> + }
> +
> + return 0;
> +
> +fail_tcm:
> + while (i >= 0) {
> + r5_core = cluster->r5_cores[i];
> + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> + if (!r5_core->tcm_banks)
> + continue;
> + tcm = r5_core->tcm_banks[j];
> + kfree(tcm);
> + }
> + kfree(r5_core->tcm_banks);
> + i--;
> + }
> +
> + return ret;
> +}
> +
> /**
> * zynqmp_r5_get_tcm_node()
> * Ideally this function should parse tcm node and store information
> @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> */
> static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> {
> + const struct mem_bank_data *zynqmp_tcm_banks;
> struct device *dev = cluster->dev;
> struct zynqmp_r5_core *r5_core;
> int tcm_bank_count, tcm_node;
> int i, j;
>
> - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> + if (cluster->mode == SPLIT_MODE) {
> + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> + } else {
> + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> + }
Why are the changes to get TCM bank information from the DT and enhancement to
support lockstep mode in the same patch?
> +
>
> /* count per core tcm banks */
> tcm_bank_count = tcm_bank_count / cluster->core_count;
> @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> enum rpu_tcm_comb tcm_mode)
> {
> struct device *dev = cluster->dev;
> + struct device_node *np;
> struct zynqmp_r5_core *r5_core;
> int ret, i;
>
> - ret = zynqmp_r5_get_tcm_node(cluster);
> + /*
> + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> + * for zynqmp platform. New platforms must use dt bindings for TCM.
> + */
> + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> + if (ret) {
> + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> + if (np) {
Why was this check added?
So far there are too many unanswered questions with this patchset and as such I
will stop here.
Mathieu
> + ret = zynqmp_r5_get_tcm_node(cluster);
> + } else {
> + dev_err(dev, "tcm not found\n");
> + return -EINVAL;
> + }
> + }
> +
> if (ret < 0) {
> dev_err(dev, "can't get tcm node, err %d\n", ret);
> return ret;
> --
> 2.25.1
>
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-06 19:47 ` Mathieu Poirier
@ 2023-09-06 22:02 ` Tanmay Shah
2023-09-07 17:23 ` Tanmay Shah
2023-09-07 18:08 ` Mathieu Poirier
0 siblings, 2 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-09-06 22:02 UTC (permalink / raw)
To: Mathieu Poirier
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
Hi Mathieu,
Thanks for the reviews. Please find my comments below.
On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> Hi Tanmay,
>
> On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > Use new dt bindings to get TCM address and size
> > information. Also make sure that driver stays
> > compatible with previous device-tree bindings.
> > So, if TCM information isn't available in device-tree
> > for zynqmp platform, hard-coded address of TCM will
> > be used.
> >
> > New platforms that are compatible with this
> > driver must add TCM support in device-tree as per new
> > bindings.
> >
> > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > ---
> > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > 1 file changed, 221 insertions(+), 58 deletions(-)
> >
> > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > index feca6de68da2..4eb62eb545c2 100644
> > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > * struct mem_bank_data - Memory Bank description
> > *
> > * @addr: Start address of memory bank
> > + * @da: device address for this tcm bank
> > * @size: Size of Memory bank
> > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > * @bank_name: name of the bank for remoteproc framework
> > */
> > struct mem_bank_data {
> > - phys_addr_t addr;
> > - size_t size;
> > + u32 addr;
> > + u32 da;
> > + u32 size;
>
> Why are the types of @addr and @size changed?
The R5 can only access a 32-bit address range; I had missed this before.
In the device-tree bindings I am keeping address-cells and size-cells as 2,
so out of the 64 bits, only the lower 32 bits will be used for the TCM address. Same for the size.
This motivated the type change of the @addr and @size fields. It doesn't have any side effects.
>
> > u32 pm_domain_id;
> > - char *bank_name;
> > + u32 pm_domain_id2;
> > + char bank_name[32];
>
> Same
Now we have the "reg-names" property in the dts, so when it is available I use it.
Instead of keeping a simple pointer, I copy the name from "struct resource", which is why bank_name
was changed from a pointer to an array.
>
> > };
> >
> > /**
> > @@ -75,11 +79,17 @@ struct mbox_info {
> > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > * accepted for system-dt specifications and upstreamed in linux kernel
> > */
> > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
>
> Here the device address for btcm0 is 0x20000 while in the cover letter it is
> 0x2000.
Thanks for catching this. This is actually a typo in the cover letter; it should be 0x20000 there.
>
> > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
>
> Same
Same here: it should be 0x20000 in the cover letter.
>
> > +};
> > +
> > +/* TCM 128KB each */
> > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > };
> >
> > /**
> > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > struct rproc_mem_entry *mem)
> > {
> > iounmap((void __iomem *)mem->va);
> > +
>
> Spurious change
Sure, I will remove it.
>
> > return 0;
> > }
> >
> > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > /* clear TCMs */
> > memset_io(va, 0, mem->len);
> >
> > - /*
> > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > - * while on the Linux side they are at 0xffexxxxx.
> > - *
> > - * Zero out the high 12 bits of the address. This will give
> > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > - */
> > - mem->da &= 0x000fffff;
> > -
> > - /*
> > - * TCM Banks 1A and 1B still have to be translated.
> > - *
> > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > - * 0xffeb0000) and convert to the expected relative addresses
> > - * (0x0 and 0x20000).
> > - */
> > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > - mem->da -= 0x90000;
> > -
> > - /* if translated TCM bank address is not valid report error */
> > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > - return -EINVAL;
> > - }
> > return 0;
> > }
> >
> > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > u32 pm_domain_id;
> > size_t bank_size;
> > char *bank_name;
> > + u32 da;
> >
> > r5_core = rproc->priv;
> > dev = r5_core->dev;
> > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > bank_name = r5_core->tcm_banks[i]->bank_name;
> > bank_size = r5_core->tcm_banks[i]->size;
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > + da = r5_core->tcm_banks[i]->da;
> >
> > ret = zynqmp_pm_request_node(pm_domain_id,
> > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > bank_name, bank_addr, bank_size);
> >
> > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > - bank_size, bank_addr,
> > + bank_size, da,
> > tcm_mem_map, tcm_mem_unmap,
> > bank_name);
> > if (!rproc_mem) {
> > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > */
> > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > {
> > + u32 pm_domain_id, da, pm_domain_id2;
> > struct rproc_mem_entry *rproc_mem;
> > struct zynqmp_r5_core *r5_core;
> > int i, num_banks, ret;
> > - phys_addr_t bank_addr;
> > - size_t bank_size = 0;
> > + u32 bank_size = 0;
> > struct device *dev;
> > - u32 pm_domain_id;
> > char *bank_name;
> > + u32 bank_addr;
> >
> > r5_core = rproc->priv;
> > dev = r5_core->dev;
> > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > * So, Enable each TCM block individually, but add their size
> > * to create contiguous memory region.
> > */
> > - bank_addr = r5_core->tcm_banks[0]->addr;
> > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > -
> > for (i = 0; i < num_banks; i++) {
> > - bank_size += r5_core->tcm_banks[i]->size;
> > + bank_addr = r5_core->tcm_banks[i]->addr;
> > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > + bank_size = r5_core->tcm_banks[i]->size;
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > + da = r5_core->tcm_banks[i]->da;
> > +
> > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > + bank_name, bank_addr, bank_size);
> >
> > /* Turn on each TCM bank individually */
> > ret = zynqmp_pm_request_node(pm_domain_id,
> > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > goto release_tcm_lockstep;
> > }
> > - }
> >
> > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > - bank_name, bank_addr, bank_size);
> > -
> > - /* Register TCM address range, TCM map and unmap functions */
> > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > - bank_size, bank_addr,
> > - tcm_mem_map, tcm_mem_unmap,
> > - bank_name);
> > - if (!rproc_mem) {
> > - ret = -ENOMEM;
> > - goto release_tcm_lockstep;
> > - }
> > + /* Turn on each TCM bank individually */
> > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > + if (ret < 0) {
> > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > + goto release_tcm_lockstep;
> > + }
> >
> > - /* If registration is success, add carveouts */
> > - rproc_add_carveout(rproc, rproc_mem);
> > + /* Register TCM address range, TCM map and unmap functions */
> > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > + bank_size, da,
> > + tcm_mem_map, tcm_mem_unmap,
> > + bank_name);
> > + if (!rproc_mem) {
> > + ret = -ENOMEM;
> > + goto release_tcm_lockstep;
> > + }
> > +
> > + rproc_add_carveout(rproc, rproc_mem);
> > + }
> >
> > return 0;
> >
> > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > for (i--; i >= 0; i--) {
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > zynqmp_pm_release_node(pm_domain_id);
> > + if (pm_domain_id2) {
> > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > + zynqmp_pm_release_node(pm_domain_id2);
> > + }
> > }
> > +
> > return ret;
> > }
> >
> > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > */
> > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > {
> > + u32 pm_domain_id, pm_domain_id2;
> > struct zynqmp_r5_core *r5_core;
> > - u32 pm_domain_id;
> > int i;
> >
> > r5_core = rproc->priv;
> >
> > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > if (zynqmp_pm_release_node(pm_domain_id))
> > dev_warn(r5_core->dev,
> > "can't turn off TCM bank 0x%x", pm_domain_id);
> > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > + dev_warn(r5_core->dev,
> > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > + pm_domain_id, pm_domain_id2);
> > }
> >
> > return 0;
> > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > return ERR_PTR(ret);
> > }
> >
> > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > +{
> > + int i, j, tcm_bank_count, ret = -EINVAL;
> > + struct zynqmp_r5_core *r5_core;
> > + struct of_phandle_args out_arg;
> > + struct platform_device *cpdev;
> > + struct resource *res = NULL;
> > + u64 abs_addr = 0, size = 0;
> > + struct mem_bank_data *tcm;
> > + struct device_node *np, *np1 = NULL;
> > + struct device *dev;
> > +
> > + for (i = 0; i < cluster->core_count; i++) {
> > + r5_core = cluster->r5_cores[i];
> > + dev = r5_core->dev;
> > + np = dev_of_node(dev);
> > +
> > + /* we have address cell 2 and size cell as 2 */
> > + ret = of_property_count_elems_of_size(np, "reg",
> > + 4 * sizeof(u32));
> > + if (ret <= 0) {
> > + ret = -EINVAL;
> > + goto fail_tcm;
> > + }
> > +
> > + tcm_bank_count = ret;
> > +
> > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > + sizeof(struct mem_bank_data *),
> > + GFP_KERNEL);
> > + if (!r5_core->tcm_banks) {
> > + ret = -ENOMEM;
> > + goto fail_tcm;
> > + }
> > +
> > + r5_core->tcm_bank_count = tcm_bank_count;
> > + for (j = 0; j < tcm_bank_count; j++) {
> > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > + if (!tcm) {
> > + ret = -ENOMEM;
> > + goto fail_tcm;
> > + }
> > +
> > + r5_core->tcm_banks[j] = tcm;
> > + /* get tcm address without translation */
> > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > + if (ret) {
> > + dev_err(dev, "failed to get reg property\n");
> > + goto fail_tcm;
> > + }
> > +
> > + /*
> > + * remote processor can address only 32 bits
> > + * so convert 64-bits into 32-bits. This will discard
> > + * any unwanted upper 32-bits.
> > + */
> > + tcm->da = (u32)abs_addr;
> > + tcm->size = (u32)size;
> > +
> > + cpdev = to_platform_device(dev);
> > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > + if (!res) {
> > + dev_err(dev, "failed to get tcm resource\n");
> > + ret = -EINVAL;
> > + goto fail_tcm;
> > + }
> > +
> > + tcm->addr = (u32)res->start;
> > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > + if (!res) {
> > + dev_err(dev, "failed to request tcm resource\n");
> > + ret = -EINVAL;
> > + goto fail_tcm;
> > + }
> > +
> > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > + np = of_node_get(dev_of_node(dev));
> > + /*
> > + * In dt power-domains are described in this order:
> > + * <RPU core>, <atcm>, <btcm>
> > + * parse power domains for tcm accordingly
> > + */
> > + of_parse_phandle_with_args(np, "power-domains",
> > + "#power-domain-cells",
> > + j + 1, &out_arg);
> > + tcm->pm_domain_id = out_arg.args[0];
> > + of_node_put(out_arg.np);
> > +
> > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > +
> > + if (cluster->mode == SPLIT_MODE)
> > + continue;
> > +
> > + /* Turn on core-1's TCM as well */
> > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > + r5_core->np);
> > + if (!np1) {
> > + of_node_put(np1);
> > + np1 = NULL;
> > + goto fail_tcm;
> > + }
> > +
> > + of_parse_phandle_with_args(np1, "power-domains",
> > + "#power-domain-cells",
> > + j + 1, &out_arg);
> > + tcm->pm_domain_id2 = out_arg.args[0];
> > + of_node_put(out_arg.np);
> > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > + }
> > + }
> > +
> > + return 0;
> > +
> > +fail_tcm:
> > + while (i >= 0) {
> > + r5_core = cluster->r5_cores[i];
> > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > + if (!r5_core->tcm_banks)
> > + continue;
> > + tcm = r5_core->tcm_banks[j];
> > + kfree(tcm);
> > + }
> > + kfree(r5_core->tcm_banks);
> > + i--;
> > + }
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * zynqmp_r5_get_tcm_node()
> > * Ideally this function should parse tcm node and store information
> > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > */
> > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > {
> > + const struct mem_bank_data *zynqmp_tcm_banks;
> > struct device *dev = cluster->dev;
> > struct zynqmp_r5_core *r5_core;
> > int tcm_bank_count, tcm_node;
> > int i, j;
> >
> > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > + if (cluster->mode == SPLIT_MODE) {
> > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > + } else {
> > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > + }
>
> Why are the changes to get TCM bank information from the DT and enhancement to
> support lockstep mode in the same patch?
Actually, TCM in lockstep mode was supported before as well; it's just that I was using the same table for lockstep mode.
However, now I have two separate tables, one for split mode and one for lockstep mode.
I had to do this because I have introduced the "da" field in the "struct mem_bank_data" object. This makes it easy to process
the "device address" derived from the device-tree.
And since I have introduced "u32 da", I had to modify the tables as well and remove the hard-coded "da" calculation from the "tcm_mem_map" function.
As all of this is connected, I have kept it in the same patch. No new functionality is added; it is just code refactoring.
> > +
> >
> > /* count per core tcm banks */
> > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > enum rpu_tcm_comb tcm_mode)
> > {
> > struct device *dev = cluster->dev;
> > + struct device_node *np;
> > struct zynqmp_r5_core *r5_core;
> > int ret, i;
> >
> > - ret = zynqmp_r5_get_tcm_node(cluster);
> > + /*
> > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > + */
> > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > + if (ret) {
> > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > + if (np) {
>
> Why was this check added?
We want to maintain backward compatibility with the previous bindings only for the zynqmp platform.
So, the hard-coded table is used only for the "zynqmp" platform, and only if getting the "reg" information from the device-tree fails.
If the node is not compatible with "xlnx,zynqmp-r5f", then it is a new platform and we must not use the hard-coded
table; instead, we should fail.
> So far there are too many unanswered questions with this patchset and as such I
> will stop here.
No problem. Please let me know if you have any further questions.
> Mathieu
>
> > + ret = zynqmp_r5_get_tcm_node(cluster);
> > + } else {
> > + dev_err(dev, "tcm not found\n");
> > + return -EINVAL;
> > + }
> > + }
> > +
> > if (ret < 0) {
> > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > return ret;
> > --
> > 2.25.1
> >
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-06 22:02 ` Tanmay Shah
@ 2023-09-07 17:23 ` Tanmay Shah
2023-09-07 18:08 ` Mathieu Poirier
1 sibling, 0 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-09-07 17:23 UTC (permalink / raw)
To: Mathieu Poirier
Cc: linux-remoteproc@vger.kernel.org, devicetree@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Bjorn Andersson, Rob Herring,
Krzysztof Kozlowski, Simek, Michal, Conor Dooley,
Pandey, Radhey Shyam, Levinsky, Ben
On 9/6/23 5:02 PM, Tanmay Shah wrote:
> Hi Mathieu,
>
> Thanks for reviews. Please find my comments below.
>
>
> On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> > Hi Tanmay,
> >
> > On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > > Use new dt bindings to get TCM address and size
> > > information. Also make sure that driver stays
> > > compatible with previous device-tree bindings.
> > > So, if TCM information isn't available in device-tree
> > > for zynqmp platform, hard-coded address of TCM will
> > > be used.
> > >
> > > New platforms that are compatible with this
> > > driver must add TCM support in device-tree as per new
> > > bindings.
> > >
> > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > ---
> > > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > > 1 file changed, 221 insertions(+), 58 deletions(-)
> > >
> > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > index feca6de68da2..4eb62eb545c2 100644
> > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > > * struct mem_bank_data - Memory Bank description
> > > *
> > > * @addr: Start address of memory bank
> > > + * @da: device address for this tcm bank
> > > * @size: Size of Memory bank
> > > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > > * @bank_name: name of the bank for remoteproc framework
> > > */
> > > struct mem_bank_data {
> > > - phys_addr_t addr;
> > > - size_t size;
> > > + u32 addr;
> > > + u32 da;
> > > + u32 size;
> >
> > Why are the types of @addr and @size changed?
>
> So, the R5 can access a 32-bit address range only; I had missed this before.
>
> In the device-tree bindings I am keeping address-cells and size-cells as 2.
>
> So, out of the 64 bits, only 32 bits will be used to get the address of the TCM. The same goes for the size.
>
> This motivated me to change the type of the @addr and @size fields. It doesn't have any side effects.
>
>
> >
> > > u32 pm_domain_id;
> > > - char *bank_name;
> > > + u32 pm_domain_id2;
> > > + char bank_name[32];
> >
> > Same
>
> Now we have a "reg-names" property in the dts, so when it is available, I try to use it.
>
> So, instead of keeping a simple pointer, I copy the name from "struct resource"; that is why I changed bank_name
> from a pointer to an array.
>
>
> >
> > > };
> > >
> > > /**
> > > @@ -75,11 +79,17 @@ struct mbox_info {
> > > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > > * accepted for system-dt specifications and upstreamed in linux kernel
> > > */
> > > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> >
> > Here the device address for btcm0 is 0x20000 while in the cover letter it is
> > 0x2000.
>
> Thanks for catching this. This is actually a typo in the cover-letter; it should be 0x20000 there.
>
> >
> > > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> >
> > Same
>
> Same here: it should be 0x20000 in the cover-letter.
>
> >
> > > +};
> > > +
> > > +/* TCM 128KB each */
> > > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > > };
> > >
> > > /**
> > > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > > struct rproc_mem_entry *mem)
> > > {
> > > iounmap((void __iomem *)mem->va);
> > > +
> >
> > Spurious change
> Sure, I will remove it.
> >
> > > return 0;
> > > }
> > >
> > > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > > /* clear TCMs */
> > > memset_io(va, 0, mem->len);
> > >
> > > - /*
> > > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > > - * while on the Linux side they are at 0xffexxxxx.
> > > - *
> > > - * Zero out the high 12 bits of the address. This will give
> > > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > > - */
> > > - mem->da &= 0x000fffff;
> > > -
> > > - /*
> > > - * TCM Banks 1A and 1B still have to be translated.
> > > - *
> > > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > > - * 0xffeb0000) and convert to the expected relative addresses
> > > - * (0x0 and 0x20000).
> > > - */
> > > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > > - mem->da -= 0x90000;
> > > -
> > > - /* if translated TCM bank address is not valid report error */
> > > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > > - return -EINVAL;
> > > - }
> > > return 0;
> > > }
> > >
> > > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > u32 pm_domain_id;
> > > size_t bank_size;
> > > char *bank_name;
> > > + u32 da;
> > >
> > > r5_core = rproc->priv;
> > > dev = r5_core->dev;
> > > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > bank_name = r5_core->tcm_banks[i]->bank_name;
> > > bank_size = r5_core->tcm_banks[i]->size;
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + da = r5_core->tcm_banks[i]->da;
> > >
> > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > bank_name, bank_addr, bank_size);
> > >
> > > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > - bank_size, bank_addr,
> > > + bank_size, da,
> > > tcm_mem_map, tcm_mem_unmap,
> > > bank_name);
> > > if (!rproc_mem) {
> > > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > */
> > > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > {
> > > + u32 pm_domain_id, da, pm_domain_id2;
> > > struct rproc_mem_entry *rproc_mem;
> > > struct zynqmp_r5_core *r5_core;
> > > int i, num_banks, ret;
> > > - phys_addr_t bank_addr;
> > > - size_t bank_size = 0;
> > > + u32 bank_size = 0;
> > > struct device *dev;
> > > - u32 pm_domain_id;
> > > char *bank_name;
> > > + u32 bank_addr;
> > >
> > > r5_core = rproc->priv;
> > > dev = r5_core->dev;
> > > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > * So, Enable each TCM block individually, but add their size
> > > * to create contiguous memory region.
> > > */
> > > - bank_addr = r5_core->tcm_banks[0]->addr;
> > > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > > -
> > > for (i = 0; i < num_banks; i++) {
> > > - bank_size += r5_core->tcm_banks[i]->size;
> > > + bank_addr = r5_core->tcm_banks[i]->addr;
> > > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > > + bank_size = r5_core->tcm_banks[i]->size;
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > + da = r5_core->tcm_banks[i]->da;
> > > +
> > > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > > + bank_name, bank_addr, bank_size);
> > >
> > > /* Turn on each TCM bank individually */
> > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > goto release_tcm_lockstep;
> > > }
> > > - }
> > >
> > > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > > - bank_name, bank_addr, bank_size);
> > > -
> > > - /* Register TCM address range, TCM map and unmap functions */
> > > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > - bank_size, bank_addr,
> > > - tcm_mem_map, tcm_mem_unmap,
> > > - bank_name);
> > > - if (!rproc_mem) {
> > > - ret = -ENOMEM;
> > > - goto release_tcm_lockstep;
> > > - }
> > > + /* Turn on each TCM bank individually */
> > > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > + if (ret < 0) {
> > > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > > + goto release_tcm_lockstep;
> > > + }
> > >
> > > - /* If registration is success, add carveouts */
> > > - rproc_add_carveout(rproc, rproc_mem);
> > > + /* Register TCM address range, TCM map and unmap functions */
> > > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > + bank_size, da,
> > > + tcm_mem_map, tcm_mem_unmap,
> > > + bank_name);
> > > + if (!rproc_mem) {
> > > + ret = -ENOMEM;
> > > + goto release_tcm_lockstep;
> > > + }
> > > +
> > > + rproc_add_carveout(rproc, rproc_mem);
> > > + }
> > >
> > > return 0;
> > >
> > > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > for (i--; i >= 0; i--) {
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > zynqmp_pm_release_node(pm_domain_id);
> > > + if (pm_domain_id2) {
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > + zynqmp_pm_release_node(pm_domain_id2);
> > > + }
> > > }
> > > +
> > > return ret;
> > > }
> > >
> > > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > */
> > > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > {
> > > + u32 pm_domain_id, pm_domain_id2;
> > > struct zynqmp_r5_core *r5_core;
> > > - u32 pm_domain_id;
> > > int i;
> > >
> > > r5_core = rproc->priv;
> > >
> > > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > if (zynqmp_pm_release_node(pm_domain_id))
> > > dev_warn(r5_core->dev,
> > > "can't turn off TCM bank 0x%x", pm_domain_id);
> > > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > > + dev_warn(r5_core->dev,
> > > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > > + pm_domain_id, pm_domain_id2);
> > > }
> > >
> > > return 0;
> > > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > return ERR_PTR(ret);
> > > }
> > >
> > > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > > +{
> > > + int i, j, tcm_bank_count, ret = -EINVAL;
> > > + struct zynqmp_r5_core *r5_core;
> > > + struct of_phandle_args out_arg;
> > > + struct platform_device *cpdev;
> > > + struct resource *res = NULL;
> > > + u64 abs_addr = 0, size = 0;
> > > + struct mem_bank_data *tcm;
> > > + struct device_node *np, *np1 = NULL;
> > > + struct device *dev;
> > > +
> > > + for (i = 0; i < cluster->core_count; i++) {
> > > + r5_core = cluster->r5_cores[i];
> > > + dev = r5_core->dev;
> > > + np = dev_of_node(dev);
> > > +
> > > + /* we have address cell 2 and size cell as 2 */
> > > + ret = of_property_count_elems_of_size(np, "reg",
> > > + 4 * sizeof(u32));
> > > + if (ret <= 0) {
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + tcm_bank_count = ret;
> > > +
> > > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > > + sizeof(struct mem_bank_data *),
> > > + GFP_KERNEL);
> > > + if (!r5_core->tcm_banks) {
> > > + ret = -ENOMEM;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + r5_core->tcm_bank_count = tcm_bank_count;
> > > + for (j = 0; j < tcm_bank_count; j++) {
> > > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > > + if (!tcm) {
> > > + ret = -ENOMEM;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + r5_core->tcm_banks[j] = tcm;
> > > + /* get tcm address without translation */
> > > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > > + if (ret) {
> > > + dev_err(dev, "failed to get reg property\n");
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + /*
> > > + * remote processor can address only 32 bits
> > > + * so convert 64-bits into 32-bits. This will discard
> > > + * any unwanted upper 32-bits.
> > > + */
> > > + tcm->da = (u32)abs_addr;
> > > + tcm->size = (u32)size;
> > > +
> > > + cpdev = to_platform_device(dev);
> > > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > > + if (!res) {
> > > + dev_err(dev, "failed to get tcm resource\n");
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + tcm->addr = (u32)res->start;
> > > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > > + if (!res) {
> > > + dev_err(dev, "failed to request tcm resource\n");
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > > + np = of_node_get(dev_of_node(dev));
> > > + /*
> > > + * In dt power-domains are described in this order:
> > > + * <RPU core>, <atcm>, <btcm>
> > > + * parse power domains for tcm accordingly
> > > + */
> > > + of_parse_phandle_with_args(np, "power-domains",
> > > + "#power-domain-cells",
> > > + j + 1, &out_arg);
> > > + tcm->pm_domain_id = out_arg.args[0];
> > > + of_node_put(out_arg.np);
> > > +
> > > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > > +
> > > + if (cluster->mode == SPLIT_MODE)
> > > + continue;
> > > +
> > > + /* Turn on core-1's TCM as well */
> > > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > > + r5_core->np);
> > > + if (!np1) {
> > > + of_node_put(np1);
> > > + np1 = NULL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + of_parse_phandle_with_args(np1, "power-domains",
> > > + "#power-domain-cells",
> > > + j + 1, &out_arg);
> > > + tcm->pm_domain_id2 = out_arg.args[0];
> > > + of_node_put(out_arg.np);
> > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > > + }
> > > + }
> > > +
> > > + return 0;
> > > +
> > > +fail_tcm:
> > > + while (i >= 0) {
> > > + r5_core = cluster->r5_cores[i];
> > > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > > + if (!r5_core->tcm_banks)
> > > + continue;
> > > + tcm = r5_core->tcm_banks[j];
> > > + kfree(tcm);
> > > + }
> > > + kfree(r5_core->tcm_banks);
> > > + i--;
> > > + }
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * zynqmp_r5_get_tcm_node()
> > > * Ideally this function should parse tcm node and store information
> > > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > */
> > > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > > {
> > > + const struct mem_bank_data *zynqmp_tcm_banks;
> > > struct device *dev = cluster->dev;
> > > struct zynqmp_r5_core *r5_core;
> > > int tcm_bank_count, tcm_node;
> > > int i, j;
> > >
> > > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > > + if (cluster->mode == SPLIT_MODE) {
> > > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > > + } else {
> > > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > > + }
> >
> > Why are the changes to get TCM bank information from the DT and enhancement to
> > support lockstep mode in the same patch?
>
> Actually, TCM in lockstep mode was supported before as well; it's just that I was using the same table for lockstep mode.
>
> However, now I have two separate tables, one for split mode and one for lockstep mode.
>
> I had to do this because I have introduced the "da" field in the "struct mem_bank_data" object. This makes it easy to process
> the "device address" derived from the device-tree.
>
> And since I have introduced "u32 da", I had to modify the tables as well and remove the hard-coded "da" calculation from the "tcm_mem_map" function.
>
> As all of this is connected, I have kept it in the same patch. No new functionality is added; it is just code refactoring.
>
> > > +
> > >
> > > /* count per core tcm banks */
> > > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > > enum rpu_tcm_comb tcm_mode)
> > > {
> > > struct device *dev = cluster->dev;
> > > + struct device_node *np;
> > > struct zynqmp_r5_core *r5_core;
> > > int ret, i;
> > >
> > > - ret = zynqmp_r5_get_tcm_node(cluster);
> > > + /*
> > > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > > + */
> > > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > > + if (ret) {
> > > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > > + if (np) {
> >
> > Why was this check added?
>
> We want to maintain backward compatibility with the previous bindings only for the zynqmp platform.
>
> So, the hard-coded table is used only for the "zynqmp" platform, and only if getting the "reg" information from the device-tree fails.
>
> If the node is not compatible with "xlnx,zynqmp-r5f", then it is a new platform and we must not use the hard-coded
> table; instead, we should fail.
>
>
> > So far there are too many unanswered questions with this patchset and as such I
> > will stop here.
>
> No problem. Please let me know if you have any further questions.
Hi Mathieu,
Did you want me to incorporate all the comments I mentioned into the driver and send a new patchset, or can we continue the review?
I am fine either way. Let me know.
Thanks,
Tanmay
>
>
> > Mathieu
> >
> > > + ret = zynqmp_r5_get_tcm_node(cluster);
> > > + } else {
> > > + dev_err(dev, "tcm not found\n");
> > > + return -EINVAL;
> > > + }
> > > + }
> > > +
> > > if (ret < 0) {
> > > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > > return ret;
> > > --
> > > 2.25.1
> > >
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-06 22:02 ` Tanmay Shah
2023-09-07 17:23 ` Tanmay Shah
@ 2023-09-07 18:08 ` Mathieu Poirier
2023-09-07 23:11 ` Tanmay Shah
2023-09-25 16:33 ` Tanmay Shah
1 sibling, 2 replies; 16+ messages in thread
From: Mathieu Poirier @ 2023-09-07 18:08 UTC (permalink / raw)
To: Tanmay Shah
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On Wed, Sep 06, 2023 at 05:02:40PM -0500, Tanmay Shah wrote:
> Hi Mathieu,
>
> Thanks for reviews. Please find my comments below.
>
I took another look after reading your comment and found more problems...
>
> On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> > Hi Tanmay,
> >
> > On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > > Use new dt bindings to get TCM address and size
> > > information. Also make sure that driver stays
> > > compatible with previous device-tree bindings.
> > > So, if TCM information isn't available in device-tree
> > > for zynqmp platform, hard-coded address of TCM will
> > > be used.
> > >
> > > New platforms that are compatible with this
> > > driver must add TCM support in device-tree as per new
> > > bindings.
> > >
> > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > ---
> > > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > > 1 file changed, 221 insertions(+), 58 deletions(-)
> > >
> > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > index feca6de68da2..4eb62eb545c2 100644
> > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > > * struct mem_bank_data - Memory Bank description
> > > *
> > > * @addr: Start address of memory bank
> > > + * @da: device address for this tcm bank
> > > * @size: Size of Memory bank
> > > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > > * @bank_name: name of the bank for remoteproc framework
> > > */
> > > struct mem_bank_data {
> > > - phys_addr_t addr;
> > > - size_t size;
> > > + u32 addr;
> > > + u32 da;
> > > + u32 size;
> >
> > Why are the types of @addr and @size changed?
>
> So, the R5 can access a 32-bit address range only; I had missed this before.
>
> In the device-tree bindings I am keeping address-cells and size-cells as 2.
>
> So, out of the 64 bits, only 32 bits will be used to get the address of the TCM. The same goes for the size.
>
> This motivated me to change the type of the @addr and @size fields. It doesn't have any side effects.
It doesn't have an effect but it also doesn't need to be in this patch,
especially since it is not documented.
This patch needs to be broken into 3 parts:
1) One patch that deals with the addition of the static mem_bank_data for
lockstep mode.
2) One patch that deals with the addition of ->pm_domain_id2 and the potential
bug I may have highlighted below.
3) One patch that deals with extracting the TCM information from the DT.
Everything else needs to be in another patch.
>
>
> >
> > > u32 pm_domain_id;
> > > - char *bank_name;
> > > + u32 pm_domain_id2;
> > > + char bank_name[32];
> >
> > Same
>
> Now we have a "reg-names" property in the dts, so when it is available, I try to use it.
>
> So, instead of keeping a simple pointer, I copy the name from "struct resource"; that is why I changed bank_name
> from a pointer to an array.
>
I'll look at that part again when the rest of my comments are addressed.
>
> >
> > > };
> > >
> > > /**
> > > @@ -75,11 +79,17 @@ struct mbox_info {
> > > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > > * accepted for system-dt specifications and upstreamed in linux kernel
> > > */
> > > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> >
> > Here the device address for btcm0 is 0x20000 while in the cover letter it is
> > 0x2000.
>
> Thanks for catching this. This is actually a typo in the cover-letter; it should be 0x20000 there.
>
> >
> > > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> >
> > Same
>
> Same here: it should be 0x20000 in the cover-letter.
>
> >
> > > +};
> > > +
> > > +/* TCM 128KB each */
> > > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > > };
> > >
> > > /**
> > > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > > struct rproc_mem_entry *mem)
> > > {
> > > iounmap((void __iomem *)mem->va);
> > > +
> >
> > Spurious change
> Sure, I will remove it.
> >
> > > return 0;
> > > }
> > >
> > > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > > /* clear TCMs */
> > > memset_io(va, 0, mem->len);
> > >
> > > - /*
> > > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > > - * while on the Linux side they are at 0xffexxxxx.
> > > - *
> > > - * Zero out the high 12 bits of the address. This will give
> > > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > > - */
> > > - mem->da &= 0x000fffff;
> > > -
> > > - /*
> > > - * TCM Banks 1A and 1B still have to be translated.
> > > - *
> > > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > > - * 0xffeb0000) and convert to the expected relative addresses
> > > - * (0x0 and 0x20000).
> > > - */
> > > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > > - mem->da -= 0x90000;
> > > -
> > > - /* if translated TCM bank address is not valid report error */
> > > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > > - return -EINVAL;
> > > - }
> > > return 0;
> > > }
> > >
> > > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > u32 pm_domain_id;
> > > size_t bank_size;
> > > char *bank_name;
> > > + u32 da;
> > >
> > > r5_core = rproc->priv;
> > > dev = r5_core->dev;
> > > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > bank_name = r5_core->tcm_banks[i]->bank_name;
> > > bank_size = r5_core->tcm_banks[i]->size;
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + da = r5_core->tcm_banks[i]->da;
> > >
> > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > bank_name, bank_addr, bank_size);
> > >
> > > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > - bank_size, bank_addr,
> > > + bank_size, da,
> > > tcm_mem_map, tcm_mem_unmap,
> > > bank_name);
> > > if (!rproc_mem) {
> > > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > */
> > > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > {
> > > + u32 pm_domain_id, da, pm_domain_id2;
> > > struct rproc_mem_entry *rproc_mem;
> > > struct zynqmp_r5_core *r5_core;
> > > int i, num_banks, ret;
> > > - phys_addr_t bank_addr;
> > > - size_t bank_size = 0;
> > > + u32 bank_size = 0;
Why is this changed to a u32 when rproc_mem_entry_init() takes a size_t for
@len? This is especially concerning since add_tcm_carveout_split_mode() still
uses a size_t.
> > > struct device *dev;
> > > - u32 pm_domain_id;
> > > char *bank_name;
> > > + u32 bank_addr;
> > >
> > > r5_core = rproc->priv;
> > > dev = r5_core->dev;
> > > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > * So, Enable each TCM block individually, but add their size
> > > * to create contiguous memory region.
> > > */
> > > - bank_addr = r5_core->tcm_banks[0]->addr;
> > > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > > -
> > > for (i = 0; i < num_banks; i++) {
> > > - bank_size += r5_core->tcm_banks[i]->size;
> > > + bank_addr = r5_core->tcm_banks[i]->addr;
> > > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > > + bank_size = r5_core->tcm_banks[i]->size;
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > + da = r5_core->tcm_banks[i]->da;
> > > +
> > > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > > + bank_name, bank_addr, bank_size);
> > >
> > > /* Turn on each TCM bank individually */
> > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > goto release_tcm_lockstep;
> > > }
> > > - }
> > >
> > > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > > - bank_name, bank_addr, bank_size);
> > > -
> > > - /* Register TCM address range, TCM map and unmap functions */
> > > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > - bank_size, bank_addr,
> > > - tcm_mem_map, tcm_mem_unmap,
> > > - bank_name);
> > > - if (!rproc_mem) {
> > > - ret = -ENOMEM;
> > > - goto release_tcm_lockstep;
> > > - }
> > > + /* Turn on each TCM bank individually */
> > > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > + if (ret < 0) {
> > > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > > + goto release_tcm_lockstep;
> > > + }
> > >
> > > - /* If registration is success, add carveouts */
> > > - rproc_add_carveout(rproc, rproc_mem);
> > > + /* Register TCM address range, TCM map and unmap functions */
> > > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > + bank_size, da,
> > > + tcm_mem_map, tcm_mem_unmap,
> > > + bank_name);
The original code adds a single carveout whereas this code adds one for each
memory bank. Is this done on purpose or is it a bug? No comment is provided.
> > > + if (!rproc_mem) {
> > > + ret = -ENOMEM;
> > > + goto release_tcm_lockstep;
> > > + }
> > > +
> > > + rproc_add_carveout(rproc, rproc_mem);
> > > + }
> > >
> > > return 0;
> > >
> > > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > for (i--; i >= 0; i--) {
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > zynqmp_pm_release_node(pm_domain_id);
> > > + if (pm_domain_id2) {
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > + zynqmp_pm_release_node(pm_domain_id2);
> > > + }
> > > }
> > > +
> > > return ret;
> > > }
> > >
> > > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > */
> > > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > {
> > > + u32 pm_domain_id, pm_domain_id2;
> > > struct zynqmp_r5_core *r5_core;
> > > - u32 pm_domain_id;
> > > int i;
> > >
> > > r5_core = rproc->priv;
> > >
> > > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > if (zynqmp_pm_release_node(pm_domain_id))
> > > dev_warn(r5_core->dev,
> > > "can't turn off TCM bank 0x%x", pm_domain_id);
> > > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > > + dev_warn(r5_core->dev,
> > > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > > + pm_domain_id, pm_domain_id2);
> > > }
> > >
> > > return 0;
> > > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > return ERR_PTR(ret);
> > > }
> > >
> > > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > > +{
> > > + int i, j, tcm_bank_count, ret = -EINVAL;
> > > + struct zynqmp_r5_core *r5_core;
> > > + struct of_phandle_args out_arg;
> > > + struct platform_device *cpdev;
> > > + struct resource *res = NULL;
> > > + u64 abs_addr = 0, size = 0;
> > > + struct mem_bank_data *tcm;
> > > + struct device_node *np, *np1 = NULL;
> > > + struct device *dev;
As far as I can tell @ret, @res and @np1 don't need initialisation. It may also
be the case for @abs_addr and @size.
> > > +
> > > + for (i = 0; i < cluster->core_count; i++) {
> > > + r5_core = cluster->r5_cores[i];
> > > + dev = r5_core->dev;
> > > + np = dev_of_node(dev);
> > > +
> > > + /* we have address cell 2 and size cell as 2 */
> > > + ret = of_property_count_elems_of_size(np, "reg",
> > > + 4 * sizeof(u32));
> > > + if (ret <= 0) {
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + tcm_bank_count = ret;
> > > +
> > > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > > + sizeof(struct mem_bank_data *),
> > > + GFP_KERNEL);
> > > + if (!r5_core->tcm_banks) {
> > > + ret = -ENOMEM;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + r5_core->tcm_bank_count = tcm_bank_count;
> > > + for (j = 0; j < tcm_bank_count; j++) {
> > > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > > + if (!tcm) {
> > > + ret = -ENOMEM;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + r5_core->tcm_banks[j] = tcm;
> > > + /* get tcm address without translation */
> > > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > > + if (ret) {
> > > + dev_err(dev, "failed to get reg property\n");
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + /*
> > > + * remote processor can address only 32 bits
> > > + * so convert 64-bits into 32-bits. This will discard
> > > + * any unwanted upper 32-bits.
> > > + */
> > > + tcm->da = (u32)abs_addr;
> > > + tcm->size = (u32)size;
> > > +
> > > + cpdev = to_platform_device(dev);
> > > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > > + if (!res) {
> > > + dev_err(dev, "failed to get tcm resource\n");
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + tcm->addr = (u32)res->start;
> > > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > > + if (!res) {
> > > + dev_err(dev, "failed to request tcm resource\n");
> > > + ret = -EINVAL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > > + np = of_node_get(dev_of_node(dev));
> > > + /*
> > > + * In dt power-domains are described in this order:
> > > + * <RPU core>, <atcm>, <btcm>
> > > + * parse power domains for tcm accordingly
> > > + */
> > > + of_parse_phandle_with_args(np, "power-domains",
> > > + "#power-domain-cells",
> > > + j + 1, &out_arg);
> > > + tcm->pm_domain_id = out_arg.args[0];
> > > + of_node_put(out_arg.np);
> > > +
> > > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > > +
> > > + if (cluster->mode == SPLIT_MODE)
> > > + continue;
> > > +
> > > + /* Turn on core-1's TCM as well */
> > > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > > + r5_core->np);
> > > + if (!np1) {
> > > + of_node_put(np1);
> > > + np1 = NULL;
> > > + goto fail_tcm;
> > > + }
> > > +
> > > + of_parse_phandle_with_args(np1, "power-domains",
> > > + "#power-domain-cells",
> > > + j + 1, &out_arg);
> > > + tcm->pm_domain_id2 = out_arg.args[0];
> > > + of_node_put(out_arg.np);
> > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > > + }
> > > + }
> > > +
> > > + return 0;
> > > +
> > > +fail_tcm:
> > > + while (i >= 0) {
> > > + r5_core = cluster->r5_cores[i];
> > > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > > + if (!r5_core->tcm_banks)
> > > + continue;
> > > + tcm = r5_core->tcm_banks[j];
> > > + kfree(tcm);
> > > + }
> > > + kfree(r5_core->tcm_banks);
> > > + i--;
> > > + }
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * zynqmp_r5_get_tcm_node()
> > > * Ideally this function should parse tcm node and store information
> > > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > */
> > > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > > {
> > > + const struct mem_bank_data *zynqmp_tcm_banks;
> > > struct device *dev = cluster->dev;
> > > struct zynqmp_r5_core *r5_core;
> > > int tcm_bank_count, tcm_node;
> > > int i, j;
> > >
> > > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > > + if (cluster->mode == SPLIT_MODE) {
> > > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > > + } else {
> > > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > > + }
> >
> > Why are the changes to get TCM bank information from the DT and enhancement to
> > support lockstep mode in the same patch?
>
> Actually, TCM in lockstep mode was supported before as well; I was just using the same table for lockstep mode.
>
> Now, however, there are two tables, one for split mode and one for lockstep mode.
>
> I had to do this because I have introduced the "da" field in the "struct mem_bank_data" object, which makes it
> easy to process the "device address" derived from the device-tree.
>
> And since I have introduced "u32 da", I had to modify the table as well and remove the hardcoded "da" calculation in the "tcm_mem_map" function.
>
> As all of this is connected, I have kept it all in the same patch. No new functionality is added; this is just code refactoring.
>
> > > +
> > >
> > > /* count per core tcm banks */
> > > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > > enum rpu_tcm_comb tcm_mode)
> > > {
> > > struct device *dev = cluster->dev;
> > > + struct device_node *np;
> > > struct zynqmp_r5_core *r5_core;
> > > int ret, i;
> > >
> > > - ret = zynqmp_r5_get_tcm_node(cluster);
> > > + /*
> > > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > > + */
> > > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > > + if (ret) {
> > > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > > + if (np) {
> >
> > Why was this check added?
>
> We want to maintain backward compatibility with previous bindings only for the zynqmp platform.
>
That check has nothing to do with backward compatibility.
> So, the hardcoded table is used only for the "zynqmp" platform, if getting the "reg" information from the device-tree fails.
>
> If the node is not compatible with "xlnx,zynqmp-r5f" then it is a new platform, and we must not use the
> hardcoded table; instead we should fail.
>
So this is the real reason for the check, but zynqmp-r5f is still the only
platform supported by this driver. Please remove and re-introduce if/when a new
platform is added.
>
> > So far there are too many unanswered questions with this patchset and as such I
> > will stop here.
>
> No problem. Please let me know if you have any further questions.
>
>
> > Mathieu
> >
> > > + ret = zynqmp_r5_get_tcm_node(cluster);
> > > + } else {
> > > + dev_err(dev, "tcm not found\n");
> > > + return -EINVAL;
> > > + }
> > > + }
> > > +
> > > if (ret < 0) {
> > > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > > return ret;
> > > --
> > > 2.25.1
> > >
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-07 18:08 ` Mathieu Poirier
@ 2023-09-07 23:11 ` Tanmay Shah
2023-09-08 15:12 ` Mathieu Poirier
2023-09-25 16:33 ` Tanmay Shah
1 sibling, 1 reply; 16+ messages in thread
From: Tanmay Shah @ 2023-09-07 23:11 UTC (permalink / raw)
To: Mathieu Poirier
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On 9/7/23 1:08 PM, Mathieu Poirier wrote:
> On Wed, Sep 06, 2023 at 05:02:40PM -0500, Tanmay Shah wrote:
> > Hi Mathieu,
> >
> > Thanks for reviews. Please find my comments below.
> >
>
> I took another look after reading your comment and found more problems...
>
> >
> > On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> > > Hi Tanmay,
> > >
> > > On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > > > Use new dt bindings to get TCM address and size
> > > > information. Also make sure that driver stays
> > > > compatible with previous device-tree bindings.
> > > > So, if TCM information isn't available in device-tree
> > > > for zynqmp platform, hard-coded address of TCM will
> > > > be used.
> > > >
> > > > New platforms that are compatible with this
> > > > driver must add TCM support in device-tree as per new
> > > > bindings.
> > > >
> > > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > > ---
> > > > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > > > 1 file changed, 221 insertions(+), 58 deletions(-)
> > > >
> > > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > index feca6de68da2..4eb62eb545c2 100644
> > > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > > > * struct mem_bank_data - Memory Bank description
> > > > *
> > > > * @addr: Start address of memory bank
> > > > + * @da: device address for this tcm bank
> > > > * @size: Size of Memory bank
> > > > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > > > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > > > * @bank_name: name of the bank for remoteproc framework
> > > > */
> > > > struct mem_bank_data {
> > > > - phys_addr_t addr;
> > > > - size_t size;
> > > > + u32 addr;
> > > > + u32 da;
> > > > + u32 size;
> > >
> > > Why are the types of @addr and @size changed?
> >
> > So, the R5 can only access a 32-bit address range. I had missed this before.
> >
> > In the device-tree bindings I am keeping address-cells and size-cells as 2.
> >
> > So, out of the 64 bits, only 32 bits will be used for the TCM address, and the same goes for the size.
> >
> > This motivated me to change the type of the @addr and @size fields. It doesn't have any side effects.
>
> It doesn't have an effect but it also doesn't need to be in this patch,
> especially since it is not documented.
>
>
> This patch needs to be broken in 3 parts:
>
> 1) One patch that deals with the addition of the static mem_bank_data for
> lockstep mode.
>
> 2) One patch that deals with the addition of ->pm_domain_id2 and the potential
> bug I may have highlighted below.
>
> 3) One patch that deals with extracting the TCM information from the DT.
> Everything else needs to be in another patch.
Thanks, Mathieu, for the further reviews.
OK, I agree with this sequence. I will send them as separate patches instead of having them in the same series.
Once I get an ack on the first two, it will be much easier for me to rebase on those two patches, instead of
maintaining the whole series.
Thanks,
Tanmay
>
> >
> >
> > >
> > > > u32 pm_domain_id;
> > > > - char *bank_name;
> > > > + u32 pm_domain_id2;
> > > > + char bank_name[32];
> > >
> > > Same
> >
> > Now we have the "reg-names" property in the dts, so when it is available I try to use it.
> >
> > So, instead of keeping a simple pointer, I copy the name from "struct resource"; that is why I changed
> > bank_name from a pointer to an array.
> >
>
> I'll look at that part again when the rest of my comments are addressed.
>
> >
> > >
> > > > };
> > > >
> > > > /**
> > > > @@ -75,11 +79,17 @@ struct mbox_info {
> > > > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > > > * accepted for system-dt specifications and upstreamed in linux kernel
> > > > */
> > > > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > > > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > > > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > > > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > > > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > > > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > > > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > > > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> > >
> > > Here the device address for btcm0 is 0x20000 while in the cover letter it is
> > > 0x2000.
> >
> > Thanks for catching this. This is actually a typo in the cover letter; it should be 0x20000 there.
> >
> > >
> > > > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > > > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> > >
> > > Same
> >
> > Same here: it should be 0x20000 in the cover letter.
> >
> > >
> > > > +};
> > > > +
> > > > +/* TCM 128KB each */
> > > > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > > > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > > > };
> > > >
> > > > /**
> > > > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > > > struct rproc_mem_entry *mem)
> > > > {
> > > > iounmap((void __iomem *)mem->va);
> > > > +
> > >
> > > Spurious change
> > Sure, I will remove it.
> > >
> > > > return 0;
> > > > }
> > > >
> > > > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > > > /* clear TCMs */
> > > > memset_io(va, 0, mem->len);
> > > >
> > > > - /*
> > > > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > > > - * while on the Linux side they are at 0xffexxxxx.
> > > > - *
> > > > - * Zero out the high 12 bits of the address. This will give
> > > > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > > > - */
> > > > - mem->da &= 0x000fffff;
> > > > -
> > > > - /*
> > > > - * TCM Banks 1A and 1B still have to be translated.
> > > > - *
> > > > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > > > - * 0xffeb0000) and convert to the expected relative addresses
> > > > - * (0x0 and 0x20000).
> > > > - */
> > > > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > > > - mem->da -= 0x90000;
> > > > -
> > > > - /* if translated TCM bank address is not valid report error */
> > > > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > > > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > > > - return -EINVAL;
> > > > - }
> > > > return 0;
> > > > }
> > > >
> > > > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > u32 pm_domain_id;
> > > > size_t bank_size;
> > > > char *bank_name;
> > > > + u32 da;
> > > >
> > > > r5_core = rproc->priv;
> > > > dev = r5_core->dev;
> > > > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > bank_size = r5_core->tcm_banks[i]->size;
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + da = r5_core->tcm_banks[i]->da;
> > > >
> > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > bank_name, bank_addr, bank_size);
> > > >
> > > > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > - bank_size, bank_addr,
> > > > + bank_size, da,
> > > > tcm_mem_map, tcm_mem_unmap,
> > > > bank_name);
> > > > if (!rproc_mem) {
> > > > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > */
> > > > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > {
> > > > + u32 pm_domain_id, da, pm_domain_id2;
> > > > struct rproc_mem_entry *rproc_mem;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int i, num_banks, ret;
> > > > - phys_addr_t bank_addr;
> > > > - size_t bank_size = 0;
> > > > + u32 bank_size = 0;
>
> Why is this changed to a u32 when rproc_mem_entry_init() takes a size_t for
> @len? This is especially concerning since add_tcm_carveout_split_mode() still
> uses a size_t.
>
> > > > struct device *dev;
> > > > - u32 pm_domain_id;
> > > > char *bank_name;
> > > > + u32 bank_addr;
> > > >
> > > > r5_core = rproc->priv;
> > > > dev = r5_core->dev;
> > > > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > * So, Enable each TCM block individually, but add their size
> > > > * to create contiguous memory region.
> > > > */
> > > > - bank_addr = r5_core->tcm_banks[0]->addr;
> > > > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > > > -
> > > > for (i = 0; i < num_banks; i++) {
> > > > - bank_size += r5_core->tcm_banks[i]->size;
> > > > + bank_addr = r5_core->tcm_banks[i]->addr;
> > > > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > + bank_size = r5_core->tcm_banks[i]->size;
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > + da = r5_core->tcm_banks[i]->da;
> > > > +
> > > > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > > > + bank_name, bank_addr, bank_size);
> > > >
> > > > /* Turn on each TCM bank individually */
> > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > > goto release_tcm_lockstep;
> > > > }
> > > > - }
> > > >
> > > > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > > > - bank_name, bank_addr, bank_size);
> > > > -
> > > > - /* Register TCM address range, TCM map and unmap functions */
> > > > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > - bank_size, bank_addr,
> > > > - tcm_mem_map, tcm_mem_unmap,
> > > > - bank_name);
> > > > - if (!rproc_mem) {
> > > > - ret = -ENOMEM;
> > > > - goto release_tcm_lockstep;
> > > > - }
> > > > + /* Turn on each TCM bank individually */
> > > > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > > > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > > + if (ret < 0) {
> > > > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > > > + goto release_tcm_lockstep;
> > > > + }
> > > >
> > > > - /* If registration is success, add carveouts */
> > > > - rproc_add_carveout(rproc, rproc_mem);
> > > > + /* Register TCM address range, TCM map and unmap functions */
> > > > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > + bank_size, da,
> > > > + tcm_mem_map, tcm_mem_unmap,
> > > > + bank_name);
>
> The original code adds a single carveout whereas this code adds one for each
> memory bank. Is this done on purpose or is it a bug? No comment is provided.
>
> > > > + if (!rproc_mem) {
> > > > + ret = -ENOMEM;
> > > > + goto release_tcm_lockstep;
> > > > + }
> > > > +
> > > > + rproc_add_carveout(rproc, rproc_mem);
> > > > + }
> > > >
> > > > return 0;
> > > >
> > > > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > for (i--; i >= 0; i--) {
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > zynqmp_pm_release_node(pm_domain_id);
> > > > + if (pm_domain_id2) {
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > + zynqmp_pm_release_node(pm_domain_id2);
> > > > + }
> > > > }
> > > > +
> > > > return ret;
> > > > }
> > > >
> > > > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > > */
> > > > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > > {
> > > > + u32 pm_domain_id, pm_domain_id2;
> > > > struct zynqmp_r5_core *r5_core;
> > > > - u32 pm_domain_id;
> > > > int i;
> > > >
> > > > r5_core = rproc->priv;
> > > >
> > > > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > if (zynqmp_pm_release_node(pm_domain_id))
> > > > dev_warn(r5_core->dev,
> > > > "can't turn off TCM bank 0x%x", pm_domain_id);
> > > > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > > > + dev_warn(r5_core->dev,
> > > > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > > > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > > > + pm_domain_id, pm_domain_id2);
> > > > }
> > > >
> > > > return 0;
> > > > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > return ERR_PTR(ret);
> > > > }
> > > >
> > > > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > > > +{
> > > > + int i, j, tcm_bank_count, ret = -EINVAL;
> > > > + struct zynqmp_r5_core *r5_core;
> > > > + struct of_phandle_args out_arg;
> > > > + struct platform_device *cpdev;
> > > > + struct resource *res = NULL;
> > > > + u64 abs_addr = 0, size = 0;
> > > > + struct mem_bank_data *tcm;
> > > > + struct device_node *np, *np1 = NULL;
> > > > + struct device *dev;
>
> As far as I can tell @ret, @res and @np1 don't need initialisation. It may also
> be the case for @abs_addr and @size.
>
> > > > +
> > > > + for (i = 0; i < cluster->core_count; i++) {
> > > > + r5_core = cluster->r5_cores[i];
> > > > + dev = r5_core->dev;
> > > > + np = dev_of_node(dev);
> > > > +
> > > > + /* we have address cell 2 and size cell as 2 */
> > > > + ret = of_property_count_elems_of_size(np, "reg",
> > > > + 4 * sizeof(u32));
> > > > + if (ret <= 0) {
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + tcm_bank_count = ret;
> > > > +
> > > > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > > > + sizeof(struct mem_bank_data *),
> > > > + GFP_KERNEL);
> > > > + if (!r5_core->tcm_banks) {
> > > > + ret = -ENOMEM;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + r5_core->tcm_bank_count = tcm_bank_count;
> > > > + for (j = 0; j < tcm_bank_count; j++) {
> > > > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > > > + if (!tcm) {
> > > > + ret = -ENOMEM;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + r5_core->tcm_banks[j] = tcm;
> > > > + /* get tcm address without translation */
> > > > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > > > + if (ret) {
> > > > + dev_err(dev, "failed to get reg property\n");
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + /*
> > > > + * remote processor can address only 32 bits
> > > > + * so convert 64-bits into 32-bits. This will discard
> > > > + * any unwanted upper 32-bits.
> > > > + */
> > > > + tcm->da = (u32)abs_addr;
> > > > + tcm->size = (u32)size;
> > > > +
> > > > + cpdev = to_platform_device(dev);
> > > > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > > > + if (!res) {
> > > > + dev_err(dev, "failed to get tcm resource\n");
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + tcm->addr = (u32)res->start;
> > > > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > > > + if (!res) {
> > > > + dev_err(dev, "failed to request tcm resource\n");
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > > > + np = of_node_get(dev_of_node(dev));
> > > > + /*
> > > > + * In dt power-domains are described in this order:
> > > > + * <RPU core>, <atcm>, <btcm>
> > > > + * parse power domains for tcm accordingly
> > > > + */
> > > > + of_parse_phandle_with_args(np, "power-domains",
> > > > + "#power-domain-cells",
> > > > + j + 1, &out_arg);
> > > > + tcm->pm_domain_id = out_arg.args[0];
> > > > + of_node_put(out_arg.np);
> > > > +
> > > > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > > > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > > > +
> > > > + if (cluster->mode == SPLIT_MODE)
> > > > + continue;
> > > > +
> > > > + /* Turn on core-1's TCM as well */
> > > > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > > > + r5_core->np);
> > > > + if (!np1) {
> > > > + of_node_put(np1);
> > > > + np1 = NULL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + of_parse_phandle_with_args(np1, "power-domains",
> > > > + "#power-domain-cells",
> > > > + j + 1, &out_arg);
> > > > + tcm->pm_domain_id2 = out_arg.args[0];
> > > > + of_node_put(out_arg.np);
> > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > > > + }
> > > > + }
> > > > +
> > > > + return 0;
> > > > +
> > > > +fail_tcm:
> > > > + while (i >= 0) {
> > > > + r5_core = cluster->r5_cores[i];
> > > > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > > > + if (!r5_core->tcm_banks)
> > > > + continue;
> > > > + tcm = r5_core->tcm_banks[j];
> > > > + kfree(tcm);
> > > > + }
> > > > + kfree(r5_core->tcm_banks);
> > > > + i--;
> > > > + }
> > > > +
> > > > + return ret;
> > > > +}
> > > > +
> > > > /**
> > > > * zynqmp_r5_get_tcm_node()
> > > > * Ideally this function should parse tcm node and store information
> > > > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > */
> > > > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > > > {
> > > > + const struct mem_bank_data *zynqmp_tcm_banks;
> > > > struct device *dev = cluster->dev;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int tcm_bank_count, tcm_node;
> > > > int i, j;
> > > >
> > > > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > > > + if (cluster->mode == SPLIT_MODE) {
> > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > > > + } else {
> > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > > > + }
> > >
> > > Why are the changes to get TCM bank information from the DT and enhancement to
> > > support lockstep mode in the same patch?
> >
> > Actually, TCM in lockstep mode was supported before as well; I was just using the same table for lockstep mode.
> >
> > Now, however, there are two tables, one for split mode and one for lockstep mode.
> >
> > I had to do this because I have introduced the "da" field in the "struct mem_bank_data" object, which makes it
> > easy to process the "device address" derived from the device-tree.
> >
> > And since I have introduced "u32 da", I had to modify the table as well and remove the hardcoded "da" calculation in the "tcm_mem_map" function.
> >
> > As all of this is connected, I have kept it all in the same patch. No new functionality is added; this is just code refactoring.
> >
> > > > +
> > > >
> > > > /* count per core tcm banks */
> > > > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > > > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > > > enum rpu_tcm_comb tcm_mode)
> > > > {
> > > > struct device *dev = cluster->dev;
> > > > + struct device_node *np;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int ret, i;
> > > >
> > > > - ret = zynqmp_r5_get_tcm_node(cluster);
> > > > + /*
> > > > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > > > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > > > + */
> > > > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > > > + if (ret) {
> > > > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > > > + if (np) {
> > >
> > > Why was this check added?
> >
> > We want to maintain backward compatibility with the previous bindings only for the zynqmp platform.
> >
>
> That check has nothing to do with backward compatibility.
>
> > So, the hardcoded table is used only for the "zynqmp" platform, if getting the "reg" information from the device tree fails.
> >
> > If the node is not compatible with "xlnx,zynqmp-r5f", then it is a new platform and we must not use the hardcoded
> >
> > table; instead we should fail.
> >
>
> So this is the real reason for the check, but zynqmp-r5f is still the only
> platform supported by this driver. Please remove and re-introduce if/when a new
> platform is added.
>
> >
> > > So far there are too many unanswered questions with this patchset and as such I
> > > will stop here.
> >
> > No problem. Please let me know if you have any further questions.
> >
> >
> > > Mathieu
> > >
> > > > + ret = zynqmp_r5_get_tcm_node(cluster);
> > > > + } else {
> > > > + dev_err(dev, "tcm not found\n");
> > > > + return -EINVAL;
> > > > + }
> > > > + }
> > > > +
> > > > if (ret < 0) {
> > > > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > > > return ret;
> > > > --
> > > > 2.25.1
> > > >
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-07 23:11 ` Tanmay Shah
@ 2023-09-08 15:12 ` Mathieu Poirier
0 siblings, 0 replies; 16+ messages in thread
From: Mathieu Poirier @ 2023-09-08 15:12 UTC (permalink / raw)
To: Tanmay Shah
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On Thu, 7 Sept 2023 at 17:11, Tanmay Shah <tanmay.shah@amd.com> wrote:
>
>
> On 9/7/23 1:08 PM, Mathieu Poirier wrote:
> > On Wed, Sep 06, 2023 at 05:02:40PM -0500, Tanmay Shah wrote:
> > > Hi Mathieu,
> > >
> > > Thanks for reviews. Please find my comments below.
> > >
> >
> > I took another look after reading your comment and found more problems...
> >
> > >
> > > On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> > > > Hi Tanmay,
> > > >
> > > > On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > > > > Use new dt bindings to get TCM address and size
> > > > > information. Also make sure that driver stays
> > > > > compatible with previous device-tree bindings.
> > > > > So, if TCM information isn't available in device-tree
> > > > > for zynqmp platform, hard-coded address of TCM will
> > > > > be used.
> > > > >
> > > > > New platforms that are compatible with this
> > > > > driver must add TCM support in device-tree as per new
> > > > > bindings.
> > > > >
> > > > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > > > ---
> > > > > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > > > > 1 file changed, 221 insertions(+), 58 deletions(-)
> > > > >
> > > > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > index feca6de68da2..4eb62eb545c2 100644
> > > > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > > > > * struct mem_bank_data - Memory Bank description
> > > > > *
> > > > > * @addr: Start address of memory bank
> > > > > + * @da: device address for this tcm bank
> > > > > * @size: Size of Memory bank
> > > > > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > > > > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > > > > * @bank_name: name of the bank for remoteproc framework
> > > > > */
> > > > > struct mem_bank_data {
> > > > > - phys_addr_t addr;
> > > > > - size_t size;
> > > > > + u32 addr;
> > > > > + u32 da;
> > > > > + u32 size;
> > > >
> > > > Why are the types of @addr and @size changed?
> > >
> > > The R5 can only access a 32-bit address range; I had missed this before.
> > >
> > > In the device-tree bindings I am keeping address-cells and size-cells as 2.
> > >
> > > So, out of the 64 bits, only the lower 32 bits will be used for the TCM address. The same applies to the size.
> > >
> > > This motivated me to change the types of the @addr and @size fields. It doesn't have any side effects.
> >
> > It doesn't have an effect but it also doesn't need to be in this patch,
> > especially since it is not documented.
> >
> >
> > This patch needs to be broken in 3 parts:
> >
> > 1) One patch that deals with the addition of the static mem_bank_data for
> > lockstep mode.
> >
> > 2) One patch that deals with the addition of ->pm_domain_id2 and the potential
> > bug I may have highlighted below.
> >
> > 3) One patch that deals with extracting the TCM information from the DT.
> > Everything else needs to be in another patch.
>
> Thanks, Mathieu, for the further reviews.
>
>
> OK, I agree with this sequence. I will send all of them as separate patches instead of having them in the same series.
>
I am fine with individual patches or as part of the same series, as
long as patch 03 gets broken up in accordance with what I wrote above.
> So, once I get an ack on the first two, it will be much easier for me to rebase on those two patches, instead of
>
> maintaining the whole series.
>
>
> Thanks,
>
> Tanmay
>
> >
> > >
> > >
> > > >
> > > > > u32 pm_domain_id;
> > > > > - char *bank_name;
> > > > > + u32 pm_domain_id2;
> > > > > + char bank_name[32];
> > > >
> > > > Same
> > >
> > > Now we have the "reg-names" property in the dts, so when it is available, I try to use it.
> > >
> > > Instead of keeping a simple pointer, I copy the name from struct resource, so I changed bank_name
> > >
> > > from a pointer to an array.
> > >
> >
> > I'll look at that part again when the rest of my comments are addressed.
> >
> > >
> > > >
> > > > > };
> > > > >
> > > > > /**
> > > > > @@ -75,11 +79,17 @@ struct mbox_info {
> > > > > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > > > > * accepted for system-dt specifications and upstreamed in linux kernel
> > > > > */
> > > > > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > > > > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > > > > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > > > > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > > > > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > > > > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > > > > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > > > > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> > > >
> > > > Here the device address for btcm0 is 0x20000 while in the cover letter it is
> > > > 0x2000.
> > >
> > > Thanks for catching this. This is actually typo in cover-letter. It should be 0x20000 in cover-letter.
> > >
> > > >
> > > > > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > > > > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> > > >
> > > > Same
> > >
> > > Same here: It should be 0x20000 in cover-letter.
> > >
> > > >
> > > > > +};
> > > > > +
> > > > > +/* TCM 128KB each */
> > > > > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > > > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > > > > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > > > > };
> > > > >
> > > > > /**
> > > > > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > > > > struct rproc_mem_entry *mem)
> > > > > {
> > > > > iounmap((void __iomem *)mem->va);
> > > > > +
> > > >
> > > > Spurious change
> > > Sure, I will remove it.
> > > >
> > > > > return 0;
> > > > > }
> > > > >
> > > > > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > > > > /* clear TCMs */
> > > > > memset_io(va, 0, mem->len);
> > > > >
> > > > > - /*
> > > > > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > > > > - * while on the Linux side they are at 0xffexxxxx.
> > > > > - *
> > > > > - * Zero out the high 12 bits of the address. This will give
> > > > > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > > > > - */
> > > > > - mem->da &= 0x000fffff;
> > > > > -
> > > > > - /*
> > > > > - * TCM Banks 1A and 1B still have to be translated.
> > > > > - *
> > > > > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > > > > - * 0xffeb0000) and convert to the expected relative addresses
> > > > > - * (0x0 and 0x20000).
> > > > > - */
> > > > > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > > > > - mem->da -= 0x90000;
> > > > > -
> > > > > - /* if translated TCM bank address is not valid report error */
> > > > > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > > > > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > > > > - return -EINVAL;
> > > > > - }
> > > > > return 0;
> > > > > }
> > > > >
> > > > > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > > u32 pm_domain_id;
> > > > > size_t bank_size;
> > > > > char *bank_name;
> > > > > + u32 da;
> > > > >
> > > > > r5_core = rproc->priv;
> > > > > dev = r5_core->dev;
> > > > > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > > bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > > bank_size = r5_core->tcm_banks[i]->size;
> > > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > > + da = r5_core->tcm_banks[i]->da;
> > > > >
> > > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > > bank_name, bank_addr, bank_size);
> > > > >
> > > > > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > > - bank_size, bank_addr,
> > > > > + bank_size, da,
> > > > > tcm_mem_map, tcm_mem_unmap,
> > > > > bank_name);
> > > > > if (!rproc_mem) {
> > > > > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > > */
> > > > > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > > {
> > > > > + u32 pm_domain_id, da, pm_domain_id2;
> > > > > struct rproc_mem_entry *rproc_mem;
> > > > > struct zynqmp_r5_core *r5_core;
> > > > > int i, num_banks, ret;
> > > > > - phys_addr_t bank_addr;
> > > > > - size_t bank_size = 0;
> > > > > + u32 bank_size = 0;
> >
> > Why is this changed to a u32 when rproc_mem_entry_init() takes a size_t for
> > @len? This is especially concerning since add_tcm_carveout_split_mode() still
> > uses a size_t.
> >
> > > > > struct device *dev;
> > > > > - u32 pm_domain_id;
> > > > > char *bank_name;
> > > > > + u32 bank_addr;
> > > > >
> > > > > r5_core = rproc->priv;
> > > > > dev = r5_core->dev;
> > > > > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > > * So, Enable each TCM block individually, but add their size
> > > > > * to create contiguous memory region.
> > > > > */
> > > > > - bank_addr = r5_core->tcm_banks[0]->addr;
> > > > > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > > > > -
> > > > > for (i = 0; i < num_banks; i++) {
> > > > > - bank_size += r5_core->tcm_banks[i]->size;
> > > > > + bank_addr = r5_core->tcm_banks[i]->addr;
> > > > > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > > + bank_size = r5_core->tcm_banks[i]->size;
> > > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > > + da = r5_core->tcm_banks[i]->da;
> > > > > +
> > > > > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > > > > + bank_name, bank_addr, bank_size);
> > > > >
> > > > > /* Turn on each TCM bank individually */
> > > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > > > goto release_tcm_lockstep;
> > > > > }
> > > > > - }
> > > > >
> > > > > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > > > > - bank_name, bank_addr, bank_size);
> > > > > -
> > > > > - /* Register TCM address range, TCM map and unmap functions */
> > > > > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > > - bank_size, bank_addr,
> > > > > - tcm_mem_map, tcm_mem_unmap,
> > > > > - bank_name);
> > > > > - if (!rproc_mem) {
> > > > > - ret = -ENOMEM;
> > > > > - goto release_tcm_lockstep;
> > > > > - }
> > > > > + /* Turn on each TCM bank individually */
> > > > > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > > > > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > > > + if (ret < 0) {
> > > > > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > > > > + goto release_tcm_lockstep;
> > > > > + }
> > > > >
> > > > > - /* If registration is success, add carveouts */
> > > > > - rproc_add_carveout(rproc, rproc_mem);
> > > > > + /* Register TCM address range, TCM map and unmap functions */
> > > > > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > > + bank_size, da,
> > > > > + tcm_mem_map, tcm_mem_unmap,
> > > > > + bank_name);
> >
> > The original code adds a single carveout while this code is adding one for each
> > memory bank? Is this done on purpose or is it a bug? No comment is provided.
> >
> > > > > + if (!rproc_mem) {
> > > > > + ret = -ENOMEM;
> > > > > + goto release_tcm_lockstep;
> > > > > + }
> > > > > +
> > > > > + rproc_add_carveout(rproc, rproc_mem);
> > > > > + }
> > > > >
> > > > > return 0;
> > > > >
> > > > > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > > for (i--; i >= 0; i--) {
> > > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > > zynqmp_pm_release_node(pm_domain_id);
> > > > > + if (pm_domain_id2) {
> > > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > > + zynqmp_pm_release_node(pm_domain_id2);
> > > > > + }
> > > > > }
> > > > > +
> > > > > return ret;
> > > > > }
> > > > >
> > > > > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > > > */
> > > > > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > > > {
> > > > > + u32 pm_domain_id, pm_domain_id2;
> > > > > struct zynqmp_r5_core *r5_core;
> > > > > - u32 pm_domain_id;
> > > > > int i;
> > > > >
> > > > > r5_core = rproc->priv;
> > > > >
> > > > > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > > if (zynqmp_pm_release_node(pm_domain_id))
> > > > > dev_warn(r5_core->dev,
> > > > > "can't turn off TCM bank 0x%x", pm_domain_id);
> > > > > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > > > > + dev_warn(r5_core->dev,
> > > > > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > > > > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > > > > + pm_domain_id, pm_domain_id2);
> > > > > }
> > > > >
> > > > > return 0;
> > > > > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > > return ERR_PTR(ret);
> > > > > }
> > > > >
> > > > > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > > > > +{
> > > > > + int i, j, tcm_bank_count, ret = -EINVAL;
> > > > > + struct zynqmp_r5_core *r5_core;
> > > > > + struct of_phandle_args out_arg;
> > > > > + struct platform_device *cpdev;
> > > > > + struct resource *res = NULL;
> > > > > + u64 abs_addr = 0, size = 0;
> > > > > + struct mem_bank_data *tcm;
> > > > > + struct device_node *np, *np1 = NULL;
> > > > > + struct device *dev;
> >
> > As far as I can tell, @ret, @res and @np1 don't need initialisation. The same may also
> > be true for @abs_addr and @size.
> >
> > > > > +
> > > > > + for (i = 0; i < cluster->core_count; i++) {
> > > > > + r5_core = cluster->r5_cores[i];
> > > > > + dev = r5_core->dev;
> > > > > + np = dev_of_node(dev);
> > > > > +
> > > > > + /* we have address cell 2 and size cell as 2 */
> > > > > + ret = of_property_count_elems_of_size(np, "reg",
> > > > > + 4 * sizeof(u32));
> > > > > + if (ret <= 0) {
> > > > > + ret = -EINVAL;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + tcm_bank_count = ret;
> > > > > +
> > > > > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > > > > + sizeof(struct mem_bank_data *),
> > > > > + GFP_KERNEL);
> > > > > + if (!r5_core->tcm_banks) {
> > > > > + ret = -ENOMEM;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + r5_core->tcm_bank_count = tcm_bank_count;
> > > > > + for (j = 0; j < tcm_bank_count; j++) {
> > > > > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > > > > + if (!tcm) {
> > > > > + ret = -ENOMEM;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + r5_core->tcm_banks[j] = tcm;
> > > > > + /* get tcm address without translation */
> > > > > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > > > > + if (ret) {
> > > > > + dev_err(dev, "failed to get reg property\n");
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + /*
> > > > > + * remote processor can address only 32 bits
> > > > > + * so convert 64-bits into 32-bits. This will discard
> > > > > + * any unwanted upper 32-bits.
> > > > > + */
> > > > > + tcm->da = (u32)abs_addr;
> > > > > + tcm->size = (u32)size;
> > > > > +
> > > > > + cpdev = to_platform_device(dev);
> > > > > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > > > > + if (!res) {
> > > > > + dev_err(dev, "failed to get tcm resource\n");
> > > > > + ret = -EINVAL;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + tcm->addr = (u32)res->start;
> > > > > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > > > > + if (!res) {
> > > > > + dev_err(dev, "failed to request tcm resource\n");
> > > > > + ret = -EINVAL;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > > > > + np = of_node_get(dev_of_node(dev));
> > > > > + /*
> > > > > + * In dt power-domains are described in this order:
> > > > > + * <RPU core>, <atcm>, <btcm>
> > > > > + * parse power domains for tcm accordingly
> > > > > + */
> > > > > + of_parse_phandle_with_args(np, "power-domains",
> > > > > + "#power-domain-cells",
> > > > > + j + 1, &out_arg);
> > > > > + tcm->pm_domain_id = out_arg.args[0];
> > > > > + of_node_put(out_arg.np);
> > > > > +
> > > > > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > > > > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > > > > +
> > > > > + if (cluster->mode == SPLIT_MODE)
> > > > > + continue;
> > > > > +
> > > > > + /* Turn on core-1's TCM as well */
> > > > > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > > > > + r5_core->np);
> > > > > + if (!np1) {
> > > > > + of_node_put(np1);
> > > > > + np1 = NULL;
> > > > > + goto fail_tcm;
> > > > > + }
> > > > > +
> > > > > + of_parse_phandle_with_args(np1, "power-domains",
> > > > > + "#power-domain-cells",
> > > > > + j + 1, &out_arg);
> > > > > + tcm->pm_domain_id2 = out_arg.args[0];
> > > > > + of_node_put(out_arg.np);
> > > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +
> > > > > +fail_tcm:
> > > > > + while (i >= 0) {
> > > > > + r5_core = cluster->r5_cores[i];
> > > > > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > > > > + if (!r5_core->tcm_banks)
> > > > > + continue;
> > > > > + tcm = r5_core->tcm_banks[j];
> > > > > + kfree(tcm);
> > > > > + }
> > > > > + kfree(r5_core->tcm_banks);
> > > > > + i--;
> > > > > + }
> > > > > +
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > /**
> > > > > * zynqmp_r5_get_tcm_node()
> > > > > * Ideally this function should parse tcm node and store information
> > > > > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > > */
> > > > > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > > > > {
> > > > > + const struct mem_bank_data *zynqmp_tcm_banks;
> > > > > struct device *dev = cluster->dev;
> > > > > struct zynqmp_r5_core *r5_core;
> > > > > int tcm_bank_count, tcm_node;
> > > > > int i, j;
> > > > >
> > > > > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > > > > + if (cluster->mode == SPLIT_MODE) {
> > > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > > > > + } else {
> > > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > > > > + }
> > > >
> > > > Why are the changes to get TCM bank information from the DT and enhancement to
> > > > support lockstep mode in the same patch?
> > >
> > > TCM in lockstep mode was already supported; I was simply reusing the same table for lockstep mode before.
> > >
> > > Now I have two tables, one for split mode and one for lockstep mode.
> > >
> > > I had to do this because I introduced the "da" field in struct mem_bank_data, which makes it easy to process the
> > >
> > > device address derived from the device tree.
> > >
> > > And since I introduced "u32 da", I had to modify the table as well and remove the hardcoded "da" calculation from the tcm_mem_map() function.
> > >
> > > As all of this is connected, I have it in the same patch. No new functionality is added; it is just code refactoring.
> > >
> > > > > +
> > > > >
> > > > > /* count per core tcm banks */
> > > > > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > > > > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > > > > enum rpu_tcm_comb tcm_mode)
> > > > > {
> > > > > struct device *dev = cluster->dev;
> > > > > + struct device_node *np;
> > > > > struct zynqmp_r5_core *r5_core;
> > > > > int ret, i;
> > > > >
> > > > > - ret = zynqmp_r5_get_tcm_node(cluster);
> > > > > + /*
> > > > > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > > > > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > > > > + */
> > > > > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > > > > + if (ret) {
> > > > > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > > > > + if (np) {
> > > >
> > > > Why was this check added?
> > >
> > > We want to maintain backward compatibility with the previous bindings only for the zynqmp platform.
> > >
> >
> > That check has nothing to do with backward compatibility.
> >
> > > So, the hardcoded table is used only for the "zynqmp" platform, if getting the "reg" information from the device tree fails.
> > >
> > > If the node is not compatible with "xlnx,zynqmp-r5f", then it is a new platform and we must not use the hardcoded
> > >
> > > table; instead we should fail.
> > >
> >
> > So this is the real reason for the check, but zynqmp-r5f is still the only
> > platform supported by this driver. Please remove and re-introduce if/when a new
> > platform is added.
> >
> > >
> > > > So far there are too many unanswered questions with this patchset and as such I
> > > > will stop here.
> > >
> > > No problem. Please let me know if you have any further questions.
> > >
> > >
> > > > Mathieu
> > > >
> > > > > + ret = zynqmp_r5_get_tcm_node(cluster);
> > > > > + } else {
> > > > > + dev_err(dev, "tcm not found\n");
> > > > > + return -EINVAL;
> > > > > + }
> > > > > + }
> > > > > +
> > > > > if (ret < 0) {
> > > > > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > > > > return ret;
> > > > > --
> > > > > 2.25.1
> > > > >
* Re: [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree
2023-09-07 18:08 ` Mathieu Poirier
2023-09-07 23:11 ` Tanmay Shah
@ 2023-09-25 16:33 ` Tanmay Shah
1 sibling, 0 replies; 16+ messages in thread
From: Tanmay Shah @ 2023-09-25 16:33 UTC (permalink / raw)
To: Mathieu Poirier
Cc: linux-remoteproc, devicetree, linux-arm-kernel, linux-kernel,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Michal Simek,
Conor Dooley, Radhey Shyam Pandey, Ben Levinsky
On 9/7/23 1:08 PM, Mathieu Poirier wrote:
> On Wed, Sep 06, 2023 at 05:02:40PM -0500, Tanmay Shah wrote:
> > Hi Mathieu,
> >
> > Thanks for reviews. Please find my comments below.
> >
>
> I took another look after reading your comment and found more problems...
>
> >
> > On 9/6/23 2:47 PM, Mathieu Poirier wrote:
> > > Hi Tanmay,
> > >
> > > On Tue, Aug 29, 2023 at 11:19:00AM -0700, Tanmay Shah wrote:
> > > > Use new dt bindings to get TCM address and size
> > > > information. Also make sure that driver stays
> > > > compatible with previous device-tree bindings.
> > > > So, if TCM information isn't available in device-tree
> > > > for zynqmp platform, hard-coded address of TCM will
> > > > be used.
> > > >
> > > > New platforms that are compatible with this
> > > > driver must add TCM support in device-tree as per new
> > > > bindings.
> > > >
> > > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > > ---
> > > > drivers/remoteproc/xlnx_r5_remoteproc.c | 279 +++++++++++++++++++-----
> > > > 1 file changed, 221 insertions(+), 58 deletions(-)
> > > >
> > > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > index feca6de68da2..4eb62eb545c2 100644
> > > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > > @@ -39,15 +39,19 @@ enum zynqmp_r5_cluster_mode {
> > > > * struct mem_bank_data - Memory Bank description
> > > > *
> > > > * @addr: Start address of memory bank
> > > > + * @da: device address for this tcm bank
> > > > * @size: Size of Memory bank
> > > > * @pm_domain_id: Power-domains id of memory bank for firmware to turn on/off
> > > > + * @pm_domain_id2: second core's corresponding TCM's pm_domain_id
> > > > * @bank_name: name of the bank for remoteproc framework
> > > > */
> > > > struct mem_bank_data {
> > > > - phys_addr_t addr;
> > > > - size_t size;
> > > > + u32 addr;
> > > > + u32 da;
> > > > + u32 size;
> > >
> > > Why are the types of @addr and @size changed?
> >
> > The R5 can only access a 32-bit address range; I had missed this before.
> >
> > In the device-tree bindings I am keeping address-cells and size-cells as 2.
> >
> > So, out of the 64 bits, only the lower 32 bits will be used for the TCM address. The same applies to the size.
> >
> > This motivated me to change the types of the @addr and @size fields. It doesn't have any side effects.
>
> It doesn't have an effect but it also doesn't need to be in this patch,
> especially since it is not documented.
>
>
> This patch needs to be broken in 3 parts:
>
> 1) One patch that deals with the addition of the static mem_bank_data for
> lockstep mode.
>
> 2) One patch that deals with the addition of ->pm_domain_id2 and the potential
> bug I may have highlighted below.
Hi Mathieu,
Just a heads up: there is a change in this plan. I found out that the PM domain framework can be used to power on/off
devices that have power-domains in the device tree, so I am developing patches accordingly.
I will still split the patches, but they won't be the same as what was posted here. There will be a patch that uses the
PM domain (genpd) framework to power the TCM on and off.
Tanmay
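A rough sketch of what the genpd-based approach could look like, assuming the standard dev_pm_domain_attach_by_id() and device_link_add() APIs are used to attach each "power-domains" entry of a core node. This is one possible shape for such a patch, not the code that was eventually posted:

```c
/* Sketch only: attach the index-th "power-domains" entry (e.g. ATCM or
 * BTCM) of a core's device node and tie its runtime PM state to the
 * remoteproc device via a stateless device link. */
#include <linux/device.h>
#include <linux/pm_domain.h>

static int attach_tcm_pm_domain(struct device *dev, unsigned int index)
{
	struct device *pd_dev;
	struct device_link *link;

	pd_dev = dev_pm_domain_attach_by_id(dev, index);
	if (IS_ERR_OR_NULL(pd_dev))
		return pd_dev ? PTR_ERR(pd_dev) : -ENODEV;

	link = device_link_add(dev, pd_dev,
			       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME |
			       DL_FLAG_RPM_ACTIVE);
	if (!link) {
		dev_pm_domain_detach(pd_dev, false);
		return -EINVAL;
	}

	return 0;
}
```

With this shape, the genpd core handles the EEMI power-up/down calls, so the driver no longer needs the explicit zynqmp_pm_request_node()/zynqmp_pm_release_node() pairs per TCM bank.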
> 3) One patch that deals with extracting the TCM information from the DT.
> Everything else needs to be in another patch.
>
> >
> >
> > >
> > > > u32 pm_domain_id;
> > > > - char *bank_name;
> > > > + u32 pm_domain_id2;
> > > > + char bank_name[32];
> > >
> > > Same
> >
> > Now we have the "reg-names" property in the dts, so when it is available, I try to use it.
> >
> > Instead of keeping a simple pointer, I copy the name from struct resource, so I changed bank_name
> >
> > from a pointer to an array.
> >
>
> I'll look at that part again when the rest of my comments are addressed.
>
> >
> > >
> > > > };
> > > >
> > > > /**
> > > > @@ -75,11 +79,17 @@ struct mbox_info {
> > > > * Hardcoded TCM bank values. This will be removed once TCM bindings are
> > > > * accepted for system-dt specifications and upstreamed in linux kernel
> > > > */
> > > > -static const struct mem_bank_data zynqmp_tcm_banks[] = {
> > > > - {0xffe00000UL, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
> > > > - {0xffe20000UL, 0x10000UL, PD_R5_0_BTCM, "btcm0"},
> > > > - {0xffe90000UL, 0x10000UL, PD_R5_1_ATCM, "atcm1"},
> > > > - {0xffeb0000UL, 0x10000UL, PD_R5_1_BTCM, "btcm1"},
> > > > +static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
> > > > + {0xffe00000, 0x0, 0x10000, PD_R5_0_ATCM, 0, "atcm0"}, /* TCM 64KB each */
> > > > + {0xffe20000, 0x20000, 0x10000, PD_R5_0_BTCM, 0, "btcm0"},
> > >
> > > Here the device address for btcm0 is 0x20000 while in the cover letter it is
> > > 0x2000.
> >
> > Thanks for catching this. This is actually typo in cover-letter. It should be 0x20000 in cover-letter.
> >
> > >
> > > > + {0xffe90000, 0x0, 0x10000, PD_R5_1_ATCM, 0, "atcm1"},
> > > > + {0xffeb0000, 0x20000, 0x10000, PD_R5_1_BTCM, 0, "btcm1"},
> > >
> > > Same
> >
> > Same here: It should be 0x20000 in cover-letter.
> >
> > >
> > > > +};
> > > > +
> > > > +/* TCM 128KB each */
> > > > +static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > > > + {0xffe00000, 0x0, 0x20000, PD_R5_0_ATCM, PD_R5_1_ATCM, "atcm0"},
> > > > + {0xffe20000, 0x20000, 0x20000, PD_R5_0_BTCM, PD_R5_1_BTCM, "btcm0"},
> > > > };
> > > >
> > > > /**
> > > > @@ -422,6 +432,7 @@ static int zynqmp_r5_mem_region_unmap(struct rproc *rproc,
> > > > struct rproc_mem_entry *mem)
> > > > {
> > > > iounmap((void __iomem *)mem->va);
> > > > +
> > >
> > > Spurious change
> > Sure, I will remove it.
> > >
> > > > return 0;
> > > > }
> > > >
> > > > @@ -526,30 +537,6 @@ static int tcm_mem_map(struct rproc *rproc,
> > > > /* clear TCMs */
> > > > memset_io(va, 0, mem->len);
> > > >
> > > > - /*
> > > > - * The R5s expect their TCM banks to be at address 0x0 and 0x2000,
> > > > - * while on the Linux side they are at 0xffexxxxx.
> > > > - *
> > > > - * Zero out the high 12 bits of the address. This will give
> > > > - * expected values for TCM Banks 0A and 0B (0x0 and 0x20000).
> > > > - */
> > > > - mem->da &= 0x000fffff;
> > > > -
> > > > - /*
> > > > - * TCM Banks 1A and 1B still have to be translated.
> > > > - *
> > > > - * Below handle these two banks' absolute addresses (0xffe90000 and
> > > > - * 0xffeb0000) and convert to the expected relative addresses
> > > > - * (0x0 and 0x20000).
> > > > - */
> > > > - if (mem->da == 0x90000 || mem->da == 0xB0000)
> > > > - mem->da -= 0x90000;
> > > > -
> > > > - /* if translated TCM bank address is not valid report error */
> > > > - if (mem->da != 0x0 && mem->da != 0x20000) {
> > > > - dev_err(&rproc->dev, "invalid TCM address: %x\n", mem->da);
> > > > - return -EINVAL;
> > > > - }
> > > > return 0;
> > > > }
> > > >
> > > > @@ -571,6 +558,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > u32 pm_domain_id;
> > > > size_t bank_size;
> > > > char *bank_name;
> > > > + u32 da;
> > > >
> > > > r5_core = rproc->priv;
> > > > dev = r5_core->dev;
> > > > @@ -586,6 +574,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > bank_size = r5_core->tcm_banks[i]->size;
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + da = r5_core->tcm_banks[i]->da;
> > > >
> > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > @@ -599,7 +588,7 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > bank_name, bank_addr, bank_size);
> > > >
> > > > rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > - bank_size, bank_addr,
> > > > + bank_size, da,
> > > > tcm_mem_map, tcm_mem_unmap,
> > > > bank_name);
> > > > if (!rproc_mem) {
> > > > @@ -632,14 +621,14 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > > > */
> > > > static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > {
> > > > + u32 pm_domain_id, da, pm_domain_id2;
> > > > struct rproc_mem_entry *rproc_mem;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int i, num_banks, ret;
> > > > - phys_addr_t bank_addr;
> > > > - size_t bank_size = 0;
> > > > + u32 bank_size = 0;
>
> Why is this changed to a u32 when rproc_mem_entry_init() takes a size_t for
> @len? This is especially concerning since add_tcm_carveout_split_mode() still
> uses a size_t.
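The type-width concern can be illustrated with a small standalone snippet (not driver code; `narrow_len()` is a stand-in for passing a u32 where `rproc_mem_entry_init()` expects a size_t @len):

```c
/* A minimal illustration of the type-width concern: a size_t length
 * silently narrows when assigned to a u32. TCM banks are far below
 * 4 GiB, so this is a consistency issue more than a live bug, but the
 * two carveout helpers should agree on the type. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* stand-in for feeding a u32 into rproc_mem_entry_init()'s size_t @len */
static uint32_t narrow_len(size_t len)
{
	return (uint32_t)len;	/* implicit truncation above 0xffffffff */
}
```

Values that fit in 32 bits survive the round trip; anything larger is silently truncated.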
>
> > > > struct device *dev;
> > > > - u32 pm_domain_id;
> > > > char *bank_name;
> > > > + u32 bank_addr;
> > > >
> > > > r5_core = rproc->priv;
> > > > dev = r5_core->dev;
> > > > @@ -653,12 +642,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > * So, Enable each TCM block individually, but add their size
> > > > * to create contiguous memory region.
> > > > */
> > > > - bank_addr = r5_core->tcm_banks[0]->addr;
> > > > - bank_name = r5_core->tcm_banks[0]->bank_name;
> > > > -
> > > > for (i = 0; i < num_banks; i++) {
> > > > - bank_size += r5_core->tcm_banks[i]->size;
> > > > + bank_addr = r5_core->tcm_banks[i]->addr;
> > > > + bank_name = r5_core->tcm_banks[i]->bank_name;
> > > > + bank_size = r5_core->tcm_banks[i]->size;
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > + da = r5_core->tcm_banks[i]->da;
> > > > +
> > > > + dev_dbg(dev, "TCM %s addr=0x%x, size=0x%x",
> > > > + bank_name, bank_addr, bank_size);
> > > >
> > > > /* Turn on each TCM bank individually */
> > > > ret = zynqmp_pm_request_node(pm_domain_id,
> > > > @@ -668,23 +661,28 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > > > goto release_tcm_lockstep;
> > > > }
> > > > - }
> > > >
> > > > - dev_dbg(dev, "TCM add carveout lockstep mode %s addr=0x%llx, size=0x%lx",
> > > > - bank_name, bank_addr, bank_size);
> > > > -
> > > > - /* Register TCM address range, TCM map and unmap functions */
> > > > - rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > - bank_size, bank_addr,
> > > > - tcm_mem_map, tcm_mem_unmap,
> > > > - bank_name);
> > > > - if (!rproc_mem) {
> > > > - ret = -ENOMEM;
> > > > - goto release_tcm_lockstep;
> > > > - }
> > > > + /* Turn on each TCM bank individually */
> > > > + ret = zynqmp_pm_request_node(pm_domain_id2,
> > > > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > > > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > > > + if (ret < 0) {
> > > > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id2);
> > > > + goto release_tcm_lockstep;
> > > > + }
> > > >
> > > > - /* If registration is success, add carveouts */
> > > > - rproc_add_carveout(rproc, rproc_mem);
> > > > + /* Register TCM address range, TCM map and unmap functions */
> > > > + rproc_mem = rproc_mem_entry_init(dev, NULL, bank_addr,
> > > > + bank_size, da,
> > > > + tcm_mem_map, tcm_mem_unmap,
> > > > + bank_name);
>
> The original code adds a single carveout, while this code adds one for each
> memory bank. Is this done on purpose, or is it a bug? No comment is provided.
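One observation that may help here: in the quoted lockstep table the two banks are contiguous in both the Linux view and the R5 view, so a single merged carveout and two per-bank carveouts cover the same range. The sketch below just checks that arithmetic, with the values copied from the quoted `zynqmp_tcm_banks_lockstep[]` table:

```c
#include <assert.h>
#include <stdint.h>

/* Linux-view bases/sizes and R5-view device addresses taken from the
 * quoted lockstep table: atcm0 then btcm0, 128 KB each. */
static const uint32_t atcm_addr = 0xffe00000, atcm_da = 0x0,     atcm_size = 0x20000;
static const uint32_t btcm_addr = 0xffe20000, btcm_da = 0x20000, btcm_size = 0x20000;
```

If the banks are back-to-back in both views, the change from one carveout to two is a behavioural equivalence, but that still deserves a comment in the code.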
>
> > > > + if (!rproc_mem) {
> > > > + ret = -ENOMEM;
> > > > + goto release_tcm_lockstep;
> > > > + }
> > > > +
> > > > + rproc_add_carveout(rproc, rproc_mem);
> > > > + }
> > > >
> > > > return 0;
> > > >
> > > > @@ -693,7 +691,12 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > > > for (i--; i >= 0; i--) {
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > zynqmp_pm_release_node(pm_domain_id);
> > > > + if (pm_domain_id2) {
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > + zynqmp_pm_release_node(pm_domain_id2);
> > > > + }
> > > > }
> > > > +
> > > > return ret;
> > > > }
> > > >
> > > > @@ -800,17 +803,23 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > > > */
> > > > static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> > > > {
> > > > + u32 pm_domain_id, pm_domain_id2;
> > > > struct zynqmp_r5_core *r5_core;
> > > > - u32 pm_domain_id;
> > > > int i;
> > > >
> > > > r5_core = rproc->priv;
> > > >
> > > > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > > > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > > > + pm_domain_id2 = r5_core->tcm_banks[i]->pm_domain_id2;
> > > > if (zynqmp_pm_release_node(pm_domain_id))
> > > > dev_warn(r5_core->dev,
> > > > "can't turn off TCM bank 0x%x", pm_domain_id);
> > > > + if (pm_domain_id2 && zynqmp_pm_release_node(pm_domain_id2))
> > > > + dev_warn(r5_core->dev,
> > > > + "can't turn off TCM bank 0x%x", pm_domain_id2);
> > > > + dev_dbg(r5_core->dev, "pm_domain_id=%d, pm_domain_id2=%d\n",
> > > > + pm_domain_id, pm_domain_id2);
> > > > }
> > > >
> > > > return 0;
> > > > @@ -883,6 +892,137 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > return ERR_PTR(ret);
> > > > }
> > > >
> > > > +static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
> > > > +{
> > > > + int i, j, tcm_bank_count, ret = -EINVAL;
> > > > + struct zynqmp_r5_core *r5_core;
> > > > + struct of_phandle_args out_arg;
> > > > + struct platform_device *cpdev;
> > > > + struct resource *res = NULL;
> > > > + u64 abs_addr = 0, size = 0;
> > > > + struct mem_bank_data *tcm;
> > > > + struct device_node *np, *np1 = NULL;
> > > > + struct device *dev;
>
> As far as I can tell @ret, @res and @np1 don't need initialisation. The same
> may be true for @abs_addr and @size.
>
> > > > +
> > > > + for (i = 0; i < cluster->core_count; i++) {
> > > > + r5_core = cluster->r5_cores[i];
> > > > + dev = r5_core->dev;
> > > > + np = dev_of_node(dev);
> > > > +
> > > > + /* we have address cell 2 and size cell as 2 */
> > > > + ret = of_property_count_elems_of_size(np, "reg",
> > > > + 4 * sizeof(u32));
> > > > + if (ret <= 0) {
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + tcm_bank_count = ret;
> > > > +
> > > > + r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
> > > > + sizeof(struct mem_bank_data *),
> > > > + GFP_KERNEL);
> > > > + if (!r5_core->tcm_banks) {
> > > > + ret = -ENOMEM;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + r5_core->tcm_bank_count = tcm_bank_count;
> > > > + for (j = 0; j < tcm_bank_count; j++) {
> > > > + tcm = kzalloc(sizeof(struct mem_bank_data *), GFP_KERNEL);
> > > > + if (!tcm) {
> > > > + ret = -ENOMEM;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + r5_core->tcm_banks[j] = tcm;
> > > > + /* get tcm address without translation */
> > > > + ret = of_property_read_reg(np, j, &abs_addr, &size);
> > > > + if (ret) {
> > > > + dev_err(dev, "failed to get reg property\n");
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + /*
> > > > + * remote processor can address only 32 bits
> > > > + * so convert 64-bits into 32-bits. This will discard
> > > > + * any unwanted upper 32-bits.
> > > > + */
> > > > + tcm->da = (u32)abs_addr;
> > > > + tcm->size = (u32)size;
> > > > +
> > > > + cpdev = to_platform_device(dev);
> > > > + res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
> > > > + if (!res) {
> > > > + dev_err(dev, "failed to get tcm resource\n");
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + tcm->addr = (u32)res->start;
> > > > + res = devm_request_mem_region(dev, tcm->addr, tcm->size, res->name);
> > > > + if (!res) {
> > > > + dev_err(dev, "failed to request tcm resource\n");
> > > > + ret = -EINVAL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + memcpy(tcm->bank_name, res->name, ARRAY_SIZE(tcm->bank_name));
> > > > + np = of_node_get(dev_of_node(dev));
> > > > + /*
> > > > + * In dt power-domains are described in this order:
> > > > + * <RPU core>, <atcm>, <btcm>
> > > > + * parse power domains for tcm accordingly
> > > > + */
> > > > + of_parse_phandle_with_args(np, "power-domains",
> > > > + "#power-domain-cells",
> > > > + j + 1, &out_arg);
> > > > + tcm->pm_domain_id = out_arg.args[0];
> > > > + of_node_put(out_arg.np);
> > > > +
> > > > + dev_dbg(dev, "TCM: %s, dma=0x%x, da=0x%x, size=0x%x\n",
> > > > + tcm->bank_name, tcm->addr, tcm->da, tcm->size);
> > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id);
> > > > +
> > > > + if (cluster->mode == SPLIT_MODE)
> > > > + continue;
> > > > +
> > > > + /* Turn on core-1's TCM as well */
> > > > + np1 = of_get_next_child(dev_of_node(cluster->dev),
> > > > + r5_core->np);
> > > > + if (!np1) {
> > > > + of_node_put(np1);
> > > > + np1 = NULL;
> > > > + goto fail_tcm;
> > > > + }
> > > > +
> > > > + of_parse_phandle_with_args(np1, "power-domains",
> > > > + "#power-domain-cells",
> > > > + j + 1, &out_arg);
> > > > + tcm->pm_domain_id2 = out_arg.args[0];
> > > > + of_node_put(out_arg.np);
> > > > + dev_dbg(dev, "tcm pm domain id %d\n", tcm->pm_domain_id2);
> > > > + }
> > > > + }
> > > > +
> > > > + return 0;
> > > > +
> > > > +fail_tcm:
> > > > + while (i >= 0) {
> > > > + r5_core = cluster->r5_cores[i];
> > > > + for (j = 0; j < r5_core->tcm_bank_count; j++) {
> > > > + if (!r5_core->tcm_banks)
> > > > + continue;
> > > > + tcm = r5_core->tcm_banks[j];
> > > > + kfree(tcm);
> > > > + }
> > > > + kfree(r5_core->tcm_banks);
> > > > + i--;
> > > > + }
> > > > +
> > > > + return ret;
> > > > +}
> > > > +
> > > > /**
> > > > * zynqmp_r5_get_tcm_node()
> > > > * Ideally this function should parse tcm node and store information
> > > > @@ -895,12 +1035,20 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > > > */
> > > > static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
> > > > {
> > > > + const struct mem_bank_data *zynqmp_tcm_banks;
> > > > struct device *dev = cluster->dev;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int tcm_bank_count, tcm_node;
> > > > int i, j;
> > > >
> > > > - tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks);
> > > > + if (cluster->mode == SPLIT_MODE) {
> > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_split;
> > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_split);
> > > > + } else {
> > > > + zynqmp_tcm_banks = zynqmp_tcm_banks_lockstep;
> > > > + tcm_bank_count = ARRAY_SIZE(zynqmp_tcm_banks_lockstep);
> > > > + }
> > >
> > > Why are the changes to get TCM bank information from the DT and enhancement to
> > > support lockstep mode in the same patch?
> >
> > Actually, TCM in lockstep mode was supported before as well; it just reused the same table as split mode.
> >
> > Now there are two separate tables, one for split mode and one for lockstep mode.
> >
> > I had to do this because I introduced a "da" field in struct mem_bank_data, which makes it easy to handle the "device address" derived from the device tree.
> >
> > Since I introduced "u32 da", I also had to modify the tables and remove the hardcoded "da" calculation from the "tcm_mem_map" function.
> >
> > As all of this is connected, I kept it in the same patch. No new functionality is added, just code refactoring.
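For readers following along, the refactor can be pictured with a standalone sketch of the per-bank descriptor. The field order and the numeric power-domain IDs below are placeholders, not the driver's actual PD_* values; only the addresses and sizes come from the quoted tables:

```c
/* Hypothetical sketch of struct mem_bank_data after the refactor:
 * the R5 device address ("da") now lives in the table instead of
 * being recomputed inside tcm_mem_map(). */
#include <assert.h>
#include <stdint.h>

struct mem_bank_data {
	uint32_t addr;		/* bank base as seen by Linux */
	uint32_t da;		/* bank base as seen by the R5 core */
	uint32_t size;		/* bank size in bytes */
	uint32_t pm_domain_id;	/* power domain of this bank */
	uint32_t pm_domain_id2;	/* core-1 domain in lockstep mode, else 0 */
	const char *bank_name;
};

/* split mode: 64 KB per bank, matching the quoted table
 * (15/16 are placeholder domain IDs, not real PD_* values) */
static const struct mem_bank_data tcm_split[] = {
	{ 0xffe00000, 0x0,     0x10000, 15, 0, "atcm0" },
	{ 0xffe20000, 0x20000, 0x10000, 16, 0, "btcm0" },
};
```

With the da carried in the table, the map callback no longer needs the masking and bank-specific subtraction that the patch deletes.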
> >
> > > > +
> > > >
> > > > /* count per core tcm banks */
> > > > tcm_bank_count = tcm_bank_count / cluster->core_count;
> > > > @@ -951,10 +1099,25 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
> > > > enum rpu_tcm_comb tcm_mode)
> > > > {
> > > > struct device *dev = cluster->dev;
> > > > + struct device_node *np;
> > > > struct zynqmp_r5_core *r5_core;
> > > > int ret, i;
> > > >
> > > > - ret = zynqmp_r5_get_tcm_node(cluster);
> > > > + /*
> > > > + * try to get tcm nodes from dt but if fail, use hardcode addresses only
> > > > + * for zynqmp platform. New platforms must use dt bindings for TCM.
> > > > + */
> > > > + ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
> > > > + if (ret) {
> > > > + np = of_get_compatible_child(dev_of_node(dev), "xlnx,zynqmp-r5f");
> > > > + if (np) {
> > >
> > > Why was this check added?
> >
> > We want to maintain backward compatibility with the previous bindings only for the ZynqMP platform.
> >
>
> That check has nothing to do with backward compatibility.
>
> > So the hardcoded table is used only for the ZynqMP platform, when reading the "reg" information from the device tree fails.
> >
> > If the node is not compatible with "xlnx,zynqmp-r5f", it is a new platform and we must not use the hardcoded table; we should fail instead.
> >
>
> So this is the real reason for the check, but zynqmp-r5f is still the only
> platform supported by this driver. Please remove it and re-introduce it
> if/when a new platform is added.
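The shape Mathieu is asking for can be sketched with stubs standing in for the real driver calls (`zynqmp_r5_get_tcm_node_from_dt()` and the legacy `zynqmp_r5_get_tcm_node()`); the stub names, globals, and return values here are placeholders:

```c
/* Sketch of the suggested fallback without the compatible check:
 * try the DT "reg" parse first, then fall back to the hardcoded
 * table unconditionally. Stubs replace the real driver functions. */
#include <assert.h>

static int dt_parse_ok;		/* pretend outcome of the DT "reg" parse */
static int used_fallback;	/* records whether the legacy path ran */

static int get_tcm_from_dt(void)
{
	return dt_parse_ok ? 0 : -22;	/* -EINVAL on failure */
}

static int get_tcm_hardcoded(void)
{
	used_fallback = 1;
	return 0;
}

static int tcm_init(void)
{
	int ret = get_tcm_from_dt();

	if (ret)	/* DT parse failed: use the legacy table */
		ret = get_tcm_hardcoded();
	return ret;
}
```

The compatible check can come back later as a per-platform dispatch once a second SoC actually needs different behaviour.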
>
> >
> > > So far there are too many unanswered questions with this patchset and as such I
> > > will stop here.
> >
> > No problem. Please let me know if you have any further questions.
> >
> >
> > > Mathieu
> > >
> > > > + ret = zynqmp_r5_get_tcm_node(cluster);
> > > > + } else {
> > > > + dev_err(dev, "tcm not found\n");
> > > > + return -EINVAL;
> > > > + }
> > > > + }
> > > > +
> > > > if (ret < 0) {
> > > > dev_err(dev, "can't get tcm node, err %d\n", ret);
> > > > return ret;
> > > > --
> > > > 2.25.1
> > > >
Thread overview: 16+ messages
2023-08-29 18:18 [PATCH v4 0/3] add zynqmp TCM bindings Tanmay Shah
2023-08-29 18:18 ` [PATCH v4 1/3] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings Tanmay Shah
2023-08-30 18:16 ` Rob Herring
2023-08-29 18:18 ` [PATCH v4 2/3] dts: zynqmp: add properties for TCM in remoteproc Tanmay Shah
2023-08-29 18:19 ` [PATCH v4 3/3] remoteproc: zynqmp: get TCM from device-tree Tanmay Shah
2023-09-04 7:50 ` Philippe Mathieu-Daudé
2023-09-05 21:48 ` Tanmay Shah
2023-09-06 6:20 ` Philippe Mathieu-Daudé
2023-09-06 14:21 ` Tanmay Shah
2023-09-06 19:47 ` Mathieu Poirier
2023-09-06 22:02 ` Tanmay Shah
2023-09-07 17:23 ` Tanmay Shah
2023-09-07 18:08 ` Mathieu Poirier
2023-09-07 23:11 ` Tanmay Shah
2023-09-08 15:12 ` Mathieu Poirier
2023-09-25 16:33 ` Tanmay Shah