* [PATCH v5 00/14] Enhance the PCIe controller driver
@ 2025-06-30 4:15 hans.zhang
2025-06-30 4:15 ` [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
` (13 more replies)
0 siblings, 14 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
---
Dear Maintainers,
This series adds support for Cadence's HPA PCIe IP and the Root Port
driver for our CIX Sky1 SoC. Please help review. Thank you very much.
---
This series enhances the existing Cadence PCIe controller drivers to
support HPA (High Performance Architecture) Cadence PCIe controllers.
The header files are separated out for the legacy and high performance
register maps, register addresses, and bit definitions. The HPA register
read and write functions take the updated offset stored by the platform
driver to access the registers.
As part of the refactoring, a few new files are added to the driver by
splitting the existing files.
This helps SoC vendors who change the address map within the PCIe
controller in their designs. Setting menuconfig appropriately allows
selection between RP and/or EP PCIe controller support. The support
includes Legacy and HPA for the selected configuration.
The TI SoC continues to be supported with the changes incorporated.
The changes address review comments on the previous versions, which
stressed the need to move away from the "ops" pointers used in the
current implementation and to separate the Legacy and HPA driver
implementations.
scripts/checkpatch.pl has been run on the patches with and without
--strict. With the --strict option, 4 checks are generated on 2 patches
(PATCH v5 2/6 and PATCH v5 3/6 of the series), which can be ignored;
no code fixes are required for these checks. All other checks generated
by ./scripts/checkpatch.pl --strict can also be ignored.
./scripts/kernel-doc --none has been run on the changed files.
The changes are tested on TI platforms. The legacy controller changes are
tested on a TI J7200 EVM. HPA changes are planned to be tested on an FPGA
platform available within Cadence.
---
Changes for v5
- Header and code files separated for library functions (common
  functions used by both architectures) and for Legacy and HPA.
- A few new files added as part of refactoring.
- No checks for "is_hpa" as the functions have been separated
  out.
- Review comments from previous patches have been addressed
- Add region 0 for ECAM and region 1 for message.
- Add CIX Sky1 PCIe drivers. Submissions are based on the following v9 patches:
https://patchwork.kernel.org/project/linux-arm-kernel/cover/20250609031627.1605851-1-peter.chen@cixtech.com/
CIX Sky1 base DTS review link showing its review status:
https://lore.kernel.org/all/20250609031627.1605851-9-peter.chen@cixtech.com/
The test log on the Orion O6 board is as follows:
root@cix-localhost:~# lspci
0000:c0:00.0 PCI bridge: Device 1f6c:0001
0000:c1:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
0001:90:00.0 PCI bridge: Device 1f6c:0001
0001:91:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
0002:60:00.0 PCI bridge: Device 1f6c:0001
0002:61:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8852BE PCIe 802.11ax Wireless Network Controller
0003:00:00.0 PCI bridge: Device 1f6c:0001
0003:01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
0004:30:00.0 PCI bridge: Device 1f6c:0001
0004:31:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
root@cix-localhost:~# uname -a
Linux cix-localhost 6.16.0-rc1-00023-gbaa962a95a28 #138 SMP PREEMPT Fri Jun 27 16:43:41 CST 2025 aarch64 GNU/Linux
root@cix-localhost:~# cat /etc/issue
Debian GNU/Linux 12 \n \l
Changes for v4
- Add header file bitfield.h to pcie-cadence.h
- Addressed the following review comments
Merged the TI patch as it
Removed initialization of struct variables to '0'
Changes for v3
- Patch version v3 added to the subject
- Use HPA tag for architecture descriptions
- Remove bug related changes to be submitted later as a separate
patch
- Two patches merged from the last series to ensure readability to
address the review comments
- Fix several description related issues, coding style issues and
some misleading comments
- Remove cpu_addr_fixup() functions
Hans Zhang (5):
dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
PCI: sky1: Add PCIe host support for CIX Sky1
MAINTAINERS: add entry for CIX Sky1 PCIe driver
arm64: dts: cix: Add PCIe Root Complex on sky1
arm64: dts: cix: Enable PCIe on the Orion O6 board
Manikandan K Pillai (9):
dt-bindings: pci: cadence: Extend compatible for new RP configuration
dt-bindings: pci: cadence: Extend compatible for new EP configuration
PCI: cadence: Split PCIe controller header file
PCI: cadence: Add register definitions for HPA(High Perf Architecture)
PCI: cadence: Split PCIe EP support into common and specific functions
PCI: cadence: Split PCIe RP support into common and specific functions
PCI: cadence: Split the common functions for PCIE controller support
PCI: cadence: Add support for High Performance Arch(HPA) controller
PCI: cadence: Add support for PCIe HPA controller platform
.../bindings/pci/cdns,cdns-pcie-ep.yaml | 6 +-
.../bindings/pci/cdns,cdns-pcie-host.yaml | 6 +-
.../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++
MAINTAINERS | 7 +
arch/arm64/boot/dts/cix/sky1-orion-o6.dts | 20 +
arch/arm64/boot/dts/cix/sky1.dtsi | 150 +++++
drivers/pci/controller/cadence/Kconfig | 29 +
drivers/pci/controller/cadence/Makefile | 10 +-
drivers/pci/controller/cadence/pci-sky1.c | 435 +++++++++++++
.../controller/cadence/pcie-cadence-common.c | 134 ++++
.../cadence/pcie-cadence-ep-common.c | 240 +++++++
.../cadence/pcie-cadence-ep-common.h | 36 ++
.../controller/cadence/pcie-cadence-ep-hpa.c | 523 ++++++++++++++++
.../pci/controller/cadence/pcie-cadence-ep.c | 243 +-------
.../cadence/pcie-cadence-host-common.c | 169 +++++
.../cadence/pcie-cadence-host-common.h | 25 +
.../cadence/pcie-cadence-host-hpa.c | 584 ++++++++++++++++++
.../controller/cadence/pcie-cadence-host.c | 156 +----
.../cadence/pcie-cadence-hpa-regs.h | 212 +++++++
.../pci/controller/cadence/pcie-cadence-hpa.c | 199 ++++++
.../cadence/pcie-cadence-lga-regs.h | 228 +++++++
.../cadence/pcie-cadence-plat-hpa.c | 183 ++++++
.../controller/cadence/pcie-cadence-plat.c | 23 +-
drivers/pci/controller/cadence/pcie-cadence.c | 138 +----
drivers/pci/controller/cadence/pcie-cadence.h | 416 ++++++-------
25 files changed, 3524 insertions(+), 781 deletions(-)
create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
create mode 100644 drivers/pci/controller/cadence/pci-sky1.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-common.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-common.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-common.h
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-hpa.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-common.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-common.h
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-hpa-regs.h
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-hpa.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-lga-regs.h
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-plat-hpa.c
base-commit: 5da173292645ab241a9ccc95044a0b56c2efc214
--
2.49.0
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 7:30 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration hans.zhang
` (12 subsequent siblings)
13 siblings, 1 reply; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
From: Manikandan K Pillai <mpillai@cadence.com>
Document the compatible property for HPA (High Performance Architecture)
PCIe controller RP configuration.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
---
.../devicetree/bindings/pci/cdns,cdns-pcie-host.yaml | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
index a8190d9b100f..83a33c4c008f 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
@@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Cadence PCIe host controller
maintainers:
- - Tom Joseph <tjoseph@cadence.com>
+ - Manikandan K Pillai <mpillai@cadence.com>
allOf:
- $ref: cdns-pcie-host.yaml#
properties:
compatible:
- const: cdns,cdns-pcie-host
+ enum:
+ - cdns,cdns-pcie-host
+ - cdns,cdns-pcie-hpa-host
reg:
maxItems: 2
--
2.49.0
* [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
2025-06-30 4:15 ` [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 7:27 ` Krzysztof Kozlowski
2025-06-30 10:28 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 03/14] PCI: cadence: Split PCIe controller header file hans.zhang
` (11 subsequent siblings)
13 siblings, 2 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
From: Manikandan K Pillai <mpillai@cadence.com>
Document the compatible property for HPA (High Performance Architecture)
PCIe controller EP configuration.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
---
.../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
index 8735293962ee..c3f0a620f1c2 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
@@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Cadence PCIe EP Controller
maintainers:
- - Tom Joseph <tjoseph@cadence.com>
+ - Manikandan K Pillai <mpillai@cadence.com>
allOf:
- $ref: cdns-pcie-ep.yaml#
properties:
compatible:
- const: cdns,cdns-pcie-ep
+ enum:
+ - cdns,cdns-pcie-ep
+ - cdns,cdns-pcie-hpa-ep
reg:
maxItems: 2
--
2.49.0
* [PATCH v5 03/14] PCI: cadence: Split PCIe controller header file
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
2025-06-30 4:15 ` [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
2025-06-30 4:15 ` [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 04/14] PCI: cadence: Add register definitions for HPA(High Perf Architecture) hans.zhang
` (10 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Split the Cadence PCIe header file by moving the Legacy (LGA)
controller register definitions to a separate header file.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../cadence/pcie-cadence-lga-regs.h | 228 ++++++++++++++++++
drivers/pci/controller/cadence/pcie-cadence.h | 226 +----------------
2 files changed, 229 insertions(+), 225 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-lga-regs.h
diff --git a/drivers/pci/controller/cadence/pcie-cadence-lga-regs.h b/drivers/pci/controller/cadence/pcie-cadence-lga-regs.h
new file mode 100644
index 000000000000..0e88beb77292
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-lga-regs.h
@@ -0,0 +1,228 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (c) 2017 Cadence
+// Cadence PCIe controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#ifndef _PCIE_CADENCE_LGA_REGS_H
+#define _PCIE_CADENCE_LGA_REGS_H
+
+#include <linux/bitfield.h>
+
+/* Parameters for the waiting for link up routine */
+#define LINK_WAIT_MAX_RETRIES 10
+#define LINK_WAIT_USLEEP_MIN 90000
+#define LINK_WAIT_USLEEP_MAX 100000
+
+/* Local Management Registers */
+#define CDNS_PCIE_LM_BASE 0x00100000
+
+/* Vendor ID Register */
+#define CDNS_PCIE_LM_ID (CDNS_PCIE_LM_BASE + 0x0044)
+#define CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
+#define CDNS_PCIE_LM_ID_VENDOR_SHIFT 0
+#define CDNS_PCIE_LM_ID_VENDOR(vid) \
+ (((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
+#define CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
+#define CDNS_PCIE_LM_ID_SUBSYS_SHIFT 16
+#define CDNS_PCIE_LM_ID_SUBSYS(sub) \
+ (((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
+
+/* Root Port Requester ID Register */
+#define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228)
+#define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0)
+#define CDNS_PCIE_LM_RP_RID_SHIFT 0
+#define CDNS_PCIE_LM_RP_RID_(rid) \
+ (((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
+
+/* Endpoint Bus and Device Number Register */
+#define CDNS_PCIE_LM_EP_ID (CDNS_PCIE_LM_BASE + 0x022C)
+#define CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
+#define CDNS_PCIE_LM_EP_ID_DEV_SHIFT 0
+#define CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
+#define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8
+
+/* Endpoint Function f BAR b Configuration Registers */
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
+ (CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
+ (CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
+#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \
+ (CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008)
+#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \
+ (CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008)
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
+ (GENMASK(4, 0) << ((b) * 8))
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
+ (((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
+ (GENMASK(7, 5) << ((b) * 8))
+#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
+ (((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
+
+/* Endpoint Function Configuration Register */
+#define CDNS_PCIE_LM_EP_FUNC_CFG (CDNS_PCIE_LM_BASE + 0x02C0)
+
+/* Root Complex BAR Configuration Register */
+#define CDNS_PCIE_LM_RC_BAR_CFG (CDNS_PCIE_LM_BASE + 0x0300)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(5, 0)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
+ (((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(8, 6)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
+ (((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(13, 9)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
+ (((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(16, 14)
+#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
+ (((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
+#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
+#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
+#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
+#define CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE BIT(19)
+#define CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS 0
+#define CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS BIT(20)
+#define CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE BIT(31)
+
+/* BAR control values applicable to both Endpoint Function and Root Complex */
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED 0x0
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS 0x1
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS 0x4
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS 0x6
+#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7
+
+#define LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
+ (CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << (((bar) * 8) + 6))
+#define LM_RC_BAR_CFG_APERTURE(bar, aperture) \
+ (((aperture) - 2) << ((bar) * 8))
+
+/* PTM Control Register */
+#define CDNS_PCIE_LM_PTM_CTRL (CDNS_PCIE_LM_BASE + 0x0DA8)
+#define CDNS_PCIE_LM_TPM_CTRL_PTMRSEN BIT(17)
+
+/*
+ * Endpoint Function Registers (PCI configuration space for endpoint functions)
+ */
+#define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
+
+#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90
+#define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xB0
+#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xC0
+#define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200
+
+/* Endpoint PF Registers */
+#define CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(fn) (0x144 + (fn) * 0x1000)
+#define CDNS_PCIE_ARI_CAP_NFN_MASK GENMASK(15, 8)
+
+/* Root Port Registers (PCI configuration space for the root port function) */
+#define CDNS_PCIE_RP_BASE 0x00200000
+#define CDNS_PCIE_RP_CAP_OFFSET 0xC0
+
+/* Address Translation Registers */
+#define CDNS_PCIE_AT_BASE 0x00400000
+
+/* Region r Outbound AXI to PCIe Address Translation Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
+ (CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1F) * 0x0020)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
+ (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
+ (((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
+ (((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
+
+/* Region r Outbound AXI to PCIe Address Translation Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
+ (CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1F) * 0x0020)
+
+/* Region r Outbound PCIe Descriptor Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
+ (CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1F) * 0x0020)
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(3, 0)
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM 0x2
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO 0x6
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 0xA
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 0xB
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG 0xC
+#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG 0xD
+/* Bit 23 MUST be set in RC mode. */
+#define CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
+#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
+#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
+ (((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
+
+/* Region r Outbound PCIe Descriptor Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_DESC1(r) \
+ (CDNS_PCIE_AT_BASE + 0x000C + ((r) & 0x1F) * 0x0020)
+#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK GENMASK(7, 0)
+#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
+ ((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
+
+/* Region r AXI Region Base Address Register 0 */
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
+ (CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1F) * 0x0020)
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
+ (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
+
+/* Region r AXI Region Base Address Register 1 */
+#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
+ (CDNS_PCIE_AT_BASE + 0x001C + ((r) & 0x1F) * 0x0020)
+
+/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
+ (CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
+ (((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
+#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
+ (CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
+
+/* AXI link down register */
+#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
+
+/* LTSSM Capabilities register */
+#define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054)
+#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1)
+#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
+#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
+ (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
+ CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
+
+#define CDNS_PCIE_RP_MAX_IB 0x3
+#define CDNS_PCIE_MAX_OB 32
+
+/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
+#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
+ (CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
+#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
+ (CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
+
+/* Normal/Vendor specific message access: offset inside some outbound region */
+#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK GENMASK(7, 5)
+#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
+ (((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
+#define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
+#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
+ (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
+#define CDNS_PCIE_MSG_NO_DATA BIT(16)
+
+#endif /* _PCIE_CADENCE_LGA_REGS_H */
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index a149845d341a..b87fab47f2e7 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -10,213 +10,7 @@
#include <linux/pci.h>
#include <linux/pci-epf.h>
#include <linux/phy/phy.h>
-
-/* Parameters for the waiting for link up routine */
-#define LINK_WAIT_MAX_RETRIES 10
-#define LINK_WAIT_USLEEP_MIN 90000
-#define LINK_WAIT_USLEEP_MAX 100000
-
-/*
- * Local Management Registers
- */
-#define CDNS_PCIE_LM_BASE 0x00100000
-
-/* Vendor ID Register */
-#define CDNS_PCIE_LM_ID (CDNS_PCIE_LM_BASE + 0x0044)
-#define CDNS_PCIE_LM_ID_VENDOR_MASK GENMASK(15, 0)
-#define CDNS_PCIE_LM_ID_VENDOR_SHIFT 0
-#define CDNS_PCIE_LM_ID_VENDOR(vid) \
- (((vid) << CDNS_PCIE_LM_ID_VENDOR_SHIFT) & CDNS_PCIE_LM_ID_VENDOR_MASK)
-#define CDNS_PCIE_LM_ID_SUBSYS_MASK GENMASK(31, 16)
-#define CDNS_PCIE_LM_ID_SUBSYS_SHIFT 16
-#define CDNS_PCIE_LM_ID_SUBSYS(sub) \
- (((sub) << CDNS_PCIE_LM_ID_SUBSYS_SHIFT) & CDNS_PCIE_LM_ID_SUBSYS_MASK)
-
-/* Root Port Requester ID Register */
-#define CDNS_PCIE_LM_RP_RID (CDNS_PCIE_LM_BASE + 0x0228)
-#define CDNS_PCIE_LM_RP_RID_MASK GENMASK(15, 0)
-#define CDNS_PCIE_LM_RP_RID_SHIFT 0
-#define CDNS_PCIE_LM_RP_RID_(rid) \
- (((rid) << CDNS_PCIE_LM_RP_RID_SHIFT) & CDNS_PCIE_LM_RP_RID_MASK)
-
-/* Endpoint Bus and Device Number Register */
-#define CDNS_PCIE_LM_EP_ID (CDNS_PCIE_LM_BASE + 0x022c)
-#define CDNS_PCIE_LM_EP_ID_DEV_MASK GENMASK(4, 0)
-#define CDNS_PCIE_LM_EP_ID_DEV_SHIFT 0
-#define CDNS_PCIE_LM_EP_ID_BUS_MASK GENMASK(15, 8)
-#define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8
-
-/* Endpoint Function f BAR b Configuration Registers */
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \
- (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn))
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
- (CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
- (CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
-#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \
- (((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn))
-#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \
- (CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008)
-#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \
- (CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008)
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
- (GENMASK(4, 0) << ((b) * 8))
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
- (((a) << ((b) * 8)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
- (GENMASK(7, 5) << ((b) * 8))
-#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
- (((c) << ((b) * 8 + 5)) & CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
-
-/* Endpoint Function Configuration Register */
-#define CDNS_PCIE_LM_EP_FUNC_CFG (CDNS_PCIE_LM_BASE + 0x02c0)
-
-/* Root Complex BAR Configuration Register */
-#define CDNS_PCIE_LM_RC_BAR_CFG (CDNS_PCIE_LM_BASE + 0x0300)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(5, 0)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
- (((a) << 0) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_APERTURE_MASK)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(8, 6)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(c) \
- (((c) << 6) & CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL_MASK)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(13, 9)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
- (((a) << 9) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_APERTURE_MASK)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(16, 14)
-#define CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(c) \
- (((c) << 14) & CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL_MASK)
-#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(17)
-#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_32BITS 0
-#define CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(18)
-#define CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE BIT(19)
-#define CDNS_PCIE_LM_RC_BAR_CFG_IO_16BITS 0
-#define CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS BIT(20)
-#define CDNS_PCIE_LM_RC_BAR_CFG_CHECK_ENABLE BIT(31)
-
-/* BAR control values applicable to both Endpoint Function and Root Complex */
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED 0x0
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS 0x1
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS 0x4
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS 0x6
-#define CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7
-
-#define LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_32BITS << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_MEM_64BITS << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
- (CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << (((bar) * 8) + 6))
-#define LM_RC_BAR_CFG_APERTURE(bar, aperture) \
- (((aperture) - 2) << ((bar) * 8))
-
-/* PTM Control Register */
-#define CDNS_PCIE_LM_PTM_CTRL (CDNS_PCIE_LM_BASE + 0x0da8)
-#define CDNS_PCIE_LM_TPM_CTRL_PTMRSEN BIT(17)
-
-/*
- * Endpoint Function Registers (PCI configuration space for endpoint functions)
- */
-#define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
-
-#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90
-#define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0
-#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xc0
-#define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200
-
-/*
- * Endpoint PF Registers
- */
-#define CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(fn) (0x144 + (fn) * 0x1000)
-#define CDNS_PCIE_ARI_CAP_NFN_MASK GENMASK(15, 8)
-
-/*
- * Root Port Registers (PCI configuration space for the root port function)
- */
-#define CDNS_PCIE_RP_BASE 0x00200000
-#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
-
-/*
- * Address Translation Registers
- */
-#define CDNS_PCIE_AT_BASE 0x00400000
-
-/* Region r Outbound AXI to PCIe Address Translation Register 0 */
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
- (CDNS_PCIE_AT_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
- (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_NBITS_MASK)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
- (((devfn) << 12) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
- (((bus) << 20) & CDNS_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
-
-/* Region r Outbound AXI to PCIe Address Translation Register 1 */
-#define CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
- (CDNS_PCIE_AT_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
-
-/* Region r Outbound PCIe Descriptor Register 0 */
-#define CDNS_PCIE_AT_OB_REGION_DESC0(r) \
- (CDNS_PCIE_AT_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(3, 0)
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_MEM 0x2
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_IO 0x6
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 0xa
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 0xb
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG 0xc
-#define CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_VENDOR_MSG 0xd
-/* Bit 23 MUST be set in RC mode. */
-#define CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
-#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
-#define CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
- (((devfn) << 24) & CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
-
-/* Region r Outbound PCIe Descriptor Register 1 */
-#define CDNS_PCIE_AT_OB_REGION_DESC1(r) \
- (CDNS_PCIE_AT_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
-#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK GENMASK(7, 0)
-#define CDNS_PCIE_AT_OB_REGION_DESC1_BUS(bus) \
- ((bus) & CDNS_PCIE_AT_OB_REGION_DESC1_BUS_MASK)
-
-/* Region r AXI Region Base Address Register 0 */
-#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
- (CDNS_PCIE_AT_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
-#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
-#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
- (((nbits) - 1) & CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS_MASK)
-
-/* Region r AXI Region Base Address Register 1 */
-#define CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
- (CDNS_PCIE_AT_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
-
-/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
-#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar) \
- (CDNS_PCIE_AT_BASE + 0x0800 + (bar) * 0x0008)
-#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
-#define CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
- (((nbits) - 1) & CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS_MASK)
-#define CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar) \
- (CDNS_PCIE_AT_BASE + 0x0804 + (bar) * 0x0008)
-
-/* AXI link down register */
-#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
-
-/* LTSSM Capabilities register */
-#define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054)
-#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1)
-#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
-#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
- (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
- CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
+#include "pcie-cadence-lga-regs.h"
enum cdns_pcie_rp_bar {
RP_BAR_UNDEFINED = -1,
@@ -225,29 +19,11 @@ enum cdns_pcie_rp_bar {
RP_NO_BAR
};
-#define CDNS_PCIE_RP_MAX_IB 0x3
-#define CDNS_PCIE_MAX_OB 32
-
struct cdns_pcie_rp_ib_bar {
u64 size;
bool free;
};
-/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register */
-#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
- (CDNS_PCIE_AT_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
-#define CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
- (CDNS_PCIE_AT_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
-
-/* Normal/Vendor specific message access: offset inside some outbound region */
-#define CDNS_PCIE_NORMAL_MSG_ROUTING_MASK GENMASK(7, 5)
-#define CDNS_PCIE_NORMAL_MSG_ROUTING(route) \
- (((route) << 5) & CDNS_PCIE_NORMAL_MSG_ROUTING_MASK)
-#define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
-#define CDNS_PCIE_NORMAL_MSG_CODE(code) \
- (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
-#define CDNS_PCIE_MSG_DATA BIT(16)
-
struct cdns_pcie;
enum cdns_pcie_msg_routing {
--
2.49.0
* [PATCH v5 04/14] PCI: cadence: Add register definitions for HPA(High Perf Architecture)
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (2 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 03/14] PCI: cadence: Split PCIe controller header file hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 05/14] PCI: cadence: Split PCIe EP support into common and specific functions hans.zhang
` (9 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add the register offsets and register definitions for the HPA (High
Performance Architecture) PCIe controllers from Cadence.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../cadence/pcie-cadence-hpa-regs.h | 212 ++++++++++++++++++
.../controller/cadence/pcie-cadence-plat.c | 4 -
drivers/pci/controller/cadence/pcie-cadence.h | 121 ++++++++--
3 files changed, 320 insertions(+), 17 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-hpa-regs.h
diff --git a/drivers/pci/controller/cadence/pcie-cadence-hpa-regs.h b/drivers/pci/controller/cadence/pcie-cadence-hpa-regs.h
new file mode 100644
index 000000000000..016144e2df81
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-hpa-regs.h
@@ -0,0 +1,212 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (c) 2017 Cadence
+// Cadence PCIe controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#ifndef _PCIE_CADENCE_HPA_REGS_H
+#define _PCIE_CADENCE_HPA_REGS_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/pci-epf.h>
+#include <linux/phy/phy.h>
+#include <linux/bitfield.h>
+
+/*
+ * HPA (High Performance Architecture) PCIe controller registers
+ */
+#define CDNS_PCIE_HPA_IP_REG_BANK 0x01000000
+#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x01003C00
+#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0x01020000
+/*
+ * Address Translation Registers (HPA)
+ */
+#define CDNS_PCIE_HPA_AXI_SLAVE 0x03000000
+#define CDNS_PCIE_HPA_AXI_MASTER 0x03002000
+/*
+ * Root port register base address
+ */
+#define CDNS_PCIE_HPA_RP_BASE 0x0
+
+#define CDNS_PCIE_HPA_LM_ID 0x1420
+
+/*
+ * Endpoint Function BARs (HPA) Configuration Registers
+ */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + 0x04)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \
+ CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + 0x08)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + 0x0C)
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \
+ (GENMASK(9, 4) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
+ (((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b)))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \
+ (GENMASK(3, 0) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
+ (((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)))
+
+/*
+ * Endpoint Function Configuration Register
+ */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_CFG 0x02C0
+
+/*
+ * Root Complex BAR Configuration Register
+ */
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG 0x14
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(9, 4)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(3, 0)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(c) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(19, 14)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(13, 10)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(c) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK, c)
+
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(20)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(21)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE BIT(22)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS BIT(23)
+
+/* BAR control values applicable to both Endpoint Function and Root Complex */
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED 0x0
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS 0x3
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS 0x1
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x9
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS 0x5
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0xD
+
+#define HPA_LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture) \
+ (((aperture) - 7) << ((bar) * 10))
+
+#define CDNS_PCIE_HPA_LM_PTM_CTRL 0x0520
+#define CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN BIT(17)
+
+/*
+ * Root Port Registers PCI config space(HPA) for root port function
+ */
+#define CDNS_PCIE_HPA_RP_CAP_OFFSET 0xC0
+
+/*
+ * Region r Outbound AXI to PCIe Address Translation Register 0
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) (0x1010 + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, ((nbits) - 1))
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(23, 16)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(31, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus)
+
+/*
+ * Region r Outbound AXI to PCIe Address Translation Register 1
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r) (0x1014 + ((r) & 0x1F) * 0x0080)
+
+/*
+ * Region r Outbound PCIe Descriptor Register 0
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r) (0x1008 + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(28, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x2)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x4)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x5)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10)
+
+/*
+ * Region r Outbound PCIe Descriptor Register 1
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r) (0x100C + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK GENMASK(31, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(bus) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK GENMASK(23, 16)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(devfn) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn)
+
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) (0x1018 + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS BIT(26)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN BIT(25)
+
+/*
+ * Region r AXI Region Base Address Register 0
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r) (0x1000 + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK, ((nbits) - 1))
+
+/*
+ * Region r AXI Region Base Address Register 1
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r) (0x1004 + ((r) & 0x1F) * 0x0080)
+
+/*
+ * Root Port BAR Inbound PCIe to AXI Address Translation Register
+ */
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar) ((bar) * 0x0008)
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK, ((nbits) - 1))
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar) (0x04 + ((bar) * 0x0008))
+
+/*
+ * AXI link down register
+ */
+#define CDNS_PCIE_HPA_AT_LINKDOWN 0x04
+
+/*
+ * Physical Layer Configuration Register 0
+ * This register contains the parameters required for functional setup
+ * of Physical Layer.
+ */
+#define CDNS_PCIE_HPA_PHY_LAYER_CFG0 0x0400
+#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK GENMASK(26, 24)
+#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay) \
+ FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay)
+#define CDNS_PCIE_HPA_LINK_TRNG_EN_MASK BIT(27)
+
+#define CDNS_PCIE_HPA_PHY_DBG_STS_REG0 0x0420
+
+#define CDNS_PCIE_HPA_RP_MAX_IB 0x3
+#define CDNS_PCIE_HPA_MAX_OB 15
+
+/*
+ * Endpoint Function BAR Inbound PCIe to AXI Address Translation Register (HPA)
+ */
+#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) (((fn) * 0x0040) + ((bar) * 0x0008))
+#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) (0x4 + ((fn) * 0x0040) + ((bar) * 0x0008))
+
+#endif /* _PCIE_CADENCE_HPA_REGS_H */
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
index 0456845dabb9..e09f23427313 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
@@ -22,10 +22,6 @@ struct cdns_plat_pcie {
struct cdns_pcie *pcie;
};
-struct cdns_plat_pcie_of_data {
- bool is_rc;
-};
-
static const struct of_device_id cdns_plat_pcie_of_match[];
static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index b87fab47f2e7..5c0ea49551c8 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -10,7 +10,9 @@
#include <linux/pci.h>
#include <linux/pci-epf.h>
#include <linux/phy/phy.h>
+#include <linux/bitfield.h>
#include "pcie-cadence-lga-regs.h"
+#include "pcie-cadence-hpa-regs.h"
enum cdns_pcie_rp_bar {
RP_BAR_UNDEFINED = -1,
@@ -25,6 +27,7 @@ struct cdns_pcie_rp_ib_bar {
};
struct cdns_pcie;
+struct cdns_pcie_rc;
enum cdns_pcie_msg_routing {
/* Route to Root Complex */
@@ -46,6 +49,19 @@ enum cdns_pcie_msg_routing {
MSG_ROUTING_GATHER,
};
+enum cdns_pcie_reg_bank {
+ REG_BANK_RP,
+ REG_BANK_IP_REG,
+ REG_BANK_IP_CFG_CTRL_REG,
+ REG_BANK_AXI_MASTER_COMMON,
+ REG_BANK_AXI_MASTER,
+ REG_BANK_AXI_SLAVE,
+ REG_BANK_AXI_HLS,
+ REG_BANK_AXI_RAS,
+ REG_BANK_AXI_DTI,
+ REG_BANKS_MAX,
+};
+
struct cdns_pcie_ops {
int (*start_link)(struct cdns_pcie *pcie);
void (*stop_link)(struct cdns_pcie *pcie);
@@ -53,6 +69,30 @@ struct cdns_pcie_ops {
u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr);
};
+/**
+ * struct cdns_plat_pcie_of_data - Register bank offset for a platform
+ * @is_rc: controller is an RC
+ * @ip_reg_bank_offset: ip register bank start offset
+ * @ip_cfg_ctrl_reg_offset: ip config control register start offset
+ * @axi_mstr_common_offset: AXI master common register start offset
+ * @axi_slave_offset: AXI slave start offset
+ * @axi_master_offset: AXI master start offset
+ * @axi_hls_offset: AXI HLS offset start
+ * @axi_ras_offset: AXI RAS offset
+ * @axi_dti_offset: AXI DTI offset
+ */
+struct cdns_plat_pcie_of_data {
+ u32 is_rc:1;
+ u32 ip_reg_bank_offset;
+ u32 ip_cfg_ctrl_reg_offset;
+ u32 axi_mstr_common_offset;
+ u32 axi_slave_offset;
+ u32 axi_master_offset;
+ u32 axi_hls_offset;
+ u32 axi_ras_offset;
+ u32 axi_dti_offset;
+};
+
/**
* struct cdns_pcie - private data for Cadence PCIe controller drivers
* @reg_base: IO mapped register base
@@ -64,16 +104,18 @@ struct cdns_pcie_ops {
* @link: list of pointers to corresponding device link representations
* @ops: Platform-specific ops to control various inputs from Cadence PCIe
* wrapper
+ * @cdns_pcie_reg_offsets: Register bank offsets for different SoCs
*/
struct cdns_pcie {
- void __iomem *reg_base;
- struct resource *mem_res;
- struct device *dev;
- bool is_rc;
- int phy_count;
- struct phy **phy;
- struct device_link **link;
- const struct cdns_pcie_ops *ops;
+ void __iomem *reg_base;
+ struct resource *mem_res;
+ struct device *dev;
+ bool is_rc;
+ int phy_count;
+ struct phy **phy;
+ struct device_link **link;
+ const struct cdns_pcie_ops *ops;
+ const struct cdns_plat_pcie_of_data *cdns_pcie_reg_offsets;
};
/**
@@ -151,6 +193,40 @@ struct cdns_pcie_ep {
unsigned int quirk_disable_flr:1;
};
+static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum cdns_pcie_reg_bank bank)
+{
+ u32 offset = 0x0;
+
+ switch (bank) {
+ case REG_BANK_IP_REG:
+ offset = pcie->cdns_pcie_reg_offsets->ip_reg_bank_offset;
+ break;
+ case REG_BANK_IP_CFG_CTRL_REG:
+ offset = pcie->cdns_pcie_reg_offsets->ip_cfg_ctrl_reg_offset;
+ break;
+ case REG_BANK_AXI_MASTER_COMMON:
+ offset = pcie->cdns_pcie_reg_offsets->axi_mstr_common_offset;
+ break;
+ case REG_BANK_AXI_MASTER:
+ offset = pcie->cdns_pcie_reg_offsets->axi_master_offset;
+ break;
+ case REG_BANK_AXI_SLAVE:
+ offset = pcie->cdns_pcie_reg_offsets->axi_slave_offset;
+ break;
+ case REG_BANK_AXI_HLS:
+ offset = pcie->cdns_pcie_reg_offsets->axi_hls_offset;
+ break;
+ case REG_BANK_AXI_RAS:
+ offset = pcie->cdns_pcie_reg_offsets->axi_ras_offset;
+ break;
+ case REG_BANK_AXI_DTI:
+ offset = pcie->cdns_pcie_reg_offsets->axi_dti_offset;
+ break;
+ default:
+ break;
+ }
+ return offset;
+}
/* Register access */
static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
@@ -163,6 +239,27 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
return readl(pcie->reg_base + reg);
}
+static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie,
+ enum cdns_pcie_reg_bank bank,
+ u32 reg,
+ u32 value)
+{
+ u32 offset = cdns_reg_bank_to_off(pcie, bank);
+
+ reg += offset;
+ writel(value, pcie->reg_base + reg);
+}
+
+static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie,
+ enum cdns_pcie_reg_bank bank,
+ u32 reg)
+{
+ u32 offset = cdns_reg_bank_to_off(pcie, bank);
+
+ reg += offset;
+ return readl(pcie->reg_base + reg);
+}
+
static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size)
{
void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4);
@@ -333,19 +430,17 @@ static inline void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
#endif
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
-
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size);
-
void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
u8 busnr, u8 fn,
u32 r, u64 cpu_addr);
-
void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
-int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
-int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+
extern const struct dev_pm_ops cdns_pcie_pm_ops;
#endif /* _PCIE_CADENCE_H */
--
2.49.0
* [PATCH v5 05/14] PCI: cadence: Split PCIe EP support into common and specific functions
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (3 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 04/14] PCI: cadence: Add register definitions for HPA(High Perf Architecture) hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 06/14] PCI: cadence: Split PCIe RP " hans.zhang
` (8 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Split the Cadence PCIe controller EP functionality into common
library functions and functions specific to the legacy PCIe EP controller.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/Kconfig | 4 +
drivers/pci/controller/cadence/Makefile | 1 +
.../cadence/pcie-cadence-ep-common.c | 240 +++++++++++++++++
.../cadence/pcie-cadence-ep-common.h | 36 +++
.../pci/controller/cadence/pcie-cadence-ep.c | 243 +-----------------
5 files changed, 287 insertions(+), 237 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-common.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-common.h
diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index 666e16b6367f..417f981ac8ca 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -12,11 +12,15 @@ config PCIE_CADENCE_HOST
select IRQ_DOMAIN
select PCIE_CADENCE
+config PCIE_CADENCE_COMMON
+ bool
+
config PCIE_CADENCE_EP
tristate
depends on OF
depends on PCI_ENDPOINT
select PCIE_CADENCE
+ select PCIE_CADENCE_COMMON
config PCIE_CADENCE_PLAT
bool
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 9bac5fb2f13d..918f8c924487 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
+obj-$(CONFIG_PCIE_CADENCE_COMMON) += pcie-cadence-ep-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep-common.c b/drivers/pci/controller/cadence/pcie-cadence-ep-common.c
new file mode 100644
index 000000000000..cf5be3b3c981
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep-common.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe endpoint controller driver common
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/sizes.h>
+
+#include "pcie-cadence.h"
+#include "pcie-cadence-ep-common.h"
+
+u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn)
+{
+ u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
+ u32 first_vf_offset, stride;
+
+ if (vfn == 0)
+ return fn;
+
+ first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET);
+ stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE);
+ fn = fn + first_vf_offset + ((vfn - 1) * stride);
+
+ return fn;
+}
+
+int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_header *hdr)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 reg;
+
+ if (vfn > 1) {
+ dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n");
+ return -EINVAL;
+ } else if (vfn == 1) {
+ reg = cap + PCI_SRIOV_VF_DID;
+ cdns_pcie_ep_fn_writew(pcie, fn, reg, hdr->deviceid);
+ return 0;
+ }
+
+ cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
+ cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
+ cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
+ cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
+ hdr->subclass_code | hdr->baseclass_code << 8);
+ cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
+ hdr->cache_line_size);
+ cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
+ cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
+
+ /*
+ * Vendor ID can only be modified from function 0, all other functions
+ * use the same vendor ID as function 0.
+ */
+ if (fn == 0) {
+ /* Update the vendor IDs. */
+ u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
+ CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
+
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
+ }
+
+ return 0;
+}
+
+int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 mmc)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ u16 flags;
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ /*
+ * Set the Multiple Message Capable bitfield into the Message Control
+ * register.
+ */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
+ flags |= PCI_MSI_FLAGS_64BIT;
+ flags &= ~PCI_MSI_FLAGS_MASKBIT;
+ cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
+
+ return 0;
+}
+
+int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ u16 flags, mme;
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ /* Validate that the MSI feature is actually enabled. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ if (!(flags & PCI_MSI_FLAGS_ENABLE))
+ return -EINVAL;
+
+ /*
+ * Get the Multiple Message Enable bitfield from the Message Control
+ * register.
+ */
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
+
+ return mme;
+}
+
+int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
+ u32 val, reg;
+
+ func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no);
+
+ reg = cap + PCI_MSIX_FLAGS;
+ val = cdns_pcie_ep_fn_readw(pcie, func_no, reg);
+ if (!(val & PCI_MSIX_FLAGS_ENABLE))
+ return -EINVAL;
+
+ val &= PCI_MSIX_FLAGS_QSIZE;
+
+ return val;
+}
+
+int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
+ u16 interrupts, enum pci_barno bir,
+ u32 offset)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
+ u32 val, reg;
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ reg = cap + PCI_MSIX_FLAGS;
+ val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
+ val &= ~PCI_MSIX_FLAGS_QSIZE;
+ val |= interrupts;
+ cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
+
+ /* Set MSI-X BAR and offset */
+ reg = cap + PCI_MSIX_TABLE;
+ val = offset | bir;
+ cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
+
+ /* Set PBA BAR and offset. BAR must match MSI-X BAR */
+ reg = cap + PCI_MSIX_PBA;
+ val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+ cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
+
+ return 0;
+}
+
+int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
+ phys_addr_t addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ struct cdns_pcie *pcie = &ep->pcie;
+ u64 pci_addr, pci_addr_mask = 0xff;
+ u16 flags, mme, data, data_mask;
+ u8 msi_count;
+ int ret;
+ int i;
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ /* Check whether the MSI feature has been enabled by the PCI host. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ if (!(flags & PCI_MSI_FLAGS_ENABLE))
+ return -EINVAL;
+
+ /* Get the number of enabled MSIs */
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
+ msi_count = 1 << mme;
+ if (!interrupt_num || interrupt_num > msi_count)
+ return -EINVAL;
+
+ /* Compute the data value to be written. */
+ data_mask = msi_count - 1;
+ data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+ data = data & ~data_mask;
+
+ /* Get the PCI address where to write the data into. */
+ pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+ pci_addr <<= 32;
+ pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+ pci_addr &= GENMASK_ULL(63, 2);
+
+ for (i = 0; i < interrupt_num; i++) {
+ ret = epc->ops->map_addr(epc, fn, vfn, addr,
+ pci_addr & ~pci_addr_mask,
+ entry_size);
+ if (ret)
+ return ret;
+ addr = addr + entry_size;
+ }
+
+ *msi_data = data;
+ *msi_addr_offset = pci_addr & pci_addr_mask;
+
+ return 0;
+}
+
+static const struct pci_epc_features cdns_pcie_epc_vf_features = {
+ .linkup_notifier = false,
+ .msi_capable = true,
+ .msix_capable = true,
+ .align = 65536,
+};
+
+static const struct pci_epc_features cdns_pcie_epc_features = {
+ .linkup_notifier = false,
+ .msi_capable = true,
+ .msix_capable = true,
+ .align = 256,
+};
+
+const struct pci_epc_features*
+cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
+{
+ if (!vfunc_no)
+ return &cdns_pcie_epc_features;
+
+ return &cdns_pcie_epc_vf_features;
+}
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep-common.h b/drivers/pci/controller/cadence/pcie-cadence-ep-common.h
new file mode 100644
index 000000000000..a91084bdedd5
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep-common.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (c) 2017 Cadence
+// Cadence PCIe Endpoint controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#ifndef _PCIE_CADENCE_EP_COMMON_H_
+#define _PCIE_CADENCE_EP_COMMON_H_
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/pci-epf.h>
+#include <linux/pci-epc.h>
+#include "../../pci.h"
+
+#define CDNS_PCIE_EP_MIN_APERTURE 128 /* 128 bytes */
+#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE 0x1
+#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY 0x3
+
+u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn);
+int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_header *hdr);
+int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 mmc);
+int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn);
+int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
+int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
+ u16 interrupts, enum pci_barno bir,
+ u32 offset);
+int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
+ phys_addr_t addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset);
+const struct pci_epc_features *cdns_pcie_ep_get_features(struct pci_epc *epc,
+ u8 func_no,
+ u8 vfunc_no);
+
+#endif /* _PCIE_CADENCE_EP_COMMON_H_ */
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index 8ab6cf70c18e..14c9ec45cc39 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -13,68 +13,7 @@
#include <linux/sizes.h>
#include "pcie-cadence.h"
-#include "../../pci.h"
-
-#define CDNS_PCIE_EP_MIN_APERTURE 128 /* 128 bytes */
-#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE 0x1
-#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY 0x3
-
-static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn)
-{
- u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
- u32 first_vf_offset, stride;
-
- if (vfn == 0)
- return fn;
-
- first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET);
- stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE);
- fn = fn + first_vf_offset + ((vfn - 1) * stride);
-
- return fn;
-}
-
-static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
- struct pci_epf_header *hdr)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
- struct cdns_pcie *pcie = &ep->pcie;
- u32 reg;
-
- if (vfn > 1) {
- dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n");
- return -EINVAL;
- } else if (vfn == 1) {
- reg = cap + PCI_SRIOV_VF_DID;
- cdns_pcie_ep_fn_writew(pcie, fn, reg, hdr->deviceid);
- return 0;
- }
-
- cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
- cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
- cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CLASS_PROG, hdr->progif_code);
- cdns_pcie_ep_fn_writew(pcie, fn, PCI_CLASS_DEVICE,
- hdr->subclass_code | hdr->baseclass_code << 8);
- cdns_pcie_ep_fn_writeb(pcie, fn, PCI_CACHE_LINE_SIZE,
- hdr->cache_line_size);
- cdns_pcie_ep_fn_writew(pcie, fn, PCI_SUBSYSTEM_ID, hdr->subsys_id);
- cdns_pcie_ep_fn_writeb(pcie, fn, PCI_INTERRUPT_PIN, hdr->interrupt_pin);
-
- /*
- * Vendor ID can only be modified from function 0, all other functions
- * use the same vendor ID as function 0.
- */
- if (fn == 0) {
- /* Update the vendor IDs. */
- u32 id = CDNS_PCIE_LM_ID_VENDOR(hdr->vendorid) |
- CDNS_PCIE_LM_ID_SUBSYS(hdr->subsys_vendor_id);
-
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, id);
- }
-
- return 0;
-}
+#include "pcie-cadence-ep-common.h"
static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
@@ -222,100 +161,6 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
clear_bit(r, &ep->ob_region_map);
}
-static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 nr_irqs)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- struct cdns_pcie *pcie = &ep->pcie;
- u8 mmc = order_base_2(nr_irqs);
- u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
- u16 flags;
-
- fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
-
- /*
- * Set the Multiple Message Capable bitfield into the Message Control
- * register.
- */
- flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
- flags = (flags & ~PCI_MSI_FLAGS_QMASK) | (mmc << 1);
- flags |= PCI_MSI_FLAGS_64BIT;
- flags &= ~PCI_MSI_FLAGS_MASKBIT;
- cdns_pcie_ep_fn_writew(pcie, fn, cap + PCI_MSI_FLAGS, flags);
-
- return 0;
-}
-
-static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- struct cdns_pcie *pcie = &ep->pcie;
- u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
- u16 flags, mme;
-
- fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
-
- /* Validate that the MSI feature is actually enabled. */
- flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
- if (!(flags & PCI_MSI_FLAGS_ENABLE))
- return -EINVAL;
-
- /*
- * Get the Multiple Message Enable bitfield from the Message Control
- * register.
- */
- mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
-
- return 1 << mme;
-}
-
-static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- struct cdns_pcie *pcie = &ep->pcie;
- u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
- u32 val, reg;
-
- func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no);
-
- reg = cap + PCI_MSIX_FLAGS;
- val = cdns_pcie_ep_fn_readw(pcie, func_no, reg);
- if (!(val & PCI_MSIX_FLAGS_ENABLE))
- return -EINVAL;
-
- val &= PCI_MSIX_FLAGS_QSIZE;
-
- return val + 1;
-}
-
-static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
- u16 nr_irqs, enum pci_barno bir, u32 offset)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- struct cdns_pcie *pcie = &ep->pcie;
- u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
- u32 val, reg;
-
- fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
-
- reg = cap + PCI_MSIX_FLAGS;
- val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
- val &= ~PCI_MSIX_FLAGS_QSIZE;
- val |= nr_irqs - 1; /* encoded as N-1 */
- cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
-
- /* Set MSI-X BAR and offset */
- reg = cap + PCI_MSIX_TABLE;
- val = offset | bir;
- cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
-
- /* Set PBA BAR and offset. BAR must match MSI-X BAR */
- reg = cap + PCI_MSIX_PBA;
- val = (offset + (nr_irqs * PCI_MSIX_ENTRY_SIZE)) | bir;
- cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
-
- return 0;
-}
-
static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
bool is_asserted)
{
@@ -426,59 +271,6 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
return 0;
}
-static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
- phys_addr_t addr, u8 interrupt_num,
- u32 entry_size, u32 *msi_data,
- u32 *msi_addr_offset)
-{
- struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
- u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
- struct cdns_pcie *pcie = &ep->pcie;
- u64 pci_addr, pci_addr_mask = 0xff;
- u16 flags, mme, data, data_mask;
- u8 msi_count;
- int ret;
- int i;
-
- fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
-
- /* Check whether the MSI feature has been enabled by the PCI host. */
- flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
- if (!(flags & PCI_MSI_FLAGS_ENABLE))
- return -EINVAL;
-
- /* Get the number of enabled MSIs */
- mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
- msi_count = 1 << mme;
- if (!interrupt_num || interrupt_num > msi_count)
- return -EINVAL;
-
- /* Compute the data value to be written. */
- data_mask = msi_count - 1;
- data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
- data = data & ~data_mask;
-
- /* Get the PCI address where to write the data into. */
- pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
- pci_addr <<= 32;
- pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
- pci_addr &= GENMASK_ULL(63, 2);
-
- for (i = 0; i < interrupt_num; i++) {
- ret = cdns_pcie_ep_map_addr(epc, fn, vfn, addr,
- pci_addr & ~pci_addr_mask,
- entry_size);
- if (ret)
- return ret;
- addr = addr + entry_size;
- }
-
- *msi_data = data;
- *msi_addr_offset = pci_addr & pci_addr_mask;
-
- return 0;
-}
-
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
u16 interrupt_num)
{
@@ -589,12 +381,12 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
continue;
value = cdns_pcie_ep_fn_readl(pcie, epf,
- CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
- PCI_EXP_DEVCAP);
+ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
+ PCI_EXP_DEVCAP);
value &= ~PCI_EXP_DEVCAP_FLR;
cdns_pcie_ep_fn_writel(pcie, epf,
- CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
- PCI_EXP_DEVCAP, value);
+ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
+ PCI_EXP_DEVCAP, value);
}
}
@@ -607,29 +399,6 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
return 0;
}
-static const struct pci_epc_features cdns_pcie_epc_vf_features = {
- .linkup_notifier = false,
- .msi_capable = true,
- .msix_capable = true,
- .align = 65536,
-};
-
-static const struct pci_epc_features cdns_pcie_epc_features = {
- .linkup_notifier = false,
- .msi_capable = true,
- .msix_capable = true,
- .align = 256,
-};
-
-static const struct pci_epc_features*
-cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
-{
- if (!vfunc_no)
- return &cdns_pcie_epc_features;
-
- return &cdns_pcie_epc_vf_features;
-}
-
static const struct pci_epc_ops cdns_pcie_epc_ops = {
.write_header = cdns_pcie_ep_write_header,
.set_bar = cdns_pcie_ep_set_bar,
@@ -759,7 +528,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
return 0;
- free_epc_mem:
+free_epc_mem:
pci_epc_mem_exit(epc);
return ret;
--
2.49.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v5 06/14] PCI: cadence: Split PCIe RP support into common and specific functions
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (4 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 05/14] PCI: cadence: Split PCIe EP support into common and specific functions hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 07/14] PCI: cadence: Split the common functions for PCIE controller support hans.zhang
` (7 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Split the Cadence PCIe controller RP functionality into common
functions and legacy PCIe RP controller-specific functions.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/Kconfig | 12 +-
drivers/pci/controller/cadence/Makefile | 3 +-
.../cadence/pcie-cadence-ep-common.h | 8 +-
.../cadence/pcie-cadence-host-common.c | 169 ++++++++++++++++++
.../cadence/pcie-cadence-host-common.h | 25 +++
.../controller/cadence/pcie-cadence-host.c | 156 +---------------
6 files changed, 209 insertions(+), 164 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-common.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-common.h
diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index 417f981ac8ca..a1caf154888d 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -6,21 +6,25 @@ menu "Cadence-based PCIe controllers"
config PCIE_CADENCE
tristate
+config PCIE_CADENCE_EP_COMMON
+ bool
+
+config PCIE_CADENCE_HOST_COMMON
+ bool
+
config PCIE_CADENCE_HOST
tristate
depends on OF
select IRQ_DOMAIN
select PCIE_CADENCE
-
-config PCIE_CADENCE_COMMON
- bool
+ select PCIE_CADENCE_HOST_COMMON
config PCIE_CADENCE_EP
tristate
depends on OF
depends on PCI_ENDPOINT
select PCIE_CADENCE
- select PCIE_CADENCE_COMMON
+ select PCIE_CADENCE_EP_COMMON
config PCIE_CADENCE_PLAT
bool
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 918f8c924487..0440ac6aba5d 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
-obj-$(CONFIG_PCIE_CADENCE_COMMON) += pcie-cadence-ep-common.o
+obj-$(CONFIG_PCIE_CADENCE_EP_COMMON) += pcie-cadence-ep-common.o
+obj-$(CONFIG_PCIE_CADENCE_HOST_COMMON) += pcie-cadence-host-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep-common.h b/drivers/pci/controller/cadence/pcie-cadence-ep-common.h
index a91084bdedd5..9cfd0cfa7459 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep-common.h
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep-common.h
@@ -1,10 +1,10 @@
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (c) 2017 Cadence
-// Cadence PCIe Endpoint controller driver.
+// Cadence PCIe Endpoint controller driver
// Author: Manikandan K Pillai <mpillai@cadence.com>
-#ifndef _PCIE_CADENCE_EP_COMMON_H_
-#define _PCIE_CADENCE_EP_COMMON_H_
+#ifndef _PCIE_CADENCE_EP_COMMON_H
+#define _PCIE_CADENCE_EP_COMMON_H
#include <linux/kernel.h>
#include <linux/pci.h>
@@ -33,4 +33,4 @@ const struct pci_epc_features *cdns_pcie_ep_get_features(struct pci_epc *epc,
u8 func_no,
u8 vfunc_no);
-#endif /* _PCIE_CADENCE_EP_COMMON_H_ */
+#endif /* _PCIE_CADENCE_EP_COMMON_H */
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
new file mode 100644
index 000000000000..21264247951e
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe host controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/list_sort.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+
+#include "pcie-cadence.h"
+#include "pcie-cadence-host-common.h"
+
+#define LINK_RETRAIN_TIMEOUT HZ
+
+u64 bar_max_size[] = {
+ [RP_BAR0] = _ULL(128 * SZ_2G),
+ [RP_BAR1] = SZ_2G,
+ [RP_NO_BAR] = _BITULL(63),
+};
+
+int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
+{
+ u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+ unsigned long end_jiffies;
+ u16 lnk_stat;
+
+ /* Wait for link training to complete. Exit after timeout. */
+ end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+ do {
+ lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+ if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+ break;
+ usleep_range(0, 1000);
+ } while (time_before(jiffies, end_jiffies));
+
+ if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+ return 0;
+
+ return -ETIMEDOUT;
+}
+
+int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ int retries;
+
+ /* Check if the link is up or not */
+ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+ if (cdns_pcie_link_up(pcie)) {
+ dev_info(dev, "Link up\n");
+ return 0;
+ }
+ usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+ }
+
+ return -ETIMEDOUT;
+}
+
+int cdns_pcie_retrain(struct cdns_pcie *pcie)
+{
+ u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+ u16 lnk_stat, lnk_ctl;
+ int ret = 0;
+
+ /*
+ * Set retrain bit if current speed is 2.5 GB/s,
+ * but the PCIe root port support is > 2.5 GB/s.
+ */
+
+ lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
+ PCI_EXP_LNKCAP));
+ if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
+ return ret;
+
+ lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+ if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
+ lnk_ctl = cdns_pcie_rp_readw(pcie,
+ pcie_cap_off + PCI_EXP_LNKCTL);
+ lnk_ctl |= PCI_EXP_LNKCTL_RL;
+ cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+ lnk_ctl);
+
+ ret = cdns_pcie_host_training_complete(pcie);
+ if (ret)
+ return ret;
+
+ ret = cdns_pcie_host_wait_for_link(pcie);
+ }
+ return ret;
+}
+
+int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ int ret;
+
+ ret = cdns_pcie_host_wait_for_link(pcie);
+
+ /*
+ * Retrain link for Gen2 training defect
+ * if quirk flag is set.
+ */
+ if (!ret && rc->quirk_retrain_flag)
+ ret = cdns_pcie_retrain(pcie);
+
+ return ret;
+}
+
+enum cdns_pcie_rp_bar
+cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size)
+{
+ enum cdns_pcie_rp_bar bar, sel_bar;
+
+ sel_bar = RP_BAR_UNDEFINED;
+ for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
+ if (!rc->avail_ib_bar[bar])
+ continue;
+
+ if (size <= bar_max_size[bar]) {
+ if (sel_bar == RP_BAR_UNDEFINED) {
+ sel_bar = bar;
+ continue;
+ }
+
+ if (bar_max_size[bar] < bar_max_size[sel_bar])
+ sel_bar = bar;
+ }
+ }
+
+ return sel_bar;
+}
+
+enum cdns_pcie_rp_bar
+cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size)
+{
+ enum cdns_pcie_rp_bar bar, sel_bar;
+
+ sel_bar = RP_BAR_UNDEFINED;
+ for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
+ if (!rc->avail_ib_bar[bar])
+ continue;
+
+ if (size >= bar_max_size[bar]) {
+ if (sel_bar == RP_BAR_UNDEFINED) {
+ sel_bar = bar;
+ continue;
+ }
+
+ if (bar_max_size[bar] > bar_max_size[sel_bar])
+ sel_bar = bar;
+ }
+ }
+
+ return sel_bar;
+}
+
+int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
+ const struct list_head *b)
+{
+ struct resource_entry *entry1, *entry2;
+
+ entry1 = container_of(a, struct resource_entry, node);
+ entry2 = container_of(b, struct resource_entry, node);
+
+ return resource_size(entry2->res) - resource_size(entry1->res);
+}
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.h b/drivers/pci/controller/cadence/pcie-cadence-host-common.h
new file mode 100644
index 000000000000..f8eae2e963d8
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (c) 2017 Cadence
+// Cadence PCIe host controller driver
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#ifndef _PCIE_CADENCE_HOST_COMMON_H
+#define _PCIE_CADENCE_HOST_COMMON_H
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+
+extern u64 bar_max_size[];
+
+int cdns_pcie_host_training_complete(struct cdns_pcie *pcie);
+int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie);
+int cdns_pcie_retrain(struct cdns_pcie *pcie);
+int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc);
+enum cdns_pcie_rp_bar
+cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size);
+enum cdns_pcie_rp_bar
+cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size);
+int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
+ const struct list_head *b);
+
+#endif /* _PCIE_CADENCE_HOST_COMMON_H */
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 59a4631de79f..bfdd0f200cfb 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -12,14 +12,7 @@
#include <linux/platform_device.h>
#include "pcie-cadence.h"
-
-#define LINK_RETRAIN_TIMEOUT HZ
-
-static u64 bar_max_size[] = {
- [RP_BAR0] = _ULL(128 * SZ_2G),
- [RP_BAR1] = SZ_2G,
- [RP_NO_BAR] = _BITULL(63),
-};
+#include "pcie-cadence-host-common.h"
static u8 bar_aperture_mask[] = {
[RP_BAR0] = 0x1F,
@@ -81,77 +74,6 @@ static struct pci_ops cdns_pcie_host_ops = {
.write = pci_generic_config_write,
};
-static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
-{
- u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
- unsigned long end_jiffies;
- u16 lnk_stat;
-
- /* Wait for link training to complete. Exit after timeout. */
- end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
- do {
- lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
- if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
- break;
- usleep_range(0, 1000);
- } while (time_before(jiffies, end_jiffies));
-
- if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
- return 0;
-
- return -ETIMEDOUT;
-}
-
-static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
-{
- struct device *dev = pcie->dev;
- int retries;
-
- /* Check if the link is up or not */
- for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
- if (cdns_pcie_link_up(pcie)) {
- dev_info(dev, "Link up\n");
- return 0;
- }
- usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
- }
-
- return -ETIMEDOUT;
-}
-
-static int cdns_pcie_retrain(struct cdns_pcie *pcie)
-{
- u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
- u16 lnk_stat, lnk_ctl;
- int ret = 0;
-
- /*
- * Set retrain bit if current speed is 2.5 GB/s,
- * but the PCIe root port support is > 2.5 GB/s.
- */
-
- lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
- PCI_EXP_LNKCAP));
- if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
- return ret;
-
- lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
- if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
- lnk_ctl = cdns_pcie_rp_readw(pcie,
- pcie_cap_off + PCI_EXP_LNKCTL);
- lnk_ctl |= PCI_EXP_LNKCTL_RL;
- cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
- lnk_ctl);
-
- ret = cdns_pcie_host_training_complete(pcie);
- if (ret)
- return ret;
-
- ret = cdns_pcie_host_wait_for_link(pcie);
- }
- return ret;
-}
-
static void cdns_pcie_host_disable_ptm_response(struct cdns_pcie *pcie)
{
u32 val;
@@ -168,23 +90,6 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie)
cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
}
-static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
-{
- struct cdns_pcie *pcie = &rc->pcie;
- int ret;
-
- ret = cdns_pcie_host_wait_for_link(pcie);
-
- /*
- * Retrain link for Gen2 training defect
- * if quirk flag is set.
- */
- if (!ret && rc->quirk_retrain_flag)
- ret = cdns_pcie_retrain(pcie);
-
- return ret;
-}
-
static void cdns_pcie_host_deinit_root_port(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
@@ -290,54 +195,6 @@ static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
return 0;
}
-static enum cdns_pcie_rp_bar
-cdns_pcie_host_find_min_bar(struct cdns_pcie_rc *rc, u64 size)
-{
- enum cdns_pcie_rp_bar bar, sel_bar;
-
- sel_bar = RP_BAR_UNDEFINED;
- for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
- if (!rc->avail_ib_bar[bar])
- continue;
-
- if (size <= bar_max_size[bar]) {
- if (sel_bar == RP_BAR_UNDEFINED) {
- sel_bar = bar;
- continue;
- }
-
- if (bar_max_size[bar] < bar_max_size[sel_bar])
- sel_bar = bar;
- }
- }
-
- return sel_bar;
-}
-
-static enum cdns_pcie_rp_bar
-cdns_pcie_host_find_max_bar(struct cdns_pcie_rc *rc, u64 size)
-{
- enum cdns_pcie_rp_bar bar, sel_bar;
-
- sel_bar = RP_BAR_UNDEFINED;
- for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) {
- if (!rc->avail_ib_bar[bar])
- continue;
-
- if (size >= bar_max_size[bar]) {
- if (sel_bar == RP_BAR_UNDEFINED) {
- sel_bar = bar;
- continue;
- }
-
- if (bar_max_size[bar] > bar_max_size[sel_bar])
- sel_bar = bar;
- }
- }
-
- return sel_bar;
-}
-
static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
struct resource_entry *entry)
{
@@ -410,17 +267,6 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
return 0;
}
-static int cdns_pcie_host_dma_ranges_cmp(void *priv, const struct list_head *a,
- const struct list_head *b)
-{
- struct resource_entry *entry1, *entry2;
-
- entry1 = container_of(a, struct resource_entry, node);
- entry2 = container_of(b, struct resource_entry, node);
-
- return resource_size(entry2->res) - resource_size(entry1->res);
-}
-
static void cdns_pcie_host_unmap_dma_ranges(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
--
2.49.0
* [PATCH v5 07/14] PCI: cadence: Split the common functions for PCIE controller support
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (5 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 06/14] PCI: cadence: Split PCIe RP " hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 08/14] PCI: cadence: Add support for High Performance Arch(HPA) controller hans.zhang
` (6 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Separate the functions into platform-specific functions and common
library functions.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/Makefile | 2 +-
.../controller/cadence/pcie-cadence-common.c | 134 ++++++++++++++++++
drivers/pci/controller/cadence/pcie-cadence.c | 128 -----------------
3 files changed, 135 insertions(+), 129 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-common.c
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 0440ac6aba5d..3fe5dd2bbd5b 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence.o
+obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence-common.o pcie-cadence.o
obj-$(CONFIG_PCIE_CADENCE_EP_COMMON) += pcie-cadence-ep-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST_COMMON) += pcie-cadence-host-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-common.c b/drivers/pci/controller/cadence/pcie-cadence-common.c
new file mode 100644
index 000000000000..8399a73b3a4d
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-common.c
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/kernel.h>
+#include <linux/of.h>
+
+#include "pcie-cadence.h"
+
+void cdns_pcie_disable_phy(struct cdns_pcie *pcie)
+{
+ int i = pcie->phy_count;
+
+ while (i--) {
+ phy_power_off(pcie->phy[i]);
+ phy_exit(pcie->phy[i]);
+ }
+}
+
+int cdns_pcie_enable_phy(struct cdns_pcie *pcie)
+{
+ int ret;
+ int i;
+
+ for (i = 0; i < pcie->phy_count; i++) {
+ ret = phy_init(pcie->phy[i]);
+ if (ret < 0)
+ goto err_phy;
+
+ ret = phy_power_on(pcie->phy[i]);
+ if (ret < 0) {
+ phy_exit(pcie->phy[i]);
+ goto err_phy;
+ }
+ }
+
+ return 0;
+
+err_phy:
+ while (--i >= 0) {
+ phy_power_off(pcie->phy[i]);
+ phy_exit(pcie->phy[i]);
+ }
+
+ return ret;
+}
+
+int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
+{
+ struct device_node *np = dev->of_node;
+ int phy_count;
+ struct phy **phy;
+ struct device_link **link;
+ int i;
+ int ret;
+ const char *name;
+
+ phy_count = of_property_count_strings(np, "phy-names");
+ if (phy_count < 1) {
+ dev_info(dev, "no \"phy-names\" property found; PHY will not be initialized\n");
+ pcie->phy_count = 0;
+ return 0;
+ }
+
+ phy = devm_kcalloc(dev, phy_count, sizeof(*phy), GFP_KERNEL);
+ if (!phy)
+ return -ENOMEM;
+
+ link = devm_kcalloc(dev, phy_count, sizeof(*link), GFP_KERNEL);
+ if (!link)
+ return -ENOMEM;
+
+ for (i = 0; i < phy_count; i++) {
+ of_property_read_string_index(np, "phy-names", i, &name);
+ phy[i] = devm_phy_get(dev, name);
+ if (IS_ERR(phy[i])) {
+ ret = PTR_ERR(phy[i]);
+ goto err_phy;
+ }
+ link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
+ if (!link[i]) {
+ devm_phy_put(dev, phy[i]);
+ ret = -EINVAL;
+ goto err_phy;
+ }
+ }
+
+ pcie->phy_count = phy_count;
+ pcie->phy = phy;
+ pcie->link = link;
+
+ ret = cdns_pcie_enable_phy(pcie);
+ if (ret)
+ goto err_phy;
+
+ return 0;
+
+err_phy:
+ while (--i >= 0) {
+ device_link_del(link[i]);
+ devm_phy_put(dev, phy[i]);
+ }
+
+ return ret;
+}
+
+static int cdns_pcie_suspend_noirq(struct device *dev)
+{
+ struct cdns_pcie *pcie = dev_get_drvdata(dev);
+
+ cdns_pcie_disable_phy(pcie);
+
+ return 0;
+}
+
+static int cdns_pcie_resume_noirq(struct device *dev)
+{
+ struct cdns_pcie *pcie = dev_get_drvdata(dev);
+ int ret;
+
+ ret = cdns_pcie_enable_phy(pcie);
+ if (ret) {
+ dev_err(dev, "failed to enable PHY\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+const struct dev_pm_ops cdns_pcie_pm_ops = {
+ NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
+ cdns_pcie_resume_noirq)
+};
diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c
index 70a19573440e..51c9bc4eb174 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.c
+++ b/drivers/pci/controller/cadence/pcie-cadence.c
@@ -152,134 +152,6 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
}
EXPORT_SYMBOL_GPL(cdns_pcie_reset_outbound_region);
-void cdns_pcie_disable_phy(struct cdns_pcie *pcie)
-{
- int i = pcie->phy_count;
-
- while (i--) {
- phy_power_off(pcie->phy[i]);
- phy_exit(pcie->phy[i]);
- }
-}
-EXPORT_SYMBOL_GPL(cdns_pcie_disable_phy);
-
-int cdns_pcie_enable_phy(struct cdns_pcie *pcie)
-{
- int ret;
- int i;
-
- for (i = 0; i < pcie->phy_count; i++) {
- ret = phy_init(pcie->phy[i]);
- if (ret < 0)
- goto err_phy;
-
- ret = phy_power_on(pcie->phy[i]);
- if (ret < 0) {
- phy_exit(pcie->phy[i]);
- goto err_phy;
- }
- }
-
- return 0;
-
-err_phy:
- while (--i >= 0) {
- phy_power_off(pcie->phy[i]);
- phy_exit(pcie->phy[i]);
- }
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(cdns_pcie_enable_phy);
-
-int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
-{
- struct device_node *np = dev->of_node;
- int phy_count;
- struct phy **phy;
- struct device_link **link;
- int i;
- int ret;
- const char *name;
-
- phy_count = of_property_count_strings(np, "phy-names");
- if (phy_count < 1) {
- dev_info(dev, "no \"phy-names\" property found; PHY will not be initialized\n");
- pcie->phy_count = 0;
- return 0;
- }
-
- phy = devm_kcalloc(dev, phy_count, sizeof(*phy), GFP_KERNEL);
- if (!phy)
- return -ENOMEM;
-
- link = devm_kcalloc(dev, phy_count, sizeof(*link), GFP_KERNEL);
- if (!link)
- return -ENOMEM;
-
- for (i = 0; i < phy_count; i++) {
- of_property_read_string_index(np, "phy-names", i, &name);
- phy[i] = devm_phy_get(dev, name);
- if (IS_ERR(phy[i])) {
- ret = PTR_ERR(phy[i]);
- goto err_phy;
- }
- link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
- if (!link[i]) {
- devm_phy_put(dev, phy[i]);
- ret = -EINVAL;
- goto err_phy;
- }
- }
-
- pcie->phy_count = phy_count;
- pcie->phy = phy;
- pcie->link = link;
-
- ret = cdns_pcie_enable_phy(pcie);
- if (ret)
- goto err_phy;
-
- return 0;
-
-err_phy:
- while (--i >= 0) {
- device_link_del(link[i]);
- devm_phy_put(dev, phy[i]);
- }
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(cdns_pcie_init_phy);
-
-static int cdns_pcie_suspend_noirq(struct device *dev)
-{
- struct cdns_pcie *pcie = dev_get_drvdata(dev);
-
- cdns_pcie_disable_phy(pcie);
-
- return 0;
-}
-
-static int cdns_pcie_resume_noirq(struct device *dev)
-{
- struct cdns_pcie *pcie = dev_get_drvdata(dev);
- int ret;
-
- ret = cdns_pcie_enable_phy(pcie);
- if (ret) {
- dev_err(dev, "failed to enable PHY\n");
- return ret;
- }
-
- return 0;
-}
-
-const struct dev_pm_ops cdns_pcie_pm_ops = {
- NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
- cdns_pcie_resume_noirq)
-};
-
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Cadence PCIe controller driver");
MODULE_AUTHOR("Cyrille Pitchen <cyrille.pitchen@free-electrons.com>");
--
2.49.0
* [PATCH v5 08/14] PCI: cadence: Add support for High Performance Arch(HPA) controller
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (6 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 07/14] PCI: cadence: Split the common functions for PCIE controller support hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 09/14] PCI: cadence: Add support for PCIe HPA controller platform hans.zhang
` (5 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add support for Cadence PCIe RP and EP configuration for High
Performance Architecture (HPA) controllers.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/Makefile | 6 +-
.../controller/cadence/pcie-cadence-ep-hpa.c | 523 ++++++++++++++++
.../cadence/pcie-cadence-host-hpa.c | 584 ++++++++++++++++++
.../pci/controller/cadence/pcie-cadence-hpa.c | 199 ++++++
.../controller/cadence/pcie-cadence-plat.c | 19 +-
drivers/pci/controller/cadence/pcie-cadence.c | 10 +
drivers/pci/controller/cadence/pcie-cadence.h | 69 ++-
7 files changed, 1395 insertions(+), 15 deletions(-)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-ep-hpa.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-hpa.c
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 3fe5dd2bbd5b..e2df24ff4c33 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -1,8 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence-common.o pcie-cadence.o
+obj-$(CONFIG_PCIE_CADENCE) += pcie-cadence-common.o pcie-cadence.o pcie-cadence-hpa.o
obj-$(CONFIG_PCIE_CADENCE_EP_COMMON) += pcie-cadence-ep-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST_COMMON) += pcie-cadence-host-common.o
-obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
-obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
+obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o pcie-cadence-host-hpa.o
+obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o pcie-cadence-ep-hpa.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
obj-$(CONFIG_PCI_J721E) += pci-j721e.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep-hpa.c b/drivers/pci/controller/cadence/pcie-cadence-ep-hpa.c
new file mode 100644
index 000000000000..5d769a460d76
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep-hpa.c
@@ -0,0 +1,523 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe endpoint controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/pci-epc.h>
+#include <linux/platform_device.h>
+#include <linux/sizes.h>
+
+#include "pcie-cadence.h"
+#include "pcie-cadence-ep-common.h"
+
+static int cdns_pcie_hpa_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ phys_addr_t addr, u64 pci_addr, size_t size)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 r;
+
+ r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ if (r >= ep->max_regions - 1) {
+ dev_err(&epc->dev, "no free outbound region\n");
+ return -EINVAL;
+ }
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ cdns_pcie_hpa_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size);
+
+ set_bit(r, &ep->ob_region_map);
+ ep->ob_addr[r] = addr;
+
+ return 0;
+}
+
+static void cdns_pcie_hpa_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ phys_addr_t addr)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 r;
+
+ for (r = 0; r < ep->max_regions - 1; r++)
+ if (ep->ob_addr[r] == addr)
+ break;
+
+ if (r == ep->max_regions - 1)
+ return;
+
+ cdns_pcie_hpa_reset_outbound_region(pcie, r);
+
+ ep->ob_addr[r] = 0;
+ clear_bit(r, &ep->ob_region_map);
+}
+
+static void cdns_pcie_hpa_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
+ bool is_asserted)
+{
+ struct cdns_pcie *pcie = &ep->pcie;
+ unsigned long flags;
+ u32 offset;
+ u16 status;
+ u8 msg_code;
+
+ intx &= 3;
+
+ /* Set the outbound region if needed. */
+ if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
+ ep->irq_pci_fn != fn)) {
+ /* First region was reserved for IRQ writes. */
+ cdns_pcie_hpa_set_outbound_region_for_normal_msg(pcie, 0, fn, 0, ep->irq_phys_addr);
+ ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
+ ep->irq_pci_fn = fn;
+ }
+
+ if (is_asserted) {
+ ep->irq_pending |= BIT(intx);
+ msg_code = PCIE_MSG_CODE_ASSERT_INTA + intx;
+ } else {
+ ep->irq_pending &= ~BIT(intx);
+ msg_code = PCIE_MSG_CODE_DEASSERT_INTA + intx;
+ }
+
+ spin_lock_irqsave(&ep->lock, flags);
+ status = cdns_pcie_ep_fn_readw(pcie, fn, PCI_STATUS);
+ if (((status & PCI_STATUS_INTERRUPT) != 0) ^ (ep->irq_pending != 0)) {
+ status ^= PCI_STATUS_INTERRUPT;
+ cdns_pcie_ep_fn_writew(pcie, fn, PCI_STATUS, status);
+ }
+ spin_unlock_irqrestore(&ep->lock, flags);
+
+ offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
+ CDNS_PCIE_NORMAL_MSG_CODE(msg_code);
+ writel(0, ep->irq_cpu_addr + offset);
+}
+
+static int cdns_pcie_hpa_ep_send_intx_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
+ u8 intx)
+{
+ u16 cmd;
+
+ cmd = cdns_pcie_ep_fn_readw(&ep->pcie, fn, PCI_COMMAND);
+ if (cmd & PCI_COMMAND_INTX_DISABLE)
+ return -EINVAL;
+
+ cdns_pcie_hpa_ep_assert_intx(ep, fn, intx, true);
+
+ /* The mdelay() value was taken from dra7xx_pcie_raise_intx_irq() */
+ mdelay(1);
+ cdns_pcie_hpa_ep_assert_intx(ep, fn, intx, false);
+ return 0;
+}
+
+static int cdns_pcie_hpa_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
+ u8 interrupt_num)
+{
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ u16 flags, mme, data, data_mask;
+ u8 msi_count;
+ u64 pci_addr, pci_addr_mask = 0xff;
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ /* Check whether the MSI feature has been enabled by the PCI host. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ if (!(flags & PCI_MSI_FLAGS_ENABLE))
+ return -EINVAL;
+
+ /* Get the number of enabled MSIs */
+ mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
+ msi_count = 1 << mme;
+ if (!interrupt_num || interrupt_num > msi_count)
+ return -EINVAL;
+
+ /* Compute the data value to be written. */
+ data_mask = msi_count - 1;
+ data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+ data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
+
+ /* Get the PCI address where to write the data into. */
+ pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+ pci_addr <<= 32;
+ pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+ pci_addr &= GENMASK_ULL(63, 2);
+
+ /* Set the outbound region if needed. */
+ if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
+ ep->irq_pci_fn != fn)) {
+ /* First region was reserved for IRQ writes. */
+ cdns_pcie_hpa_set_outbound_region(pcie, 0, fn, 0,
+ false,
+ ep->irq_phys_addr,
+ pci_addr & ~pci_addr_mask,
+ pci_addr_mask + 1);
+ ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
+ ep->irq_pci_fn = fn;
+ }
+ writel(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
+ u16 interrupt_num)
+{
+ u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
+ u32 tbl_offset, msg_data, reg;
+ struct cdns_pcie *pcie = &ep->pcie;
+ struct pci_epf_msix_tbl *msix_tbl;
+ struct cdns_pcie_epf *epf;
+ u64 pci_addr_mask = 0xff;
+ u64 msg_addr;
+ u16 flags;
+ u8 bir;
+
+ epf = &ep->epf[fn];
+ if (vfn > 0)
+ epf = &epf->epf[vfn - 1];
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+
+ /* Check whether the MSI-X feature has been enabled by the PCI host. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSIX_FLAGS);
+ if (!(flags & PCI_MSIX_FLAGS_ENABLE))
+ return -EINVAL;
+
+ reg = cap + PCI_MSIX_TABLE;
+ tbl_offset = cdns_pcie_ep_fn_readl(pcie, fn, reg);
+ bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset);
+ tbl_offset &= PCI_MSIX_TABLE_OFFSET;
+
+ msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;
+ msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
+ msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
+
+ /* Set the outbound region if needed. */
+ if (ep->irq_pci_addr != (msg_addr & ~pci_addr_mask) ||
+ ep->irq_pci_fn != fn) {
+ /* First region was reserved for IRQ writes. */
+ cdns_pcie_hpa_set_outbound_region(pcie, 0, fn, 0,
+ false,
+ ep->irq_phys_addr,
+ msg_addr & ~pci_addr_mask,
+ pci_addr_mask + 1);
+ ep->irq_pci_addr = (msg_addr & ~pci_addr_mask);
+ ep->irq_pci_fn = fn;
+ }
+ writel(msg_data, ep->irq_cpu_addr + (msg_addr & pci_addr_mask));
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn,
+ unsigned int type, u16 interrupt_num)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ struct device *dev = pcie->dev;
+
+ switch (type) {
+ case PCI_IRQ_INTX:
+ if (vfn > 0) {
+ dev_err(dev, "Cannot raise INTX interrupts for VF\n");
+ return -EINVAL;
+ }
+ return cdns_pcie_hpa_ep_send_intx_irq(ep, fn, vfn, 0);
+
+ case PCI_IRQ_MSI:
+ return cdns_pcie_hpa_ep_send_msi_irq(ep, fn, vfn, interrupt_num);
+
+ case PCI_IRQ_MSIX:
+ return cdns_pcie_hpa_ep_send_msix_irq(ep, fn, vfn, interrupt_num);
+
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+
+static int cdns_pcie_hpa_ep_start(struct pci_epc *epc)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ struct device *dev = pcie->dev;
+ int max_epfs = sizeof(epc->function_num_map) * 8;
+ int ret, epf, last_fn;
+ u32 reg, value;
+
+ /*
+ * BIT(0) is hardwired to 1, hence function 0 is always enabled
+ * and can't be disabled anyway.
+ */
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc->function_num_map);
+
+ /*
+ * Next function field in ARI_CAP_AND_CTR register for last function
+ * should be 0. Clear Next Function Number field for the last
+ * function used.
+ */
+ last_fn = find_last_bit(&epc->function_num_map, BITS_PER_LONG);
+ reg = CDNS_PCIE_CORE_PF_I_ARI_CAP_AND_CTRL(last_fn);
+ value = cdns_pcie_readl(pcie, reg);
+ value &= ~CDNS_PCIE_ARI_CAP_NFN_MASK;
+ cdns_pcie_writel(pcie, reg, value);
+
+ if (ep->quirk_disable_flr) {
+ for (epf = 0; epf < max_epfs; epf++) {
+ if (!(epc->function_num_map & BIT(epf)))
+ continue;
+
+ value = cdns_pcie_ep_fn_readl(pcie, epf,
+ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
+ PCI_EXP_DEVCAP);
+ value &= ~PCI_EXP_DEVCAP_FLR;
+ cdns_pcie_ep_fn_writel(pcie, epf,
+ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
+ PCI_EXP_DEVCAP, value);
+ }
+ }
+
+ ret = cdns_pcie_start_link(pcie);
+ if (ret) {
+ dev_err(dev, "Failed to start link\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_bar *epf_bar)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie_epf *epf = &ep->epf[fn];
+ struct cdns_pcie *pcie = &ep->pcie;
+ dma_addr_t bar_phys = epf_bar->phys_addr;
+ enum pci_barno bar = epf_bar->barno;
+ int flags = epf_bar->flags;
+ u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
+ u64 sz;
+
+ /* BAR size is 2^(aperture + 7) */
+ sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE);
+
+ /*
+ * roundup_pow_of_two() returns an unsigned long, which is not suited
+ * for 64bit values.
+ */
+ sz = 1ULL << fls64(sz - 1);
+
+ /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
+ aperture = ilog2(sz) - 7;
+
+ if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS;
+ } else {
+ bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
+ bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64);
+
+ if (is_64bits && (bar & 1))
+ return -EINVAL;
+
+ if (is_64bits && is_prefetch)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
+ else if (is_prefetch)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
+ else if (is_64bits)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS;
+ else
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS;
+ }
+
+ addr0 = lower_32_bits(bar_phys);
+ addr1 = upper_32_bits(bar_phys);
+
+ if (vfn == 1)
+ reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn);
+ else
+ reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn);
+ b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+ if (vfn == 0 || vfn == 1) {
+ cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg);
+ cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+ cfg |= (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg);
+ }
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), addr1);
+
+ if (vfn > 0)
+ epf = &epf->epf[vfn - 1];
+ epf->epf_bar[bar] = epf_bar;
+
+ return 0;
+}
+
+static void cdns_pcie_hpa_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_bar *epf_bar)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie_epf *epf = &ep->epf[fn];
+ struct cdns_pcie *pcie = &ep->pcie;
+ enum pci_barno bar = epf_bar->barno;
+ u32 reg, cfg, b, ctrl;
+
+ if (vfn == 1)
+ reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn);
+ else
+ reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn);
+ b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+ if (vfn == 0 || vfn == 1) {
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
+ cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg);
+ cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+ cfg |= CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg);
+ }
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
+
+ if (vfn > 0)
+ epf = &epf->epf[vfn - 1];
+ epf->epf_bar[bar] = NULL;
+}
+
+static const struct pci_epc_ops cdns_pcie_hpa_epc_ops = {
+ .write_header = cdns_pcie_ep_write_header,
+ .set_bar = cdns_pcie_hpa_ep_set_bar,
+ .clear_bar = cdns_pcie_hpa_ep_clear_bar,
+ .map_addr = cdns_pcie_hpa_ep_map_addr,
+ .unmap_addr = cdns_pcie_hpa_ep_unmap_addr,
+ .set_msi = cdns_pcie_ep_set_msi,
+ .get_msi = cdns_pcie_ep_get_msi,
+ .set_msix = cdns_pcie_ep_set_msix,
+ .get_msix = cdns_pcie_ep_get_msix,
+ .raise_irq = cdns_pcie_hpa_ep_raise_irq,
+ .map_msi_irq = cdns_pcie_ep_map_msi_irq,
+ .start = cdns_pcie_hpa_ep_start,
+ .get_features = cdns_pcie_ep_get_features,
+};
+
+int cdns_pcie_hpa_ep_setup(struct cdns_pcie_ep *ep)
+{
+ struct device *dev = ep->pcie.dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *np = dev->of_node;
+ struct cdns_pcie *pcie = &ep->pcie;
+ struct cdns_pcie_epf *epf;
+ struct resource *res;
+ struct pci_epc *epc;
+ int ret;
+ int i;
+
+ pcie->is_rc = false;
+
+ pcie->reg_base = devm_platform_ioremap_resource_byname(pdev, "reg");
+ if (IS_ERR(pcie->reg_base)) {
+ dev_err(dev, "missing \"reg\"\n");
+ return PTR_ERR(pcie->reg_base);
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
+ if (!res) {
+ dev_err(dev, "missing \"mem\"\n");
+ return -EINVAL;
+ }
+ pcie->mem_res = res;
+
+ ep->max_regions = CDNS_PCIE_MAX_OB;
+ of_property_read_u32(np, "cdns,max-outbound-regions", &ep->max_regions);
+
+ ep->ob_addr = devm_kcalloc(dev,
+ ep->max_regions, sizeof(*ep->ob_addr),
+ GFP_KERNEL);
+ if (!ep->ob_addr)
+ return -ENOMEM;
+
+ epc = devm_pci_epc_create(dev, &cdns_pcie_hpa_epc_ops);
+ if (IS_ERR(epc)) {
+ dev_err(dev, "failed to create epc device\n");
+ return PTR_ERR(epc);
+ }
+
+ epc_set_drvdata(epc, ep);
+
+ if (of_property_read_u8(np, "max-functions", &epc->max_functions) < 0)
+ epc->max_functions = 1;
+
+ ep->epf = devm_kcalloc(dev, epc->max_functions, sizeof(*ep->epf),
+ GFP_KERNEL);
+ if (!ep->epf)
+ return -ENOMEM;
+
+ epc->max_vfs = devm_kcalloc(dev, epc->max_functions,
+ sizeof(*epc->max_vfs), GFP_KERNEL);
+ if (!epc->max_vfs)
+ return -ENOMEM;
+
+ ret = of_property_read_u8_array(np, "max-virtual-functions",
+ epc->max_vfs, epc->max_functions);
+ if (ret == 0) {
+ for (i = 0; i < epc->max_functions; i++) {
+ epf = &ep->epf[i];
+ if (epc->max_vfs[i] == 0)
+ continue;
+ epf->epf = devm_kcalloc(dev, epc->max_vfs[i],
+ sizeof(*ep->epf), GFP_KERNEL);
+ if (!epf->epf)
+ return -ENOMEM;
+ }
+ }
+
+ ret = pci_epc_mem_init(epc, pcie->mem_res->start,
+ resource_size(pcie->mem_res), PAGE_SIZE);
+ if (ret < 0) {
+ dev_err(dev, "failed to initialize the memory space\n");
+ return ret;
+ }
+
+ ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
+ SZ_128K);
+ if (!ep->irq_cpu_addr) {
+ dev_err(dev, "failed to reserve memory space for MSI\n");
+ ret = -ENOMEM;
+ goto free_epc_mem;
+ }
+ ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
+ /* Reserve region 0 for IRQs */
+ set_bit(0, &ep->ob_region_map);
+
+ if (ep->quirk_detect_quiet_flag)
+ cdns_pcie_hpa_detect_quiet_min_delay_set(&ep->pcie);
+
+ spin_lock_init(&ep->lock);
+
+ pci_epc_init_notify(epc);
+
+ return 0;
+
+ free_epc_mem:
+ pci_epc_mem_exit(epc);
+
+ return ret;
+}
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
new file mode 100644
index 000000000000..94cba8ec4860
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
@@ -0,0 +1,584 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe host controller driver.
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/list_sort.h>
+#include <linux/of_address.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+
+#include "pcie-cadence.h"
+#include "pcie-cadence-host-common.h"
+
+static u8 bar_aperture_mask[] = {
+ [RP_BAR0] = 0x1F,
+ [RP_BAR1] = 0xF,
+};
+
+void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
+ int where)
+{
+ struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
+ struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
+ struct cdns_pcie *pcie = &rc->pcie;
+ unsigned int busn = bus->number;
+ u32 addr0, desc0, desc1, ctrl0;
+ u32 regval;
+
+ if (pci_is_root_bus(bus)) {
+ /*
+ * Only the root port (devfn == 0) is connected to this bus.
+ * All other PCI devices are behind some bridge hence on another
+ * bus.
+ */
+ if (devfn)
+ return NULL;
+
+ return pcie->reg_base + (where & 0xfff);
+ }
+
+ /* Clear AXI link-down status */
+ regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN,
+ (regval & ~GENMASK(0, 0)));
+
+ /* Update Output registers for AXI region 0. */
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0);
+
+ desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0));
+ desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK;
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+
+	if (busn == bridge->busnr + 1)
+		desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
+	else
+		desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0);
+
+ return rc->cfg_base + (where & 0xfff);
+}
+
+int cdns_pcie_hpa_host_wait_for_link(struct cdns_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ int retries;
+
+	/* Poll until the link comes up or the retries are exhausted */
+ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+ if (cdns_pcie_link_up(pcie)) {
+ dev_info(dev, "Link up\n");
+ return 0;
+ }
+ usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+ }
+ return -ETIMEDOUT;
+}
+
+int cdns_pcie_hpa_host_start_link(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ int ret;
+
+ ret = cdns_pcie_host_wait_for_link(pcie);
+
+ /*
+ * Retrain link for Gen2 training defect
+ * if quirk flag is set.
+ */
+ if (!ret && rc->quirk_retrain_flag)
+ ret = cdns_pcie_retrain(pcie);
+
+ return ret;
+}
+
+static struct pci_ops cdns_pcie_hpa_host_ops = {
+ .map_bus = cdns_pci_hpa_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+};
+
+static void cdns_pcie_hpa_host_enable_ptm_response(struct cdns_pcie *pcie)
+{
+ u32 val;
+
+ val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL,
+ val | CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN);
+}
+
+static int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 addr0, addr1, aperture, value;
+
+ if (!rc->avail_ib_bar[bar])
+ return -EBUSY;
+
+ rc->avail_ib_bar[bar] = false;
+
+ aperture = ilog2(size);
+ addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
+ CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
+ CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1);
+
+ if (bar == RP_NO_BAR)
+ return 0;
+
+ value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG);
+ value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) |
+ HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2));
+ if (size + cpu_addr >= SZ_4G) {
+ if (!(flags & IORESOURCE_PREFETCH))
+ value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar);
+ value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar);
+ } else {
+ if (!(flags & IORESOURCE_PREFETCH))
+ value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar);
+ value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar);
+ }
+
+ value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_host_bar_config(struct cdns_pcie_rc *rc,
+ struct resource_entry *entry)
+{
+ u64 cpu_addr, pci_addr, size, winsize;
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct device *dev = pcie->dev;
+ enum cdns_pcie_rp_bar bar;
+ unsigned long flags;
+ int ret;
+
+ cpu_addr = entry->res->start;
+ pci_addr = entry->res->start - entry->offset;
+ flags = entry->res->flags;
+ size = resource_size(entry->res);
+
+ if (entry->offset) {
+ dev_err(dev, "PCI addr: %llx must be equal to CPU addr: %llx\n",
+ pci_addr, cpu_addr);
+ return -EINVAL;
+ }
+
+ while (size > 0) {
+		/*
+		 * Try to find the smallest free BAR whose size is greater
+		 * than or equal to the remaining resource_entry size. This
+		 * fails if every available BAR is smaller than the
+		 * remaining resource_entry size.
+		 * If such a BAR is found, the inbound ATU is configured
+		 * for it and we return.
+		 */
+ bar = cdns_pcie_host_find_min_bar(rc, size);
+ if (bar != RP_BAR_UNDEFINED) {
+ ret = cdns_pcie_hpa_host_bar_ib_config(rc, bar, cpu_addr,
+ size, flags);
+ if (ret)
+ dev_err(dev, "IB BAR: %d config failed\n", bar);
+ return ret;
+ }
+
+		/*
+		 * If control reaches here, the remaining resource_entry
+		 * size cannot fit in a single BAR. Find the largest free
+		 * BAR whose size is less than or equal to the remaining
+		 * size and split the resource entry so that part of it
+		 * fits in that BAR. The remaining size is handled in the
+		 * next iteration of the loop.
+		 * If no such BAR is found, there is no way to fit this
+		 * resource_entry, so error out.
+		 */
+ bar = cdns_pcie_host_find_max_bar(rc, size);
+ if (bar == RP_BAR_UNDEFINED) {
+ dev_err(dev, "No free BAR to map cpu_addr %llx\n",
+ cpu_addr);
+ return -EINVAL;
+ }
+
+ winsize = bar_max_size[bar];
+ ret = cdns_pcie_hpa_host_bar_ib_config(rc, bar, cpu_addr, winsize, flags);
+ if (ret) {
+ dev_err(dev, "IB BAR: %d config failed\n", bar);
+ return ret;
+ }
+
+ size -= winsize;
+ cpu_addr += winsize;
+ }
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_host_map_dma_ranges(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct device *dev = pcie->dev;
+ struct device_node *np = dev->of_node;
+ struct pci_host_bridge *bridge;
+ struct resource_entry *entry;
+ u32 no_bar_nbits = 32;
+ int err;
+
+ bridge = pci_host_bridge_from_priv(rc);
+ if (!bridge)
+ return -ENOMEM;
+
+ if (list_empty(&bridge->dma_ranges)) {
+ of_property_read_u32(np, "cdns,no-bar-match-nbits",
+ &no_bar_nbits);
+ err = cdns_pcie_hpa_host_bar_ib_config(rc, RP_NO_BAR, 0x0,
+ (u64)1 << no_bar_nbits, 0);
+ if (err)
+ dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR);
+ return err;
+ }
+
+ list_sort(NULL, &bridge->dma_ranges, cdns_pcie_host_dma_ranges_cmp);
+
+ resource_list_for_each_entry(entry, &bridge->dma_ranges) {
+ err = cdns_pcie_hpa_host_bar_config(rc, entry);
+ if (err) {
+			dev_err(dev, "Failed to configure IB using dma-ranges\n");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 value, ctrl;
+
+ /*
+ * Set the root complex BAR configuration register:
+ * - disable both BAR0 and BAR1.
+ * - enable Prefetchable Memory Base and Limit registers in type 1
+ * config space (64 bits).
+ * - enable IO Base and Limit registers in type 1 config
+ * space (32 bits).
+ */
+
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
+ value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG,
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
+
+ if (rc->vendor_id != 0xffff)
+ cdns_pcie_hpa_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
+
+ if (rc->device_id != 0xffff)
+ cdns_pcie_hpa_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
+
+ cdns_pcie_hpa_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
+ cdns_pcie_hpa_rp_writeb(pcie, PCI_CLASS_PROG, 0);
+ cdns_pcie_hpa_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
+
+ return 0;
+}
+
+static void cdns_pcie_hpa_create_region_for_ecam(struct cdns_pcie_rc *rc)
+{
+ struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
+ struct resource *cfg_res = rc->cfg_res;
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 value, root_port_req_id_reg, pcie_bus_number_reg;
+ u32 ecam_addr_0, region_size_0, request_id_0;
+ int busnr = 0, secbus = 0, subbus = 0;
+ struct resource_entry *entry;
+ resource_size_t size;
+ u32 axi_address_low;
+ int nbits;
+ u64 sz;
+
+ entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (entry) {
+ busnr = entry->res->start;
+ secbus = (busnr < 0xff) ? (busnr + 1) : 0xff;
+ subbus = entry->res->end;
+ }
+ size = resource_size(cfg_res);
+ sz = 1ULL << fls64(size - 1);
+ nbits = ilog2(sz);
+ if (nbits < 8)
+ nbits = 8;
+
+ root_port_req_id_reg = ((busnr & 0xff) << 8);
+ pcie_bus_number_reg = ((subbus & 0xff) << 16) | ((secbus & 0xff) << 8) |
+ (busnr & 0xff);
+ ecam_addr_0 = cfg_res->start;
+ region_size_0 = nbits - 1;
+ request_id_0 = ((busnr & 0xff) << 8);
+
+#define CDNS_PCIE_HPA_TAG_MANAGEMENT (0x0)
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_TAG_MANAGEMENT, 0x200000);
+
+	/* Report AXI slave errors as OKAY responses */
+#define CDNS_PCIE_HPA_SLAVE_RESP (0x100)
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_SLAVE_RESP,
+ 0x0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_SLAVE_RESP + 0x4, 0x0);
+
+ /* Program the register "i_root_port_req_id_reg" with RP's BDF */
+#define I_ROOT_PORT_REQ_ID_REG (0x141c)
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, I_ROOT_PORT_REQ_ID_REG,
+ root_port_req_id_reg);
+
+	/*
+	 * Program the register "i_pcie_bus_numbers" with the primary (RP's
+	 * bus number), secondary, and subordinate bus numbers
+	 */
+#define I_PCIE_BUS_NUMBERS (CDNS_PCIE_HPA_RP_BASE + 0x18)
+ cdns_pcie_hpa_writel(pcie, REG_BANK_RP, I_PCIE_BUS_NUMBERS,
+ pcie_bus_number_reg);
+
+ /* Program the register "lm_hal_sbsa_ctrl[0]" to enable the sbsa */
+#define LM_HAL_SBSA_CTRL (0x1170)
+ value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, LM_HAL_SBSA_CTRL);
+ value |= BIT(0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, LM_HAL_SBSA_CTRL, value);
+
+ /* Program region[0] for ECAM */
+ axi_address_low = (ecam_addr_0 & 0xfff00000) | region_size_0;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0),
+ axi_address_low);
+
+ /* rc0-high-axi-address */
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), 0x0);
+ /* Type-1 CFG */
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), 0x05000000);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0),
+ (request_id_0 << 16));
+
+ /* All AXI bits pass through PCIe */
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), 0x1b);
+ /* PCIe address-high */
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), 0x06000000);
+}
+
+static void cdns_pcie_hpa_create_region_for_cfg(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
+ struct resource *cfg_res = rc->cfg_res;
+ struct resource_entry *entry;
+ u64 cpu_addr = cfg_res->start;
+ u32 addr0, addr1, desc1;
+ int busnr = 0;
+
+ entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (entry)
+ busnr = entry->res->start;
+
+	/*
+	 * Reserve region 0 for PCI config space accesses:
+	 * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
+	 * cdns_pci_hpa_map_bus(); other region registers are set here once for all.
+	 */
+ addr1 = 0;
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1);
+}
+
+static int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct device *dev = pcie->dev;
+ struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
+ struct resource_entry *entry;
+ int r = 0, busnr = 0;
+
+ if (rc->ecam_support_flag)
+ cdns_pcie_hpa_create_region_for_ecam(rc);
+ else
+ cdns_pcie_hpa_create_region_for_cfg(rc);
+
+ entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (entry)
+ busnr = entry->res->start;
+
+ r++;
+ if (pcie->msg_res)
+ cdns_pcie_hpa_set_outbound_region_for_normal_msg(pcie, busnr, 0, r,
+ pcie->msg_res->start);
+
+ r++;
+ resource_list_for_each_entry(entry, &bridge->windows) {
+ struct resource *res = entry->res;
+ u64 pci_addr = res->start - entry->offset;
+
+ if (resource_type(res) == IORESOURCE_IO)
+ cdns_pcie_hpa_set_outbound_region(pcie, busnr, 0, r,
+ true,
+ pci_pio_to_address(res->start),
+ pci_addr,
+ resource_size(res));
+ else
+ cdns_pcie_hpa_set_outbound_region(pcie, busnr, 0, r,
+ false,
+ res->start,
+ pci_addr,
+ resource_size(res));
+
+ r++;
+ }
+
+	if (device_property_read_bool(dev, "cdns,no-inbound-bar"))
+		return 0;
+
+	return cdns_pcie_hpa_host_map_dma_ranges(rc);
+}
+
+int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc)
+{
+ int err;
+
+ err = cdns_pcie_hpa_host_init_root_port(rc);
+ if (err)
+ return err;
+
+ return cdns_pcie_hpa_host_init_address_translation(rc);
+}
+
+int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct device *dev = rc->pcie.dev;
+ int ret;
+
+ if (rc->quirk_detect_quiet_flag)
+ cdns_pcie_hpa_detect_quiet_min_delay_set(&rc->pcie);
+
+ cdns_pcie_hpa_host_enable_ptm_response(pcie);
+
+ ret = cdns_pcie_start_link(pcie);
+ if (ret) {
+ dev_err(dev, "Failed to start link\n");
+ return ret;
+ }
+
+ ret = cdns_pcie_host_start_link(rc);
+ if (ret)
+ dev_dbg(dev, "PCIe link never came up\n");
+
+ return ret;
+}
+
+int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc)
+{
+ struct device *dev = rc->pcie.dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *np = dev->of_node;
+ struct pci_host_bridge *bridge;
+ enum cdns_pcie_rp_bar bar;
+ struct cdns_pcie *pcie;
+ struct resource *res;
+ int ret;
+
+ bridge = pci_host_bridge_from_priv(rc);
+ if (!bridge)
+ return -ENOMEM;
+
+ pcie = &rc->pcie;
+ pcie->is_rc = true;
+
+ rc->vendor_id = 0xffff;
+ of_property_read_u32(np, "vendor-id", &rc->vendor_id);
+
+ rc->device_id = 0xffff;
+ of_property_read_u32(np, "device-id", &rc->device_id);
+
+ if (!pcie->reg_base) {
+ pcie->reg_base = devm_platform_ioremap_resource_byname(pdev, "reg");
+ if (IS_ERR(pcie->reg_base)) {
+ dev_err(dev, "missing \"reg\"\n");
+ return PTR_ERR(pcie->reg_base);
+ }
+ }
+
+ /* ECAM config space is remapped at glue layer */
+ if (!rc->cfg_base) {
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
+ rc->cfg_base = devm_pci_remap_cfg_resource(dev, res);
+ if (IS_ERR(rc->cfg_base))
+ return PTR_ERR(rc->cfg_base);
+ rc->cfg_res = res;
+ }
+
+ ret = cdns_pcie_hpa_host_link_setup(rc);
+ if (ret)
+ return ret;
+
+ for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
+ rc->avail_ib_bar[bar] = true;
+
+ ret = cdns_pcie_hpa_host_init(rc);
+ if (ret)
+ return ret;
+
+ if (!bridge->ops)
+ bridge->ops = &cdns_pcie_hpa_host_ops;
+
+ return pci_host_probe(bridge);
+}
diff --git a/drivers/pci/controller/cadence/pcie-cadence-hpa.c b/drivers/pci/controller/cadence/pcie-cadence-hpa.c
new file mode 100644
index 000000000000..7982b40dcfe6
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-hpa.c
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Cadence
+// Cadence PCIe controller driver
+// Author: Manikandan K Pillai <mpillai@cadence.com>
+
+#include <linux/kernel.h>
+#include <linux/of.h>
+
+#include "pcie-cadence.h"
+
+bool cdns_pcie_hpa_link_up(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0);
+	return !!(pl_reg_val & BIT(0));
+}
+
+int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ pl_reg_val |= CDNS_PCIE_HPA_LINK_TRNG_EN_MASK;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val);
+ return 0;
+}
+
+void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ pl_reg_val &= ~CDNS_PCIE_HPA_LINK_TRNG_EN_MASK;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val);
+}
+
+void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
+{
+ u32 delay = 0x3;
+ u32 ltssm_control_cap;
+
+	/* Set the LTSSM Detect Quiet state minimum delay to 2 ms */
+ ltssm_control_cap = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ ltssm_control_cap = ((ltssm_control_cap &
+ ~CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK) |
+ CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay));
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_PHY_LAYER_CFG0, ltssm_control_cap);
+}
+
+void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ u32 r, bool is_io,
+ u64 cpu_addr, u64 pci_addr, size_t size)
+{
+ /*
+ * roundup_pow_of_two() returns an unsigned long, which is not suited
+ * for 64bit values.
+ */
+ u64 sz = 1ULL << fls64(size - 1);
+ int nbits = ilog2(sz);
+ u32 addr0, addr1, desc0, desc1, ctrl0;
+
+ if (nbits < 8)
+ nbits = 8;
+
+ /* Set the PCI address */
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
+ (lower_32_bits(pci_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(pci_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1);
+
+ /* Set the PCIe header descriptor */
+ if (is_io)
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO;
+ else
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM;
+ desc1 = 0;
+ ctrl0 = 0;
+
+ /*
+ * Whether Bit [26] is set or not inside DESC0 register of the outbound
+ * PCIe descriptor, the PCI function number must be set into
+ * Bits [31:24] of DESC1 anyway.
+ *
+ * In Root Complex mode, the function number is always 0 but in Endpoint
+ * mode, the PCIe controller may support more than one function. This
+ * function number needs to be set properly into the outbound PCIe
+ * descriptor.
+ *
+ * Besides, setting Bit [26] is mandatory when in Root Complex mode:
+ * then the driver must provide the bus, resp. device, number in
+ * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function
+ * number, the device number is always 0 in Root Complex mode.
+ *
+ * However when in Endpoint mode, we can clear Bit [26] of DESC0, hence
+ * the PCIe controller will use the captured values for the bus and
+ * device numbers.
+ */
+ if (pcie->is_rc) {
+ /* The device and function numbers are always 0. */
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+ } else {
+ /*
+ * Use captured values for bus and device numbers but still
+ * need to set the function number.
+ */
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
+ }
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
+}
+
+void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
+ u8 busnr, u8 fn,
+ u32 r, u64 cpu_addr)
+{
+ u32 addr0, addr1, desc0, desc1, ctrl0;
+
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
+ desc1 = 0;
+ ctrl0 = 0;
+
+	/* See cdns_pcie_hpa_set_outbound_region() comments above */
+ if (pcie->is_rc) {
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+ } else {
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
+ }
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
+}
+
+void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
+{
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), 0);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), 0);
+}
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
index e09f23427313..882c4aef7ac5 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
@@ -12,8 +12,6 @@
#include <linux/pm_runtime.h>
#include "pcie-cadence.h"
-#define CDNS_PLAT_CPU_TO_BUS_ADDR 0x0FFFFFFF
-
/**
* struct cdns_plat_pcie - private data for this PCIe platform driver
* @pcie: Cadence PCIe controller
@@ -24,13 +22,8 @@ struct cdns_plat_pcie {
static const struct of_device_id cdns_plat_pcie_of_match[];
-static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
-{
- return cpu_addr & CDNS_PLAT_CPU_TO_BUS_ADDR;
-}
-
static const struct cdns_pcie_ops cdns_plat_ops = {
- .cpu_addr_fixup = cdns_plat_cpu_addr_fixup,
+ .link_up = cdns_pcie_linkup,
};
static int cdns_plat_pcie_probe(struct platform_device *pdev)
@@ -68,6 +61,11 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
rc = pci_host_bridge_priv(bridge);
rc->pcie.dev = dev;
rc->pcie.ops = &cdns_plat_ops;
+ rc->pcie.is_rc = data->is_rc;
+
+ /* Store the register bank offsets pointer */
+ rc->pcie.cdns_pcie_reg_offsets = data;
+
cdns_plat_pcie->pcie = &rc->pcie;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
@@ -95,6 +93,11 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
ep->pcie.dev = dev;
ep->pcie.ops = &cdns_plat_ops;
+ ep->pcie.is_rc = data->is_rc;
+
+ /* Store the register bank offset pointer */
+ ep->pcie.cdns_pcie_reg_offsets = data;
+
cdns_plat_pcie->pcie = &ep->pcie;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c
index 51c9bc4eb174..f86a44efc510 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.c
+++ b/drivers/pci/controller/cadence/pcie-cadence.c
@@ -9,6 +9,16 @@
#include "pcie-cadence.h"
+bool cdns_pcie_linkup(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE);
+	return !!(pl_reg_val & BIT(0));
+}
+
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
{
u32 delay = 0x3;
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 5c0ea49551c8..3215d4665d89 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -29,6 +29,8 @@ struct cdns_pcie_rp_ib_bar {
struct cdns_pcie;
struct cdns_pcie_rc;
+bool cdns_pcie_linkup(struct cdns_pcie *pcie);
+
enum cdns_pcie_msg_routing {
/* Route to Root Complex */
MSG_ROUTING_TO_RC,
@@ -63,9 +65,9 @@ enum cdns_pcie_reg_bank {
};
struct cdns_pcie_ops {
- int (*start_link)(struct cdns_pcie *pcie);
- void (*stop_link)(struct cdns_pcie *pcie);
- bool (*link_up)(struct cdns_pcie *pcie);
+ int (*start_link)(struct cdns_pcie *pcie);
+ void (*stop_link)(struct cdns_pcie *pcie);
+ bool (*link_up)(struct cdns_pcie *pcie);
u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr);
};
@@ -97,6 +99,7 @@ struct cdns_plat_pcie_of_data {
* struct cdns_pcie - private data for Cadence PCIe controller drivers
* @reg_base: IO mapped register base
* @mem_res: start/end offsets in the physical system memory to map PCI accesses
+ * @msg_res: start/end offsets in the physical system memory used for sending PCIe messages
* @dev: PCIe controller
* @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
* @phy_count: number of supported PHY devices
@@ -109,6 +112,7 @@ struct cdns_plat_pcie_of_data {
struct cdns_pcie {
void __iomem *reg_base;
struct resource *mem_res;
+ struct resource *msg_res;
struct device *dev;
bool is_rc;
int phy_count;
@@ -131,6 +135,7 @@ struct cdns_pcie {
* available
* @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
* @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
+ * @ecam_support_flag: Whether ECAM config space access is supported
*/
struct cdns_pcie_rc {
struct cdns_pcie pcie;
@@ -141,6 +146,7 @@ struct cdns_pcie_rc {
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
unsigned int quirk_retrain_flag:1;
unsigned int quirk_detect_quiet_flag:1;
+ unsigned int ecam_support_flag:1;
};
/**
@@ -324,6 +330,29 @@ static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
return cdns_pcie_read_sz(addr, 0x2);
}
+static inline void cdns_pcie_hpa_rp_writeb(struct cdns_pcie *pcie,
+ u32 reg, u8 value)
+{
+ void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
+ cdns_pcie_write_sz(addr, 0x1, value);
+}
+
+static inline void cdns_pcie_hpa_rp_writew(struct cdns_pcie *pcie,
+ u32 reg, u16 value)
+{
+ void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
+ cdns_pcie_write_sz(addr, 0x2, value);
+}
+
+static inline u16 cdns_pcie_hpa_rp_readw(struct cdns_pcie *pcie, u32 reg)
+{
+ void __iomem *addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
+ return cdns_pcie_read_sz(addr, 0x2);
+}
+
/* Endpoint Function register access */
static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
u32 reg, u8 value)
@@ -388,6 +417,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
void cdns_pcie_host_disable(struct cdns_pcie_rc *rc);
void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
+int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc);
#else
static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
{
@@ -404,6 +434,11 @@ static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return 0;
}
+static inline int cdns_pcie_hpa_host_setup(struct cdns_pcie_rc *rc)
+{
+ return 0;
+}
+
static inline void cdns_pcie_host_disable(struct cdns_pcie_rc *rc)
{
}
@@ -418,17 +453,24 @@ static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int d
#if IS_ENABLED(CONFIG_PCIE_CADENCE_EP)
int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep);
void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep);
+int cdns_pcie_hpa_ep_setup(struct cdns_pcie_ep *ep);
#else
static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
{
return 0;
}
+static inline int cdns_pcie_hpa_ep_setup(struct cdns_pcie_ep *ep)
+{
+ return 0;
+}
+
static inline void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
{
}
#endif
-
+int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie);
+int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc);
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
@@ -441,6 +483,25 @@ void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
+void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ u32 r, bool is_io,
+ u64 cpu_addr, u64 pci_addr, size_t size);
+void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
+ u8 busnr, u8 fn,
+ u32 r, u64 cpu_addr);
+void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
+int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc *rc);
+int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc);
+void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
+ int where);
+int cdns_pcie_hpa_host_wait_for_link(struct cdns_pcie *pcie);
+int cdns_pcie_hpa_host_start_link(struct cdns_pcie_rc *rc);
+
+int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie);
+void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie);
+bool cdns_pcie_hpa_link_up(struct cdns_pcie *pcie);
+
extern const struct dev_pm_ops cdns_pcie_pm_ops;
#endif /* _PCIE_CADENCE_H */
--
2.49.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v5 09/14] PCI: cadence: Add support for PCIe HPA controller platform
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (7 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 08/14] PCI: cadence: Add support for High Performance Arch(HPA) controller hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings hans.zhang
` (4 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add support for Cadence HPA PCIe controller based platform.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/Kconfig | 5 +
drivers/pci/controller/cadence/Makefile | 1 +
.../cadence/pcie-cadence-plat-hpa.c | 183 ++++++++++++++++++
3 files changed, 189 insertions(+)
create mode 100644 drivers/pci/controller/cadence/pcie-cadence-plat-hpa.c
diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index a1caf154888d..427aa9beca22 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -29,11 +29,15 @@ config PCIE_CADENCE_EP
config PCIE_CADENCE_PLAT
bool
+config PCIE_CADENCE_PLAT_HPA
+ bool
+
config PCIE_CADENCE_PLAT_HOST
bool "Cadence platform PCIe controller (host mode)"
depends on OF
select PCIE_CADENCE_HOST
select PCIE_CADENCE_PLAT
+ select PCIE_CADENCE_PLAT_HPA
help
Say Y here if you want to support the Cadence PCIe platform controller in
host mode. This PCIe controller may be embedded into many different
@@ -45,6 +49,7 @@ config PCIE_CADENCE_PLAT_EP
depends on PCI_ENDPOINT
select PCIE_CADENCE_EP
select PCIE_CADENCE_PLAT
+ select PCIE_CADENCE_PLAT_HPA
help
Say Y here if you want to support the Cadence PCIe platform controller in
endpoint mode. This PCIe controller may be embedded into many
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index e2df24ff4c33..f8575a0eee2d 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -5,4 +5,5 @@ obj-$(CONFIG_PCIE_CADENCE_HOST_COMMON) += pcie-cadence-host-common.o
obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o pcie-cadence-host-hpa.o
obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o pcie-cadence-ep-hpa.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
+obj-$(CONFIG_PCIE_CADENCE_PLAT_HPA) += pcie-cadence-plat-hpa.o
obj-$(CONFIG_PCI_J721E) += pci-j721e.o
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat-hpa.c b/drivers/pci/controller/cadence/pcie-cadence-plat-hpa.c
new file mode 100644
index 000000000000..fb42547d47d2
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat-hpa.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Cadence PCIe platform driver.
+ *
+ * Copyright (c) 2019, Cadence Design Systems
+ * Author: Manikandan K Pillai <mpillai@cadence.com>
+ */
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include "pcie-cadence.h"
+
+/**
+ * struct cdns_plat_pcie - private data for this PCIe platform driver
+ * @pcie: Cadence PCIe controller
+ */
+struct cdns_plat_pcie {
+ struct cdns_pcie *pcie;
+};
+
+static const struct cdns_pcie_ops cdns_plat_hpa_ops = {
+ .start_link = cdns_pcie_hpa_start_link,
+ .stop_link = cdns_pcie_hpa_stop_link,
+ .link_up = cdns_pcie_hpa_link_up,
+};
+
+static int cdns_plat_pcie_hpa_probe(struct platform_device *pdev)
+{
+ const struct cdns_plat_pcie_of_data *data;
+ struct cdns_plat_pcie *cdns_plat_pcie;
+ struct device *dev = &pdev->dev;
+ struct pci_host_bridge *bridge;
+ struct cdns_pcie_ep *ep;
+ struct cdns_pcie_rc *rc;
+ int phy_count;
+ bool is_rc;
+ int ret;
+
+ data = of_device_get_match_data(dev);
+ if (!data)
+ return -EINVAL;
+
+ is_rc = data->is_rc;
+
+	dev_dbg(dev, "probe with is_rc: %d\n", is_rc);
+ cdns_plat_pcie = devm_kzalloc(dev, sizeof(*cdns_plat_pcie), GFP_KERNEL);
+ if (!cdns_plat_pcie)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, cdns_plat_pcie);
+ if (is_rc) {
+ if (!IS_ENABLED(CONFIG_PCIE_CADENCE_PLAT_HOST))
+ return -ENODEV;
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
+ if (!bridge)
+ return -ENOMEM;
+
+ rc = pci_host_bridge_priv(bridge);
+ rc->pcie.dev = dev;
+ rc->pcie.ops = &cdns_plat_hpa_ops;
+ rc->pcie.is_rc = data->is_rc;
+
+		/* Store the register bank offsets pointer */
+		rc->pcie.cdns_pcie_reg_offsets = data;
+
+ cdns_plat_pcie->pcie = &rc->pcie;
+
+ ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
+ if (ret) {
+ dev_err(dev, "failed to init phy\n");
+ return ret;
+ }
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "pm_runtime_get_sync() failed\n");
+ goto err_get_sync;
+ }
+
+ ret = cdns_pcie_hpa_host_setup(rc);
+ if (ret)
+ goto err_init;
+ } else {
+ if (!IS_ENABLED(CONFIG_PCIE_CADENCE_PLAT_EP))
+ return -ENODEV;
+
+ ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
+ if (!ep)
+ return -ENOMEM;
+
+ ep->pcie.dev = dev;
+ ep->pcie.ops = &cdns_plat_hpa_ops;
+ ep->pcie.is_rc = data->is_rc;
+
+		/* Store the register bank offsets pointer */
+		ep->pcie.cdns_pcie_reg_offsets = data;
+
+ cdns_plat_pcie->pcie = &ep->pcie;
+
+ ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
+ if (ret) {
+ dev_err(dev, "failed to init phy\n");
+ return ret;
+ }
+
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0) {
+ dev_err(dev, "pm_runtime_get_sync() failed\n");
+ goto err_get_sync;
+ }
+
+ ret = cdns_pcie_hpa_ep_setup(ep);
+ if (ret)
+ goto err_init;
+ }
+
+ return 0;
+
+err_init:
+err_get_sync:
+	pm_runtime_put_sync(dev);
+	pm_runtime_disable(dev);
+	cdns_pcie_disable_phy(cdns_plat_pcie->pcie);
+	phy_count = cdns_plat_pcie->pcie->phy_count;
+	while (phy_count--)
+		device_link_del(cdns_plat_pcie->pcie->link[phy_count]);
+
+	return ret;
+}
+
+static void cdns_plat_pcie_hpa_shutdown(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct cdns_plat_pcie *cdns_plat_pcie = dev_get_drvdata(dev);
+	int ret;
+
+	ret = pm_runtime_put_sync(dev);
+	if (ret < 0)
+		dev_dbg(dev, "pm_runtime_put_sync failed\n");
+
+	pm_runtime_disable(dev);
+
+	cdns_pcie_disable_phy(cdns_plat_pcie->pcie);
+}
+
+static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_host_of_data = {
+ .is_rc = true,
+};
+
+static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_ep_of_data = {
+ .is_rc = false,
+};
+
+static const struct of_device_id cdns_plat_pcie_hpa_of_match[] = {
+ {
+ .compatible = "cdns,cdns-pcie-hpa-host",
+ .data = &cdns_plat_pcie_hpa_host_of_data,
+ },
+ {
+ .compatible = "cdns,cdns-pcie-hpa-ep",
+ .data = &cdns_plat_pcie_hpa_ep_of_data,
+ },
+ {},
+};
+
+static struct platform_driver cdns_plat_pcie_hpa_driver = {
+ .driver = {
+ .name = "cdns-pcie-hpa",
+ .of_match_table = cdns_plat_pcie_hpa_of_match,
+ .pm = &cdns_pcie_pm_ops,
+ },
+ .probe = cdns_plat_pcie_hpa_probe,
+ .shutdown = cdns_plat_pcie_hpa_shutdown,
+};
+builtin_platform_driver(cdns_plat_pcie_hpa_driver);
--
2.49.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (8 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 09/14] PCI: cadence: Add support for PCIe HPA controller platform hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 5:36 ` Rob Herring (Arm)
2025-06-30 7:26 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 11/14] PCI: sky1: Add PCIe host support for CIX Sky1 hans.zhang
` (3 subsequent siblings)
13 siblings, 2 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Document the bindings for the CIX Sky1 PCIe controller configured in
root complex mode with five root ports.
Supports 4 INTx, MSI and MSI-X interrupts from the Arm GICv3 controller.
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Reviewed-by: Peter Chen <peter.chen@cixtech.com>
Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
---
.../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++++++++++++++++
1 file changed, 133 insertions(+)
create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
diff --git a/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
new file mode 100644
index 000000000000..b4395bc06f2f
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
@@ -0,0 +1,133 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/cix,sky1-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: CIX Sky1 PCIe Root Complex
+
+maintainers:
+ - Hans Zhang <hans.zhang@cixtech.com>
+
+description:
+ PCIe root complex controller based on the Cadence PCIe core.
+
+allOf:
+ - $ref: /schemas/pci/pci-host-bridge.yaml#
+ - $ref: /schemas/pci/cdns-pcie.yaml#
+
+properties:
+ compatible:
+ const: cix,sky1-pcie-host
+
+ reg:
+ items:
+ - description: PCIe controller registers.
+ - description: Remote CIX System Unit registers.
+ - description: ECAM registers.
+ - description: Region used for sending PCIe messages.
+
+ reg-names:
+ items:
+ - const: reg
+ - const: rcsu
+ - const: cfg
+ - const: msg
+
+ "#interrupt-cells":
+ const: 1
+
+ interrupt-map-mask:
+ items:
+ - const: 0
+ - const: 0
+ - const: 0
+ - const: 7
+
+ interrupt-map:
+ maxItems: 4
+
+ max-link-speed:
+ maximum: 4
+
+ num-lanes:
+ maximum: 8
+
+ ranges:
+ maxItems: 3
+
+ msi-map:
+ maxItems: 1
+
+ vendor-id:
+ const: 0x1f6c
+
+ device-id:
+ const: 0x0001
+
+ cdns,no-inbound-bar:
+ description: |
+ Indicates the PCIe controller does not require an inbound BAR region.
+ type: boolean
+
+ sky1,pcie-ctrl-id:
+ description: |
+ Specifies the PCIe controller instance identifier (0-4).
+ $ref: /schemas/types.yaml#/definitions/uint32
+ minimum: 0
+ maximum: 4
+
+required:
+ - compatible
+ - reg
+ - reg-names
+ - "#interrupt-cells"
+ - interrupt-map-mask
+ - interrupt-map
+ - max-link-speed
+ - num-lanes
+ - bus-range
+ - device_type
+ - ranges
+ - msi-map
+ - vendor-id
+ - device-id
+ - cdns,no-inbound-bar
+ - sky1,pcie-ctrl-id
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+ pcie_x8_rc: pcie@a010000 {
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a010000 0x00 0x10000>,
+ <0x00 0x0a000000 0x00 0x10000>,
+ <0x00 0x2c000000 0x00 0x4000000>,
+ <0x00 0x60000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <8>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0xc0 0xff>;
+ device_type = "pci";
+ ranges = <0x01000000 0x00 0x60100000 0x00 0x60100000 0x00 0x00100000>,
+ <0x02000000 0x00 0x60200000 0x00 0x60200000 0x00 0x1fe00000>,
+ <0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
+ msi-map = <0xc000 &gic_its 0xc000 0x4000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ sky1,pcie-ctrl-id = <0x0>;
+ cdns,no-inbound-bar;
+ };
--
2.49.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v5 11/14] PCI: sky1: Add PCIe host support for CIX Sky1
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (9 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 4:15 ` [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver hans.zhang
` (2 subsequent siblings)
13 siblings, 0 replies; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Add a driver for the CIX Sky1 SoC PCIe Gen4 (16 GT/s) controller based
on the Cadence PCIe core.
Supports MSI/MSI-X via GICv3, a single Virtual Channel and a single Function.
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Reviewed-by: Peter Chen <peter.chen@cixtech.com>
Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
---
drivers/pci/controller/cadence/Kconfig | 16 +
drivers/pci/controller/cadence/Makefile | 1 +
drivers/pci/controller/cadence/pci-sky1.c | 435 ++++++++++++++++++++++
3 files changed, 452 insertions(+)
create mode 100644 drivers/pci/controller/cadence/pci-sky1.c
diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index 427aa9beca22..63993495b20d 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -80,4 +80,20 @@ config PCI_J721E_EP
Say Y here if you want to support the TI J721E PCIe platform
controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
core.
+
+config PCI_SKY1
+ bool
+
+config PCI_SKY1_HOST
+	tristate "CIX Sky1 PCIe controller (host mode)"
+ depends on OF
+ select PCIE_CADENCE_HOST
+ select PCI_SKY1
+ help
+ Say Y here if you want to support the CIX Sky1 PCIe platform
+ controller in host mode. The CIX Sky1 PCIe controller uses the
+ Cadence HPA (High Performance Architecture) IP, the second
+ generation of the Cadence PCIe IP.
+
+ This driver requires the Cadence PCIe core infrastructure
+ (PCIE_CADENCE_HOST) and a hardware platform adaptation layer.
endmenu
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index f8575a0eee2d..cfe8c89c0427 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o pcie-cadence-ep-hpa.o
obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
obj-$(CONFIG_PCIE_CADENCE_PLAT_HPA) += pcie-cadence-plat-hpa.o
obj-$(CONFIG_PCI_J721E) += pci-j721e.o
+obj-$(CONFIG_PCI_SKY1) += pci-sky1.o
diff --git a/drivers/pci/controller/cadence/pci-sky1.c b/drivers/pci/controller/cadence/pci-sky1.c
new file mode 100644
index 000000000000..a4828b92159e
--- /dev/null
+++ b/drivers/pci/controller/cadence/pci-sky1.c
@@ -0,0 +1,435 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pci-sky1 - PCIe controller driver for CIX's sky1 SoCs
+ *
+ * Author: Hans Zhang <hans.zhang@cixtech.com>
+ */
+
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/pci.h>
+#include <linux/pci-ecam.h>
+
+#include "../../pci.h"
+#include "pcie-cadence.h"
+#include "pcie-cadence-host-common.h"
+
+#define STRAP_REG(n) ((n) * 0x04)
+#define STATUS_REG(n) ((n) * 0x04)
+
+#define RCSU_STRAP_REG 0x300
+#define RCSU_STATUS_REG 0x400
+
+#define RCSU_STRAP_STATUS_SUBREG_X2 0x40
+#define RCSU_STRAP_STATUS_SUBREG_X10 0x60
+#define RCSU_STRAP_STATUS_SUBREG_X11 0x80
+
+#define SKY1_IP_REG_BANK_OFFSET 0x1000
+#define SKY1_IP_CFG_CTRL_REG_BANK_OFFSET 0x4c00
+#define SKY1_IP_AXI_MASTER_COMMON_OFFSET 0xf000
+#define SKY1_AXI_SLAVE_OFFSET 0x9000
+#define SKY1_AXI_MASTER_OFFSET 0xb000
+#define SKY1_AXI_HLS_REGISTERS_OFFSET 0xc000
+#define SKY1_AXI_RAS_REGISTERS_OFFSET 0xe000
+#define SKY1_DTI_REGISTERS_OFFSET 0xd000
+
+#define IP_REG_I_DBG_STS_0 0x420
+
+#define LINK_TRAINING_ENABLE BIT(0)
+#define LINK_COMPLETE BIT(0)
+#define SKY1_MAX_LANES 8
+
+#define BYPASS_PHASE23_MASK BIT(26)
+#define BYPASS_REMOTE_TX_EQ_MASK BIT(25)
+#define DC_MAX_EVAL_ITERATION_MASK GENMASK(24, 18)
+#define LANE_COUNT_IN_MASK GENMASK(17, 15)
+#define PCIE_RATE_MAX_MASK GENMASK(14, 12)
+#define SUPPORTED_PRESET_MASK GENMASK(10, 0)
+
+enum sky1_pcie_id {
+ PCIE_ID_x8,
+ PCIE_ID_x4,
+ PCIE_ID_x2,
+ PCIE_ID_x1_1,
+ PCIE_ID_x1_0,
+};
+
+struct sky1_def_speed_lane {
+ u32 link_speed;
+ u32 max_lanes;
+};
+
+struct sky1_pcie_data {
+ const struct sky1_def_speed_lane *speed_lane;
+ struct cdns_plat_pcie_of_data reg_off;
+};
+
+struct sky1_pcie {
+ struct device *dev;
+ const struct sky1_pcie_data *data;
+ const struct sky1_def_speed_lane *speed_lane;
+ struct cdns_pcie *cdns_pcie;
+ struct cdns_pcie_rc *cdns_pcie_rc;
+
+ struct resource *cfg_res;
+ struct resource *msg_res;
+ struct pci_config_window *cfg;
+ void __iomem *rcsu_base;
+ void __iomem *strap_base;
+ void __iomem *status_base;
+ void __iomem *reg_base;
+ void __iomem *cfg_base;
+ void __iomem *msg_base;
+
+ u32 id;
+ u32 link_speed;
+ u32 num_lanes;
+};
+
+static const struct sky1_def_speed_lane def_speed_lane[] = {
+ [PCIE_ID_x8] = { 4, 8 },
+ [PCIE_ID_x4] = { 4, 4 },
+ [PCIE_ID_x2] = { 4, 2 },
+ [PCIE_ID_x1_1] = { 4, 1 },
+ [PCIE_ID_x1_0] = { 4, 1 },
+};
+
+static void sky1_pcie_clear_and_set_dword(void __iomem *addr, u32 clear,
+ u32 set)
+{
+ u32 val;
+
+ val = readl(addr);
+ val &= ~clear;
+ val |= set;
+ writel(val, addr);
+}
+
+static void sky1_pcie_init_bases(struct sky1_pcie *pcie)
+{
+ u32 strap = 0, status = 0;
+
+ switch (pcie->id) {
+ case PCIE_ID_x1_1:
+ strap = status = RCSU_STRAP_STATUS_SUBREG_X11;
+ break;
+ case PCIE_ID_x1_0:
+ strap = status = RCSU_STRAP_STATUS_SUBREG_X10;
+ break;
+ case PCIE_ID_x2:
+ strap = status = RCSU_STRAP_STATUS_SUBREG_X2;
+ break;
+ case PCIE_ID_x8:
+ case PCIE_ID_x4:
+ default:
+ break;
+ }
+
+ pcie->strap_base = pcie->rcsu_base + RCSU_STRAP_REG + strap;
+ pcie->status_base = pcie->rcsu_base + RCSU_STATUS_REG + status;
+}
+
+static int sky1_pcie_parse_mem(struct sky1_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *res;
+ void __iomem *base;
+ int ret = 0;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rcsu");
+ if (!res) {
+ dev_err(dev, "Failed to get \"rcsu\" resource\n");
+ return -ENXIO;
+ }
+ pcie->rcsu_base = devm_ioremap(dev, res->start, resource_size(res));
+ if (!pcie->rcsu_base) {
+ dev_err(dev, "ioremap failed for resource %pR\n", res);
+ return -ENOMEM;
+ }
+
+ base = devm_platform_ioremap_resource_byname(pdev, "reg");
+ if (IS_ERR(base)) {
+ dev_err(dev, "Failed to map \"reg\" resource\n");
+ return PTR_ERR(base);
+ }
+ pcie->reg_base = base;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "msg");
+ if (!res) {
+ dev_err(dev, "Failed to get \"msg\" resource\n");
+ return -ENXIO;
+ }
+ pcie->msg_res = res;
+ pcie->msg_base = devm_ioremap(dev, res->start, resource_size(res));
+ if (!pcie->msg_base) {
+ dev_err(dev, "ioremap failed for resource %pR\n", res);
+ return -ENOMEM;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
+ if (!res) {
+ dev_err(dev, "Failed to get \"cfg\" resource\n");
+ return -ENXIO;
+ }
+ pcie->cfg_res = res;
+
+ return ret;
+}
+
+static int sky1_pcie_parse_ctrl_id(struct sky1_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ u32 id;
+ int ret;
+
+ ret = of_property_read_u32(dev->of_node, "sky1,pcie-ctrl-id", &id);
+ if (ret < 0) {
+ dev_err(dev, "Failed to read sky1,pcie-ctrl-id: %d\n", ret);
+ return ret;
+ }
+
+ if ((id < PCIE_ID_x8) || (id > PCIE_ID_x1_0)) {
+ dev_err(dev, "Invalid sky1,pcie-ctrl-id %u\n", id);
+ return -EINVAL;
+ }
+ pcie->id = id;
+ pcie->speed_lane = &def_speed_lane[id];
+
+ return ret;
+}
+
+static void sky1_pcie_parse_link_speed(struct sky1_pcie *pcie)
+{
+ int link_speed;
+
+ link_speed = of_pci_get_max_link_speed(pcie->dev->of_node);
+ if (link_speed < 0)
+ link_speed = pcie->speed_lane->link_speed;
+ pcie->link_speed = link_speed;
+}
+
+static int sky1_pcie_parse_num_lanes(struct sky1_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ int ret = 0;
+ u32 lanes;
+
+ ret = of_property_read_u32(dev->of_node, "num-lanes", &lanes);
+ if (ret) {
+ dev_err(dev, "Failed to read num-lanes: %d\n", ret);
+ return -EINVAL;
+ }
+
+ if ((lanes < 1) || (lanes > pcie->speed_lane->max_lanes))
+ lanes = pcie->speed_lane->max_lanes;
+ pcie->num_lanes = lanes;
+
+ return ret;
+}
+
+static int sky1_pcie_get_max_lane_count(struct sky1_pcie *pcie)
+{
+ if (is_power_of_2(pcie->num_lanes) && pcie->num_lanes <= SKY1_MAX_LANES)
+ return ilog2(pcie->num_lanes);
+
+ /* The strap field encodes log2(lane count); fall back to one lane */
+ pcie->num_lanes = 1;
+ return 0;
+}
+
+static void sky1_pcie_set_strap_pin0(struct sky1_pcie *pcie)
+{
+ u32 val;
+
+ val = readl(pcie->strap_base + STRAP_REG(0));
+
+ /* clear bypass_phase23 and bypass_remote_eq */
+ val &= ~(BYPASS_PHASE23_MASK | BYPASS_REMOTE_TX_EQ_MASK);
+
+ /* set iteration timeout */
+ val &= ~DC_MAX_EVAL_ITERATION_MASK;
+ val |= FIELD_PREP(DC_MAX_EVAL_ITERATION_MASK, 0x2);
+
+ /* set support preset val */
+ val &= ~SUPPORTED_PRESET_MASK;
+ val |= FIELD_PREP(SUPPORTED_PRESET_MASK, 0x7ff);
+
+ /* Set link speed */
+ val &= ~PCIE_RATE_MAX_MASK;
+ val |= FIELD_PREP(PCIE_RATE_MAX_MASK, pcie->link_speed - 1);
+
+ /* Set lane number */
+ val &= ~LANE_COUNT_IN_MASK;
+ val |= FIELD_PREP(LANE_COUNT_IN_MASK,
+ sky1_pcie_get_max_lane_count(pcie));
+
+ writel(val, pcie->strap_base + STRAP_REG(0));
+}
+
+static int sky1_pcie_parse_property(struct platform_device *pdev,
+ struct sky1_pcie *pcie)
+{
+ int ret = 0;
+
+ ret = sky1_pcie_parse_ctrl_id(pcie);
+ if (ret < 0)
+ return ret;
+
+ sky1_pcie_parse_link_speed(pcie);
+
+ ret = sky1_pcie_parse_num_lanes(pcie);
+ if (ret < 0)
+ return ret;
+
+ ret = sky1_pcie_parse_mem(pcie);
+ if (ret < 0)
+ return ret;
+
+ sky1_pcie_init_bases(pcie);
+
+ return ret;
+}
+
+static int sky1_pcie_start_link(struct cdns_pcie *cdns_pcie)
+{
+ struct sky1_pcie *pcie = dev_get_drvdata(cdns_pcie->dev);
+
+ sky1_pcie_clear_and_set_dword(pcie->strap_base + STRAP_REG(1),
+ 0, LINK_TRAINING_ENABLE);
+
+ return 0;
+}
+
+static void sky1_pcie_stop_link(struct cdns_pcie *cdns_pcie)
+{
+ struct sky1_pcie *pcie = dev_get_drvdata(cdns_pcie->dev);
+
+ sky1_pcie_clear_and_set_dword(pcie->strap_base + STRAP_REG(1),
+ LINK_TRAINING_ENABLE, 0);
+}
+
+static bool sky1_pcie_link_up(struct cdns_pcie *cdns_pcie)
+{
+ u32 val;
+
+ val = cdns_pcie_hpa_readl(cdns_pcie, REG_BANK_IP_REG,
+ IP_REG_I_DBG_STS_0);
+ return val & LINK_COMPLETE;
+}
+
+static const struct cdns_pcie_ops sky1_pcie_ops = {
+ .start_link = sky1_pcie_start_link,
+ .stop_link = sky1_pcie_stop_link,
+ .link_up = sky1_pcie_link_up,
+};
+
+static int sky1_pcie_probe(struct platform_device *pdev)
+{
+ const struct sky1_pcie_data *data;
+ struct device *dev = &pdev->dev;
+ struct pci_host_bridge *bridge;
+ struct cdns_pcie *cdns_pcie;
+ struct resource_entry *bus;
+ struct cdns_pcie_rc *rc;
+ struct sky1_pcie *pcie;
+ int ret;
+
+ pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+ if (!pcie)
+ return -ENOMEM;
+
+ data = of_device_get_match_data(dev);
+ if (!data)
+ return -EINVAL;
+
+ pcie->data = data;
+ pcie->dev = dev;
+ dev_set_drvdata(dev, pcie);
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
+ if (!bridge)
+ return -ENOMEM;
+
+ bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (!bus)
+ return -ENODEV;
+
+ ret = sky1_pcie_parse_property(pdev, pcie);
+ if (ret < 0)
+ return -ENXIO;
+
+ sky1_pcie_set_strap_pin0(pcie);
+
+ pcie->cfg = pci_ecam_create(dev, pcie->cfg_res, bus->res,
+ &pci_generic_ecam_ops);
+ if (IS_ERR(pcie->cfg))
+ return PTR_ERR(pcie->cfg);
+
+ bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops;
+ rc = pci_host_bridge_priv(bridge);
+ rc->ecam_support_flag = 1;
+ rc->cfg_base = pcie->cfg->win;
+ rc->cfg_res = &pcie->cfg->res;
+
+ cdns_pcie = &rc->pcie;
+ cdns_pcie->dev = dev;
+ cdns_pcie->ops = &sky1_pcie_ops;
+ cdns_pcie->reg_base = pcie->reg_base;
+ cdns_pcie->msg_res = pcie->msg_res;
+ cdns_pcie->cdns_pcie_reg_offsets = &data->reg_off;
+ cdns_pcie->is_rc = data->reg_off.is_rc;
+
+ pcie->cdns_pcie = cdns_pcie;
+ pcie->cdns_pcie_rc = rc;
+ pcie->cfg_base = rc->cfg_base;
+ bridge->sysdata = pcie->cfg;
+
+ ret = cdns_pcie_hpa_host_setup(rc);
+ if (ret < 0) {
+ pci_ecam_free(pcie->cfg);
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct sky1_pcie_data sky1_pcie_rc_data = {
+ .speed_lane = &def_speed_lane[0],
+ .reg_off = {
+ .is_rc = true,
+ .ip_reg_bank_offset = SKY1_IP_REG_BANK_OFFSET,
+ .ip_cfg_ctrl_reg_offset = SKY1_IP_CFG_CTRL_REG_BANK_OFFSET,
+ .axi_mstr_common_offset = SKY1_IP_AXI_MASTER_COMMON_OFFSET,
+ .axi_slave_offset = SKY1_AXI_SLAVE_OFFSET,
+ .axi_master_offset = SKY1_AXI_MASTER_OFFSET,
+ .axi_hls_offset = SKY1_AXI_HLS_REGISTERS_OFFSET,
+ .axi_ras_offset = SKY1_AXI_RAS_REGISTERS_OFFSET,
+ .axi_dti_offset = SKY1_DTI_REGISTERS_OFFSET,
+ },
+};
+
+static const struct of_device_id of_sky1_pcie_match[] = {
+ {
+ .compatible = "cix,sky1-pcie-host",
+ .data = &sky1_pcie_rc_data,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, of_sky1_pcie_match);
+
+static void sky1_pcie_remove(struct platform_device *pdev)
+{
+ struct sky1_pcie *pcie = platform_get_drvdata(pdev);
+
+ pci_ecam_free(pcie->cfg);
+}
+
+static struct platform_driver sky1_pcie_driver = {
+ .probe = sky1_pcie_probe,
+ .remove = sky1_pcie_remove,
+ .driver = {
+ .name = "sky1-pcie",
+ .of_match_table = of_sky1_pcie_match,
+ },
+};
+module_platform_driver(sky1_pcie_driver);
+
+MODULE_DESCRIPTION("PCIe controller driver for CIX Sky1 SoCs");
+MODULE_AUTHOR("Hans Zhang <hans.zhang@cixtech.com>");
+MODULE_LICENSE("GPL");
--
2.49.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (10 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 11/14] PCI: sky1: Add PCIe host support for CIX Sky1 hans.zhang
@ 2025-06-30 4:15 ` hans.zhang
2025-06-30 7:29 ` Krzysztof Kozlowski
2025-06-30 4:16 ` [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1 hans.zhang
2025-06-30 4:16 ` [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board hans.zhang
13 siblings, 1 reply; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:15 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Add myself as maintainer of the Sky1 PCIe host driver.
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Reviewed-by: Peter Chen <peter.chen@cixtech.com>
Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 7f8bee29bb8f..2972e24c7b45 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18951,6 +18951,13 @@ S: Orphan
F: Documentation/devicetree/bindings/pci/cdns,*
F: drivers/pci/controller/cadence/*cadence*
+PCI DRIVER FOR CIX Sky1
+M: Hans Zhang <hans.zhang@cixtech.com>
+L: linux-pci@vger.kernel.org
+S: Maintained
+F: Documentation/devicetree/bindings/pci/cix,sky1-pcie-*.yaml
+F: drivers/pci/controller/cadence/*sky1*
+
PCI DRIVER FOR FREESCALE LAYERSCAPE
M: Minghuan Lian <minghuan.Lian@nxp.com>
M: Mingkai Hu <mingkai.hu@nxp.com>
--
2.49.0
* [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (11 preceding siblings ...)
2025-06-30 4:15 ` [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver hans.zhang
@ 2025-06-30 4:16 ` hans.zhang
2025-06-30 7:33 ` Krzysztof Kozlowski
2025-06-30 4:16 ` [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board hans.zhang
13 siblings, 1 reply; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:16 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Add pcie_x*_rc nodes to support the Sky1 PCIe driver based on the
Cadence PCIe core.
Supports Gen1/Gen2/Gen3/Gen4 speeds, 1/2/4/8 lanes, and MSI/MSI-X
interrupts via the ARM GICv3.
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Reviewed-by: Peter Chen <peter.chen@cixtech.com>
Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
---
arch/arm64/boot/dts/cix/sky1.dtsi | 150 ++++++++++++++++++++++++++++++
1 file changed, 150 insertions(+)
diff --git a/arch/arm64/boot/dts/cix/sky1.dtsi b/arch/arm64/boot/dts/cix/sky1.dtsi
index 9c723917d8ca..1dac0e8d5fc1 100644
--- a/arch/arm64/boot/dts/cix/sky1.dtsi
+++ b/arch/arm64/boot/dts/cix/sky1.dtsi
@@ -289,6 +289,156 @@ mbox_ap2sfh: mailbox@80a0000 {
cix,mbox-dir = "tx";
};
+ pcie_x8_rc: pcie@a010000 { /* X8 */
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a010000 0x00 0x10000>,
+ <0x00 0x0a000000 0x00 0x10000>,
+ <0x00 0x2c000000 0x00 0x4000000>,
+ <0x00 0x60000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <8>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0xc0 0xff>;
+ device_type = "pci";
+ ranges = <0x01000000 0x0 0x60100000 0x0 0x60100000 0x0 0x00100000>,
+ <0x02000000 0x0 0x60200000 0x0 0x60200000 0x0 0x1fe00000>,
+ <0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
+ msi-map = <0xc000 &gic_its 0xc000 0x4000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ cdns,no-inbound-bar;
+ sky1,pcie-ctrl-id = <0x0>;
+ status = "disabled";
+ };
+
+ pcie_x4_rc: pcie@a070000 { /* X4 */
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a070000 0x00 0x10000>,
+ <0x00 0x0a060000 0x00 0x10000>,
+ <0x00 0x29000000 0x00 0x3000000>,
+ <0x00 0x50000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 417 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 420 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <4>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x90 0xbf>;
+ device_type = "pci";
+ ranges = <0x01000000 0x00 0x50100000 0x00 0x50100000 0x00 0x00100000>,
+ <0x02000000 0x00 0x50200000 0x00 0x50200000 0x00 0x0fe00000>,
+ <0x43000000 0x14 0x00000000 0x14 0x00000000 0x04 0x00000000>;
+ msi-map = <0x9000 &gic_its 0x9000 0x3000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ cdns,no-inbound-bar;
+ sky1,pcie-ctrl-id = <0x1>;
+ status = "disabled";
+ };
+
+ pcie_x2_rc: pcie@a0c0000 { /* X2 */
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a0c0000 0x00 0x10000>,
+ <0x00 0x0a060000 0x00 0x10000>,
+ <0x00 0x26000000 0x00 0x3000000>,
+ <0x00 0x40000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 427 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 428 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 429 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 430 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <2>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x60 0x8f>;
+ device_type = "pci";
+ ranges = <0x01000000 0x0 0x40100000 0x0 0x40100000 0x0 0x00100000>,
+ <0x02000000 0x0 0x40200000 0x0 0x40200000 0x0 0x0fe00000>,
+ <0x43000000 0x10 0x00000000 0x10 0x00000000 0x04 0x00000000>;
+ msi-map = <0x6000 &gic_its 0x6000 0x3000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ cdns,no-inbound-bar;
+ sky1,pcie-ctrl-id = <0x2>;
+ status = "disabled";
+ };
+
+ pcie_x1_0_rc: pcie@a0d0000 { /* X1_0 */
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a0d0000 0x00 0x10000>,
+ <0x00 0x0a060000 0x00 0x10000>,
+ <0x00 0x20000000 0x00 0x3000000>,
+ <0x00 0x30000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 436 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 437 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 438 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 439 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <1>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x00 0x2f>;
+ device_type = "pci";
+ ranges = <0x01000000 0x0 0x30100000 0x0 0x30100000 0x0 0x00100000>,
+ <0x02000000 0x0 0x30200000 0x0 0x30200000 0x0 0x07e00000>,
+ <0x43000000 0x08 0x00000000 0x08 0x00000000 0x04 0x00000000>;
+ msi-map = <0x0000 &gic_its 0x0000 0x3000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ cdns,no-inbound-bar;
+ sky1,pcie-ctrl-id = <0x4>;
+ status = "disabled";
+ };
+
+ pcie_x1_1_rc: pcie@a0e0000 { /* X1_1 */
+ compatible = "cix,sky1-pcie-host";
+ reg = <0x00 0x0a0e0000 0x00 0x10000>,
+ <0x00 0x0a060000 0x00 0x10000>,
+ <0x00 0x23000000 0x00 0x3000000>,
+ <0x00 0x38000000 0x00 0x00100000>;
+ reg-names = "reg", "rcsu", "cfg", "msg";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+ interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 445 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 2 &gic 0 0 GIC_SPI 446 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 3 &gic 0 0 GIC_SPI 447 IRQ_TYPE_LEVEL_HIGH 0>,
+ <0 0 0 4 &gic 0 0 GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH 0>;
+ max-link-speed = <4>;
+ num-lanes = <1>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x30 0x5f>;
+ device_type = "pci";
+ ranges = <0x01000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000>,
+ <0x02000000 0x0 0x38200000 0x0 0x38200000 0x0 0x07e00000>,
+ <0x43000000 0x0C 0x00000000 0x0C 0x00000000 0x04 0x00000000>;
+ msi-map = <0x3000 &gic_its 0x3000 0x3000>;
+ vendor-id = <0x1f6c>;
+ device-id = <0x0001>;
+ cdns,no-inbound-bar;
+ sky1,pcie-ctrl-id = <0x3>;
+ status = "disabled";
+ };
+
gic: interrupt-controller@e010000 {
compatible = "arm,gic-v3";
reg = <0x0 0x0e010000 0 0x10000>, /* GICD */
--
2.49.0
* [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
` (12 preceding siblings ...)
2025-06-30 4:16 ` [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1 hans.zhang
@ 2025-06-30 4:16 ` hans.zhang
2025-06-30 7:32 ` Krzysztof Kozlowski
13 siblings, 1 reply; 46+ messages in thread
From: hans.zhang @ 2025-06-30 4:16 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt
Cc: mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel,
Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Add PCIe RC support on the Orion O6 board.
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Reviewed-by: Peter Chen <peter.chen@cixtech.com>
Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
---
arch/arm64/boot/dts/cix/sky1-orion-o6.dts | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
index d74964d53c3b..44710d54ddad 100644
--- a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
+++ b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
@@ -37,3 +37,23 @@ linux,cma {
&uart2 {
status = "okay";
};
+
+&pcie_x8_rc {
+ status = "okay";
+};
+
+&pcie_x4_rc {
+ status = "okay";
+};
+
+&pcie_x2_rc {
+ status = "okay";
+};
+
+&pcie_x1_0_rc {
+ status = "okay";
+};
+
+&pcie_x1_1_rc {
+ status = "okay";
+};
--
2.49.0
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 4:15 ` [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings hans.zhang
@ 2025-06-30 5:36 ` Rob Herring (Arm)
2025-06-30 5:56 ` Hans Zhang
2025-06-30 7:26 ` Krzysztof Kozlowski
1 sibling, 1 reply; 46+ messages in thread
From: Rob Herring (Arm) @ 2025-06-30 5:36 UTC (permalink / raw)
To: hans.zhang
Cc: linux-kernel, kw, mpillai, lpieralisi, krzk+dt, fugang.duan,
kwilczynski, linux-pci, mani, guoyin.chen, bhelgaas, devicetree,
conor+dt, cix-kernel-upstream, peter.chen
On Mon, 30 Jun 2025 12:15:57 +0800, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Document the bindings for CIX Sky1 PCIe Controller configured in
> root complex mode with five root ports.
>
> Supports 4 INTx, MSI and MSI-X interrupts from the ARM GICv3 controller.
>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
> ---
> .../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++++++++++++++++
> 1 file changed, 133 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>
My bot found errors running 'make dt_binding_check' on your patch:
yamllint warnings/errors:
dtschema/dtc warnings/errors:
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml: properties:compatible:oneOf: [{'const': 'cix,sky1-pcie-host'}] should not be valid under {'items': {'propertyNames': {'const': 'const'}, 'required': ['const']}}
hint: Use 'enum' rather than 'oneOf' + 'const' entries
from schema $id: http://devicetree.org/meta-schemas/keywords.yaml#
Error: Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.example.dts:29.47-48 syntax error
FATAL ERROR: Unable to parse input tree
make[2]: *** [scripts/Makefile.dtbs:131: Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.example.dtb] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [/builds/robherring/dt-review-ci/linux/Makefile:1525: dt_binding_check] Error 2
make: *** [Makefile:248: __sub-make] Error 2
doc reference errors (make refcheckdocs):
See https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20250630041601.399921-11-hans.zhang@cixtech.com
The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.
If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:
pip3 install dtschema --upgrade
Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.
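The bot's hint can be illustrated with a minimal fragment of the flagged schema (a sketch based on the compatible property quoted elsewhere in this thread, not the full file):

```yaml
# Flagged form: a oneOf wrapping a single const entry
# compatible:
#   oneOf:
#     - const: cix,sky1-pcie-host

# Form suggested by the meta-schema hint:
compatible:
  enum:
    - cix,sky1-pcie-host

# A plain const also passes when only one compatible exists:
# compatible:
#   const: cix,sky1-pcie-host
```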
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 5:36 ` Rob Herring (Arm)
@ 2025-06-30 5:56 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 5:56 UTC (permalink / raw)
To: Rob Herring (Arm)
Cc: linux-kernel, kw, mpillai, lpieralisi, krzk+dt, fugang.duan,
kwilczynski, linux-pci, mani, guoyin.chen, bhelgaas, devicetree,
conor+dt, cix-kernel-upstream, peter.chen
On 2025/6/30 13:36, Rob Herring (Arm) wrote:
> On Mon, 30 Jun 2025 12:15:57 +0800, hans.zhang@cixtech.com wrote:
>> From: Hans Zhang <hans.zhang@cixtech.com>
>>
>> Document the bindings for CIX Sky1 PCIe Controller configured in
>> root complex mode with five root ports.
>>
>> Supports 4 INTx, MSI and MSI-X interrupts from the ARM GICv3 controller.
>>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
>> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
>> ---
>> .../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++++++++++++++++
>> 1 file changed, 133 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>>
>
> My bot found errors running 'make dt_binding_check' on your patch:
>
> yamllint warnings/errors:
>
> dtschema/dtc warnings/errors:
> /builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml: properties:compatible:oneOf: [{'const': 'cix,sky1-pcie-host'}] should not be valid under {'items': {'propertyNames': {'const': 'const'}, 'required': ['const']}}
> hint: Use 'enum' rather than 'oneOf' + 'const' entries
> from schema $id: http://devicetree.org/meta-schemas/keywords.yaml#
> Error: Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.example.dts:29.47-48 syntax error
> FATAL ERROR: Unable to parse input tree
> make[2]: *** [scripts/Makefile.dtbs:131: Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.example.dtb] Error 1
> make[2]: *** Waiting for unfinished jobs....
> make[1]: *** [/builds/robherring/dt-review-ci/linux/Makefile:1525: dt_binding_check] Error 2
> make: *** [Makefile:248: __sub-make] Error 2
>
> doc reference errors (make refcheckdocs):
>
> See https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20250630041601.399921-11-hans.zhang@cixtech.com
>
> The base for the series is generally the latest rc1. A different dependency
> should be noted in *this* patch.
>
> If you already ran 'make dt_binding_check' and didn't see the above
> error(s), then make sure 'yamllint' is installed and dt-schema is up to
> date:
>
> pip3 install dtschema --upgrade
>
> Please check and re-submit after running the above command yourself. Note
> that DT_SCHEMA_FILES can be set to your schema file to speed up checking
> your schema. However, it must be unset to test all examples with your schema.
>
Dear Rob,
Thank you very much for your reply and the reminder. The next version
will fix this. With the following modification, make dt_binding_check
reports no errors:
s/sky1,pcie-ctrl-id/cix,pcie-ctrl-id/
Best regards,
Hans
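For reference, the rename described above would look like this in the schema, assuming the rest of the property definition from v5 is kept (Krzysztof's review later in this thread questions the property itself, so this is only a sketch):

```yaml
cix,pcie-ctrl-id:
  description:
    Specifies the PCIe controller instance identifier (0-4).
  $ref: /schemas/types.yaml#/definitions/uint32
  minimum: 0
  maximum: 4
```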
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 4:15 ` [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings hans.zhang
2025-06-30 5:36 ` Rob Herring (Arm)
@ 2025-06-30 7:26 ` Krzysztof Kozlowski
2025-06-30 8:29 ` Hans Zhang
2025-06-30 15:54 ` Hans Zhang
1 sibling, 2 replies; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:26 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:15:57PM +0800, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Document the bindings for CIX Sky1 PCIe Controller configured in
> root complex mode with five root ports.
>
> Supports 4 INTx, MSI and MSI-X interrupts from the ARM GICv3 controller.
>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
> ---
> .../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++++++++++++++++
> 1 file changed, 133 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>
> diff --git a/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
> new file mode 100644
> index 000000000000..b4395bc06f2f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
> @@ -0,0 +1,133 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/cix,sky1-pcie-host.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: CIX Sky1 PCIe Root Complex
> +
> +maintainers:
> + - Hans Zhang <hans.zhang@cixtech.com>
> +
> +description:
> + PCIe root complex controller based on the Cadence PCIe core.
> +
> +allOf:
> + - $ref: /schemas/pci/pci-host-bridge.yaml#
> + - $ref: /schemas/pci/cdns-pcie.yaml#
> +
> +properties:
> + compatible:
> + oneOf:
> + - const: cix,sky1-pcie-host
> +
> + reg:
> + items:
> + - description: PCIe controller registers.
> + - description: Remote CIX System Unit registers.
> + - description: ECAM registers.
> + - description: Region for sending messages registers.
> +
> + reg-names:
> + items:
> + - const: reg
> + - const: rcsu
> + - const: cfg
cfg is the second, look at cdns bindings.
> + - const: msg
> +
> + "#interrupt-cells":
> + const: 1
> +
> + interrupt-map-mask:
> + items:
> + - const: 0
> + - const: 0
> + - const: 0
> + - const: 7
> +
> + interrupt-map:
> + maxItems: 4
> +
> + max-link-speed:
> + maximum: 4
Why are you redefining core properties?
> +
> + num-lanes:
> + maximum: 8
> +
> + ranges:
> + maxItems: 3
> +
> + msi-map:
> + maxItems: 1
> +
> + vendor-id:
> + const: 0x1f6c
Why? This is implied by compatible.
> +
> + device-id:
> + enum:
> + - 0x0001
Why? This is implied by compatible.
> +
> + cdns,no-inbound-bar:
That's not a cdns binding, so wrong prefix.
> + description: |
Do not need '|' unless you need to preserve formatting.
> + Indicates the PCIe controller does not require an inbound BAR region.
And anyway this is implied by compatible, drop.
> + type: boolean
> +
> + sky1,pcie-ctrl-id:
> + description: |
> + Specifies the PCIe controller instance identifier (0-4).
No, you don't get an instance ID. Drop the property and look how other
bindings encoded it (not sure about the purpose and you did not explain
it, so cannot advise).
> + $ref: /schemas/types.yaml#/definitions/uint32
> + minimum: 0
> + maximum: 4
> +
> +required:
> + - compatible
> + - reg
> + - reg-names
> + - "#interrupt-cells"
> + - interrupt-map-mask
> + - interrupt-map
> + - max-link-speed
> + - num-lanes
> + - bus-range
> + - device_type
> + - ranges
> + - msi-map
> + - vendor-id
> + - device-id
> + - cdns,no-inbound-bar
> + - sky1,pcie-ctrl-id
> +
> +unevaluatedProperties: false
> +
> +examples:
> + - |
> + #include <dt-bindings/gpio/gpio.h>
> +
> + pcie_x8_rc: pcie@a010000 {
Drop unused label.
Best regards,
Krzysztof
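Taken together, the comments above imply a reg layout matching the existing cdns bindings; a sketch of the adjusted fragment (the final v6 form is not part of this thread):

```yaml
reg:
  items:
    - description: PCIe controller registers.
    - description: ECAM registers.
    - description: Remote CIX System Unit registers.
    - description: Region for sending messages.

reg-names:
  items:
    - const: reg
    - const: cfg
    - const: rcsu
    - const: msg

# vendor-id, device-id and the no-inbound-bar flag would be dropped,
# since they are implied by the compatible string.
```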
* Re: [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration
2025-06-30 4:15 ` [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration hans.zhang
@ 2025-06-30 7:27 ` Krzysztof Kozlowski
2025-06-30 8:03 ` Hans Zhang
2025-06-30 10:28 ` Krzysztof Kozlowski
1 sibling, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:27 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:15:49PM +0800, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Document the compatible property for HPA (High Performance Architecture)
> PCIe controller EP configuration.
>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Missing SoB.
Why are you sending someone else's patches? This just duplicates the
review and creates confusion.
Did you address ENTIRE previous review when you resent this?
Best regards,
Krzysztof
* Re: [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver
2025-06-30 4:15 ` [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver hans.zhang
@ 2025-06-30 7:29 ` Krzysztof Kozlowski
2025-06-30 8:06 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:29 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:15:59PM +0800, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Add myself as maintainer of Sky1 PCIe host driver
>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
Where? Provide please lore links, since your changelog/cover letter is
missing them.
Best regards,
Krzysztof
* Re: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 4:15 ` [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
@ 2025-06-30 7:30 ` Krzysztof Kozlowski
2025-06-30 8:02 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:30 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Document the compatible property for HPA (High Performance Architecture)
> PCIe controller RP configuration.
I don't see Conor's comment addressed:
https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
You cannot just send someone's work and bypassing the review feedback.
>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
SoB.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board
2025-06-30 4:16 ` [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board hans.zhang
@ 2025-06-30 7:32 ` Krzysztof Kozlowski
2025-06-30 8:08 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:32 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:16:01PM +0800, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Add PCIe RC support on Orion O6 board.
>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
> Where? Please provide lore links. These happened AFTER the SoB, so they
> must have been made public, right?
> ---
> arch/arm64/boot/dts/cix/sky1-orion-o6.dts | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
> index d74964d53c3b..44710d54ddad 100644
> --- a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
> +++ b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
> @@ -37,3 +37,23 @@ linux,cma {
> &uart2 {
> status = "okay";
> };
> +
> +&pcie_x8_rc {
> + status = "okay";
And really two people reviewed this trivial changes? Really?
Plus, what did their review actually check? This is obviously wrong - not
following DTS coding style, so what did such a review mean? What did it
actually verify?
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1
2025-06-30 4:16 ` [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1 hans.zhang
@ 2025-06-30 7:33 ` Krzysztof Kozlowski
2025-06-30 8:44 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 7:33 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:16:00PM +0800, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Add pcie_x*_rc node to support Sky1 PCIe driver based on the
> Cadence PCIe core.
>
> Supports Gen1/Gen2/Gen3/Gen4, 1/2/4/8 lane, MSI/MSI-x interrupts
> using the ARM GICv3.
>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
Where?
> ---
> arch/arm64/boot/dts/cix/sky1.dtsi | 150 ++++++++++++++++++++++++++++++
> 1 file changed, 150 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/cix/sky1.dtsi b/arch/arm64/boot/dts/cix/sky1.dtsi
> index 9c723917d8ca..1dac0e8d5fc1 100644
> --- a/arch/arm64/boot/dts/cix/sky1.dtsi
> +++ b/arch/arm64/boot/dts/cix/sky1.dtsi
> @@ -289,6 +289,156 @@ mbox_ap2sfh: mailbox@80a0000 {
> cix,mbox-dir = "tx";
> };
>
> + pcie_x8_rc: pcie@a010000 { /* X8 */
> + compatible = "cix,sky1-pcie-host";
> + reg = <0x00 0x0a010000 0x00 0x10000>,
> + <0x00 0x0a000000 0x00 0x10000>,
> + <0x00 0x2c000000 0x00 0x4000000>,
> + <0x00 0x60000000 0x00 0x00100000>;
> + reg-names = "reg", "rcsu", "cfg", "msg";
> + #interrupt-cells = <1>;
> + interrupt-map-mask = <0 0 0 0x7>;
> + interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
> + <0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
> + <0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
> + <0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
> + max-link-speed = <4>;
> + num-lanes = <8>;
> + #address-cells = <3>;
> + #size-cells = <2>;
> + bus-range = <0xc0 0xff>;
> + device_type = "pci";
> + ranges = <0x01000000 0x0 0x60100000 0x0 0x60100000 0x0 0x00100000>,
> + <0x02000000 0x0 0x60200000 0x0 0x60200000 0x0 0x1fe00000>,
> + <0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
And none of the two reviewers asked you to follow DTS coding style? If
reviewer knows not much about DTS, don't review. Add an ack or
something, dunno, or actually perform proper review.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 7:30 ` Krzysztof Kozlowski
@ 2025-06-30 8:02 ` Hans Zhang
2025-06-30 8:06 ` Manikandan Karunakaran Pillai
0 siblings, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:02 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:30, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
>> From: Manikandan K Pillai <mpillai@cadence.com>
>>
>> Document the compatible property for HPA (High Performance Architecture)
>> PCIe controller RP configuration.
>
> I don't see Conor's comment addressed:
>
> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>
> You cannot just send someone's work and bypassing the review feedback.
>
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>
> SoB.
Dear Krzysztof,
Thank you very much for your reply. Regarding the questions mentioned
above, Manikandan will answer them.
Sorry, I missed it. Will add:
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
Best regards,
Hans
>
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration
2025-06-30 7:27 ` Krzysztof Kozlowski
@ 2025-06-30 8:03 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:03 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:27, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:15:49PM +0800, hans.zhang@cixtech.com wrote:
>> From: Manikandan K Pillai <mpillai@cadence.com>
>>
>> Document the compatible property for HPA (High Performance Architecture)
>> PCIe controller EP configuration.
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>
> Missing SoB.
>
Dear Krzysztof,
Thank you very much for your reply. Sorry, I missed it. Will add:
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> Why are you sending someone else's patches? This just duplicates the
> review and creates confusion.
>
> Did you address ENTIRE previous review when you resent this?
>
Previously, due to an environment issue on Manikandan's side, the entire
series of patches couldn't be sent. Now Manikandan sends his patches to
me by email rather than via git send-email, and I send them out together
with the CIX Sky1 patches.
The following is the previous communication record:
https://lore.kernel.org/linux-pci/4bcc07b1-00ce-4ff9-bf23-e06b78950026@cixtech.com/
https://lore.kernel.org/linux-pci/d275cfe1-db7e-47d6-9ec6-b36f13524d65@kernel.org/
Regarding your point that some maintainers' earlier review comments were
not resolved, Manikandan will reply here.
Best regards,
Hans
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver
2025-06-30 7:29 ` Krzysztof Kozlowski
@ 2025-06-30 8:06 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:06 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:29, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:15:59PM +0800, hans.zhang@cixtech.com wrote:
>> From: Hans Zhang <hans.zhang@cixtech.com>
>>
>> Add myself as maintainer of Sky1 PCIe host driver
>>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
>> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
>
> Where? Provide please lore links, since your changelog/cover letter is
> missing them.
>
Dear Krzysztof,
Thank you very much for your reply.
We reviewed it internally first and then sent it to the PCI mailing
list. I will drop the internal review tags in the next version and wait
for the maintainers' review tags. I'm very sorry.
Best regards,
Hans
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
* RE: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 8:02 ` Hans Zhang
@ 2025-06-30 8:06 ` Manikandan Karunakaran Pillai
2025-06-30 11:11 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-06-30 8:06 UTC (permalink / raw)
To: Hans Zhang, Krzysztof Kozlowski
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
>On 2025/6/30 15:30, Krzysztof Kozlowski wrote:
>> On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
>>> From: Manikandan K Pillai <mpillai@cadence.com>
>>>
>>> Document the compatible property for HPA (High Performance
>Architecture)
>>> PCIe controller RP configuration.
>>
>> I don't see Conor's comment addressed:
>>
>> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>>
>> You cannot just send someone's work and bypassing the review feedback.
I thought the comment was implicitly addressed when the device drivers
were separated out based on other review comments in this patch. To make
it clearer, in the next version I will add the following description to
the dt-binding patch:
"The High Performance Architecture differs from the legacy architecture
controller in the design of its register banks, register definitions,
and hardware initialization sequences. It is treated as a different
device because of the large number of changes required in the device
driver, hence the new compatible."
>>
>>>
>>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>>
>> SoB.
>
>Dear Krzysztof,
>
>Thank you very much for your reply. The questions mentioned above,
>please answer by Manikandan.
>
>Sorry, I missed it. Will add:
>Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>
>Best regards,
>Hans
>
>>
>> Best regards,
>> Krzysztof
>>
^ permalink raw reply [flat|nested] 46+ messages in thread
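The split Manikandan describes — a new compatible for the HPA register map, with SoC-specific compatibles in front of it — could be expressed in the binding roughly as follows. This is a sketch only; the compatible strings shown are assumptions for illustration, not the accepted binding:

```yaml
# Hypothetical fragment of a cdns PCIe host binding. The HPA controller
# gets its own compatible; SoC integrations list a specific compatible
# first with the generic one as fallback.
properties:
  compatible:
    oneOf:
      - const: cdns,cdns-pcie-host          # legacy register map
      - items:
          - enum:
              - cix,sky1-pcie-host          # SoC-specific compatible
          - const: cdns,cdns-pcie-hpa-host  # assumed HPA fallback
```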
* Re: [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board
2025-06-30 7:32 ` Krzysztof Kozlowski
@ 2025-06-30 8:08 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:08 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:32, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:16:01PM +0800, hans.zhang@cixtech.com wrote:
>> From: Hans Zhang <hans.zhang@cixtech.com>
>>
>> Add PCIe RC support on Orion O6 board.
>>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
>> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
>
> Where? Please provide lore links. The happened AFTER the SoB, so they
> must have been made public, right?
>
Dear Krzysztof,
I have replied to patch 12/14. The subsequent versions will be deleted.
>> ---
>> arch/arm64/boot/dts/cix/sky1-orion-o6.dts | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
>> index d74964d53c3b..44710d54ddad 100644
>> --- a/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
>> +++ b/arch/arm64/boot/dts/cix/sky1-orion-o6.dts
>> @@ -37,3 +37,23 @@ linux,cma {
>> &uart2 {
>> status = "okay";
>> };
>> +
>> +&pcie_x8_rc {
>> + status = "okay";
>
> And really two people reviewed this trivial changes? Really?
>
> Plus what their review actually checked? This is obviously wrong - not
> following DTS coding style, so what such review meant? What did it
>
Will delete.
Best regards,
Hans
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 7:26 ` Krzysztof Kozlowski
@ 2025-06-30 8:29 ` Hans Zhang
2025-06-30 11:14 ` Krzysztof Kozlowski
2025-06-30 15:54 ` Hans Zhang
1 sibling, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:29 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:26, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:15:57PM +0800, hans.zhang@cixtech.com wrote:
>> From: Hans Zhang <hans.zhang@cixtech.com>
>>
>> Document the bindings for CIX Sky1 PCIe Controller configured in
>> root complex mode with five root port.
>>
>> Supports 4 INTx, MSI and MSI-x interrupts from the ARM GICv3 controller.
>>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
>> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
>> ---
>> .../bindings/pci/cix,sky1-pcie-host.yaml | 133 ++++++++++++++++++
>> 1 file changed, 133 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>> new file mode 100644
>> index 000000000000..b4395bc06f2f
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
>> @@ -0,0 +1,133 @@
>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/cix,sky1-pcie-host.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: CIX Sky1 PCIe Root Complex
>> +
>> +maintainers:
>> + - Hans Zhang <hans.zhang@cixtech.com>
>> +
>> +description:
>> + PCIe root complex controller based on the Cadence PCIe core.
>> +
>> +allOf:
>> + - $ref: /schemas/pci/pci-host-bridge.yaml#
>> + - $ref: /schemas/pci/cdns-pcie.yaml#
>> +
>> +properties:
>> + compatible:
>> + oneOf:
>> + - const: cix,sky1-pcie-host
>> +
>> + reg:
>> + items:
>> + - description: PCIe controller registers.
>> + - description: Remote CIX System Unit registers.
>> + - description: ECAM registers.
>> + - description: Region for sending messages registers.
>> +
>> + reg-names:
>> + items:
>> + - const: reg
>> + - const: rcsu
>> + - const: cfg
>
> cfg is the second, look at cdns bindings.
>
Dear Krzysztof,
Thank you very much for your reply. Will delete it.
>> + - const: msg
>> +
>> + "#interrupt-cells":
>> + const: 1
>> +
>> + interrupt-map-mask:
>> + items:
>> + - const: 0
>> + - const: 0
>> + - const: 0
>> + - const: 7
>> +
>> + interrupt-map:
>> + maxItems: 4
>> +
>> + max-link-speed:
>> + maximum: 4
>
> Why are you redefining core properties?
I see. I will just keep it in "required" and delete the redefinition here.
>
>> +
>> + num-lanes:
>> + maximum: 8
>> +
>> + ranges:
>> + maxItems: 3
>> +
>> + msi-map:
>> + maxItems: 1
>> +
>> + vendor-id:
>> + const: 0x1f6c
>
> Why? This is implied by compatible.
Because when we designed the SoC RTL, the hardware defaults were not set
to our company's vendor ID and device ID. We are members of PCI-SIG, so
we need to set the vendor ID and device ID in the Root Port driver;
otherwise, the output of lspci will be displayed incorrectly.
>
>> +
>> + device-id:
>> + enum:
>> + - 0x0001
>
> Why? This is implied by compatible.
The reason is the same as above.
>
>> +
>> + cdns,no-inbound-bar:
>
> That's not a cdns binding, so wrong prefix.
It will be added to Cadence's Doc. I will add a separate patch. What do
you think?
>
>> + description: |
>
> Do not need '|' unless you need to preserve formatting.
Will delete '|'.
>
>> + Indicates the PCIe controller does not require an inbound BAR region.
>
> And anyway this is implied by compatible, drop.
>
Because the Cadence core driver has this check; the latest code on the
current Linux master all goes through this path. As follows:
int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
cdns_pcie_host_init_address_translation(rc);
cdns_pcie_host_map_dma_ranges(rc);
cdns_pcie_host_bar_ib_config
So this attribute has been added here, or is there a better way?
>> + type: boolean
>> +
>> + sky1,pcie-ctrl-id:
>> + description: |
>> + Specifies the PCIe controller instance identifier (0-4).
>
> No, you don't get an instance ID. Drop the property and look how other
> bindings encoded it (not sure about the purpose and you did not explain
> it, so cannot advise).
>
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + minimum: 0
>> + maximum: 4
>> +
>> +required:
>> + - compatible
>> + - reg
>> + - reg-names
>> + - "#interrupt-cells"
>> + - interrupt-map-mask
>> + - interrupt-map
>> + - max-link-speed
>> + - num-lanes
>> + - bus-range
>> + - device_type
>> + - ranges
>> + - msi-map
>> + - vendor-id
>> + - device-id
>> + - cdns,no-inbound-bar
>> + - sky1,pcie-ctrl-id
>> +
>> +unevaluatedProperties: false
>> +
>> +examples:
>> + - |
>> + #include <dt-bindings/gpio/gpio.h>
>> +
>> + pcie_x8_rc: pcie@a010000 {
>
> Drop unused label.
Will delete pcie_x8_rc.
Best regards,
Hans
>
>
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1
2025-06-30 7:33 ` Krzysztof Kozlowski
@ 2025-06-30 8:44 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 8:44 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:33, Krzysztof Kozlowski wrote:
> On Mon, Jun 30, 2025 at 12:16:00PM +0800, hans.zhang@cixtech.com wrote:
>> From: Hans Zhang <hans.zhang@cixtech.com>
>>
>> Add pcie_x*_rc node to support Sky1 PCIe driver based on the
>> Cadence PCIe core.
>>
>> Supports Gen1/Gen2/Gen3/Gen4, 1/2/4/8 lane, MSI/MSI-x interrupts
>> using the ARM GICv3.
>>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> Reviewed-by: Peter Chen <peter.chen@cixtech.com>
>> Reviewed-by: Manikandan K Pillai <mpillai@cadence.com>
>
> Where?
Dear Krzysztof,
Thank you very much for your reply. Will delete.
>
>> ---
>> arch/arm64/boot/dts/cix/sky1.dtsi | 150 ++++++++++++++++++++++++++++++
>> 1 file changed, 150 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/cix/sky1.dtsi b/arch/arm64/boot/dts/cix/sky1.dtsi
>> index 9c723917d8ca..1dac0e8d5fc1 100644
>> --- a/arch/arm64/boot/dts/cix/sky1.dtsi
>> +++ b/arch/arm64/boot/dts/cix/sky1.dtsi
>> @@ -289,6 +289,156 @@ mbox_ap2sfh: mailbox@80a0000 {
>> cix,mbox-dir = "tx";
>> };
>>
>> + pcie_x8_rc: pcie@a010000 { /* X8 */
>> + compatible = "cix,sky1-pcie-host";
>> + reg = <0x00 0x0a010000 0x00 0x10000>,
>> + <0x00 0x0a000000 0x00 0x10000>,
>> + <0x00 0x2c000000 0x00 0x4000000>,
>> + <0x00 0x60000000 0x00 0x00100000>;
>> + reg-names = "reg", "rcsu", "cfg", "msg";
>> + #interrupt-cells = <1>;
>> + interrupt-map-mask = <0 0 0 0x7>;
>> + interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
>> + <0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
>> + <0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
>> + <0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
>> + max-link-speed = <4>;
>> + num-lanes = <8>;
>> + #address-cells = <3>;
>> + #size-cells = <2>;
>> + bus-range = <0xc0 0xff>;
>> + device_type = "pci";
>> + ranges = <0x01000000 0x0 0x60100000 0x0 0x60100000 0x0 0x00100000>,
>> + <0x02000000 0x0 0x60200000 0x0 0x60200000 0x0 0x1fe00000>,
>> + <0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
>
> And none of the two reviewers asked you to follow DTS coding style? If
> reviewer knows not much about DTS, don't review. Add an ack or
> something, dunno, or actually perform proper review.
>
Understood.
For the ordering of the properties this time, I referred to the
following submissions:
linux master branch:
arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
pcie2x1l2: pcie@fe190000
Submissions under review:
https://patchwork.kernel.org/project/linux-pci/patch/20250610090714.3321129-8-christian.bruel@foss.st.com/
Then should I strictly follow the following document?
Documentation/devicetree/bindings/dts-coding-style.rst
The following order of properties in device nodes is preferred:
1. "compatible"
2. "reg"
3. "ranges"
4. Standard/common properties (defined by common bindings, e.g. without
vendor-prefixes)
5. Vendor-specific properties
6. "status" (if applicable)
7. Child nodes, where each node is preceded with a blank line
Best regards,
Hans
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 46+ messages in thread
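Applying the ordering Hans quotes from dts-coding-style.rst to the node under discussion would give roughly the following. This is a sketch using the values quoted in the review, not the final DTS:

```dts
/* Sketch: pcie@a010000 reordered per dts-coding-style.rst —
 * compatible, reg, ranges, then common properties, vendor-specific
 * properties last. */
pcie_x8_rc: pcie@a010000 {
	compatible = "cix,sky1-pcie-host";
	reg = <0x00 0x0a010000 0x00 0x10000>,
	      <0x00 0x0a000000 0x00 0x10000>,
	      <0x00 0x2c000000 0x00 0x4000000>,
	      <0x00 0x60000000 0x00 0x00100000>;
	reg-names = "reg", "rcsu", "cfg", "msg";
	ranges = <0x01000000 0x0 0x60100000 0x0 0x60100000 0x0 0x00100000>,
		 <0x02000000 0x0 0x60200000 0x0 0x60200000 0x0 0x1fe00000>,
		 <0x43000000 0x18 0x00000000 0x18 0x00000000 0x04 0x00000000>;
	#address-cells = <3>;
	#size-cells = <2>;
	#interrupt-cells = <1>;
	bus-range = <0xc0 0xff>;
	device_type = "pci";
	interrupt-map-mask = <0 0 0 0x7>;
	interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH 0>,
			<0 0 0 2 &gic 0 0 GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH 0>,
			<0 0 0 3 &gic 0 0 GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH 0>,
			<0 0 0 4 &gic 0 0 GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH 0>;
	max-link-speed = <4>;
	num-lanes = <8>;
};
```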
* Re: [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration
2025-06-30 4:15 ` [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration hans.zhang
2025-06-30 7:27 ` Krzysztof Kozlowski
@ 2025-06-30 10:28 ` Krzysztof Kozlowski
1 sibling, 0 replies; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 10:28 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On Mon, Jun 30, 2025 at 12:15:49PM +0800, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Document the compatible property for HPA (High Performance Architecture)
> PCIe controller EP configuration.
That's the same patch which we already commented on. :/
>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> ---
> .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> index 8735293962ee..c3f0a620f1c2 100644
> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
> title: Cadence PCIe EP Controller
>
> maintainers:
> - - Tom Joseph <tjoseph@cadence.com>
> + - Manikandan K Pillai <mpillai@cadence.com>
This is not explained in commit msg. You need to say WHY you are doing
such changes.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 8:06 ` Manikandan Karunakaran Pillai
@ 2025-06-30 11:11 ` Krzysztof Kozlowski
2025-07-01 11:56 ` Manikandan Karunakaran Pillai
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 11:11 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai, Hans Zhang
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
On 30/06/2025 10:06, Manikandan Karunakaran Pillai wrote:
>
>
>> On 2025/6/30 15:30, Krzysztof Kozlowski wrote:
>>> On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
>>>> From: Manikandan K Pillai <mpillai@cadence.com>
>>>>
>>>> Document the compatible property for HPA (High Performance
>> Architecture)
>>>> PCIe controller RP configuration.
>>>
>>> I don't see Conor's comment addressed:
>>>
>>> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>>>
>>> You cannot just send someone's work and bypassing the review feedback.
>
> I thought the comment was implicitly addressed when the device drivers were separated out based on other review comments in this patch.
> To make it more clear, in the next patch I will add the following description for the dt-binding patch
>
> "The High performance architecture is different from legacy architecture controller in design of register banks,
> register definitions, hardware sequences of initialization and is considered as a different device due to the
> large number of changes required in the device driver and hence adding a new compatible."
That's still vague. Anyway, this does not address the other concern that
generic compatibles are discouraged and we expect specific compatibles.
We already said that, and what? You sent the same patch.
So no, don't send the same patch.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 8:29 ` Hans Zhang
@ 2025-06-30 11:14 ` Krzysztof Kozlowski
2025-06-30 15:30 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-06-30 11:14 UTC (permalink / raw)
To: Hans Zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 30/06/2025 10:29, Hans Zhang wrote:
>>> +
>>> + num-lanes:
>>> + maximum: 8
>>> +
>>> + ranges:
>>> + maxItems: 3
>>> +
>>> + msi-map:
>>> + maxItems: 1
>>> +
>>> + vendor-id:
>>> + const: 0x1f6c
>>
>> Why? This is implied by compatible.
>
> Because when we designed the SOC RTL, it was not set to the vendor id
> and device id of our company. We are members of PCI-SIG. So we need to
> set the vendor id and device id in the Root Port driver. Otherwise, the
> output of lspci will be displayed incorrectly.
Please read carefully. Previous discussions were also pointlessly
ping-ponging on irrelevant arguments. Did I suggest you do not have to
set it in root port driver? No. If this is const here, this is implied
by compatible and completely redundant, because your driver knows this
value already. It already has all the information to deduce this value
from the compatible.
>
>>
>>> +
>>> + device-id:
>>> + enum:
>>> + - 0x0001
>>
>> Why? This is implied by compatible.
>
> The reason is the same as above.
>
>>
>>> +
>>> + cdns,no-inbound-bar:
>>
>> That's not a cdns binding, so wrong prefix.
>
> It will be added to Cadence's Doc. I will add a separate patch. What do
> you think?
>
>>
>>> + description: |
>>
>> Do not need '|' unless you need to preserve formatting.
>
> Will delete '|'.
>
>>
>>> + Indicates the PCIe controller does not require an inbound BAR region.
>>
>> And anyway this is implied by compatible, drop.
>>
>
> Because Cadence core driver has this judgment, the latest code of the
> current linux master all has this process. As follows:
> int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
> cdns_pcie_host_init_address_translation(rc);
> cdns_pcie_host_map_dma_ranges(rc);
> cdns_pcie_host_bar_ib_config
And you cannot fix or change drivers? How does it matter for discussion
here?
>
> So this attribute has been added here, or is there a better way?
Of course, like every other driver in the Linux kernel. This is FIXED for
your platform, so set it in your CIX driver.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 11:14 ` Krzysztof Kozlowski
@ 2025-06-30 15:30 ` Hans Zhang
2025-07-02 20:23 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 15:30 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 19:14, Krzysztof Kozlowski wrote:
> On 30/06/2025 10:29, Hans Zhang wrote:
>>>> +
>>>> + num-lanes:
>>>> + maximum: 8
>>>> +
>>>> + ranges:
>>>> + maxItems: 3
>>>> +
>>>> + msi-map:
>>>> + maxItems: 1
>>>> +
>>>> + vendor-id:
>>>> + const: 0x1f6c
>>>
>>> Why? This is implied by compatible.
>>
>> Because when we designed the SOC RTL, it was not set to the vendor id
>> and device id of our company. We are members of PCI-SIG. So we need to
>> set the vendor id and device id in the Root Port driver. Otherwise, the
>> output of lspci will be displayed incorrectly.
>
> Please read carefully. Previous discussions were also pointlessly
> ping-ponging on irrelevant arguments. Did I suggest you do not have to
> set it in root port driver? No. If this is const here, this is implied
> by compatible and completely redundant, because your driver knows this
> value already. It already has all the information to deduce this value
> from the compatible.
>
>
Dear Krzysztof,
Thank you very much for your reply.
These two properties also appear in the following document. Is that
binding out of date?
Documentation/devicetree/bindings/pci/ti,j721e-pci-host.yaml
We initially used the logic of Cadence common driver as follows:
drivers/pci/controller/cadence/pcie-cadence-host.c
of_property_read_u32(np, "vendor-id", &rc->vendor_id);
of_property_read_u32(np, "device-id", &rc->device_id);
So, can the code in Cadence be deleted?
I see. It will be removed in the next version. The vendor ID and device
ID will be assigned directly by the Root Port driver based on the
compatible.
Best regards,
Hans
>
>
>>
>>>
>>>> +
>>>> + device-id:
>>>> + enum:
>>>> + - 0x0001
>>>
>>> Why? This is implied by compatible.
>>
>> The reason is the same as above.
>>
>>>
>>>> +
>>>> + cdns,no-inbound-bar:
>>>
>>> That's not a cdns binding, so wrong prefix.
>>
>> It will be added to Cadence's Doc. I will add a separate patch. What do
>> you think?
>>
>>>
>>>> + description: |
>>>
>>> Do not need '|' unless you need to preserve formatting.
>>
>> Will delete '|'.
>>
>>>
>>>> + Indicates the PCIe controller does not require an inbound BAR region.
>>>
>>> And anyway this is implied by compatible, drop.
>>>
>>
>> Because Cadence core driver has this judgment, the latest code of the
>> current linux master all has this process. As follows:
>> int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
>> cdns_pcie_host_init_address_translation(rc);
>> cdns_pcie_host_map_dma_ranges(rc);
>> cdns_pcie_host_bar_ib_config
>
> And you cannot fix or change drivers? How does it matter for discussion
> here?
>
>>
>> So this attribute has been added here, or is there a better way?
>
> Of course, like every other driver in Linux kernel. This is FIXED for
> your platform, so set it in your CIX driver.
>
>
>
> Best regards,
> Krzysztof
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 7:26 ` Krzysztof Kozlowski
2025-06-30 8:29 ` Hans Zhang
@ 2025-06-30 15:54 ` Hans Zhang
2025-07-02 20:28 ` Krzysztof Kozlowski
1 sibling, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-06-30 15:54 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/6/30 15:26, Krzysztof Kozlowski wrote:
>> + sky1,pcie-ctrl-id:
>> + description: |
>> + Specifies the PCIe controller instance identifier (0-4).
> No, you don't get an instance ID. Drop the property and look how other
> bindings encoded it (not sure about the purpose and you did not explain
> it, so cannot advise).
Dear Krzysztof,
Sorry, I missed your reply to this in the previous email.
Because our Root Port driver needs to support five PCIe ports, and the
register configuration and offsets differ per port, the driver needs to
know which port it is currently handling. Perhaps I can use the
following method and then delete this attribute.
aliases {
......
pcie_rc0 = &pcie_x8_rc;
pcie_rc1 = &pcie_x4_rc;
pcie_rc2 = &pcie_x2_rc;
pcie_rc3 = &pcie_x1_0_rc;
pcie_rc4 = &pcie_x1_1_rc;
id = of_alias_get_id(dev->of_node, "pcie_rc");
Best regards,
Hans
* RE: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-06-30 11:11 ` Krzysztof Kozlowski
@ 2025-07-01 11:56 ` Manikandan Karunakaran Pillai
2025-07-02 20:20 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-07-01 11:56 UTC (permalink / raw)
To: Krzysztof Kozlowski, Hans Zhang
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
>>> On 2025/6/30 15:30, Krzysztof Kozlowski wrote:
>>>> On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
>>>>> From: Manikandan K Pillai <mpillai@cadence.com>
>>>>>
>>>>> Document the compatible property for HPA (High Performance
>>> Architecture)
>>>>> PCIe controller RP configuration.
>>>>
>>>> I don't see Conor's comment addressed:
>>>>
>>>> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>>>>
>>>> You cannot just send someone's work and bypassing the review feedback.
>>
>> I thought the comment was implicitly addressed when the device drivers
>> were separated out based on other review comments in this patch.
>> To make it more clear, in the next patch I will add the following
>> description for the dt-binding patch
>>
>> "The High performance architecture is different from legacy architecture
>> controller in design of register banks, register definitions, hardware
>> sequences of initialization and is considered as a different device due
>> to the large number of changes required in the device driver and hence
>> adding a new compatible."
>That's still vague. Anyway this does not address other concern that the
>generic compatible is discouraged and we expect specific compatibles. We
>already said that and what? You send the same patch.
>
>So no, don't send the same patch.
Hi Krzysztof,
Are you suggesting creating new files for both RC and EP for the HPA host, like:
cdns,cdns-pcie-hpa-host.yaml
cdns,cdns-pcie-hpa-ep.yaml
and, in the commit log, explaining why a new file is needed for HPA rather than reusing the legacy one?
>Best regards,
>Krzysztof
* Re: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-07-01 11:56 ` Manikandan Karunakaran Pillai
@ 2025-07-02 20:20 ` Krzysztof Kozlowski
2025-07-03 1:35 ` Manikandan Karunakaran Pillai
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-02 20:20 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai, Hans Zhang
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
On 01/07/2025 13:56, Manikandan Karunakaran Pillai wrote:
>
>>>> On 2025/6/30 15:30, Krzysztof Kozlowski wrote:
>>>>> On Mon, Jun 30, 2025 at 12:15:48PM +0800, hans.zhang@cixtech.com wrote:
>>>>>> From: Manikandan K Pillai <mpillai@cadence.com>
>>>>>>
>>>>>> Document the compatible property for HPA (High Performance
>>>> Architecture)
>>>>>> PCIe controller RP configuration.
>>>>>
>>>>> I don't see Conor's comment addressed:
>>>>>
>>>>> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>>>>>
>>>>> You cannot just send someone's work and bypassing the review feedback.
>>>
>>> I thought the comment was implicitly addressed when the device drivers
>>> were separated out based on other review comments in this patch.
>>> To make it more clear, in the next patch I will add the following
>>> description for the dt-binding patch
>>>
>>> "The High performance architecture is different from legacy architecture
>>> controller in design of register banks, register definitions, hardware
>>> sequences of initialization and is considered as a different device due
>>> to the large number of changes required in the device driver and hence
>>> adding a new compatible."
>> That's still vague. Anyway this does not address other concern that the
>> generic compatible is discouraged and we expect specific compatibles. We
>> already said that and what? You send the same patch.
>>
>> So no, don't send the same patch.
>
>
> Hi Kryzsztof,
>
> Are you suggesting to create new file for both RC and EP for HPA host like:
> cdns,cdns-pcie-hpa-host.yaml
> cdns,cdns-pcie-hpa-ep.yaml
> And during the commit log, explain why you need to create a new file for HPA, and not use the legacy one.
No, there were no such suggestions in any previous or current
discussions. IIRC, this was simply rejected previously. I consider this
rejected still, with the same arguments: you should use specific SoC
compatibles. The generic compatible alone is rather legacy approach and
we have been commenting on this sooooo many times.
Best regards,
Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 15:30 ` Hans Zhang
@ 2025-07-02 20:23 ` Krzysztof Kozlowski
2025-07-03 1:47 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-02 20:23 UTC (permalink / raw)
To: Hans Zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 30/06/2025 17:30, Hans Zhang wrote:
>
>
> On 2025/6/30 19:14, Krzysztof Kozlowski wrote:
>> On 30/06/2025 10:29, Hans Zhang wrote:
>>>>> +
>>>>> + num-lanes:
>>>>> + maximum: 8
>>>>> +
>>>>> + ranges:
>>>>> + maxItems: 3
>>>>> +
>>>>> + msi-map:
>>>>> + maxItems: 1
>>>>> +
>>>>> + vendor-id:
>>>>> + const: 0x1f6c
>>>>
>>>> Why? This is implied by compatible.
>>>
>>> Because when we designed the SOC RTL, it was not set to the vendor id
>>> and device id of our company. We are members of PCI-SIG. So we need to
>>> set the vendor id and device id in the Root Port driver. Otherwise, the
>>> output of lspci will be displayed incorrectly.
>>
>> Please read carefully. Previous discussions were also pointlessly
>> ping-ponging on irrelevant arguments. Did I suggest you do not have to
>> set it in root port driver? No. If this is const here, this is implied
>> by compatible and completely redundant, because your driver knows this
>> value already. It already has all the information to deduce this value
>> from the compatible.
>>
>>
> Dear Krzysztof,
>
> Thank you very much for your reply.
>
> These two attributes are also in the following document. Is this place
> out of date?
> Documentation/devicetree/bindings/pci/ti,j721e-pci-host.yaml
I would need to spend time to investigate that and I choose to do other
things instead. I am recently very grumpy on arguments "I found this
somewhere else". I found bugs somewhere else, so am I okay to introduce
them?
>
>
> We initially used the logic of Cadence common driver as follows:
> drivers/pci/controller/cadence/pcie-cadence-host.c
> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>
> of_property_read_u32(np, "device-id", &rc->device_id);
>
> So, can the code in Cadence be deleted?
Don't know. If this is ABI, then not.
Best regards,
Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-06-30 15:54 ` Hans Zhang
@ 2025-07-02 20:28 ` Krzysztof Kozlowski
0 siblings, 0 replies; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-02 20:28 UTC (permalink / raw)
To: Hans Zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 30/06/2025 17:54, Hans Zhang wrote:
>
>
> On 2025/6/30 15:26, Krzysztof Kozlowski wrote:
>>> + sky1,pcie-ctrl-id:
>>> + description: |
>>> + Specifies the PCIe controller instance identifier (0-4).
>> No, you don't get an instance ID. Drop the property and look how other
>> bindings encoded it (not sure about the purpose and you did not explain
>> it, so cannot advise).
>
>
> Dear Krzysztof,
>
> Sorry, I missed your reply to this in the previous email.
>
> Because our Root Port driver needs to support 5 PCIe ports, and the
> register configuration and offset of each port are different, it is
> necessary to know which port it is currently. Perhaps I can use the
> following method and then delete this attribute.
>
> aliases {
> ......
> pcie_rc0 = &pcie_x8_rc;
> pcie_rc1 = &pcie_x4_rc;
> pcie_rc2 = &pcie_x2_rc;
> pcie_rc3 = &pcie_x1_0_rc;
> pcie_rc4 = &pcie_x1_1_rc;
>
> id = of_alias_get_id(dev->of_node, "pcie_rc");
I think Rob commented pretty strongly about aliases in this thread... or
was it other one? Maybe it was about Tesla FSD PCI PHY... huh, same pattern.
So no, you do not get your own aliases.
Explain the differences in the hardware. If the hardware is different,
then it warrants different compatibles or other properties. But you need
to explain these differences. What is there? Different number of lanes?
Different phy? We have properties for that, use these. Different speed?
All of them have their own properties already, so use them. Maybe
something else... Do the homework and look at schemas and dtschema (yes,
I know that I said other poor solutions are not excuse to copy them, but
look for good solutions).
Best regards,
Krzysztof
* RE: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-07-02 20:20 ` Krzysztof Kozlowski
@ 2025-07-03 1:35 ` Manikandan Karunakaran Pillai
2025-07-03 6:55 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-07-03 1:35 UTC (permalink / raw)
To: Krzysztof Kozlowski, Hans Zhang
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
>>>>>>
>>>>>> https://lore.kernel.org/linux-devicetree/20250424-elm-magma-b791798477ab@spud/
>>>>>>
>>>>>> You cannot just send someone's work and bypassing the review feedback.
>>>>
>>>> I thought the comment was implicitly addressed when the device drivers
>>>> were separated out based on other review comments in this patch.
>>>> To make it more clear, in the next patch I will add the following
>>>> description for the dt-binding patch
>>>>
>>>> "The High performance architecture is different from legacy architecture
>>>> controller in design of register banks, register definitions, hardware
>>>> sequences of initialization and is considered as a different device due
>>>> to the large number of changes required in the device driver and hence
>>>> adding a new compatible."
>>> That's still vague. Anyway this does not address other concern that the
>>> generic compatible is discouraged and we expect specific compatibles. We
>>> already said that and what? You send the same patch.
>>>
>>> So no, don't send the same patch.
>>
>>
>> Hi Krzysztof,
>>
>> Are you suggesting to create new file for both RC and EP for HPA host like:
>> cdns,cdns-pcie-hpa-host.yaml
>> cdns,cdns-pcie-hpa-ep.yaml
>> And during the commit log, explain why you need to create a new file for
>> HPA, and not use the legacy one.
>
>No, there was no such suggestions in any previous or current
>discussions. IIRC, this was simply rejected previously. I consider this
>rejected still, with the same arguments: you should use specific SoC
>compatibles. The generic compatible alone is rather legacy approach and
>we have been commenting on this sooooo many times.
>
Hi Krzysztof,
Thanks for your response.
The SoC-specific dts patches are already being submitted by the CIX team
for their SoC, which is based on the same PCIe controller IP.
Since there is no SoC for this platform (it is only an FPGA-based board),
are you suggesting we drop the dt-bindings patch altogether, given that
the SoC-specific dts bindings are already in the same patch set?
That would leave a compatible in the platform code without a binding.
Please let me know if this is the approach.
>Best regards,
>Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-07-02 20:23 ` Krzysztof Kozlowski
@ 2025-07-03 1:47 ` Hans Zhang
2025-07-14 7:43 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-07-03 1:47 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/7/3 04:23, Krzysztof Kozlowski wrote:
> On 30/06/2025 17:30, Hans Zhang wrote:
>>
>>
>> On 2025/6/30 19:14, Krzysztof Kozlowski wrote:
>>> On 30/06/2025 10:29, Hans Zhang wrote:
>>>>>> +
>>>>>> + num-lanes:
>>>>>> + maximum: 8
>>>>>> +
>>>>>> + ranges:
>>>>>> + maxItems: 3
>>>>>> +
>>>>>> + msi-map:
>>>>>> + maxItems: 1
>>>>>> +
>>>>>> + vendor-id:
>>>>>> + const: 0x1f6c
>>>>>
>>>>> Why? This is implied by compatible.
>>>>
>>>> Because when we designed the SOC RTL, it was not set to the vendor id
>>>> and device id of our company. We are members of PCI-SIG. So we need to
>>>> set the vendor id and device id in the Root Port driver. Otherwise, the
>>>> output of lspci will be displayed incorrectly.
>>>
>>> Please read carefully. Previous discussions were also pointlessly
>>> ping-ponging on irrelevant arguments. Did I suggest you do not have to
>>> set it in root port driver? No. If this is const here, this is implied
>>> by compatible and completely redundant, because your driver knows this
>>> value already. It already has all the information to deduce this value
>>> from the compatible.
>>>
>>>
>> Dear Krzysztof,
>>
>> Thank you very much for your reply.
>>
>> These two attributes are also in the following document. Is this place
>> out of date?
>> Documentation/devicetree/bindings/pci/ti,j721e-pci-host.yaml
>
> I would need to spend time to investigate that and I choose to do other
> things instead. I am recently very grumpy on arguments "I found this
> somewhere else". I found bugs somewhere else, so am I okay to introduce
> them?
>
Dear Krzysztof,
Thank you very much for your reply.
No, no, no. You misunderstood me. We don't study the dt-binding
documentation every day, so we can only refer to the practices of other
SoC manufacturers. If those practices are incorrect, we will certainly
follow your advice; I was only explaining where my approach came from.
Anyway, I have solved this problem by following your method and deriving
the values from the compatible.
>>
>>
>> We initially used the logic of Cadence common driver as follows:
>> drivers/pci/controller/cadence/pcie-cadence-host.c
>> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>>
>> of_property_read_u32(np, "device-id", &rc->device_id);
>>
>> So, can the code in Cadence be deleted?
>
> Don't know. If this is ABI, then not.
>
According to my understanding, this is not ABI.
Best regards,
Hans
>
> Best regards,
> Krzysztof
* Re: [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-07-03 1:35 ` Manikandan Karunakaran Pillai
@ 2025-07-03 6:55 ` Krzysztof Kozlowski
0 siblings, 0 replies; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-03 6:55 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai, Hans Zhang
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
mani@kernel.org, robh@kernel.org, kwilczynski@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, fugang.duan@cixtech.com,
guoyin.chen@cixtech.com, peter.chen@cixtech.com,
cix-kernel-upstream@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
On 03/07/2025 03:35, Manikandan Karunakaran Pillai wrote:
>>> Hi Krzysztof,
>>>
>>> Are you suggesting to create new file for both RC and EP for HPA host like:
>>> cdns,cdns-pcie-hpa-host.yaml
>>> cdns,cdns-pcie-hpa-ep.yaml
>>> And during the commit log, explain why you need to create a new file for
>>> HPA, and not use the legacy one.
>>
>> No, there was no such suggestions in any previous or current
>> discussions. IIRC, this was simply rejected previously. I consider this
>> rejected still, with the same arguments: you should use specific SoC
>> compatibles. The generic compatible alone is rather legacy approach and
>> we have been commenting on this sooooo many times.
>>
>
> Hi Krzysztof,
>
> Thanks for your response.
> The SoC specific dts patches are already being submitted by CIX team for their SoC based on the same PCIe controller IP.
There is a SoC, otherwise why is this attached to completely unrelated
patches?
>
> Since there is no SoC for this platform(it only an FPGA based board),
> are you suggesting to drop the dt-bindings patch altogether as the SoC specific dts bindings are already being in the same patch set.
I have the impression I discussed it, either in this thread or another. I am
fine with adding a compatible for your virtual setup / FPGA platform, but
it has to reflect that case. Otherwise everyone will use this one here,
just like what happened with other cdns cores.
Best regards,
Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-07-03 1:47 ` Hans Zhang
@ 2025-07-14 7:43 ` Krzysztof Kozlowski
2025-07-14 8:03 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-14 7:43 UTC (permalink / raw)
To: Hans Zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 03/07/2025 03:47, Hans Zhang wrote:
>>>
>>> We initially used the logic of Cadence common driver as follows:
>>> drivers/pci/controller/cadence/pcie-cadence-host.c
>>> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>>>
>>> of_property_read_u32(np, "device-id", &rc->device_id);
>>>
>>> So, can the code in Cadence be deleted?
>>
>> Don't know. If this is ABI, then not.
>>
>
> According to my understanding, this is not ABI.
Huh? Then what is ABI, by your understanding?
Best regards,
Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-07-14 7:43 ` Krzysztof Kozlowski
@ 2025-07-14 8:03 ` Hans Zhang
2025-07-15 6:40 ` Krzysztof Kozlowski
0 siblings, 1 reply; 46+ messages in thread
From: Hans Zhang @ 2025-07-14 8:03 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/7/14 15:43, Krzysztof Kozlowski wrote:
> On 03/07/2025 03:47, Hans Zhang wrote:
>>>>
>>>> We initially used the logic of Cadence common driver as follows:
>>>> drivers/pci/controller/cadence/pcie-cadence-host.c
>>>> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>>>>
>>>> of_property_read_u32(np, "device-id", &rc->device_id);
>>>>
>>>> So, can the code in Cadence be deleted?
>>>
>>> Don't know. If this is ABI, then not.
>>>
>>
>> According to my understanding, this is not ABI.
>
> Huh? Then what is ABI, by your understanding?
>
Dear Krzysztof,
I understand kernel ABI primarily refers to the stable binary contract
between the kernel and userspace (e.g., syscalls, /sys/proc interfaces).
Device tree properties are part of the boot-time hardware description
consumed by drivers during initialization. They are not directly exposed
to userspace as ABI interfaces.
If I understand wrongly, please correct me.
It was about half a year ago, when I submitted the patch for viewing the
LTSSM link status in dwc, that I first learned about the ABI. I have not
studied it in depth.
https://lore.kernel.org/linux-pci/20250123164944.GA1223935@bhelgaas/
Best regards,
Hans
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-07-14 8:03 ` Hans Zhang
@ 2025-07-15 6:40 ` Krzysztof Kozlowski
2025-07-15 6:46 ` Hans Zhang
0 siblings, 1 reply; 46+ messages in thread
From: Krzysztof Kozlowski @ 2025-07-15 6:40 UTC (permalink / raw)
To: Hans Zhang
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 14/07/2025 10:03, Hans Zhang wrote:
>
>
> On 2025/7/14 15:43, Krzysztof Kozlowski wrote:
>> On 03/07/2025 03:47, Hans Zhang wrote:
>>>>>
>>>>> We initially used the logic of Cadence common driver as follows:
>>>>> drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>>>>>
>>>>> of_property_read_u32(np, "device-id", &rc->device_id);
>>>>>
>>>>> So, can the code in Cadence be deleted?
>>>>
>>>> Don't know. If this is ABI, then not.
>>>>
>>>
>>> According to my understanding, this is not ABI.
>>
>> Huh? Then what is ABI, by your understanding?
>>
>
> Dear Krzysztof,
>
> I understand kernel ABI primarily refers to the stable binary contract
> between the kernel and userspace (e.g., syscalls, /sys/proc interfaces).
> Device tree properties are part of the boot-time hardware description
> consumed by drivers during initialization. They are not directly exposed
> to userspace as ABI interfaces.
>
> If I understand wrongly, please correct me.
Then that's wrong understanding.
The DT interface, documented explicitly and one implied by kernel
drivers in case it differs, is the ABI, as explained in docs in the
kernel and what we said on the lists thousands of times.
Best regards,
Krzysztof
* Re: [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings
2025-07-15 6:40 ` Krzysztof Kozlowski
@ 2025-07-15 6:46 ` Hans Zhang
0 siblings, 0 replies; 46+ messages in thread
From: Hans Zhang @ 2025-07-15 6:46 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: bhelgaas, lpieralisi, kw, mani, robh, kwilczynski, krzk+dt,
conor+dt, mpillai, fugang.duan, guoyin.chen, peter.chen,
cix-kernel-upstream, linux-pci, devicetree, linux-kernel
On 2025/7/15 14:40, Krzysztof Kozlowski wrote:
> On 14/07/2025 10:03, Hans Zhang wrote:
>>
>>
>> On 2025/7/14 15:43, Krzysztof Kozlowski wrote:
>>> On 03/07/2025 03:47, Hans Zhang wrote:
>>>>>>
>>>>>> We initially used the logic of Cadence common driver as follows:
>>>>>> drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>> of_property_read_u32(np, "vendor-id", &rc->vendor_id);
>>>>>>
>>>>>> of_property_read_u32(np, "device-id", &rc->device_id);
>>>>>>
>>>>>> So, can the code in Cadence be deleted?
>>>>>
>>>>> Don't know. If this is ABI, then not.
>>>>>
>>>>
>>>> According to my understanding, this is not ABI.
>>>
>>> Huh? Then what is ABI, by your understanding?
>>>
>>
>> Dear Krzysztof,
>>
>> I understand kernel ABI primarily refers to the stable binary contract
>> between the kernel and userspace (e.g., syscalls, /sys/proc interfaces).
>> Device tree properties are part of the boot-time hardware description
>> consumed by drivers during initialization. They are not directly exposed
>> to userspace as ABI interfaces.
>>
>> If I understand wrongly, please correct me.
>
>
> Then that's wrong understanding.
>
> The DT interface, documented explicitly and one implied by kernel
> drivers in case it differs, is the ABI, as explained in docs in the
> kernel and what we said on the lists thousands of times.
>
Dear Krzysztof,
Thank you very much for your reply and the explanation; now I understand.
I have only been taking part in Linux community discussions for about half
a year and had not paid attention to this point before. Thank you again.
Best regards,
Hans
end of thread, other threads:[~2025-07-15 6:46 UTC | newest]
Thread overview: 46+ messages -- links below jump to the message on this page --
2025-06-30 4:15 [PATCH v5 00/14] Enhance the PCIe controller driver hans.zhang
2025-06-30 4:15 ` [PATCH v5 01/14] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
2025-06-30 7:30 ` Krzysztof Kozlowski
2025-06-30 8:02 ` Hans Zhang
2025-06-30 8:06 ` Manikandan Karunakaran Pillai
2025-06-30 11:11 ` Krzysztof Kozlowski
2025-07-01 11:56 ` Manikandan Karunakaran Pillai
2025-07-02 20:20 ` Krzysztof Kozlowski
2025-07-03 1:35 ` Manikandan Karunakaran Pillai
2025-07-03 6:55 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 02/14] dt-bindings: pci: cadence: Extend compatible for new EP configuration hans.zhang
2025-06-30 7:27 ` Krzysztof Kozlowski
2025-06-30 8:03 ` Hans Zhang
2025-06-30 10:28 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 03/14] PCI: cadence: Split PCIe controller header file hans.zhang
2025-06-30 4:15 ` [PATCH v5 04/14] PCI: cadence: Add register definitions for HPA(High Perf Architecture) hans.zhang
2025-06-30 4:15 ` [PATCH v5 05/14] PCI: cadence: Split PCIe EP support into common and specific functions hans.zhang
2025-06-30 4:15 ` [PATCH v5 06/14] PCI: cadence: Split PCIe RP " hans.zhang
2025-06-30 4:15 ` [PATCH v5 07/14] PCI: cadence: Split the common functions for PCIE controller support hans.zhang
2025-06-30 4:15 ` [PATCH v5 08/14] PCI: cadence: Add support for High Performance Arch(HPA) controller hans.zhang
2025-06-30 4:15 ` [PATCH v5 09/14] PCI: cadence: Add support for PCIe HPA controller platform hans.zhang
2025-06-30 4:15 ` [PATCH v5 10/14] dt-bindings: PCI: Add CIX Sky1 PCIe Root Complex bindings hans.zhang
2025-06-30 5:36 ` Rob Herring (Arm)
2025-06-30 5:56 ` Hans Zhang
2025-06-30 7:26 ` Krzysztof Kozlowski
2025-06-30 8:29 ` Hans Zhang
2025-06-30 11:14 ` Krzysztof Kozlowski
2025-06-30 15:30 ` Hans Zhang
2025-07-02 20:23 ` Krzysztof Kozlowski
2025-07-03 1:47 ` Hans Zhang
2025-07-14 7:43 ` Krzysztof Kozlowski
2025-07-14 8:03 ` Hans Zhang
2025-07-15 6:40 ` Krzysztof Kozlowski
2025-07-15 6:46 ` Hans Zhang
2025-06-30 15:54 ` Hans Zhang
2025-07-02 20:28 ` Krzysztof Kozlowski
2025-06-30 4:15 ` [PATCH v5 11/14] PCI: sky1: Add PCIe host support for CIX Sky1 hans.zhang
2025-06-30 4:15 ` [PATCH v5 12/14] MAINTAINERS: add entry for CIX Sky1 PCIe driver hans.zhang
2025-06-30 7:29 ` Krzysztof Kozlowski
2025-06-30 8:06 ` Hans Zhang
2025-06-30 4:16 ` [PATCH v5 13/14] arm64: dts: cix: Add PCIe Root Complex on sky1 hans.zhang
2025-06-30 7:33 ` Krzysztof Kozlowski
2025-06-30 8:44 ` Hans Zhang
2025-06-30 4:16 ` [PATCH v5 14/14] arm64: dts: cix: Enable PCIe on the Orion O6 board hans.zhang
2025-06-30 7:32 ` Krzysztof Kozlowski
2025-06-30 8:08 ` Hans Zhang