* [PATCH v4 0/5] Enhance the PCIe controller driver
@ 2025-04-24 1:04 hans.zhang
2025-04-24 1:04 ` [PATCH v4 1/5] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
` (5 more replies)
0 siblings, 6 replies; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel, Hans Zhang
From: Hans Zhang <hans.zhang@cixtech.com>
Enhance the existing Cadence PCIe controller drivers to support the
HPA (High Performance Architecture) Cadence PCIe controllers.
The "compatible" property in the DT bindings gains additional enum values to
support the new platform architecture and the register maps that change with
it. The driver's register read and write helpers apply the per-bank offsets
stored by the platform driver when accessing the registers.
The driver now supports both the legacy and the HPA architecture, with
minimal changes to the legacy code paths.
SoC-specific changes are not part of this patch set.
The TI SoC continues to be supported with these changes incorporated, and
the approach matches how multiple platforms are supported in related drivers.
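To make the resulting flow easy to follow, here is a minimal user-space
sketch (illustration only, not driver code) of how the "compatible" string
selects per-platform data carrying the register-bank offsets; the struct and
field names merely mirror the ones introduced later in this series:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the per-compatible platform data */
struct plat_data {
	int is_hpa;
	uint32_t ip_reg_bank_off;	/* 0x01000000 on HPA */
	uint32_t axi_slave_off;		/* 0x03000000 on HPA */
};

static const struct plat_data legacy_data = { .is_hpa = 0 };
static const struct plat_data hpa_data = {
	.is_hpa = 1,
	.ip_reg_bank_off = 0x01000000,
	.axi_slave_off   = 0x03000000,
};

/* Stand-in for of_device_id matching on the DT "compatible" string */
static const struct plat_data *match_compatible(const char *compat)
{
	if (!strcmp(compat, "cdns,cdns-pcie-hpa-host"))
		return &hpa_data;
	return &legacy_data;	/* "cdns,cdns-pcie-host" keeps the legacy layout */
}

int main(void)
{
	const struct plat_data *d = match_compatible("cdns,cdns-pcie-hpa-host");

	/* The register accessors add the stored bank offset before each access */
	printf("is_hpa=%d, IP register bank offset=0x%08x\n",
	       d->is_hpa, d->ip_reg_bank_off);
	return 0;
}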
scripts/checkpatch.pl has been run on the patches with and without --strict.
With --strict, 4 checks are reported on one patch of the series
(PATCH v3 3/6), which can be ignored; no code changes are required for them.
The rest of the checkpatch output is clean.
./scripts/kernel-doc --none has been run on the changed files.
The changes have been tested on TI platforms: the legacy controller changes
were tested on a TI J7200 EVM, and the HPA changes are planned to be tested
on an FPGA platform available within Cadence.
Changes for v4
- Add header file bitfield.h to pcie-cadence.h.
- Addressed the following review comments.
Merged the TI patch as is.
Removed initialization of struct variables to '0'.
Changes for v3
- Patch version v3 added to the subject.
- Use HPA tag for architecture descriptions.
- Remove bug related changes to be submitted later as a separate patch.
- Two patches from the last series were merged to improve readability and
  address the review comments.
- Fix several description-related issues, coding style issues, and some
  misleading comments.
- Remove cpu_addr_fixup() functions.
Manikandan K Pillai (5):
dt-bindings: pci: cadence: Extend compatible for new RP configuration
dt-bindings: pci: cadence: Extend compatible for new EP configurations
PCI: cadence: Add header support for PCIe HPA controller
PCI: cadence: Add support for PCIe Endpoint HPA controller
PCI: cadence: Add callback functions for RP and EP controller
.../bindings/pci/cdns,cdns-pcie-ep.yaml | 6 +-
.../bindings/pci/cdns,cdns-pcie-host.yaml | 6 +-
drivers/pci/controller/cadence/pci-j721e.c | 12 +
.../pci/controller/cadence/pcie-cadence-ep.c | 170 +++++++--
.../controller/cadence/pcie-cadence-host.c | 276 ++++++++++++--
.../controller/cadence/pcie-cadence-plat.c | 73 +++-
drivers/pci/controller/cadence/pcie-cadence.c | 197 +++++++++-
drivers/pci/controller/cadence/pcie-cadence.h | 340 +++++++++++++++++-
8 files changed, 1011 insertions(+), 69 deletions(-)
base-commit: fc96b232f8e7c0a6c282f47726b2ff6a5fb341d2
--
2.47.1
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v4 1/5] dt-bindings: pci: cadence: Extend compatible for new RP configuration
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
@ 2025-04-24 1:04 ` hans.zhang
2025-04-24 1:04 ` [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang
` (4 subsequent siblings)
5 siblings, 0 replies; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai, Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Document the compatible property for HPA (High Performance Architecture)
PCIe controller RP configuration.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../devicetree/bindings/pci/cdns,cdns-pcie-host.yaml | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
index a8190d9b100f..83a33c4c008f 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
@@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Cadence PCIe host controller
maintainers:
- - Tom Joseph <tjoseph@cadence.com>
+ - Manikandan K Pillai <mpillai@cadence.com>
allOf:
- $ref: cdns-pcie-host.yaml#
properties:
compatible:
- const: cdns,cdns-pcie-host
+ enum:
+ - cdns,cdns-pcie-host
+ - cdns,cdns-pcie-hpa-host
reg:
maxItems: 2
--
2.47.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
2025-04-24 1:04 ` [PATCH v4 1/5] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
@ 2025-04-24 1:04 ` hans.zhang
2025-04-24 15:29 ` Conor Dooley
2025-04-24 1:04 ` [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
` (3 subsequent siblings)
5 siblings, 1 reply; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai, Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Document the compatible property for HPA (High Performance Architecture)
PCIe controller EP configuration.
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
index 98651ab22103..a7e404e4f690 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
@@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Cadence PCIe EP Controller
maintainers:
- - Tom Joseph <tjoseph@cadence.com>
+ - Manikandan K Pillai <mpillai@cadence.com>
allOf:
- $ref: cdns-pcie-ep.yaml#
properties:
compatible:
- const: cdns,cdns-pcie-ep
+ enum:
+ - cdns,cdns-pcie-ep
+ - cdns,cdns-pcie-hpa-ep
reg:
maxItems: 2
--
2.47.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
2025-04-24 1:04 ` [PATCH v4 1/5] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
2025-04-24 1:04 ` [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang
@ 2025-04-24 1:04 ` hans.zhang
2025-04-24 3:36 ` Peter Chen (CIX)
2025-04-25 4:18 ` kernel test robot
2025-04-24 1:04 ` [PATCH v4 4/5] PCI: cadence: Add support for PCIe Endpoint " hans.zhang
` (2 subsequent siblings)
5 siblings, 2 replies; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai, Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add the required register address and register bit definitions for the
Cadence PCIe HPA controllers. Add the register bank offsets for the
different platform architectures and, during platform probe, update the
global platform data with the platform architecture, the EP or RP
configuration, and the register offsets for the different register banks.
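Illustration only (not part of the patch): a compilable user-space sketch of
the bank-to-offset lookup that the new cdns_pcie_hpa_readl()/_writel()
helpers perform; the structs are simplified stand-ins, and the offsets and
the 0x0520 (CDNS_PCIE_HPA_LM_PTM_CTRL) register come from the definitions
added below:

#include <stdint.h>
#include <stdio.h>

/* Mirrors a subset of enum cdns_pcie_reg_bank */
enum reg_bank { REG_BANK_RP, REG_BANK_IP_REG, REG_BANK_AXI_SLAVE };

struct pcie_offsets {
	uint32_t ip_reg_bank_off;
	uint32_t axi_slave_off;
};

/* Model of cdns_reg_bank_to_off(): translate a bank id into its base offset */
static uint32_t bank_to_off(const struct pcie_offsets *o, enum reg_bank bank)
{
	switch (bank) {
	case REG_BANK_IP_REG:
		return o->ip_reg_bank_off;
	case REG_BANK_AXI_SLAVE:
		return o->axi_slave_off;
	default:
		return 0;	/* root-port bank sits at offset 0 */
	}
}

int main(void)
{
	/* HPA offsets stored in the platform data during probe */
	struct pcie_offsets hpa = {
		.ip_reg_bank_off = 0x01000000,
		.axi_slave_off   = 0x03000000,
	};
	uint32_t reg = 0x0520;	/* CDNS_PCIE_HPA_LM_PTM_CTRL, in the IP bank */

	printf("final MMIO offset: 0x%08x\n",
	       bank_to_off(&hpa, REG_BANK_IP_REG) + reg);
	return 0;
}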
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../controller/cadence/pcie-cadence-host.c | 13 +-
.../controller/cadence/pcie-cadence-plat.c | 46 ++-
drivers/pci/controller/cadence/pcie-cadence.h | 331 +++++++++++++++++-
3 files changed, 373 insertions(+), 17 deletions(-)
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 8af95e9da7ce..ce035eef0a5c 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -175,7 +175,7 @@ static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
return ret;
}
-static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
u32 value, ctrl;
@@ -215,10 +215,10 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
return 0;
}
-static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
- enum cdns_pcie_rp_bar bar,
- u64 cpu_addr, u64 size,
- unsigned long flags)
+int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags)
{
struct cdns_pcie *pcie = &rc->pcie;
u32 addr0, addr1, aperture, value;
@@ -428,7 +428,7 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
return 0;
}
-static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
+int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
{
struct cdns_pcie *pcie = &rc->pcie;
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
@@ -536,7 +536,6 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return -ENOMEM;
pcie = &rc->pcie;
- pcie->is_rc = true;
rc->vendor_id = 0xffff;
of_property_read_u32(np, "vendor-id", &rc->vendor_id);
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
index 0456845dabb9..93c21c899309 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
@@ -22,10 +22,6 @@ struct cdns_plat_pcie {
struct cdns_pcie *pcie;
};
-struct cdns_plat_pcie_of_data {
- bool is_rc;
-};
-
static const struct of_device_id cdns_plat_pcie_of_match[];
static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
@@ -72,6 +68,12 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
rc = pci_host_bridge_priv(bridge);
rc->pcie.dev = dev;
rc->pcie.ops = &cdns_plat_ops;
+ rc->pcie.is_hpa = data->is_hpa;
+ rc->pcie.is_rc = data->is_rc;
+
+ /* Store the register bank offsets pointer */
+ rc->pcie.cdns_pcie_reg_offsets = data;
+
cdns_plat_pcie->pcie = &rc->pcie;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
@@ -99,6 +101,12 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
ep->pcie.dev = dev;
ep->pcie.ops = &cdns_plat_ops;
+ ep->pcie.is_hpa = data->is_hpa;
+ ep->pcie.is_rc = data->is_rc;
+
+ /* Store the register bank offsets pointer */
+ ep->pcie.cdns_pcie_reg_offsets = data;
+
cdns_plat_pcie->pcie = &ep->pcie;
ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie);
@@ -150,10 +158,32 @@ static void cdns_plat_pcie_shutdown(struct platform_device *pdev)
static const struct cdns_plat_pcie_of_data cdns_plat_pcie_host_of_data = {
.is_rc = true,
+ .is_hpa = false,
};
static const struct cdns_plat_pcie_of_data cdns_plat_pcie_ep_of_data = {
.is_rc = false,
+ .is_hpa = false,
+};
+
+static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_host_of_data = {
+ .is_rc = true,
+ .is_hpa = true,
+ .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK,
+ .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK,
+ .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON,
+ .axi_slave_off = CDNS_PCIE_HPA_AXI_SLAVE,
+ .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER,
+};
+
+static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_ep_of_data = {
+ .is_rc = false,
+ .is_hpa = true,
+ .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK,
+ .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK,
+ .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON,
+ .axi_slave_off = CDNS_PCIE_HPA_AXI_SLAVE,
+ .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER,
};
static const struct of_device_id cdns_plat_pcie_of_match[] = {
@@ -165,6 +195,14 @@ static const struct of_device_id cdns_plat_pcie_of_match[] = {
.compatible = "cdns,cdns-pcie-ep",
.data = &cdns_plat_pcie_ep_of_data,
},
+ {
+ .compatible = "cdns,cdns-pcie-hpa-host",
+ .data = &cdns_plat_pcie_hpa_host_of_data,
+ },
+ {
+ .compatible = "cdns,cdns-pcie-hpa-ep",
+ .data = &cdns_plat_pcie_hpa_ep_of_data,
+ },
{},
};
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 39ee9945c903..72cb27c6f9e4 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -6,6 +6,7 @@
#ifndef _PCIE_CADENCE_H
#define _PCIE_CADENCE_H
+#include <linux/bitfield.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-epf.h>
@@ -218,6 +219,173 @@
(((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
+/* HPA (High Performance Architecture) PCIe controller register */
+#define CDNS_PCIE_HPA_IP_REG_BANK 0x01000000
+#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x01003c00
+#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0x01020000
+
+/* Address Translation Registers (HPA) */
+#define CDNS_PCIE_HPA_AXI_SLAVE 0x03000000
+#define CDNS_PCIE_HPA_AXI_MASTER 0x03002000
+
+/* Root port register base address */
+#define CDNS_PCIE_HPA_RP_BASE 0x0
+
+#define CDNS_PCIE_HPA_LM_ID 0x1420
+
+/* Endpoint Function BARs (HPA) Configuration Registers */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + 0x04)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \
+ (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \
+ CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + 0x08)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + 0x0c)
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \
+ (GENMASK(9, 4) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
+ (((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b)))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \
+ (GENMASK(3, 0) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
+ (((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)))
+
+/* Endpoint Function Configuration Register */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_CFG 0x02c0
+
+/* Root Complex BAR Configuration Register */
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG 0x14
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(9, 4)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(3, 0)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(c) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(19, 14)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(13, 10)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(c) \
+ FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK, c)
+
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(20)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(21)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE BIT(22)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS BIT(23)
+
+/* BAR control values applicable to both Endpoint Function and Root Complex */
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED 0x0
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS 0x3
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS 0x1
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x9
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS 0x5
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0xd
+
+#define HPA_LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
+ (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture) \
+ (((aperture) - 7) << ((bar) * 10))
+
+#define CDNS_PCIE_HPA_LM_PTM_CTRL 0x0520
+#define CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN BIT(17)
+
+/* Root Port Registers PCI config space (HPA) for root port function */
+#define CDNS_PCIE_HPA_RP_CAP_OFFSET 0xc0
+
+/* Region r Outbound AXI to PCIe Address Translation Register 0 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) (0x1010 + ((r) & 0x1f) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, ((nbits) - 1))
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(23, 16)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(31, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus)
+
+/* Region r Outbound AXI to PCIe Address Translation Register 1 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r) (0x1014 + ((r) & 0x1f) * 0x0080)
+
+/* Region r Outbound PCIe Descriptor Register 0 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r) (0x1008 + ((r) & 0x1f) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(28, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x2)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x4)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x5)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10)
+
+/* Region r Outbound PCIe Descriptor Register 1 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r) (0x100c + ((r) & 0x1f) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK GENMASK(31, 24)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(bus) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK GENMASK(23, 16)
+#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(devfn) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn)
+
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) (0x1018 + ((r) & 0x1f) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS BIT(26)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN BIT(25)
+
+/* Region r AXI Region Base Address Register 0 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r) (0x1000 + ((r) & 0x1f) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK, ((nbits) - 1))
+
+/* Region r AXI Region Base Address Register 1 */
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r) (0x1004 + ((r) & 0x1f) * 0x0080)
+
+/* Root Port BAR Inbound PCIe to AXI Address Translation Register */
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar) (((bar) * 0x0008))
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \
+ FIELD_PREP(CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK, ((nbits) - 1))
+#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar) (0x04 + ((bar) * 0x0008))
+
+/* AXI link down register */
+#define CDNS_PCIE_HPA_AT_LINKDOWN 0x04
+
+/*
+ * Physical Layer Configuration Register 0
+ * This register contains the parameters required for functional setup
+ * of Physical Layer.
+ */
+#define CDNS_PCIE_HPA_PHY_LAYER_CFG0 0x0400
+#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK GENMASK(26, 24)
+#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay) \
+ FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay)
+#define CDNS_PCIE_HPA_LINK_TRNG_EN_MASK GENMASK(27, 27)
+
+#define CDNS_PCIE_HPA_PHY_DBG_STS_REG0 0x0420
+
+#define CDNS_PCIE_HPA_RP_MAX_IB 0x3
+#define CDNS_PCIE_HPA_MAX_OB 15
+
+/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register (HPA) */
+#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) (((fn) * 0x0040) + ((bar) * 0x0008))
+#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) (0x4 + ((fn) * 0x0040) + ((bar) * 0x0008))
+
enum cdns_pcie_rp_bar {
RP_BAR_UNDEFINED = -1,
RP_BAR0,
@@ -249,6 +417,7 @@ struct cdns_pcie_rp_ib_bar {
#define CDNS_PCIE_MSG_DATA BIT(16)
struct cdns_pcie;
+struct cdns_pcie_rc;
enum cdns_pcie_msg_code {
MSG_CODE_ASSERT_INTA = 0x20,
@@ -281,11 +450,64 @@ enum cdns_pcie_msg_routing {
MSG_ROUTING_GATHER,
};
+enum cdns_pcie_reg_bank {
+ REG_BANK_RP,
+ REG_BANK_IP_REG,
+ REG_BANK_IP_CFG_CTRL_REG,
+ REG_BANK_AXI_MASTER_COMMON,
+ REG_BANK_AXI_MASTER,
+ REG_BANK_AXI_SLAVE,
+ REG_BANK_AXI_HLS,
+ REG_BANK_AXI_RAS,
+ REG_BANK_AXI_DTI,
+ REG_BANKS_MAX,
+};
+
struct cdns_pcie_ops {
int (*start_link)(struct cdns_pcie *pcie);
void (*stop_link)(struct cdns_pcie *pcie);
bool (*link_up)(struct cdns_pcie *pcie);
u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr);
+ int (*host_init_root_port)(struct cdns_pcie_rc *rc);
+ int (*host_bar_ib_config)(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags);
+ int (*host_init_address_translation)(struct cdns_pcie_rc *rc);
+ void (*detect_quiet_min_delay_set)(struct cdns_pcie *pcie);
+ void (*set_outbound_region)(struct cdns_pcie *pcie, u8 busnr, u8 fn,
+ u32 r, bool is_io, u64 cpu_addr,
+ u64 pci_addr, size_t size);
+ void (*set_outbound_region_for_normal_msg)(struct cdns_pcie *pcie,
+ u8 busnr, u8 fn, u32 r,
+ u64 cpu_addr);
+ void (*reset_outbound_region)(struct cdns_pcie *pcie, u32 r);
+};
+
+/**
+ * struct cdns_plat_pcie_of_data - Register bank offset for a platform
+ * @is_rc: controller is a RC
+ * @is_hpa: Controller architecture is HPA
+ * @ip_reg_bank_off: ip register bank start offset
+ * @ip_cfg_ctrl_reg_off: ip config control register start offset
+ * @axi_mstr_common_off: AXI master common register start
+ * @axi_slave_off: AXI slave offset start
+ * @axi_master_off: AXI master offset start
+ * @axi_hls_off: AXI HLS offset start
+ * @axi_ras_off: AXI RAS offset
+ * @axi_dti_off: AXI DTI offset
+ */
+struct cdns_plat_pcie_of_data {
+ u32 is_rc:1;
+ u32 is_hpa:1;
+ u32 ip_reg_bank_off;
+ u32 ip_cfg_ctrl_reg_off;
+ u32 axi_mstr_common_off;
+ u32 axi_slave_off;
+ u32 axi_master_off;
+ u32 axi_hls_off;
+ u32 axi_ras_off;
+ u32 axi_dti_off;
};
/**
@@ -294,21 +516,25 @@ struct cdns_pcie_ops {
* @mem_res: start/end offsets in the physical system memory to map PCI accesses
* @dev: PCIe controller
* @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint.
+ * @is_hpa: indicates if the architecture is HPA
* @phy_count: number of supported PHY devices
* @phy: list of pointers to specific PHY control blocks
* @link: list of pointers to corresponding device link representations
* @ops: Platform-specific ops to control various inputs from Cadence PCIe
* wrapper
+ * @cdns_pcie_reg_offsets: Register bank offsets for different SoC
*/
struct cdns_pcie {
void __iomem *reg_base;
struct resource *mem_res;
struct device *dev;
bool is_rc;
+ bool is_hpa;
int phy_count;
struct phy **phy;
struct device_link **link;
const struct cdns_pcie_ops *ops;
+ const struct cdns_plat_pcie_of_data *cdns_pcie_reg_offsets;
};
/**
@@ -386,6 +612,40 @@ struct cdns_pcie_ep {
unsigned int quirk_disable_flr:1;
};
+static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum cdns_pcie_reg_bank bank)
+{
+ u32 offset = 0x0;
+
+ switch (bank) {
+ case REG_BANK_IP_REG:
+ offset = pcie->cdns_pcie_reg_offsets->ip_reg_bank_off;
+ break;
+ case REG_BANK_IP_CFG_CTRL_REG:
+ offset = pcie->cdns_pcie_reg_offsets->ip_cfg_ctrl_reg_off;
+ break;
+ case REG_BANK_AXI_MASTER_COMMON:
+ offset = pcie->cdns_pcie_reg_offsets->axi_mstr_common_off;
+ break;
+ case REG_BANK_AXI_MASTER:
+ offset = pcie->cdns_pcie_reg_offsets->axi_master_off;
+ break;
+ case REG_BANK_AXI_SLAVE:
+ offset = pcie->cdns_pcie_reg_offsets->axi_slave_off;
+ break;
+ case REG_BANK_AXI_HLS:
+ offset = pcie->cdns_pcie_reg_offsets->axi_hls_off;
+ break;
+ case REG_BANK_AXI_RAS:
+ offset = pcie->cdns_pcie_reg_offsets->axi_ras_off;
+ break;
+ case REG_BANK_AXI_DTI:
+ offset = pcie->cdns_pcie_reg_offsets->axi_dti_off;
+ break;
+ default:
+ break;
+ };
+ return offset;
+}
/* Register access */
static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value)
@@ -398,6 +658,27 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg)
return readl(pcie->reg_base + reg);
}
+static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie,
+ enum cdns_pcie_reg_bank bank,
+ u32 reg,
+ u32 value)
+{
+ u32 offset = cdns_reg_bank_to_off(pcie, bank);
+
+ reg += offset;
+ writel(value, pcie->reg_base + reg);
+}
+
+static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie,
+ enum cdns_pcie_reg_bank bank,
+ u32 reg)
+{
+ u32 offset = cdns_reg_bank_to_off(pcie, bank);
+
+ reg += offset;
+ return readl(pcie->reg_base + reg);
+}
+
static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size)
{
void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4);
@@ -444,6 +725,9 @@ static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie,
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
+ if (pcie->is_hpa)
+ addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
cdns_pcie_write_sz(addr, 0x1, value);
}
@@ -452,6 +736,9 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
+ if (pcie->is_hpa)
+ addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
cdns_pcie_write_sz(addr, 0x2, value);
}
@@ -459,6 +746,9 @@ static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
{
void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
+ if (pcie->is_hpa)
+ addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg;
+
return cdns_pcie_read_sz(addr, 0x2);
}
@@ -525,27 +815,58 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc);
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
+int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc);
+int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags);
+int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc);
+void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, int where);
+int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc);
+int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags);
+int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc);
+int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc);
#else
static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
{
return 0;
}
-
static inline int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
{
return 0;
}
-
static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
{
return 0;
}
-
static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
return NULL;
}
+static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
+ int where)
+{
+ return NULL;
+}
+static inline int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
+{
+ return 0;
+}
+static inline int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags)
+{
+ return 0;
+}
+static inline int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
+{
+ return 0;
+}
#endif
#ifdef CONFIG_PCIE_CADENCE_EP
@@ -558,19 +879,17 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
#endif
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
-
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size);
-
void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
u8 busnr, u8 fn,
u32 r, u64 cpu_addr);
-
void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
+
extern const struct dev_pm_ops cdns_pcie_pm_ops;
#endif /* _PCIE_CADENCE_H */
--
2.47.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 4/5] PCI: cadence: Add support for PCIe Endpoint HPA controller
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
` (2 preceding siblings ...)
2025-04-24 1:04 ` [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
@ 2025-04-24 1:04 ` hans.zhang
2025-04-24 1:04 ` [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
2025-04-25 16:24 ` [PATCH v4 0/5] Enhance the PCIe controller driver Krzysztof Kozlowski
5 siblings, 0 replies; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai, Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add support for the Cadence PCIe endpoint HPA controller by
adding the required functions based on the HPA registers
and register bit definitions.
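Illustration only: the HPA set_bar() path encodes a BAR size of
2^(aperture + 7) bytes. A small stand-alone sketch of that encoding follows
(the 128-byte minimum is an assumption matching the existing
CDNS_PCIE_EP_MIN_APERTURE behaviour):

#include <stdint.h>
#include <stdio.h>

/* 128B -> 0, 256B -> 1, 512B -> 2, ... (i.e. size == 2^(aperture + 7)) */
static unsigned int bar_aperture(uint64_t size)
{
	uint64_t sz = size < 128 ? 128 : size;	/* clamp to the minimum aperture */
	unsigned int bits = 0;

	sz--;			/* round up to the next power of two, 64-bit safe */
	while (sz) {
		sz >>= 1;
		bits++;
	}
	return bits - 7;
}

int main(void)
{
	/* a 1 MiB BAR yields aperture 13, since 2^(13 + 7) == 1 MiB */
	printf("1 MiB BAR -> aperture %u\n", bar_aperture(1 << 20));
	return 0;
}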
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
.../pci/controller/cadence/pcie-cadence-ep.c | 141 +++++++++++++++++-
1 file changed, 136 insertions(+), 5 deletions(-)
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index 599ec4b1223e..f3f956fa116b 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -568,7 +568,11 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
* BIT(0) is hardwired to 1, hence function 0 is always enabled
* and can't be disabled anyway.
*/
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
+ if (pcie->is_hpa)
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc->function_num_map);
+ else
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
/*
* Next function field in ARI_CAP_AND_CTR register for last function
@@ -605,6 +609,115 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
return 0;
}
+static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_bar *epf_bar)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie_epf *epf = &ep->epf[fn];
+ struct cdns_pcie *pcie = &ep->pcie;
+ dma_addr_t bar_phys = epf_bar->phys_addr;
+ enum pci_barno bar = epf_bar->barno;
+ int flags = epf_bar->flags;
+ u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
+ u64 sz;
+
+ /* BAR size is 2^(aperture + 7) */
+ sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE);
+
+ /*
+ * roundup_pow_of_two() returns an unsigned long, which is not suited
+ * for 64bit values.
+ */
+ sz = 1ULL << fls64(sz - 1);
+
+ /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
+ aperture = ilog2(sz) - 7;
+
+ if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS;
+ } else {
+ bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
+ bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64);
+
+ if (is_64bits && (bar & 1))
+ return -EINVAL;
+
+ if (is_64bits && is_prefetch)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
+ else if (is_prefetch)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
+ else if (is_64bits)
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS;
+ else
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS;
+ }
+
+ addr0 = lower_32_bits(bar_phys);
+ addr1 = upper_32_bits(bar_phys);
+
+ if (vfn == 1)
+ reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn);
+ else
+ reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn);
+ b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+ if (vfn == 0 || vfn == 1) {
+ cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg);
+ cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+ cfg |= (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg);
+ }
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), addr1);
+
+ if (vfn > 0)
+ epf = &epf->epf[vfn - 1];
+ epf->epf_bar[bar] = epf_bar;
+
+ return 0;
+}
+
+static void cdns_pcie_hpa_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct pci_epf_bar *epf_bar)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie_epf *epf = &ep->epf[fn];
+ struct cdns_pcie *pcie = &ep->pcie;
+ enum pci_barno bar = epf_bar->barno;
+ u32 reg, cfg, b, ctrl;
+
+ if (vfn == 1)
+ reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn);
+ else
+ reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn);
+ b = (bar < BAR_4) ? bar : bar - BAR_4;
+
+ if (vfn == 0 || vfn == 1) {
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
+ cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg);
+ cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
+ CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
+ cfg |= CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg);
+ }
+
+ fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON,
+ CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
+
+ if (vfn > 0)
+ epf = &epf->epf[vfn - 1];
+ epf->epf_bar[bar] = NULL;
+}
+
static const struct pci_epc_features cdns_pcie_epc_vf_features = {
.linkup_notifier = false,
.msi_capable = true,
@@ -644,6 +757,21 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = {
.get_features = cdns_pcie_ep_get_features,
};
+static const struct pci_epc_ops cdns_pcie_hpa_epc_ops = {
+ .write_header = cdns_pcie_ep_write_header,
+ .set_bar = cdns_pcie_hpa_ep_set_bar,
+ .clear_bar = cdns_pcie_hpa_ep_clear_bar,
+ .map_addr = cdns_pcie_ep_map_addr,
+ .unmap_addr = cdns_pcie_ep_unmap_addr,
+ .set_msi = cdns_pcie_ep_set_msi,
+ .get_msi = cdns_pcie_ep_get_msi,
+ .set_msix = cdns_pcie_ep_set_msix,
+ .get_msix = cdns_pcie_ep_get_msix,
+ .raise_irq = cdns_pcie_ep_raise_irq,
+ .map_msi_irq = cdns_pcie_ep_map_msi_irq,
+ .start = cdns_pcie_ep_start,
+ .get_features = cdns_pcie_ep_get_features,
+};
int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
{
@@ -681,10 +809,13 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
if (!ep->ob_addr)
return -ENOMEM;
- /* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
-
- epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
+ if (pcie->is_hpa) {
+ epc = devm_pci_epc_create(dev, &cdns_pcie_hpa_epc_ops);
+ } else {
+ /* Disable all but function 0 (anyway BIT(0) is hardwired to 1) */
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0));
+ epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops);
+ }
if (IS_ERR(epc)) {
dev_err(dev, "failed to create epc device\n");
return PTR_ERR(epc);
--
2.47.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
` (3 preceding siblings ...)
2025-04-24 1:04 ` [PATCH v4 4/5] PCI: cadence: Add support for PCIe Endpoint " hans.zhang
@ 2025-04-24 1:04 ` hans.zhang
2025-04-25 6:01 ` kernel test robot
2025-04-25 16:27 ` Krzysztof Kozlowski
2025-04-25 16:24 ` [PATCH v4 0/5] Enhance the PCIe controller driver Krzysztof Kozlowski
5 siblings, 2 replies; 25+ messages in thread
From: hans.zhang @ 2025-04-24 1:04 UTC (permalink / raw)
To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai, Hans Zhang
From: Manikandan K Pillai <mpillai@cadence.com>
Add support for the Cadence PCIe HPA controller by adding the required
callback functions and updating the common functions for RP and EP
configuration. During platform probe, the PCIe controller is now set up
through the relevant callbacks. Update the TI J721E support to use the
updated Cadence PCIe controller code.
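Illustration only: a user-space sketch of the callback dispatch that the
common code now uses; the op names mirror the new struct cdns_pcie_ops
members, everything else is a simplified stand-in:

#include <stdio.h>

struct rc;

/* Subset of the new callbacks added to struct cdns_pcie_ops */
struct pcie_ops {
	int (*host_init_root_port)(struct rc *rc);
	int (*host_init_address_translation)(struct rc *rc);
};

struct rc {
	const struct pcie_ops *ops;
};

static int legacy_init_rp(struct rc *rc)
{
	puts("legacy root port init");
	return 0;
}

static int legacy_init_at(struct rc *rc)
{
	puts("legacy address translation init");
	return 0;
}

/* A platform (legacy, HPA, TI J721E) supplies its own callback set */
static const struct pcie_ops legacy_ops = {
	.host_init_root_port		= legacy_init_rp,
	.host_init_address_translation	= legacy_init_at,
};

/* Mirrors cdns_pcie_host_init(): generic code, platform-specific callbacks */
static int host_init(struct rc *rc)
{
	int err = rc->ops->host_init_root_port(rc);

	if (err)
		return err;
	return rc->ops->host_init_address_translation(rc);
}

int main(void)
{
	struct rc rc = { .ops = &legacy_ops };

	return host_init(&rc);
}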
Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
drivers/pci/controller/cadence/pci-j721e.c | 12 +
.../pci/controller/cadence/pcie-cadence-ep.c | 29 +-
.../controller/cadence/pcie-cadence-host.c | 263 ++++++++++++++++--
.../controller/cadence/pcie-cadence-plat.c | 27 +-
drivers/pci/controller/cadence/pcie-cadence.c | 197 ++++++++++++-
drivers/pci/controller/cadence/pcie-cadence.h | 11 +-
6 files changed, 495 insertions(+), 44 deletions(-)
diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index ef1cfdae33bb..154b36c30101 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
.start_link = j721e_pcie_start_link,
.stop_link = j721e_pcie_stop_link,
.link_up = j721e_pcie_link_up,
+ .host_init_root_port = cdns_pcie_host_init_root_port,
+ .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
+ .host_init_address_translation = cdns_pcie_host_init_address_translation,
+ .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
+ .set_outbound_region = cdns_pcie_set_outbound_region,
+ .set_outbound_region_for_normal_msg =
+ cdns_pcie_set_outbound_region_for_normal_msg,
+ .reset_outbound_region = cdns_pcie_reset_outbound_region,
};
static int j721e_pcie_set_mode(struct j721e_pcie *pcie, struct regmap *syscon,
@@ -479,6 +487,8 @@ static int j721e_pcie_probe(struct platform_device *pdev)
cdns_pcie = &rc->pcie;
cdns_pcie->dev = dev;
+ cdns_pcie->is_rc = true;
+ cdns_pcie->is_hpa = false;
cdns_pcie->ops = &j721e_pcie_ops;
pcie->cdns_pcie = cdns_pcie;
break;
@@ -495,6 +505,8 @@ static int j721e_pcie_probe(struct platform_device *pdev)
cdns_pcie = &ep->pcie;
cdns_pcie->dev = dev;
+ cdns_pcie->is_rc = false;
+ cdns_pcie->is_hpa = false;
cdns_pcie->ops = &j721e_pcie_ops;
pcie->cdns_pcie = cdns_pcie;
break;
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index f3f956fa116b..f4961c760434 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -192,7 +192,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
}
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
- cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size);
+ pcie->ops->set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size);
set_bit(r, &ep->ob_region_map);
ep->ob_addr[r] = addr;
@@ -214,7 +214,7 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
if (r == ep->max_regions - 1)
return;
- cdns_pcie_reset_outbound_region(pcie, r);
+ pcie->ops->reset_outbound_region(pcie, r);
ep->ob_addr[r] = 0;
clear_bit(r, &ep->ob_region_map);
@@ -329,8 +329,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
ep->irq_pci_fn != fn)) {
/* First region was reserved for IRQ writes. */
- cdns_pcie_set_outbound_region_for_normal_msg(pcie, 0, fn, 0,
- ep->irq_phys_addr);
+ pcie->ops->set_outbound_region_for_normal_msg(pcie, 0, fn, 0, ep->irq_phys_addr);
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
ep->irq_pci_fn = fn;
}
@@ -411,11 +410,11 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
ep->irq_pci_fn != fn)) {
/* First region was reserved for IRQ writes. */
- cdns_pcie_set_outbound_region(pcie, 0, fn, 0,
- false,
- ep->irq_phys_addr,
- pci_addr & ~pci_addr_mask,
- pci_addr_mask + 1);
+ pcie->ops->set_outbound_region(pcie, 0, fn, 0,
+ false,
+ ep->irq_phys_addr,
+ pci_addr & ~pci_addr_mask,
+ pci_addr_mask + 1);
ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
ep->irq_pci_fn = fn;
}
@@ -514,11 +513,11 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
if (ep->irq_pci_addr != (msg_addr & ~pci_addr_mask) ||
ep->irq_pci_fn != fn) {
/* First region was reserved for IRQ writes. */
- cdns_pcie_set_outbound_region(pcie, 0, fn, 0,
- false,
- ep->irq_phys_addr,
- msg_addr & ~pci_addr_mask,
- pci_addr_mask + 1);
+ pcie->ops->set_outbound_region(pcie, 0, fn, 0,
+ false,
+ ep->irq_phys_addr,
+ msg_addr & ~pci_addr_mask,
+ pci_addr_mask + 1);
ep->irq_pci_addr = (msg_addr & ~pci_addr_mask);
ep->irq_pci_fn = fn;
}
@@ -869,7 +868,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
set_bit(0, &ep->ob_region_map);
if (ep->quirk_detect_quiet_flag)
- cdns_pcie_detect_quiet_min_delay_set(&ep->pcie);
+ pcie->ops->detect_quiet_min_delay_set(&ep->pcie);
spin_lock_init(&ep->lock);
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index ce035eef0a5c..c191c887a93b 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -60,10 +60,7 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
/* Configuration Type 0 or Type 1 access. */
desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID |
CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0);
- /*
- * The bus number was already set once for all in desc1 by
- * cdns_pcie_host_init_address_translation().
- */
+
if (busn == bridge->busnr + 1)
desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
else
@@ -73,12 +70,77 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
return rc->cfg_base + (where & 0xfff);
}
+void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
+ int where)
+{
+ struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
+ struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
+ struct cdns_pcie *pcie = &rc->pcie;
+ unsigned int busn = bus->number;
+ u32 addr0, desc0, desc1, ctrl0;
+ u32 regval;
+
+ if (pci_is_root_bus(bus)) {
+ /*
+ * Only the root port (devfn == 0) is connected to this bus.
+ * All other PCI devices are behind some bridge hence on another
+ * bus.
+ */
+ if (devfn)
+ return NULL;
+
+ return pcie->reg_base + (where & 0xfff);
+ }
+
+ /* Clear AXI link-down status */
+ regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN,
+ (regval & ~GENMASK(0, 0)));
+
+	desc0 = 0;
+ ctrl0 = 0;
+
+ /* Update Output registers for AXI region 0. */
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0);
+
+ desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0));
+ desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK;
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+
+ if (busn == bridge->busnr + 1)
+ desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
+ else
+ desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0);
+
+ return rc->cfg_base + (where & 0xfff);
+}
+
static struct pci_ops cdns_pcie_host_ops = {
.map_bus = cdns_pci_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
};
+static struct pci_ops cdns_pcie_hpa_host_ops = {
+ .map_bus = cdns_pci_hpa_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+};
+
static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
{
u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
@@ -154,8 +216,14 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie)
{
u32 val;
- val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL);
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
+ if (!pcie->is_hpa) {
+ val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL);
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
+ } else {
+ val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL,
+ val | CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN);
+ }
}
static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
@@ -340,7 +408,7 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
*/
bar = cdns_pcie_host_find_min_bar(rc, size);
if (bar != RP_BAR_UNDEFINED) {
- ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr,
+ ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr,
size, flags);
if (ret)
dev_err(dev, "IB BAR: %d config failed\n", bar);
@@ -366,8 +434,7 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
}
winsize = bar_max_size[bar];
- ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize,
- flags);
+ ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, winsize, flags);
if (ret) {
dev_err(dev, "IB BAR: %d config failed\n", bar);
return ret;
@@ -408,7 +475,7 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
if (list_empty(&bridge->dma_ranges)) {
of_property_read_u32(np, "cdns,no-bar-match-nbits",
&no_bar_nbits);
- err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0,
+ err = pcie->ops->host_bar_ib_config(rc, RP_NO_BAR, 0x0,
(u64)1 << no_bar_nbits, 0);
if (err)
dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR);
@@ -467,17 +534,159 @@ int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
u64 pci_addr = res->start - entry->offset;
if (resource_type(res) == IORESOURCE_IO)
- cdns_pcie_set_outbound_region(pcie, busnr, 0, r,
- true,
- pci_pio_to_address(res->start),
- pci_addr,
- resource_size(res));
+ pcie->ops->set_outbound_region(pcie, busnr, 0, r,
+ true,
+ pci_pio_to_address(res->start),
+ pci_addr,
+ resource_size(res));
+ else
+ pcie->ops->set_outbound_region(pcie, busnr, 0, r,
+ false,
+ res->start,
+ pci_addr,
+ resource_size(res));
+
+ r++;
+ }
+
+ return cdns_pcie_host_map_dma_ranges(rc);
+}
+
+int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 value, ctrl;
+
+ /*
+ * Set the root complex BAR configuration register:
+ * - disable both BAR0 and BAR1.
+ * - enable Prefetchable Memory Base and Limit registers in type 1
+ * config space (64 bits).
+ * - enable IO Base and Limit registers in type 1 config
+ * space (32 bits).
+ */
+
+ ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
+ value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE |
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG,
+ CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
+
+ if (rc->vendor_id != 0xffff)
+ cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
+
+ if (rc->device_id != 0xffff)
+ cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
+
+ cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
+ cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
+ cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
+
+ return 0;
+}
+
+int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
+ enum cdns_pcie_rp_bar bar,
+ u64 cpu_addr, u64 size,
+ unsigned long flags)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ u32 addr0, addr1, aperture, value;
+
+ if (!rc->avail_ib_bar[bar])
+ return -EBUSY;
+
+ rc->avail_ib_bar[bar] = false;
+
+ aperture = ilog2(size);
+ addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
+ CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
+ CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1);
+
+ if (bar == RP_NO_BAR)
+ return 0;
+
+ value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG);
+ value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) |
+ HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) |
+ HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2));
+ if (size + cpu_addr >= SZ_4G) {
+ if (!(flags & IORESOURCE_PREFETCH))
+ value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar);
+ value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar);
+ } else {
+ if (!(flags & IORESOURCE_PREFETCH))
+ value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar);
+ value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar);
+ }
+
+ value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
+
+ return 0;
+}
+
+int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
+ struct resource *cfg_res = rc->cfg_res;
+ struct resource_entry *entry;
+ u64 cpu_addr = cfg_res->start;
+ u32 addr0, addr1, desc1;
+ int r, busnr = 0;
+
+ entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+ if (entry)
+ busnr = entry->res->start;
+
+ /*
+ * Reserve region 0 for PCI configure space accesses:
+ * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
+ * cdns_pci_map_bus(), other region registers are set here once for all.
+ */
+ addr1 = 0;
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1);
+
+ r = 1;
+ resource_list_for_each_entry(entry, &bridge->windows) {
+ struct resource *res = entry->res;
+ u64 pci_addr = res->start - entry->offset;
+
+ if (resource_type(res) == IORESOURCE_IO)
+ pcie->ops->set_outbound_region(pcie, busnr, 0, r,
+ true,
+ pci_pio_to_address(res->start),
+ pci_addr,
+ resource_size(res));
else
- cdns_pcie_set_outbound_region(pcie, busnr, 0, r,
- false,
- res->start,
- pci_addr,
- resource_size(res));
+ pcie->ops->set_outbound_region(pcie, busnr, 0, r,
+ false,
+ res->start,
+ pci_addr,
+ resource_size(res));
r++;
}
@@ -489,11 +698,11 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
{
int err;
- err = cdns_pcie_host_init_root_port(rc);
+ err = rc->pcie.ops->host_init_root_port(rc);
if (err)
return err;
- return cdns_pcie_host_init_address_translation(rc);
+ return rc->pcie.ops->host_init_address_translation(rc);
}
int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
@@ -503,7 +712,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
int ret;
if (rc->quirk_detect_quiet_flag)
- cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
+ pcie->ops->detect_quiet_min_delay_set(&rc->pcie);
cdns_pcie_host_enable_ptm_response(pcie);
@@ -566,8 +775,12 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
if (ret)
return ret;
- if (!bridge->ops)
- bridge->ops = &cdns_pcie_host_ops;
+ if (!bridge->ops) {
+ if (pcie->is_hpa)
+ bridge->ops = &cdns_pcie_hpa_host_ops;
+ else
+ bridge->ops = &cdns_pcie_host_ops;
+ }
ret = pci_host_probe(bridge);
if (ret < 0)
diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
index 93c21c899309..21ca3d1b07e2 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
@@ -30,7 +30,30 @@ static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
}
static const struct cdns_pcie_ops cdns_plat_ops = {
+ .link_up = cdns_pcie_linkup,
.cpu_addr_fixup = cdns_plat_cpu_addr_fixup,
+ .host_init_root_port = cdns_pcie_host_init_root_port,
+ .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
+ .host_init_address_translation = cdns_pcie_host_init_address_translation,
+ .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
+ .set_outbound_region = cdns_pcie_set_outbound_region,
+ .set_outbound_region_for_normal_msg =
+ cdns_pcie_set_outbound_region_for_normal_msg,
+ .reset_outbound_region = cdns_pcie_reset_outbound_region,
+};
+
+static const struct cdns_pcie_ops cdns_hpa_plat_ops = {
+ .start_link = cdns_pcie_hpa_start_link,
+ .stop_link = cdns_pcie_hpa_stop_link,
+ .link_up = cdns_pcie_hpa_linkup,
+ .host_init_root_port = cdns_pcie_hpa_host_init_root_port,
+ .host_bar_ib_config = cdns_pcie_hpa_host_bar_ib_config,
+ .host_init_address_translation = cdns_pcie_hpa_host_init_address_translation,
+ .detect_quiet_min_delay_set = cdns_pcie_hpa_detect_quiet_min_delay_set,
+ .set_outbound_region = cdns_pcie_hpa_set_outbound_region,
+ .set_outbound_region_for_normal_msg =
+ cdns_pcie_hpa_set_outbound_region_for_normal_msg,
+ .reset_outbound_region = cdns_pcie_hpa_reset_outbound_region,
};
static int cdns_plat_pcie_probe(struct platform_device *pdev)
@@ -67,7 +90,7 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
rc = pci_host_bridge_priv(bridge);
rc->pcie.dev = dev;
- rc->pcie.ops = &cdns_plat_ops;
+ rc->pcie.ops = data->is_hpa ? &cdns_hpa_plat_ops : &cdns_plat_ops;
rc->pcie.is_hpa = data->is_hpa;
rc->pcie.is_rc = data->is_rc;
@@ -100,7 +123,7 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
ep->pcie.dev = dev;
- ep->pcie.ops = &cdns_plat_ops;
+ ep->pcie.ops = data->is_hpa ? &cdns_hpa_plat_ops : &cdns_plat_ops;
ep->pcie.is_hpa = data->is_hpa;
ep->pcie.is_rc = data->is_rc;
diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c
index 204e045aed8c..a7ec0b96c19f 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.c
+++ b/drivers/pci/controller/cadence/pcie-cadence.c
@@ -8,6 +8,45 @@
#include "pcie-cadence.h"
+bool cdns_pcie_linkup(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE);
+ if (pl_reg_val & GENMASK(0, 0))
+ return true;
+ return false;
+}
+
+bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0);
+ if (pl_reg_val & GENMASK(0, 0))
+ return true;
+ return false;
+}
+
+int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ pl_reg_val |= CDNS_PCIE_HPA_LINK_TRNG_EN_MASK;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val);
+ return 0;
+}
+
+void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie)
+{
+ u32 pl_reg_val;
+
+ pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ pl_reg_val &= ~CDNS_PCIE_HPA_LINK_TRNG_EN_MASK;
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val);
+}
+
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
{
u32 delay = 0x3;
@@ -55,7 +94,7 @@ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
desc1 = 0;
/*
- * Whatever Bit [23] is set or not inside DESC0 register of the outbound
+ * Whether Bit [23] is set or not inside DESC0 register of the outbound
* PCIe descriptor, the PCI function number must be set into
* Bits [26:24] of DESC0 anyway.
*
@@ -147,6 +186,162 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0);
}
+void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
+{
+ u32 delay = 0x3;
+ u32 ltssm_control_cap;
+
+ /* Set the LTSSM Detect Quiet state min. delay to 2ms. */
+ ltssm_control_cap = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_PHY_LAYER_CFG0);
+ ltssm_control_cap = ((ltssm_control_cap &
+ ~CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK) |
+ CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay));
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG,
+ CDNS_PCIE_HPA_PHY_LAYER_CFG0, ltssm_control_cap);
+}
+
+void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, u32 r,
+ bool is_io, u64 cpu_addr, u64 pci_addr, size_t size)
+{
+ /*
+ * roundup_pow_of_two() returns an unsigned long, which is not suited
+ * for 64bit values.
+ */
+ u64 sz = 1ULL << fls64(size - 1);
+ int nbits = ilog2(sz);
+ u32 addr0, addr1, desc0, desc1, ctrl0;
+
+ if (nbits < 8)
+ nbits = 8;
+
+ /* Set the PCI address */
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) |
+ (lower_32_bits(pci_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(pci_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1);
+
+ /* Set the PCIe header descriptor */
+ if (is_io)
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO;
+ else
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM;
+ desc1 = 0;
+ ctrl0 = 0;
+
+ /*
+ * Whether Bit [26] is set or not inside DESC0 register of the outbound
+ * PCIe descriptor, the PCI function number must be set into
+ * Bits [31:24] of DESC1 anyway.
+ *
+ * In Root Complex mode, the function number is always 0 but in Endpoint
+ * mode, the PCIe controller may support more than one function. This
+ * function number needs to be set properly into the outbound PCIe
+ * descriptor.
+ *
+ * Besides, setting Bit [26] is mandatory when in Root Complex mode:
+ * then the driver must provide the bus, resp. device, number in
+ * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function
+ * number, the device number is always 0 in Root Complex mode.
+ *
+ * However when in Endpoint mode, we can clear Bit [26] of DESC0, hence
+ * the PCIe controller will use the captured values for the bus and
+ * device numbers.
+ */
+ if (pcie->is_rc) {
+ /* The device and function numbers are always 0. */
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+ } else {
+ /*
+ * Use captured values for bus and device numbers but still
+ * need to set the function number.
+ */
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
+ }
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
+}
+
+void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
+ u8 busnr, u8 fn,
+ u32 r, u64 cpu_addr)
+{
+ u32 addr0, addr1, desc0, desc1, ctrl0;
+
+ desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG;
+ desc1 = 0;
+ ctrl0 = 0;
+
+ /* See cdns_pcie_set_outbound_region() comments above. */
+ if (pcie->is_rc) {
+ desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) |
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
+ ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
+ } else {
+ desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn);
+ }
+
+ addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
+ (lower_32_bits(cpu_addr) & GENMASK(31, 8));
+ addr1 = upper_32_bits(cpu_addr);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0);
+}
+
+void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r)
+{
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), 0);
+
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), 0);
+ cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
+ CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), 0);
+}
+
void cdns_pcie_disable_phy(struct cdns_pcie *pcie)
{
int i = pcie->phy_count;
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 72cb27c6f9e4..39f6a12aef8d 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -878,6 +878,10 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
}
#endif
+bool cdns_pcie_linkup(struct cdns_pcie *pcie);
+bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie);
+int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie);
+void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie);
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
@@ -889,7 +893,12 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
void cdns_pcie_disable_phy(struct cdns_pcie *pcie);
int cdns_pcie_enable_phy(struct cdns_pcie *pcie);
int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie);
-
+void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
+void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, u32 r,
+ bool is_io, u64 cpu_addr, u64 pci_addr, size_t size);
+void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie,
+ u8 busnr, u8 fn, u32 r, u64 cpu_addr);
+void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r);
extern const struct dev_pm_ops cdns_pcie_pm_ops;
#endif /* _PCIE_CADENCE_H */
--
2.47.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller
2025-04-24 1:04 ` [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
@ 2025-04-24 3:36 ` Peter Chen (CIX)
2025-04-25 4:18 ` kernel test robot
1 sibling, 0 replies; 25+ messages in thread
From: Peter Chen (CIX) @ 2025-04-24 3:36 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt, peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai
On 25-04-24 09:04:42, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> +/**
> + * struct cdns_plat_pcie_of_data - Register bank offset for a platform
> + * @is_rc: controller is a RC
> + * @is_hpa: Controller architecture is HPA
> + * @ip_reg_bank_off: ip register bank start offset
> + * @ip_cfg_ctrl_reg_off: ip config control register start offset
> + * @axi_mstr_common_off: AXI master common register start
> + * @axi_slave_off: AXI slave offset start
> + * @axi_master_off: AXI master offset start
> + * @axi_hls_off: AXI HLS offset start
> + * @axi_ras_off: AXI RAS offset
> + * @axi_dti_off: AXI DTI offset
The variable suffix _off may confuse the reader, since "off" usually means
that something is turned off and is commonly used that way in device drivers.
Suggest using _offset instead, to align with the rest of your code (a renamed
sketch follows the quoted struct below).
> + */
> +struct cdns_plat_pcie_of_data {
> + u32 is_rc:1;
> + u32 is_hpa:1;
> + u32 ip_reg_bank_off;
> + u32 ip_cfg_ctrl_reg_off;
> + u32 axi_mstr_common_off;
> + u32 axi_slave_off;
> + u32 axi_master_off;
> + u32 axi_hls_off;
> + u32 axi_ras_off;
> + u32 axi_dti_off;
> };
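For illustration only, the suggested renaming would look roughly like the
following; this is a sketch, not part of the posted patch:

struct cdns_plat_pcie_of_data {
	u32 is_rc:1;
	u32 is_hpa:1;
	u32 ip_reg_bank_offset;
	u32 ip_cfg_ctrl_reg_offset;
	u32 axi_mstr_common_offset;
	u32 axi_slave_offset;
	u32 axi_master_offset;
	u32 axi_hls_offset;
	u32 axi_ras_offset;
	u32 axi_dti_offset;
};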
>
> +static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie,
> + enum cdns_pcie_reg_bank bank,
> + u32 reg,
> + u32 value)
> +{
> + u32 offset = cdns_reg_bank_to_off(pcie, bank);
More than one blank space after "=".
> +}
> +
> +static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie,
> + enum cdns_pcie_reg_bank bank,
> + u32 reg)
> +{
> + u32 offset = cdns_reg_bank_to_off(pcie, bank);
ditto
--
Best regards,
Peter
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-24 1:04 ` [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang
@ 2025-04-24 15:29 ` Conor Dooley
2025-04-24 15:30 ` Conor Dooley
2025-04-25 2:17 ` Manikandan Karunakaran Pillai
0 siblings, 2 replies; 25+ messages in thread
From: Conor Dooley @ 2025-04-24 15:29 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt, peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai
On Thu, Apr 24, 2025 at 09:04:41AM +0800, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Document the compatible property for HPA (High Performance Architecture)
> PCIe controller EP configuration.
Please explain what makes the new architecture sufficiently different
from the existing one such that a fallback compatible does not work.
Same applies to the other binding patch.
Thanks,
Conor.
>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> ---
> .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> index 98651ab22103..a7e404e4f690 100644
> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
> title: Cadence PCIe EP Controller
>
> maintainers:
> - - Tom Joseph <tjoseph@cadence.com>
> + - Manikandan K Pillai <mpillai@cadence.com>
>
> allOf:
> - $ref: cdns-pcie-ep.yaml#
>
> properties:
> compatible:
> - const: cdns,cdns-pcie-ep
> + enum:
> + - cdns,cdns-pcie-ep
> + - cdns,cdns-pcie-hpa-ep
>
> reg:
> maxItems: 2
> --
> 2.47.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-24 15:29 ` Conor Dooley
@ 2025-04-24 15:30 ` Conor Dooley
2025-04-25 2:19 ` Manikandan Karunakaran Pillai
2025-04-25 2:17 ` Manikandan Karunakaran Pillai
1 sibling, 1 reply; 25+ messages in thread
From: Conor Dooley @ 2025-04-24 15:30 UTC (permalink / raw)
To: hans.zhang
Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt,
conor+dt, peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai
On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
> On Thu, Apr 24, 2025 at 09:04:41AM +0800, hans.zhang@cixtech.com wrote:
> > From: Manikandan K Pillai <mpillai@cadence.com>
> >
> > Document the compatible property for HPA (High Performance Architecture)
> > PCIe controller EP configuration.
>
> Please explain what makes the new architecture sufficiently different
> from the existing one such that a fallback compatible does not work.
>
> Same applies to the other binding patch.
Additionally, since this IP is likely in use on your sky1 SoC, why is a
soc-specific compatible for your integration not needed?
>
> Thanks,
> Conor.
>
> >
> > Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> > Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> > ---
> > .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> > index 98651ab22103..a7e404e4f690 100644
> > --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> > +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> > @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
> > title: Cadence PCIe EP Controller
> >
> > maintainers:
> > - - Tom Joseph <tjoseph@cadence.com>
> > + - Manikandan K Pillai <mpillai@cadence.com>
> >
> > allOf:
> > - $ref: cdns-pcie-ep.yaml#
> >
> > properties:
> > compatible:
> > - const: cdns,cdns-pcie-ep
> > + enum:
> > + - cdns,cdns-pcie-ep
> > + - cdns,cdns-pcie-hpa-ep
> >
> > reg:
> > maxItems: 2
> > --
> > 2.47.1
> >
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-24 15:29 ` Conor Dooley
2025-04-24 15:30 ` Conor Dooley
@ 2025-04-25 2:17 ` Manikandan Karunakaran Pillai
1 sibling, 0 replies; 25+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-25 2:17 UTC (permalink / raw)
To: Conor Dooley, hans.zhang@cixtech.com
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
>
>EXTERNAL MAIL
>
>
>On Thu, Apr 24, 2025 at 09:04:41AM +0800, hans.zhang@cixtech.com wrote:
>> From: Manikandan K Pillai <mpillai@cadence.com>
>>
>> Document the compatible property for HPA (High Performance Architecture)
>> PCIe controller EP configuration.
>
>Please explain what makes the new architecture sufficiently different
>from the existing one such that a fallback compatible does not work.
>
>Same applies to the other binding patch.
The new IP has a different HW architecture and it cannot be probed by the software.
The software needs to differentiate between the new and old architecture IP because the register sets,
register offsets, and the supported PCIe generations and features differ between the two architectures.
With the existing compatible it would not be possible to uniquely identify the generation and initialize the controller.
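To illustrate the point, the platform driver can attach per-architecture data
to each compatible, so the compatible string alone selects the register layout
and no hardware probing is needed. This is a sketch only, built from the
struct and EP compatibles quoted earlier in this thread; the HPA offsets are
placeholders, not real values:

#include <linux/mod_devicetable.h>
#include <linux/of.h>

static const struct cdns_plat_pcie_of_data cdns_plat_pcie_ep_of_data = {
	.is_rc = false,
};

static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_ep_of_data = {
	.is_rc = false,
	.is_hpa = true,
	/* .ip_reg_bank_off, .axi_slave_off, ... filled with the HPA offsets */
};

static const struct of_device_id cdns_plat_pcie_of_match[] = {
	{ .compatible = "cdns,cdns-pcie-ep",     .data = &cdns_plat_pcie_ep_of_data },
	{ .compatible = "cdns,cdns-pcie-hpa-ep", .data = &cdns_plat_pcie_hpa_ep_of_data },
	{ },
};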
>
>Thanks,
>Conor.
>
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> ---
>> .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> index 98651ab22103..a7e404e4f690 100644
>> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-
>schemas/core.yaml#
>> title: Cadence PCIe EP Controller
>>
>> maintainers:
>> - - Tom Joseph <tjoseph@cadence.com>
>> + - Manikandan K Pillai <mpillai@cadence.com>
>>
>> allOf:
>> - $ref: cdns-pcie-ep.yaml#
>>
>> properties:
>> compatible:
>> - const: cdns,cdns-pcie-ep
>> + enum:
>> + - cdns,cdns-pcie-ep
>> + - cdns,cdns-pcie-hpa-ep
>>
>> reg:
>> maxItems: 2
>> --
>> 2.47.1
>>
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-24 15:30 ` Conor Dooley
@ 2025-04-25 2:19 ` Manikandan Karunakaran Pillai
2025-04-25 14:48 ` Conor Dooley
0 siblings, 1 reply; 25+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-25 2:19 UTC (permalink / raw)
To: Conor Dooley, hans.zhang@cixtech.com
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
>
>
>On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
>> On Thu, Apr 24, 2025 at 09:04:41AM +0800, hans.zhang@cixtech.com wrote:
>> > From: Manikandan K Pillai <mpillai@cadence.com>
>> >
>> > Document the compatible property for HPA (High Performance
>Architecture)
>> > PCIe controller EP configuration.
>>
>> Please explain what makes the new architecture sufficiently different
>> from the existing one such that a fallback compatible does not work.
>>
>> Same applies to the other binding patch.
>
>Additionally, since this IP is likely in use on your sky1 SoC, why is a
>soc-specific compatible for your integration not needed?
>
The sky1 SoC support patches will be developed and submitted by the Sky1 team separately.
>>
>> Thanks,
>> Conor.
>>
>> >
>> > Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> > Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> > ---
>> > .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++--
>> > 1 file changed, 4 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> > index 98651ab22103..a7e404e4f690 100644
>> > --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> > +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> > @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-
>schemas/core.yaml#
>> > title: Cadence PCIe EP Controller
>> >
>> > maintainers:
>> > - - Tom Joseph <tjoseph@cadence.com>
>> > + - Manikandan K Pillai <mpillai@cadence.com>
>> >
>> > allOf:
>> > - $ref: cdns-pcie-ep.yaml#
>> >
>> > properties:
>> > compatible:
>> > - const: cdns,cdns-pcie-ep
>> > + enum:
>> > + - cdns,cdns-pcie-ep
>> > + - cdns,cdns-pcie-hpa-ep
>> >
>> > reg:
>> > maxItems: 2
>> > --
>> > 2.47.1
>> >
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller
2025-04-24 1:04 ` [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
2025-04-24 3:36 ` Peter Chen (CIX)
@ 2025-04-25 4:18 ` kernel test robot
1 sibling, 0 replies; 25+ messages in thread
From: kernel test robot @ 2025-04-25 4:18 UTC (permalink / raw)
To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh,
krzk+dt, conor+dt
Cc: llvm, oe-kbuild-all, peter.chen, linux-pci, devicetree,
linux-kernel, Manikandan K Pillai, Hans Zhang
Hi,
kernel test robot noticed the following build errors:
[auto build test ERROR on fc96b232f8e7c0a6c282f47726b2ff6a5fb341d2]
url: https://github.com/intel-lab-lkp/linux/commits/hans-zhang-cixtech-com/dt-bindings-pci-cadence-Extend-compatible-for-new-RP-configuration/20250424-090651
base: fc96b232f8e7c0a6c282f47726b2ff6a5fb341d2
patch link: https://lore.kernel.org/r/20250424010445.2260090-4-hans.zhang%40cixtech.com
patch subject: [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller
config: i386-buildonly-randconfig-003-20250425 (https://download.01.org/0day-ci/archive/20250425/202504251214.ngJwGxvn-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250425/202504251214.ngJwGxvn-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504251214.ngJwGxvn-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from drivers/pci/controller/cadence/pcie-cadence.c:9:
>> drivers/pci/controller/cadence/pcie-cadence.h:851:8: error: expected ')'
851 | int where)
| ^
drivers/pci/controller/cadence/pcie-cadence.h:850:49: note: to match this '('
850 | static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn
| ^
1 error generated.
vim +851 drivers/pci/controller/cadence/pcie-cadence.h
811
812 #ifdef CONFIG_PCIE_CADENCE_HOST
813 int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc);
814 int cdns_pcie_host_init(struct cdns_pcie_rc *rc);
815 int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
816 void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
817 int where);
818 int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc);
819 int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
820 enum cdns_pcie_rp_bar bar,
821 u64 cpu_addr, u64 size,
822 unsigned long flags);
823 int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc);
824 void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, int where);
825 int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc);
826 int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
827 enum cdns_pcie_rp_bar bar,
828 u64 cpu_addr, u64 size,
829 unsigned long flags);
830 int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc);
831 int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc);
832 #else
833 static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
834 {
835 return 0;
836 }
837 static inline int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
838 {
839 return 0;
840 }
841 static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
842 {
843 return 0;
844 }
845 static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
846 int where)
847 {
848 return NULL;
849 }
850 static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn
> 851 int where)
852 {
853 return NULL;
854 }
855 static inline int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
856 {
857 return 0;
858 }
859 static inline int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
860 enum cdns_pcie_rp_bar bar,
861 u64 cpu_addr, u64 size,
862 unsigned long flags)
863 {
864 return 0;
865 }
866 static inline int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
867 {
868 return 0;
869 }
870 #endif
871
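The error above comes down to a missing comma between the devfn and where
parameters of the !CONFIG_PCIE_CADENCE_HOST stub at line 850; the corrected
declaration would presumably read:

static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
						 int where)
{
	return NULL;
}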
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-24 1:04 ` [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
@ 2025-04-25 6:01 ` kernel test robot
2025-04-25 16:27 ` Krzysztof Kozlowski
1 sibling, 0 replies; 25+ messages in thread
From: kernel test robot @ 2025-04-25 6:01 UTC (permalink / raw)
To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh,
krzk+dt, conor+dt
Cc: llvm, oe-kbuild-all, peter.chen, linux-pci, devicetree,
linux-kernel, Manikandan K Pillai, Hans Zhang
Hi,
kernel test robot noticed the following build errors:
[auto build test ERROR on fc96b232f8e7c0a6c282f47726b2ff6a5fb341d2]
url: https://github.com/intel-lab-lkp/linux/commits/hans-zhang-cixtech-com/dt-bindings-pci-cadence-Extend-compatible-for-new-RP-configuration/20250424-090651
base: fc96b232f8e7c0a6c282f47726b2ff6a5fb341d2
patch link: https://lore.kernel.org/r/20250424010445.2260090-6-hans.zhang%40cixtech.com
patch subject: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
config: i386-buildonly-randconfig-003-20250425 (https://download.01.org/0day-ci/archive/20250425/202504251312.YvKIAjMl-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250425/202504251312.YvKIAjMl-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504251312.YvKIAjMl-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from drivers/pci/controller/cadence/pcie-cadence-plat.c:13:
drivers/pci/controller/cadence/pcie-cadence.h:851:8: error: expected ')'
851 | int where)
| ^
drivers/pci/controller/cadence/pcie-cadence.h:850:49: note: to match this '('
850 | static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn
| ^
>> drivers/pci/controller/cadence/pcie-cadence-plat.c:35:25: error: use of undeclared identifier 'cdns_pcie_host_init_root_port'; did you mean 'cdns_pcie_hpa_host_init_root_port'?
35 | .host_init_root_port = cdns_pcie_host_init_root_port,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| cdns_pcie_hpa_host_init_root_port
drivers/pci/controller/cadence/pcie-cadence.h:855:19: note: 'cdns_pcie_hpa_host_init_root_port' declared here
855 | static inline int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
| ^
>> drivers/pci/controller/cadence/pcie-cadence-plat.c:36:24: error: use of undeclared identifier 'cdns_pcie_host_bar_ib_config'; did you mean 'cdns_pcie_hpa_host_bar_ib_config'?
36 | .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
| cdns_pcie_hpa_host_bar_ib_config
drivers/pci/controller/cadence/pcie-cadence.h:859:19: note: 'cdns_pcie_hpa_host_bar_ib_config' declared here
859 | static inline int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
| ^
>> drivers/pci/controller/cadence/pcie-cadence-plat.c:37:35: error: use of undeclared identifier 'cdns_pcie_host_init_address_translation'; did you mean 'cdns_pcie_hpa_host_init_address_translation'?
37 | .host_init_address_translation = cdns_pcie_host_init_address_translation,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| cdns_pcie_hpa_host_init_address_translation
drivers/pci/controller/cadence/pcie-cadence.h:866:19: note: 'cdns_pcie_hpa_host_init_address_translation' declared here
866 | static inline int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
| ^
4 errors generated.
vim +35 drivers/pci/controller/cadence/pcie-cadence-plat.c
31
32 static const struct cdns_pcie_ops cdns_plat_ops = {
33 .link_up = cdns_pcie_linkup,
34 .cpu_addr_fixup = cdns_plat_cpu_addr_fixup,
> 35 .host_init_root_port = cdns_pcie_host_init_root_port,
> 36 .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
> 37 .host_init_address_translation = cdns_pcie_host_init_address_translation,
38 .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
39 .set_outbound_region = cdns_pcie_set_outbound_region,
40 .set_outbound_region_for_normal_msg =
41 cdns_pcie_set_outbound_region_for_normal_msg,
42 .reset_outbound_region = cdns_pcie_reset_outbound_region,
43 };
44
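These errors suggest the legacy host helpers are declared only inside the
CONFIG_PCIE_CADENCE_HOST branch of pcie-cadence.h, so this randconfig (host
support disabled) cannot see them from pcie-cadence-plat.c. One plausible fix,
mirroring the existing stub style, would be matching no-op stubs in the #else
branch; a sketch, not the submitted fix:

static inline int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
{
	return 0;
}

static inline int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
					       enum cdns_pcie_rp_bar bar,
					       u64 cpu_addr, u64 size,
					       unsigned long flags)
{
	return 0;
}

static inline int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
{
	return 0;
}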
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-25 2:19 ` Manikandan Karunakaran Pillai
@ 2025-04-25 14:48 ` Conor Dooley
2025-04-25 15:33 ` Hans Zhang
0 siblings, 1 reply; 25+ messages in thread
From: Conor Dooley @ 2025-04-25 14:48 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai
Cc: hans.zhang@cixtech.com, bhelgaas@google.com,
lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
On Fri, Apr 25, 2025 at 02:19:11AM +0000, Manikandan Karunakaran Pillai wrote:
> >
> >
> >On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
> >> On Thu, Apr 24, 2025 at 09:04:41AM +0800, hans.zhang@cixtech.com wrote:
> >> > From: Manikandan K Pillai <mpillai@cadence.com>
> >> >
> >> > Document the compatible property for HPA (High Performance
> >Architecture)
> >> > PCIe controller EP configuration.
> >>
> >> Please explain what makes the new architecture sufficiently different
> >> from the existing one such that a fallback compatible does not work.
> >>
> >> Same applies to the other binding patch.
> >
> >Additionally, since this IP is likely in use on your sky1 SoC, why is a
> >soc-specific compatible for your integration not needed?
> >
>
> The sky1 SoC support patches will be developed and submitted by the Sky1
> team separately.
Why? Cixtech sent this patchset, they should send it with their user.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-25 14:48 ` Conor Dooley
@ 2025-04-25 15:33 ` Hans Zhang
2025-04-25 16:21 ` Krzysztof Kozlowski
0 siblings, 1 reply; 25+ messages in thread
From: Hans Zhang @ 2025-04-25 15:33 UTC (permalink / raw)
To: Conor Dooley, Manikandan Karunakaran Pillai
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
On 2025/4/25 22:48, Conor Dooley wrote:
> On Fri, Apr 25, 2025 at 02:19:11AM +0000, Manikandan Karunakaran Pillai wrote:
>>>
>>> On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
>>>> On Thu, Apr 24, 2025 at 09:04:41AM +0800,hans.zhang@cixtech.com wrote:
>>>>> From: Manikandan K Pillai<mpillai@cadence.com>
>>>>>
>>>>> Document the compatible property for HPA (High Performance
>>> Architecture)
>>>>> PCIe controller EP configuration.
>>>> Please explain what makes the new architecture sufficiently different
>>>> from the existing one such that a fallback compatible does not work.
>>>>
>>>> Same applies to the other binding patch.
>>> Additionally, since this IP is likely in use on your sky1 SoC, why is a
>>> soc-specific compatible for your integration not needed?
>>>
>> The sky1 SoC support patches will be developed and submitted by the Sky1
>> team separately.
> Why? Cixtech sent this patchset, they should send it with their user.
Hi Conor,
Please look at the communication history of this website.
https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1C1CBD2A866C59AA55CD7AA2A12@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/
Best regards,
Hans
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-25 15:33 ` Hans Zhang
@ 2025-04-25 16:21 ` Krzysztof Kozlowski
2025-04-25 16:47 ` Hans Zhang
2025-04-27 3:55 ` Manikandan Karunakaran Pillai
0 siblings, 2 replies; 25+ messages in thread
From: Krzysztof Kozlowski @ 2025-04-25 16:21 UTC (permalink / raw)
To: Hans Zhang, Conor Dooley, Manikandan Karunakaran Pillai
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
On 25/04/2025 17:33, Hans Zhang wrote:
>
>
> On 2025/4/25 22:48, Conor Dooley wrote:
>> On Fri, Apr 25, 2025 at 02:19:11AM +0000, Manikandan Karunakaran Pillai wrote:
>>>>
>>>> On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
>>>>> On Thu, Apr 24, 2025 at 09:04:41AM +0800,hans.zhang@cixtech.com wrote:
>>>>>> From: Manikandan K Pillai<mpillai@cadence.com>
>>>>>>
>>>>>> Document the compatible property for HPA (High Performance
>>>> Architecture)
>>>>>> PCIe controller EP configuration.
>>>>> Please explain what makes the new architecture sufficiently different
>>>>> from the existing one such that a fallback compatible does not work.
>>>>>
>>>>> Same applies to the other binding patch.
>>>> Additionally, since this IP is likely in use on your sky1 SoC, why is a
>>>> soc-specific compatible for your integration not needed?
>>>>
>>> The sky1 SoC support patches will be developed and submitted by the Sky1
>>> team separately.
>> Why? Cixtech sent this patchset, they should send it with their user.
>
> Hi Conor,
>
> Please look at the communication history of this website.
>
> https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1C1CBD2A866C59AA55CD7AA2A12@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/
And in that thread I asked for Soc specific compatible. More than once.
Conor asks again.
I don't understand your answers at all.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 0/5] Enhance the PCIe controller driver
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
` (4 preceding siblings ...)
2025-04-24 1:04 ` [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
@ 2025-04-25 16:24 ` Krzysztof Kozlowski
5 siblings, 0 replies; 25+ messages in thread
From: Krzysztof Kozlowski @ 2025-04-25 16:24 UTC (permalink / raw)
To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh,
krzk+dt, conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel
On 24/04/2025 03:04, hans.zhang@cixtech.com wrote:
> From: Hans Zhang <hans.zhang@cixtech.com>
>
> Enhances the exiting Cadence PCIe controller drivers to support
> HPA (High Performance Architecture) Cadence PCIe controllers.
>
> The patch set enhances the Cadence PCIe driver for HPA support.
> The "compatible" property in DTS is added with more enum to support
> the new platform architecture and the register maps that change with
> it. The driver read register and write register functions take the
> updated offset stored from the platform driver to access the registers.
> The driver now supports the legacy and HPA architecture, with the
> legacy code changes beingminimal.
>
> SoC related changes are not available in this patch set.
>
> The TI SoC continues to be supported with the changes incorporated.
>
> The changes are also in tune with how multiple platforms are supported
> in related drivers.
>
> The scripts/checkpatch.pl has been run on the patches with and without
> --strict. With the --strict option, 4 checks are generated on 1 patch
> (PATCH v3 3/6) of the series), which can be ignored. There are no code
> fixes required for these checks. The rest of the 'scripts/checkpatch.pl'
> is clean.
>
> The ./scripts/kernel-doc --none have been run on the changed files.
>
> The changes are tested on TI platforms. The legacy controller changes are
> tested on an TI J7200 EVM and HPA changes are planned for on an FPGA
> platform available within Cadence.
>
> Changes for v4
> - Add header file bitfield.h to pcie-cadence.h.
> - Addressed the following review comments.
> Merged the TI patch as it.
> Removed initialization of struct variables to '0'.
So the rest you did not address?
That's not acceptable. You ignored several comments that way. Either the
discussion did not finish, or you agreed to implement all the comments. If you
do not agree, then sending a new version hides the previous discussion.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-24 1:04 ` [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
2025-04-25 6:01 ` kernel test robot
@ 2025-04-25 16:27 ` Krzysztof Kozlowski
2025-04-25 16:51 ` Hans Zhang
2025-04-27 3:52 ` Manikandan Karunakaran Pillai
1 sibling, 2 replies; 25+ messages in thread
From: Krzysztof Kozlowski @ 2025-04-25 16:27 UTC (permalink / raw)
To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh,
krzk+dt, conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai
On 24/04/2025 03:04, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Add support for the Cadence PCIe HPA controller by adding
> the required callback functions. Update the common functions for
> RP and EP configuration. Invoke the relevant callback functions
> for platform probe of PCIe controller using the callback function.
> Update the support for TI J721 boards to use the updated Cadence
> PCIe controller code.
>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> ---
> drivers/pci/controller/cadence/pci-j721e.c | 12 +
> .../pci/controller/cadence/pcie-cadence-ep.c | 29 +-
> .../controller/cadence/pcie-cadence-host.c | 263 ++++++++++++++++--
> .../controller/cadence/pcie-cadence-plat.c | 27 +-
> drivers/pci/controller/cadence/pcie-cadence.c | 197 ++++++++++++-
> drivers/pci/controller/cadence/pcie-cadence.h | 11 +-
> 6 files changed, 495 insertions(+), 44 deletions(-)
>
> diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
> index ef1cfdae33bb..154b36c30101 100644
> --- a/drivers/pci/controller/cadence/pci-j721e.c
> +++ b/drivers/pci/controller/cadence/pci-j721e.c
> @@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
> .start_link = j721e_pcie_start_link,
> .stop_link = j721e_pcie_stop_link,
> .link_up = j721e_pcie_link_up,
> + .host_init_root_port = cdns_pcie_host_init_root_port,
> + .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
> + .host_init_address_translation = cdns_pcie_host_init_address_translation,
> + .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
> + .set_outbound_region = cdns_pcie_set_outbound_region,
> + .set_outbound_region_for_normal_msg =
> + cdns_pcie_set_outbound_region_for_normal_msg,
> + .reset_outbound_region = cdns_pcie_reset_outbound_region,
How did you resolve Rob's comments?
These were repeated I think three times finally with:
"Please listen when I say we do not want the ops method used in other
drivers. "
I think you just send the same ignoring previous discussion which is the
shortest way to get yourself NAKed.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-25 16:21 ` Krzysztof Kozlowski
@ 2025-04-25 16:47 ` Hans Zhang
2025-04-27 3:55 ` Manikandan Karunakaran Pillai
1 sibling, 0 replies; 25+ messages in thread
From: Hans Zhang @ 2025-04-25 16:47 UTC (permalink / raw)
To: Krzysztof Kozlowski, Conor Dooley, Manikandan Karunakaran Pillai
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
On 2025/4/26 00:21, Krzysztof Kozlowski wrote:
> EXTERNAL EMAIL
>
> On 25/04/2025 17:33, Hans Zhang wrote:
>>
>>
>> On 2025/4/25 22:48, Conor Dooley wrote:
>>> On Fri, Apr 25, 2025 at 02:19:11AM +0000, Manikandan Karunakaran Pillai wrote:
>>>>>
>>>>> On Thu, Apr 24, 2025 at 04:29:35PM +0100, Conor Dooley wrote:
>>>>>> On Thu, Apr 24, 2025 at 09:04:41AM +0800,hans.zhang@cixtech.com wrote:
>>>>>>> From: Manikandan K Pillai<mpillai@cadence.com>
>>>>>>>
>>>>>>> Document the compatible property for HPA (High Performance
>>>>> Architecture)
>>>>>>> PCIe controller EP configuration.
>>>>>> Please explain what makes the new architecture sufficiently different
>>>>>> from the existing one such that a fallback compatible does not work.
>>>>>>
>>>>>> Same applies to the other binding patch.
>>>>> Additionally, since this IP is likely in use on your sky1 SoC, why is a
>>>>> soc-specific compatible for your integration not needed?
>>>>>
>>>> The sky1 SoC support patches will be developed and submitted by the Sky1
>>>> team separately.
>>> Why? Cixtech sent this patchset, they should send it with their user.
>>
>> Hi Conor,
>>
>> Please look at the communication history of this website.
>>
>> https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1C1CBD2A866C59AA55CD7AA2A12@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/
>
> And in that thread I asked for Soc specific compatible. More than once.
> Conor asks again.
>
> I don't understand your answers at all.
Dear Krzysztof,
I'm very sorry. Because of an issue with the environment Manikandan uses to
send patches, I am only forwarding the patches on Manikandan's behalf.
Some parts were developed together by us and verified by me.
I will also ask Manikandan to reply to Conor's and Krzysztof's questions.
Best regards,
Hans
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-25 16:27 ` Krzysztof Kozlowski
@ 2025-04-25 16:51 ` Hans Zhang
2025-04-27 3:52 ` Manikandan Karunakaran Pillai
1 sibling, 0 replies; 25+ messages in thread
From: Hans Zhang @ 2025-04-25 16:51 UTC (permalink / raw)
To: Krzysztof Kozlowski, bhelgaas, lpieralisi, kw,
manivannan.sadhasivam, robh, krzk+dt, conor+dt
Cc: peter.chen, linux-pci, devicetree, linux-kernel,
Manikandan K Pillai
On 2025/4/26 00:27, Krzysztof Kozlowski wrote:
> EXTERNAL EMAIL
>
> On 24/04/2025 03:04, hans.zhang@cixtech.com wrote:
>> From: Manikandan K Pillai <mpillai@cadence.com>
>>
>> Add support for the Cadence PCIe HPA controller by adding
>> the required callback functions. Update the common functions for
>> RP and EP configuration. Invoke the relevant callback functions
>> for platform probe of PCIe controller using the callback function.
>> Update the support for TI J721 boards to use the updated Cadence
>> PCIe controller code.
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> Co-developed-by: Hans Zhang <hans.zhang@cixtech.com>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> ---
>> drivers/pci/controller/cadence/pci-j721e.c | 12 +
>> .../pci/controller/cadence/pcie-cadence-ep.c | 29 +-
>> .../controller/cadence/pcie-cadence-host.c | 263 ++++++++++++++++--
>> .../controller/cadence/pcie-cadence-plat.c | 27 +-
>> drivers/pci/controller/cadence/pcie-cadence.c | 197 ++++++++++++-
>> drivers/pci/controller/cadence/pcie-cadence.h | 11 +-
>> 6 files changed, 495 insertions(+), 44 deletions(-)
>>
>> diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
>> index ef1cfdae33bb..154b36c30101 100644
>> --- a/drivers/pci/controller/cadence/pci-j721e.c
>> +++ b/drivers/pci/controller/cadence/pci-j721e.c
>> @@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
>> .start_link = j721e_pcie_start_link,
>> .stop_link = j721e_pcie_stop_link,
>> .link_up = j721e_pcie_link_up,
>> + .host_init_root_port = cdns_pcie_host_init_root_port,
>> + .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
>> + .host_init_address_translation = cdns_pcie_host_init_address_translation,
>> + .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
>> + .set_outbound_region = cdns_pcie_set_outbound_region,
>> + .set_outbound_region_for_normal_msg =
>> + cdns_pcie_set_outbound_region_for_normal_msg,
>> + .reset_outbound_region = cdns_pcie_reset_outbound_region,
>
> How did you resolve Rob's comments?
>
> These were repeated I think three times finally with:
>
> "Please listen when I say we do not want the ops method used in other
> drivers. "
>
> I think you just send the same ignoring previous discussion which is the
> shortest way to get yourself NAKed.
Hi Manikandan,
Please reply to Krzysztof's question.
Best regards,
Hans
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-25 16:27 ` Krzysztof Kozlowski
2025-04-25 16:51 ` Hans Zhang
@ 2025-04-27 3:52 ` Manikandan Karunakaran Pillai
2025-06-01 14:40 ` manivannan.sadhasivam
1 sibling, 1 reply; 25+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-27 3:52 UTC (permalink / raw)
To: Krzysztof Kozlowski, hans.zhang@cixtech.com, bhelgaas@google.com,
lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab
Cc: peter.chen@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
>
>> ---
>> drivers/pci/controller/cadence/pci-j721e.c | 12 +
>> .../pci/controller/cadence/pcie-cadence-ep.c | 29 +-
>> .../controller/cadence/pcie-cadence-host.c | 263 ++++++++++++++++--
>> .../controller/cadence/pcie-cadence-plat.c | 27 +-
>> drivers/pci/controller/cadence/pcie-cadence.c | 197 ++++++++++++-
>> drivers/pci/controller/cadence/pcie-cadence.h | 11 +-
>> 6 files changed, 495 insertions(+), 44 deletions(-)
>>
>> diff --git a/drivers/pci/controller/cadence/pci-j721e.c
>b/drivers/pci/controller/cadence/pci-j721e.c
>> index ef1cfdae33bb..154b36c30101 100644
>> --- a/drivers/pci/controller/cadence/pci-j721e.c
>> +++ b/drivers/pci/controller/cadence/pci-j721e.c
>> @@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
>> .start_link = j721e_pcie_start_link,
>> .stop_link = j721e_pcie_stop_link,
>> .link_up = j721e_pcie_link_up,
>> + .host_init_root_port = cdns_pcie_host_init_root_port,
>> + .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
>> + .host_init_address_translation =
>cdns_pcie_host_init_address_translation,
>> + .detect_quiet_min_delay_set =
>cdns_pcie_detect_quiet_min_delay_set,
>> + .set_outbound_region = cdns_pcie_set_outbound_region,
>> + .set_outbound_region_for_normal_msg =
>> +
>cdns_pcie_set_outbound_region_for_normal_msg,
>> + .reset_outbound_region = cdns_pcie_reset_outbound_region,
>
>How did you resolve Rob's comments?
>
>These were repeated I think three times finally with:
>
>"Please listen when I say we do not want the ops method used in other
>drivers. "
>
>I think you just send the same ignoring previous discussion which is the
>shortest way to get yourself NAKed.
>
>
I was waiting to check whether there were additional comments on the approach, because this approach was taken
based on earlier comments on the patches. Since we have not received any adverse comments from other
maintainers on this, I will separate out the driver into separate drivers for the old and new architectures. The few common functions
will be moved to a common file, to be used as library functions. There will be some repetition of
code, but based on Rob's comments I believe that is fine.
I will leave the CIX SoC support to Hans.
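As a rough sketch of that direction, using the helper names already present in
this series (the body is illustrative of the planned v5 restructuring, not the
actual v5 code): each architecture's host init calls its own library functions
directly instead of dispatching through cdns_pcie_ops callbacks:

/* HPA-specific host init, calling the common library helpers directly */
static int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc)
{
	int ret;

	ret = cdns_pcie_hpa_host_init_root_port(rc);
	if (ret)
		return ret;

	return cdns_pcie_hpa_host_init_address_translation(rc);
}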
>Best regards,
>Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-25 16:21 ` Krzysztof Kozlowski
2025-04-25 16:47 ` Hans Zhang
@ 2025-04-27 3:55 ` Manikandan Karunakaran Pillai
2025-04-27 19:08 ` Krzysztof Kozlowski
1 sibling, 1 reply; 25+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-27 3:55 UTC (permalink / raw)
To: Krzysztof Kozlowski, Hans Zhang, Conor Dooley
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
>>>>>>
>>>>>> Same applies to the other binding patch.
>>>>> Additionally, since this IP is likely in use on your sky1 SoC, why is a
>>>>> soc-specific compatible for your integration not needed?
>>>>>
>>>> The sky1 SoC support patches will be developed and submitted by the
>Sky1
>>>> team separately.
>>> Why? Cixtech sent this patchset, they should send it with their user.
>>
>> Hi Conor,
>>
>> Please look at the communication history of this website.
>>
>> https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1C1CBD2A866C59AA55CD7AA2A12@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/
>
>And in that thread I asked for Soc specific compatible. More than once.
>Conor asks again.
>
>I don't understand your answers at all.
The current support is for the IP from Cadence. There can be multiple SoCs developed based on this IP, and it is for
the SoC companies to build in support as and when the SoC support needs to be available.
Since the CIX SoC is available, it can be sent together with this patch.
However, I do not understand the need for clubbing these in a single patch.
>
>Best regards,
>Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations
2025-04-27 3:55 ` Manikandan Karunakaran Pillai
@ 2025-04-27 19:08 ` Krzysztof Kozlowski
0 siblings, 0 replies; 25+ messages in thread
From: Krzysztof Kozlowski @ 2025-04-27 19:08 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai, Hans Zhang, Conor Dooley
Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
manivannan.sadhasivam@linaro.org, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, peter.chen@cixtech.com,
linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org
On 27/04/2025 05:55, Manikandan Karunakaran Pillai wrote:
> work.
>>>>>>>
>>>>>>> Same applies to the other binding patch.
>>>>>> Additionally, since this IP is likely in use on your sky1 SoC, why is a
>>>>>> soc-specific compatible for your integration not needed?
>>>>>>
>>>>> The sky1 SoC support patches will be developed and submitted by the
>> Sky1
>>>>> team separately.
>>>> Why? Cixtech sent this patchset, they should send it with their user.
>>>
>>> Hi Conor,
>>>
>>> Please look at the communication history of this website.
>>>
>>> https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1C1CBD2A866C59AA55CD7AA2A12@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/
>>
>> And in that thread I asked for Soc specific compatible. More than once.
>> Conor asks again.
>>
>> I don't understand your answers at all.
>
> The current support is for the IP from Cadence. There can be multiple SoCs developed based on this IP, and it is for
> the SoC companies to build in support as and when the SoC support needs to be available.
>
> Since the CIX SoC is available, it can be sent together with this patch.
> However, I do not understand the need for clubbing these in a single patch.
No one is asking for this. The point is that such IP blocks are usually customized
per SoC, so generic compatibles are not enough. That's the argument
here, not whether you can have multiple vendors (we all know this; we know
Cadence, Synopsys, etc.) or whether you want to include Cix here or not.
Answer instead how much of the software interface is compatible
or common between the different implementations.
... AND even then you always need an SoC-specific compatible. See writing
bindings for the reason (or any other tutorial/guide/talk about
writing bindings).
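For illustration only (the "cix,sky1-pcie-hpa-ep" string below is hypothetical
and not defined by any posted binding): the usual pattern is that the DT node
carries compatible = "cix,sky1-pcie-hpa-ep", "cdns,cdns-pcie-hpa-ep"; and the
driver matches both, so an SoC quirk can later be attached to the specific
entry without touching the DT:

/* Both entries can share the same match data until an SoC quirk appears */
static const struct of_device_id cdns_plat_pcie_of_match[] = {
	{ .compatible = "cix,sky1-pcie-hpa-ep",  .data = &cdns_plat_pcie_hpa_ep_of_data },
	{ .compatible = "cdns,cdns-pcie-hpa-ep", .data = &cdns_plat_pcie_hpa_ep_of_data },
	{ },
};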
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-04-27 3:52 ` Manikandan Karunakaran Pillai
@ 2025-06-01 14:40 ` manivannan.sadhasivam
2025-06-02 1:24 ` Manikandan Karunakaran Pillai
0 siblings, 1 reply; 25+ messages in thread
From: manivannan.sadhasivam @ 2025-06-01 14:40 UTC (permalink / raw)
To: Manikandan Karunakaran Pillai
Cc: Krzysztof Kozlowski, hans.zhang@cixtech.com, bhelgaas@google.com,
lpieralisi@kernel.org, kw@linux.com, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab,
peter.chen@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
On Sun, Apr 27, 2025 at 03:52:13AM +0000, Manikandan Karunakaran Pillai wrote:
> >
> >> ---
> >> drivers/pci/controller/cadence/pci-j721e.c | 12 +
> >> .../pci/controller/cadence/pcie-cadence-ep.c | 29 +-
> >> .../controller/cadence/pcie-cadence-host.c | 263 ++++++++++++++++--
> >> .../controller/cadence/pcie-cadence-plat.c | 27 +-
> >> drivers/pci/controller/cadence/pcie-cadence.c | 197 ++++++++++++-
> >> drivers/pci/controller/cadence/pcie-cadence.h | 11 +-
> >> 6 files changed, 495 insertions(+), 44 deletions(-)
> >>
> >> diff --git a/drivers/pci/controller/cadence/pci-j721e.c
> >b/drivers/pci/controller/cadence/pci-j721e.c
> >> index ef1cfdae33bb..154b36c30101 100644
> >> --- a/drivers/pci/controller/cadence/pci-j721e.c
> >> +++ b/drivers/pci/controller/cadence/pci-j721e.c
> >> @@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
> >> .start_link = j721e_pcie_start_link,
> >> .stop_link = j721e_pcie_stop_link,
> >> .link_up = j721e_pcie_link_up,
> >> + .host_init_root_port = cdns_pcie_host_init_root_port,
> >> + .host_bar_ib_config = cdns_pcie_host_bar_ib_config,
> >> + .host_init_address_translation =
> >cdns_pcie_host_init_address_translation,
> >> + .detect_quiet_min_delay_set =
> >cdns_pcie_detect_quiet_min_delay_set,
> >> + .set_outbound_region = cdns_pcie_set_outbound_region,
> >> + .set_outbound_region_for_normal_msg =
> >> +
> >cdns_pcie_set_outbound_region_for_normal_msg,
> >> + .reset_outbound_region = cdns_pcie_reset_outbound_region,
> >
> >How did you resolve Rob's comments?
> >
> >These were repeated I think three times finally with:
> >
> >"Please listen when I say we do not want the ops method used in other
> >drivers. "
> >
> >I think you just send the same ignoring previous discussion which is the
> >shortest way to get yourself NAKed.
> >
> >
>
> I was waiting to check whether there were additional comments on the approach, because this approach was taken
> based on earlier comments on the patches. Since we have not received any adverse comments from other
> maintainers on this, I will separate out the driver into separate drivers for the old and new architectures. The few common functions
> will be moved to a common file, to be used as library functions. There will be some repetition of
> code, but based on Rob's comments I believe that is fine.
>
I agree with Rob. We should really get rid of the callbacks and try to make the
common code a library.
- Mani
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 25+ messages in thread
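For readers skimming the thread, here is a rough sketch of the two shapes
being argued about. The ops assignments mirror the hunk quoted above and only
build with this series applied; the "library" variant (demo_host_setup()) is a
hypothetical rework, not code from any posted patch, and assumes the quoted
helpers are exported from the common code.

#include "pcie-cadence.h"

/*
 * v4 approach: the shared host helpers are reached through callbacks wired
 * into the platform ops table (a trimmed copy of the quoted hunk).
 */
static const struct cdns_pcie_ops demo_ops = {
	.host_init_root_port	= cdns_pcie_host_init_root_port,
	.set_outbound_region	= cdns_pcie_set_outbound_region,
};

/*
 * Requested approach: keep the legacy and HPA host drivers in separate
 * files and call the shared code directly as library functions, with no
 * ops indirection.
 */
static int demo_host_setup(struct cdns_pcie_rc *rc)
{
	int ret;

	ret = cdns_pcie_host_init_root_port(rc);
	if (ret)
		return ret;

	return cdns_pcie_host_init_address_translation(rc);
}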
* RE: [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller
2025-06-01 14:40 ` manivannan.sadhasivam
@ 2025-06-02 1:24 ` Manikandan Karunakaran Pillai
0 siblings, 0 replies; 25+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-06-02 1:24 UTC (permalink / raw)
To: manivannan.sadhasivam@linaro.org
Cc: Krzysztof Kozlowski, hans.zhang@cixtech.com, bhelgaas@google.com,
lpieralisi@kernel.org, kw@linux.com, robh@kernel.org,
krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab,
peter.chen@cixtech.com, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Hi,
The next patch version (v5), still a work in progress, will address Rob's comments by splitting the
drivers for the legacy (LGA) and High Performance Architecture (HPA) controllers, replacing the ops
method used in the current patches.
^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads:[~2025-06-02 2:05 UTC | newest]
Thread overview: 25+ messages -- links below jump to the message on this page --
2025-04-24 1:04 [PATCH v4 0/5] Enhance the PCIe controller driver hans.zhang
2025-04-24 1:04 ` [PATCH v4 1/5] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
2025-04-24 1:04 ` [PATCH v4 2/5] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang
2025-04-24 15:29 ` Conor Dooley
2025-04-24 15:30 ` Conor Dooley
2025-04-25 2:19 ` Manikandan Karunakaran Pillai
2025-04-25 14:48 ` Conor Dooley
2025-04-25 15:33 ` Hans Zhang
2025-04-25 16:21 ` Krzysztof Kozlowski
2025-04-25 16:47 ` Hans Zhang
2025-04-27 3:55 ` Manikandan Karunakaran Pillai
2025-04-27 19:08 ` Krzysztof Kozlowski
2025-04-25 2:17 ` Manikandan Karunakaran Pillai
2025-04-24 1:04 ` [PATCH v4 3/5] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
2025-04-24 3:36 ` Peter Chen (CIX)
2025-04-25 4:18 ` kernel test robot
2025-04-24 1:04 ` [PATCH v4 4/5] PCI: cadence: Add support for PCIe Endpoint " hans.zhang
2025-04-24 1:04 ` [PATCH v4 5/5] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
2025-04-25 6:01 ` kernel test robot
2025-04-25 16:27 ` Krzysztof Kozlowski
2025-04-25 16:51 ` Hans Zhang
2025-04-27 3:52 ` Manikandan Karunakaran Pillai
2025-06-01 14:40 ` manivannan.sadhasivam
2025-06-02 1:24 ` Manikandan Karunakaran Pillai
2025-04-25 16:24 ` [PATCH v4 0/5] Enhance the PCIe controller driver Krzysztof Kozlowski