* [PATCH v3 0/6] Enhance the PCIe controller driver @ 2025-04-11 10:36 hans.zhang 2025-04-11 10:36 ` [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang ` (5 more replies) 0 siblings, 6 replies; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Hans Zhang From: Hans Zhang <hans.zhang@cixtech.com> Enhances the exiting Cadence PCIe controller drivers to support HPA (High Performance Architecture) Cadence PCIe controllers. The patch set enhances the Cadence PCIe driver for HPA support. The "compatible" property in DTS is added with more enum to support the new platform architecture and the register maps that change with it. The driver read register and write register functions take the updated offset stored from the platform driver to access the registers. The driver now supports the legacy and HPA architecture, with the legacy code changes beingminimal. SoC related changes are not available in this patch set. The TI SoC continues to be supported with the changes incorporated. The changes are also in tune with how multiple platforms are supported in related drivers. The scripts/checkpatch.pl has been run on the patches with and without --strict. With the --strict option, 4 checks are generated on 1 patch (PATCH v3 3/6) of the series), which can be ignored. There are no code fixes required for these checks. The rest of the 'scripts/checkpatch.pl' is clean. The ./scripts/kernel-doc --none have been run on the changed files. The changes are tested on TI platforms. The legacy controller changes are tested on an TI J7200 EVM and HPA changes are planned for on an FPGA platform available within Cadence. The patch set has been version v3, though the earlier two versions had issues with threading. The previous submitted patch links is at https://lore.kernel.org/lkml/fc1c6ded-2246-4d09-90b4-c0a264962ab3@kernel.org/ Changes for v3: - Patch version v3 added to the subject - Use HPA tag for architecture descriptions - Remove bug related changes to be submitted later as a separate patch - Two patches merged from the last series to ensure readability to address the review comments - Fix several description related issues, coding style issues and some misleading comments - Remove cpu_addr_fixup() functions Manikandan K Pillai (6): dt-bindings: pci: cadence: Extend compatible for new RP configuration dt-bindings: pci: cadence: Extend compatible for new EP configurations PCI: cadence: Add header support for PCIe HPA controller PCI: cadence: Add support for PCIe Endpoint HPA controller PCI: cadence: Add callback functions for RP and EP controller PCI: cadence: Update support for TI J721e boards .../bindings/pci/cdns,cdns-pcie-ep.yaml | 6 +- .../bindings/pci/cdns,cdns-pcie-host.yaml | 6 +- drivers/pci/controller/cadence/pci-j721e.c | 12 + .../pci/controller/cadence/pcie-cadence-ep.c | 170 +++++++-- .../controller/cadence/pcie-cadence-host.c | 284 +++++++++++++-- .../controller/cadence/pcie-cadence-plat.c | 110 ++++++ drivers/pci/controller/cadence/pcie-cadence.c | 196 ++++++++++- drivers/pci/controller/cadence/pcie-cadence.h | 332 +++++++++++++++++- 8 files changed, 1054 insertions(+), 62 deletions(-) base-commit: a24588245776dafc227243a01bfbeb8a59bafba9 -- 2.47.1 ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration 2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang @ 2025-04-11 10:36 ` hans.zhang 2025-04-11 19:56 ` Rob Herring 2025-04-11 10:36 ` [PATCH v3 2/6] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang ` (4 subsequent siblings) 5 siblings, 1 reply; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang From: Manikandan K Pillai <mpillai@cadence.com> Document the compatible property for HPA (High Performance Architecture) PCIe controller RP configuration. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> --- .../devicetree/bindings/pci/cdns,cdns-pcie-host.yaml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml index a8190d9b100f..83a33c4c008f 100644 --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Cadence PCIe host controller maintainers: - - Tom Joseph <tjoseph@cadence.com> + - Manikandan K Pillai <mpillai@cadence.com> allOf: - $ref: cdns-pcie-host.yaml# properties: compatible: - const: cdns,cdns-pcie-host + enum: + - cdns,cdns-pcie-host + - cdns,cdns-pcie-hpa-host reg: maxItems: 2 -- 2.47.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration 2025-04-11 10:36 ` [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang @ 2025-04-11 19:56 ` Rob Herring 2025-04-14 3:05 ` Manikandan Karunakaran Pillai 0 siblings, 1 reply; 23+ messages in thread From: Rob Herring @ 2025-04-11 19:56 UTC (permalink / raw) To: hans.zhang Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On Fri, Apr 11, 2025 at 06:36:51PM +0800, hans.zhang@cixtech.com wrote: > From: Manikandan K Pillai <mpillai@cadence.com> > > Document the compatible property for HPA (High Performance Architecture) > PCIe controller RP configuration. > > Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> > Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> > --- > .../devicetree/bindings/pci/cdns,cdns-pcie-host.yaml | 6 ++++-- > 1 file changed, 4 insertions(+), 2 deletions(-) > > diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml > index a8190d9b100f..83a33c4c008f 100644 > --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml > +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml > @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# > title: Cadence PCIe host controller > > maintainers: > - - Tom Joseph <tjoseph@cadence.com> Why removing? What about all the other Cadence PCIe files? > + - Manikandan K Pillai <mpillai@cadence.com> > > allOf: > - $ref: cdns-pcie-host.yaml# > > properties: > compatible: > - const: cdns,cdns-pcie-host > + enum: > + - cdns,cdns-pcie-host > + - cdns,cdns-pcie-hpa-host > > reg: > maxItems: 2 > -- > 2.47.1 > ^ permalink raw reply [flat|nested] 23+ messages in thread
* RE: [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration 2025-04-11 19:56 ` Rob Herring @ 2025-04-14 3:05 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 23+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-14 3:05 UTC (permalink / raw) To: Rob Herring, hans.zhang@cixtech.com Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org, conor+dt@kernel.org, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org > >On Fri, Apr 11, 2025 at 06:36:51PM +0800, hans.zhang@cixtech.com wrote: >> From: Manikandan K Pillai <mpillai@cadence.com> >> >> Document the compatible property for HPA (High Performance Architecture) >> PCIe controller RP configuration. >> >> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> >> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> >> --- >> .../devicetree/bindings/pci/cdns,cdns-pcie-host.yaml | 6 ++++-- >> 1 file changed, 4 insertions(+), 2 deletions(-) >> >> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml >b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml >> index a8190d9b100f..83a33c4c008f 100644 >> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml >> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml >> @@ -7,14 +7,16 @@ $schema: >https://urldefense.com/v3/__http://devicetree.org/meta- >schemas/core.yaml*__;Iw!!EHscmS1ygiU1lA!GUdVym9DUn88yPquZ- >jhRAWxXdadkOpGE7fasG33EWZ0zULMY3fQe_xbMFKTh33x577gU6FU-Ko$ >> title: Cadence PCIe host controller >> >> maintainers: >> - - Tom Joseph <tjoseph@cadence.com> > >Why removing? What about all the other Cadence PCIe files? > Tom is not longer with Cadence and hence this change. Will submit a separate patch for other files >> + - Manikandan K Pillai <mpillai@cadence.com> >> >> allOf: >> - $ref: cdns-pcie-host.yaml# >> >> properties: >> compatible: >> - const: cdns,cdns-pcie-host >> + enum: >> + - cdns,cdns-pcie-host >> + - cdns,cdns-pcie-hpa-host >> >> reg: >> maxItems: 2 >> -- >> 2.47.1 >> ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 2/6] dt-bindings: pci: cadence: Extend compatible for new EP configurations 2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang 2025-04-11 10:36 ` [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang @ 2025-04-11 10:36 ` hans.zhang 2025-04-11 10:36 ` [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller hans.zhang ` (3 subsequent siblings) 5 siblings, 0 replies; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang From: Manikandan K Pillai <mpillai@cadence.com> Document the compatible property for the new generation(architecture) PCIe EP configuration. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> --- .../devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml index 98651ab22103..a7e404e4f690 100644 --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml @@ -7,14 +7,16 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Cadence PCIe EP Controller maintainers: - - Tom Joseph <tjoseph@cadence.com> + - Manikandan K Pillai <mpillai@cadence.com> allOf: - $ref: cdns-pcie-ep.yaml# properties: compatible: - const: cdns,cdns-pcie-ep + enum: + - cdns,cdns-pcie-ep + - cdns,cdns-pcie-hpa-ep reg: maxItems: 2 -- 2.47.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller 2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang 2025-04-11 10:36 ` [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang 2025-04-11 10:36 ` [PATCH v3 2/6] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang @ 2025-04-11 10:36 ` hans.zhang 2025-04-11 20:31 ` Rob Herring 2025-04-11 10:36 ` [PATCH v3 4/6] PCI: cadence: Add support for PCIe Endpoint " hans.zhang ` (2 subsequent siblings) 5 siblings, 1 reply; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang From: Manikandan K Pillai <mpillai@cadence.com> Add the required definitions for register addresses and register bits for the Cadence PCIe HPA controllers. Add the register bank offsets for different platform architecture and update the global platform data - platform architecture, EP or RP configuration and the correct values of register offsets for different register banks during the platform probe. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> Co-developed-by: Hans Zhang <hans.zhang@cixtech.com> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> --- .../controller/cadence/pcie-cadence-host.c | 13 +- .../controller/cadence/pcie-cadence-plat.c | 87 +++++ drivers/pci/controller/cadence/pcie-cadence.h | 320 +++++++++++++++++- 3 files changed, 410 insertions(+), 10 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c index 8af95e9da7ce..ce035eef0a5c 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-host.c +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c @@ -175,7 +175,7 @@ static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc) return ret; } -static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) +int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) { struct cdns_pcie *pcie = &rc->pcie; u32 value, ctrl; @@ -215,10 +215,10 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) return 0; } -static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, - enum cdns_pcie_rp_bar bar, - u64 cpu_addr, u64 size, - unsigned long flags) +int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags) { struct cdns_pcie *pcie = &rc->pcie; u32 addr0, addr1, aperture, value; @@ -428,7 +428,7 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) return 0; } -static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) +int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) { struct cdns_pcie *pcie = &rc->pcie; struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); @@ -536,7 +536,6 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) return -ENOMEM; pcie = &rc->pcie; - pcie->is_rc = true; rc->vendor_id = 0xffff; of_property_read_u32(np, "vendor-id", &rc->vendor_id); diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c index 0456845dabb9..b24176d4df1f 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c @@ -24,6 +24,15 @@ struct cdns_plat_pcie { struct cdns_plat_pcie_of_data { bool is_rc; + bool is_hpa; + u32 
ip_reg_bank_off; + u32 ip_cfg_ctrl_reg_off; + u32 axi_mstr_common_off; + u32 axi_slave_off; + u32 axi_master_off; + u32 axi_hls_off; + u32 axi_ras_off; + u32 axi_dti_off; }; static const struct of_device_id cdns_plat_pcie_of_match[]; @@ -72,6 +81,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) rc = pci_host_bridge_priv(bridge); rc->pcie.dev = dev; rc->pcie.ops = &cdns_plat_ops; + rc->pcie.is_hpa = data->is_hpa; + rc->pcie.is_rc = data->is_rc; + + /* Store all the register bank offsets */ + rc->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; + rc->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; + rc->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; + rc->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; + rc->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; + rc->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; + rc->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; + rc->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; + cdns_plat_pcie->pcie = &rc->pcie; ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); @@ -99,6 +121,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) ep->pcie.dev = dev; ep->pcie.ops = &cdns_plat_ops; + ep->pcie.is_hpa = data->is_hpa; + ep->pcie.is_rc = data->is_rc; + + /* Store all the register bank offset */ + ep->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; + ep->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; + ep->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; + ep->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; + ep->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; + ep->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; + ep->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; + ep->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; + cdns_plat_pcie->pcie = &ep->pcie; ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); @@ -150,10 +185,54 @@ static void cdns_plat_pcie_shutdown(struct platform_device *pdev) static const struct cdns_plat_pcie_of_data cdns_plat_pcie_host_of_data = { .is_rc = true, + .is_hpa = false, + .ip_reg_bank_off = 0x0, + .ip_cfg_ctrl_reg_off = 0x0, + .axi_mstr_common_off = 0x0, + .axi_slave_off = 0x0, + .axi_master_off = 0x0, + .axi_hls_off = 0x0, + .axi_ras_off = 0x0, + .axi_dti_off = 0x0, }; static const struct cdns_plat_pcie_of_data cdns_plat_pcie_ep_of_data = { .is_rc = false, + .is_hpa = false, + .ip_reg_bank_off = 0x0, + .ip_cfg_ctrl_reg_off = 0x0, + .axi_mstr_common_off = 0x0, + .axi_slave_off = 0x0, + .axi_master_off = 0x0, + .axi_hls_off = 0x0, + .axi_ras_off = 0x0, + .axi_dti_off = 0x0, +}; + +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_host_of_data = { + .is_rc = true, + .is_hpa = true, + .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK, + .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK, + .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON, + .axi_slave_off = CDNS_PCIE_HPA_AXI_SLAVE, + .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER, + .axi_hls_off = 0, + .axi_ras_off = 0, + .axi_dti_off = 0, +}; + +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_ep_of_data = { + .is_rc = false, + .is_hpa = true, + .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK, + .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK, + .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON, + .axi_slave_off = 
CDNS_PCIE_HPA_AXI_SLAVE, + .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER, + .axi_hls_off = 0, + .axi_ras_off = 0, + .axi_dti_off = 0, }; static const struct of_device_id cdns_plat_pcie_of_match[] = { @@ -165,6 +244,14 @@ static const struct of_device_id cdns_plat_pcie_of_match[] = { .compatible = "cdns,cdns-pcie-ep", .data = &cdns_plat_pcie_ep_of_data, }, + { + .compatible = "cdns,cdns-pcie-hpa-host", + .data = &cdns_plat_pcie_hpa_host_of_data, + }, + { + .compatible = "cdns,cdns-pcie-hpa-ep", + .data = &cdns_plat_pcie_hpa_ep_of_data, + }, {}, }; diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h index 39ee9945c903..a39077d64d1d 100644 --- a/drivers/pci/controller/cadence/pcie-cadence.h +++ b/drivers/pci/controller/cadence/pcie-cadence.h @@ -218,6 +218,172 @@ (((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \ CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) +/* HPA (High Performance Architecture) PCIe controller register */ +#define CDNS_PCIE_HPA_IP_REG_BANK 0x01000000 +#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x01003c00 +#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0x01020000 + +/* Address Translation Registers (HPA) */ +#define CDNS_PCIE_HPA_AXI_SLAVE 0x03000000 +#define CDNS_PCIE_HPA_AXI_MASTER 0x03002000 + +/* Root port register base address */ +#define CDNS_PCIE_HPA_RP_BASE 0x0 + +#define CDNS_PCIE_HPA_LM_ID 0x1420 + +/* Endpoint Function BARs (HPA) Configuration Registers */ +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \ + (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \ + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn)) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn)) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + 0x04) +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \ + (((bar) < BAR_3) ? 
CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \ + CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn)) +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + 0x08) +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + 0x0c) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) (GENMASK(9, 4) << ((f) * 10)) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \ + (((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) (GENMASK(3, 0) << ((f) * 10)) +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \ + (((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))) + +/* Endpoint Function Configuration Register */ +#define CDNS_PCIE_HPA_LM_EP_FUNC_CFG 0x02c0 + +/* Root Complex BAR Configuration Register */ +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG 0x14 +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK GENMASK(9, 4) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE(a) \ + FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK, a) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK GENMASK(3, 0) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(c) \ + FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK GENMASK(19, 14) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE(a) \ + FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK, a) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK GENMASK(13, 10) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(c) \ + FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK, c) + +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE BIT(20) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS BIT(21) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE BIT(22) +#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS BIT(23) + +/* BAR control values applicable to both Endpoint Function and Root Complex */ +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED 0x0 +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS 0x3 +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS 0x1 +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x9 +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS 0x5 +#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0xD + +#define HPA_LM_RC_BAR_CFG_CTRL_DISABLED(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \ + (CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << ((bar) * 10)) +#define HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture) \ + (((aperture) - 7) << ((bar) * 10)) + +#define CDNS_PCIE_HPA_LM_PTM_CTRL 0x0520 +#define CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN BIT(17) + +/* Root Port Registers PCI config space (HPA) for root port function */ +#define CDNS_PCIE_HPA_RP_CAP_OFFSET 0xC0 + +/* Region r Outbound AXI to PCIe Address Translation Register 0 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) (0x1010 + ((r) & 0x1f) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0) +#define 
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, \ + ((nbits) - 1)) +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(23, 16) +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn) +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(31, 24) +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus) + +/* Region r Outbound AXI to PCIe Address Translation Register 1 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r) (0x1014 + ((r) & 0x1f) * 0x0080) + +/* Region r Outbound PCIe Descriptor Register 0 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r) (0x1008 + ((r) & 0x1f) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(28, 24) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x0) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x2) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x4) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x5) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10) + +/* Region r Outbound PCIe Descriptor Register 1 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r) (0x100c + ((r) & 0x1f) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK GENMASK(31, 24) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(bus) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK GENMASK(23, 16) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(devfn) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn) + +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) (0x1018 + ((r) & 0x1f) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS BIT(26) +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN BIT(25) + +/* Region r AXI Region Base Address Register 0 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r) (0x1000 + ((r) & 0x1f) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0) +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK, ((nbits) - 1)) + +/* Region r AXI Region Base Address Register 1 */ +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r) (0x1004 + ((r) & 0x1f) * 0x0080) + +/* Root Port BAR Inbound PCIe to AXI Address Translation Register */ +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar) (((bar) * 0x0008)) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK, ((nbits) - 1)) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar) (0x04 + ((bar) * 0x0008)) + +/* AXI link down register */ +#define CDNS_PCIE_HPA_AT_LINKDOWN 0x04 + +/* + * Physical Layer Configuration Register 0 + * This register contains the parameters required for functional setup + * of Physical Layer. 
+ */ +#define CDNS_PCIE_HPA_PHY_LAYER_CFG0 0x0400 +#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK GENMASK(26, 24) +#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay) \ + FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay) +#define CDNS_PCIE_HPA_LINK_TRNG_EN_MASK GENMASK(27, 27) + +#define CDNS_PCIE_HPA_PHY_DBG_STS_REG0 0x0420 + +#define CDNS_PCIE_HPA_RP_MAX_IB 0x3 +#define CDNS_PCIE_HPA_MAX_OB 15 + +/* Endpoint Function BAR Inbound PCIe to AXI Address Translation Register (HPA) */ +#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) (((fn) * 0x0040) + ((bar) * 0x0008)) +#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) (0x4 + ((fn) * 0x0040) + ((bar) * 0x0008)) + enum cdns_pcie_rp_bar { RP_BAR_UNDEFINED = -1, RP_BAR0, @@ -249,6 +415,7 @@ struct cdns_pcie_rp_ib_bar { #define CDNS_PCIE_MSG_DATA BIT(16) struct cdns_pcie; +struct cdns_pcie_rc; enum cdns_pcie_msg_code { MSG_CODE_ASSERT_INTA = 0x20, @@ -281,11 +448,60 @@ enum cdns_pcie_msg_routing { MSG_ROUTING_GATHER, }; +enum cdns_pcie_reg_bank { + REG_BANK_RP, + REG_BANK_IP_REG, + REG_BANK_IP_CFG_CTRL_REG, + REG_BANK_AXI_MASTER_COMMON, + REG_BANK_AXI_MASTER, + REG_BANK_AXI_SLAVE, + REG_BANK_AXI_HLS, + REG_BANK_AXI_RAS, + REG_BANK_AXI_DTI, + REG_BANKS_MAX, +}; + struct cdns_pcie_ops { int (*start_link)(struct cdns_pcie *pcie); void (*stop_link)(struct cdns_pcie *pcie); bool (*link_up)(struct cdns_pcie *pcie); u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr); + int (*host_init_root_port)(struct cdns_pcie_rc *rc); + int (*host_bar_ib_config)(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); + int (*host_init_address_translation)(struct cdns_pcie_rc *rc); + void (*detect_quiet_min_delay_set)(struct cdns_pcie *pcie); + void (*set_outbound_region)(struct cdns_pcie *pcie, u8 busnr, u8 fn, + u32 r, bool is_io, u64 cpu_addr, + u64 pci_addr, size_t size); + void (*set_outbound_region_for_normal_msg)(struct cdns_pcie *pcie, + u8 busnr, u8 fn, u32 r, + u64 cpu_addr); + void (*reset_outbound_region)(struct cdns_pcie *pcie, u32 r); +}; + +/** + * struct cdns_pcie_reg_offset - Register bank offset for a platform + * @ip_reg_bank_off: ip register bank start offset + * @ip_cfg_ctrl_reg_off: ip config control register start offset + * @axi_mstr_common_off: AXI master common register start + * @axi_slave_off: AXI slave offset start + * @axi_master_off: AXI master offset start + * @axi_hls_off: AXI HLS offset start + * @axi_ras_off: AXI RAS offset + * @axi_dti_off: AXI DTI offset + */ +struct cdns_pcie_reg_offset { + u32 ip_reg_bank_off; + u32 ip_cfg_ctrl_reg_off; + u32 axi_mstr_common_off; + u32 axi_slave_off; + u32 axi_master_off; + u32 axi_hls_off; + u32 axi_ras_off; + u32 axi_dti_off; }; /** @@ -294,21 +510,25 @@ struct cdns_pcie_ops { * @mem_res: start/end offsets in the physical system memory to map PCI accesses * @dev: PCIe controller * @is_rc: tell whether the PCIe controller mode is Root Complex or Endpoint. 
+ * @is_hpa: indicates if the architecture is HPA * @phy_count: number of supported PHY devices * @phy: list of pointers to specific PHY control blocks * @link: list of pointers to corresponding device link representations * @ops: Platform-specific ops to control various inputs from Cadence PCIe * wrapper + * @cdns_pcie_reg_offsets: Register bank offsets for different SoC */ struct cdns_pcie { void __iomem *reg_base; struct resource *mem_res; struct device *dev; bool is_rc; + bool is_hpa; int phy_count; struct phy **phy; struct device_link **link; const struct cdns_pcie_ops *ops; + struct cdns_pcie_reg_offset cdns_pcie_reg_offsets; }; /** @@ -386,6 +606,41 @@ struct cdns_pcie_ep { unsigned int quirk_disable_flr:1; }; +static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, + enum cdns_pcie_reg_bank bank) +{ + u32 offset = 0x0; + + switch (bank) { + case REG_BANK_IP_REG: + offset = pcie->cdns_pcie_reg_offsets.ip_reg_bank_off; + break; + case REG_BANK_IP_CFG_CTRL_REG: + offset = pcie->cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off; + break; + case REG_BANK_AXI_MASTER_COMMON: + offset = pcie->cdns_pcie_reg_offsets.axi_mstr_common_off; + break; + case REG_BANK_AXI_MASTER: + offset = pcie->cdns_pcie_reg_offsets.axi_master_off; + break; + case REG_BANK_AXI_SLAVE: + offset = pcie->cdns_pcie_reg_offsets.axi_slave_off; + break; + case REG_BANK_AXI_HLS: + offset = pcie->cdns_pcie_reg_offsets.axi_hls_off; + break; + case REG_BANK_AXI_RAS: + offset = pcie->cdns_pcie_reg_offsets.axi_ras_off; + break; + case REG_BANK_AXI_DTI: + offset = pcie->cdns_pcie_reg_offsets.axi_dti_off; + break; + default: + break; + }; + return offset; +} /* Register access */ static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value) @@ -398,6 +653,27 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg) return readl(pcie->reg_base + reg); } +static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie, + enum cdns_pcie_reg_bank bank, + u32 reg, + u32 value) +{ + u32 offset = cdns_reg_bank_to_off(pcie, bank); + + reg += offset; + writel(value, pcie->reg_base + reg); +} + +static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie, + enum cdns_pcie_reg_bank bank, + u32 reg) +{ + u32 offset = cdns_reg_bank_to_off(pcie, bank); + + reg += offset; + return readl(pcie->reg_base + reg); +} + static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) { void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); @@ -444,6 +720,9 @@ static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie, { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg; + cdns_pcie_write_sz(addr, 0x1, value); } @@ -452,6 +731,9 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie, { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg; + cdns_pcie_write_sz(addr, 0x2, value); } @@ -459,6 +741,9 @@ static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg) { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg; + return cdns_pcie_read_sz(addr, 0x2); } @@ -523,29 +808,52 @@ static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie) int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc); int cdns_pcie_host_init(struct cdns_pcie_rc *rc); int cdns_pcie_host_setup(struct cdns_pcie_rc *rc); +int cdns_pcie_host_init_address_translation(struct 
cdns_pcie_rc *rc); void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where); +int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc); +int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); #else static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) { return 0; } - static inline int cdns_pcie_host_init(struct cdns_pcie_rc *rc) { return 0; } - static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) { return 0; } - static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where) { return NULL; } +static inline void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn + int where) +{ + return NULL; +} +static inline int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc) +{ + return 0; +} +static inline int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags) +{ + return 0; +} +static inline int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc) +{ + return 0; +} #endif #ifdef CONFIG_PCIE_CADENCE_EP @@ -571,6 +879,12 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r); void cdns_pcie_disable_phy(struct cdns_pcie *pcie); int cdns_pcie_enable_phy(struct cdns_pcie *pcie); int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie); +void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie); +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, u32 r, + bool is_io, u64 cpu_addr, u64 pci_addr, size_t size); +void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, + u8 busnr, u8 fn, u32 r, u64 cpu_addr); +void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r); extern const struct dev_pm_ops cdns_pcie_pm_ops; #endif /* _PCIE_CADENCE_H */ -- 2.47.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller 2025-04-11 10:36 ` [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller hans.zhang @ 2025-04-11 20:31 ` Rob Herring 2025-04-12 15:19 ` Hans Zhang 0 siblings, 1 reply; 23+ messages in thread From: Rob Herring @ 2025-04-11 20:31 UTC (permalink / raw) To: hans.zhang Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On Fri, Apr 11, 2025 at 06:36:53PM +0800, hans.zhang@cixtech.com wrote: > From: Manikandan K Pillai <mpillai@cadence.com> > > Add the required definitions for register addresses and register bits > for the Cadence PCIe HPA controllers. Add the register bank offsets > for different platform architecture and update the global platform > data - platform architecture, EP or RP configuration and the correct > values of register offsets for different register banks during the > platform probe. > > Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> > Co-developed-by: Hans Zhang <hans.zhang@cixtech.com> > Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> > --- > .../controller/cadence/pcie-cadence-host.c | 13 +- > .../controller/cadence/pcie-cadence-plat.c | 87 +++++ > drivers/pci/controller/cadence/pcie-cadence.h | 320 +++++++++++++++++- > 3 files changed, 410 insertions(+), 10 deletions(-) > > diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c > index 8af95e9da7ce..ce035eef0a5c 100644 > --- a/drivers/pci/controller/cadence/pcie-cadence-host.c > +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c > @@ -175,7 +175,7 @@ static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc) > return ret; > } > > -static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) > +int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) > { > struct cdns_pcie *pcie = &rc->pcie; > u32 value, ctrl; > @@ -215,10 +215,10 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) > return 0; > } > > -static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, > - enum cdns_pcie_rp_bar bar, > - u64 cpu_addr, u64 size, > - unsigned long flags) > +int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, > + enum cdns_pcie_rp_bar bar, > + u64 cpu_addr, u64 size, > + unsigned long flags) > { > struct cdns_pcie *pcie = &rc->pcie; > u32 addr0, addr1, aperture, value; > @@ -428,7 +428,7 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) > return 0; > } > > -static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) > +int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) > { > struct cdns_pcie *pcie = &rc->pcie; > struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); > @@ -536,7 +536,6 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) > return -ENOMEM; > > pcie = &rc->pcie; > - pcie->is_rc = true; > > rc->vendor_id = 0xffff; > of_property_read_u32(np, "vendor-id", &rc->vendor_id); > diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c > index 0456845dabb9..b24176d4df1f 100644 > --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c > +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c > @@ -24,6 +24,15 @@ struct cdns_plat_pcie { > > struct cdns_plat_pcie_of_data { > bool is_rc; > + bool is_hpa; These can be bitfields (e.g. "is_rc: 1"). 
> + u32 ip_reg_bank_off; > + u32 ip_cfg_ctrl_reg_off; > + u32 axi_mstr_common_off; > + u32 axi_slave_off; > + u32 axi_master_off; > + u32 axi_hls_off; > + u32 axi_ras_off; > + u32 axi_dti_off; > }; > > static const struct of_device_id cdns_plat_pcie_of_match[]; > @@ -72,6 +81,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) > rc = pci_host_bridge_priv(bridge); > rc->pcie.dev = dev; > rc->pcie.ops = &cdns_plat_ops; > + rc->pcie.is_hpa = data->is_hpa; > + rc->pcie.is_rc = data->is_rc; > + > + /* Store all the register bank offsets */ > + rc->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; > + rc->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; > + rc->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; Why not just store the match data ptr instead of having 2 copies of the information? > + > cdns_plat_pcie->pcie = &rc->pcie; > > ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); > @@ -99,6 +121,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) > > ep->pcie.dev = dev; > ep->pcie.ops = &cdns_plat_ops; > + ep->pcie.is_hpa = data->is_hpa; > + ep->pcie.is_rc = data->is_rc; > + > + /* Store all the register bank offset */ > + ep->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; > + ep->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; > + ep->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; > + > cdns_plat_pcie->pcie = &ep->pcie; > > ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); > @@ -150,10 +185,54 @@ static void cdns_plat_pcie_shutdown(struct platform_device *pdev) > > static const struct cdns_plat_pcie_of_data cdns_plat_pcie_host_of_data = { > .is_rc = true, > + .is_hpa = false, > + .ip_reg_bank_off = 0x0, > + .ip_cfg_ctrl_reg_off = 0x0, > + .axi_mstr_common_off = 0x0, > + .axi_slave_off = 0x0, > + .axi_master_off = 0x0, > + .axi_hls_off = 0x0, > + .axi_ras_off = 0x0, > + .axi_dti_off = 0x0, You can omit anything initialized to 0. 
> }; > > static const struct cdns_plat_pcie_of_data cdns_plat_pcie_ep_of_data = { > .is_rc = false, > + .is_hpa = false, > + .ip_reg_bank_off = 0x0, > + .ip_cfg_ctrl_reg_off = 0x0, > + .axi_mstr_common_off = 0x0, > + .axi_slave_off = 0x0, > + .axi_master_off = 0x0, > + .axi_hls_off = 0x0, > + .axi_ras_off = 0x0, > + .axi_dti_off = 0x0, > +}; > + > +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_host_of_data = { > + .is_rc = true, > + .is_hpa = true, > + .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK, > + .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK, > + .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON, > + .axi_slave_off = CDNS_PCIE_HPA_AXI_SLAVE, > + .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER, > + .axi_hls_off = 0, > + .axi_ras_off = 0, > + .axi_dti_off = 0, > +}; > + > +static const struct cdns_plat_pcie_of_data cdns_plat_pcie_hpa_ep_of_data = { > + .is_rc = false, > + .is_hpa = true, > + .ip_reg_bank_off = CDNS_PCIE_HPA_IP_REG_BANK, > + .ip_cfg_ctrl_reg_off = CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK, > + .axi_mstr_common_off = CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON, > + .axi_slave_off = CDNS_PCIE_HPA_AXI_SLAVE, > + .axi_master_off = CDNS_PCIE_HPA_AXI_MASTER, > + .axi_hls_off = 0, > + .axi_ras_off = 0, > + .axi_dti_off = 0, > }; ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller 2025-04-11 20:31 ` Rob Herring @ 2025-04-12 15:19 ` Hans Zhang 0 siblings, 0 replies; 23+ messages in thread From: Hans Zhang @ 2025-04-12 15:19 UTC (permalink / raw) To: Rob Herring Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On 2025/4/12 04:31, Rob Herring wrote: >> diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c >> index 0456845dabb9..b24176d4df1f 100644 >> --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c >> +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c >> @@ -24,6 +24,15 @@ struct cdns_plat_pcie { >> >> struct cdns_plat_pcie_of_data { >> bool is_rc; >> + bool is_hpa; > > These can be bitfields (e.g. "is_rc: 1"). > Hi Rob, Thanks your for reply. Will change. >> + u32 ip_reg_bank_off; >> + u32 ip_cfg_ctrl_reg_off; >> + u32 axi_mstr_common_off; >> + u32 axi_slave_off; >> + u32 axi_master_off; >> + u32 axi_hls_off; >> + u32 axi_ras_off; >> + u32 axi_dti_off; >> }; >> >> static const struct of_device_id cdns_plat_pcie_of_match[]; >> @@ -72,6 +81,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) >> rc = pci_host_bridge_priv(bridge); >> rc->pcie.dev = dev; >> rc->pcie.ops = &cdns_plat_ops; >> + rc->pcie.is_hpa = data->is_hpa; >> + rc->pcie.is_rc = data->is_rc; >> + >> + /* Store all the register bank offsets */ >> + rc->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; >> + rc->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; >> + rc->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; > > Why not just store the match data ptr instead of having 2 copies of the > information? Will change. 
> >> + >> cdns_plat_pcie->pcie = &rc->pcie; >> >> ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); >> @@ -99,6 +121,19 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev) >> >> ep->pcie.dev = dev; >> ep->pcie.ops = &cdns_plat_ops; >> + ep->pcie.is_hpa = data->is_hpa; >> + ep->pcie.is_rc = data->is_rc; >> + >> + /* Store all the register bank offset */ >> + ep->pcie.cdns_pcie_reg_offsets.ip_reg_bank_off = data->ip_reg_bank_off; >> + ep->pcie.cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off = data->ip_cfg_ctrl_reg_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_mstr_common_off = data->axi_mstr_common_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_master_off = data->axi_master_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_slave_off = data->axi_slave_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_hls_off = data->axi_hls_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_ras_off = data->axi_ras_off; >> + ep->pcie.cdns_pcie_reg_offsets.axi_dti_off = data->axi_dti_off; >> + >> cdns_plat_pcie->pcie = &ep->pcie; >> >> ret = cdns_pcie_init_phy(dev, cdns_plat_pcie->pcie); >> @@ -150,10 +185,54 @@ static void cdns_plat_pcie_shutdown(struct platform_device *pdev) >> >> static const struct cdns_plat_pcie_of_data cdns_plat_pcie_host_of_data = { >> .is_rc = true, >> + .is_hpa = false, >> + .ip_reg_bank_off = 0x0, >> + .ip_cfg_ctrl_reg_off = 0x0, >> + .axi_mstr_common_off = 0x0, >> + .axi_slave_off = 0x0, >> + .axi_master_off = 0x0, >> + .axi_hls_off = 0x0, >> + .axi_ras_off = 0x0, >> + .axi_dti_off = 0x0, > > You can omit anything initialized to 0. Will change. Best regards, Hans ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v3 4/6] PCI: cadence: Add support for PCIe Endpoint HPA controller 2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang ` (2 preceding siblings ...) 2025-04-11 10:36 ` [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller hans.zhang @ 2025-04-11 10:36 ` hans.zhang 2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang 2025-04-11 10:36 ` [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards hans.zhang 5 siblings, 0 replies; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang From: Manikandan K Pillai <mpillai@cadence.com> Add support for the Cadence PCIe endpoint HPA controller by adding the required functions based on the HPA registers and register bit definitions. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> Co-developed-by: Hans Zhang <hans.zhang@cixtech.com> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> --- .../pci/controller/cadence/pcie-cadence-ep.c | 141 +++++++++++++++++- 1 file changed, 136 insertions(+), 5 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index 599ec4b1223e..f3f956fa116b 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -568,7 +568,11 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) * BIT(0) is hardwired to 1, hence function 0 is always enabled * and can't be disabled anyway. */ - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); + if (pcie->is_hpa) + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc->function_num_map); + else + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); /* * Next function field in ARI_CAP_AND_CTR register for last function @@ -605,6 +609,115 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) return 0; } +static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, + struct pci_epf_bar *epf_bar) +{ + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); + struct cdns_pcie_epf *epf = &ep->epf[fn]; + struct cdns_pcie *pcie = &ep->pcie; + dma_addr_t bar_phys = epf_bar->phys_addr; + enum pci_barno bar = epf_bar->barno; + int flags = epf_bar->flags; + u32 addr0, addr1, reg, cfg, b, aperture, ctrl; + u64 sz; + + /* BAR size is 2^(aperture + 7) */ + sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE); + + /* + * roundup_pow_of_two() returns an unsigned long, which is not suited + * for 64bit values. + */ + sz = 1ULL << fls64(sz - 1); + + /* 128B -> 0, 256B -> 1, 512B -> 2, ... 
*/ + aperture = ilog2(sz) - 7; + + if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) { + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS; + } else { + bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH); + bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64); + + if (is_64bits && (bar & 1)) + return -EINVAL; + + if (is_64bits && is_prefetch) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS; + else if (is_prefetch) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS; + else if (is_64bits) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS; + else + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS; + } + + addr0 = lower_32_bits(bar_phys); + addr1 = upper_32_bits(bar_phys); + + if (vfn == 1) + reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn); + else + reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn); + b = (bar < BAR_4) ? bar : bar - BAR_4; + + if (vfn == 0 || vfn == 1) { + cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg); + cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)); + cfg |= (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl)); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg); + } + + fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), addr1); + + if (vfn > 0) + epf = &epf->epf[vfn - 1]; + epf->epf_bar[bar] = epf_bar; + + return 0; +} + +static void cdns_pcie_hpa_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, + struct pci_epf_bar *epf_bar) +{ + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); + struct cdns_pcie_epf *epf = &ep->epf[fn]; + struct cdns_pcie *pcie = &ep->pcie; + enum pci_barno bar = epf_bar->barno; + u32 reg, cfg, b, ctrl; + + if (vfn == 1) + reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn); + else + reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn); + b = (bar < BAR_4) ? 
bar : bar - BAR_4; + + if (vfn == 0 || vfn == 1) { + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED; + cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg); + cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)); + cfg |= CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg); + } + + fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0); + + if (vfn > 0) + epf = &epf->epf[vfn - 1]; + epf->epf_bar[bar] = NULL; +} + static const struct pci_epc_features cdns_pcie_epc_vf_features = { .linkup_notifier = false, .msi_capable = true, @@ -644,6 +757,21 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = { .get_features = cdns_pcie_ep_get_features, }; +static const struct pci_epc_ops cdns_pcie_hpa_epc_ops = { + .write_header = cdns_pcie_ep_write_header, + .set_bar = cdns_pcie_hpa_ep_set_bar, + .clear_bar = cdns_pcie_hpa_ep_clear_bar, + .map_addr = cdns_pcie_ep_map_addr, + .unmap_addr = cdns_pcie_ep_unmap_addr, + .set_msi = cdns_pcie_ep_set_msi, + .get_msi = cdns_pcie_ep_get_msi, + .set_msix = cdns_pcie_ep_set_msix, + .get_msix = cdns_pcie_ep_get_msix, + .raise_irq = cdns_pcie_ep_raise_irq, + .map_msi_irq = cdns_pcie_ep_map_msi_irq, + .start = cdns_pcie_ep_start, + .get_features = cdns_pcie_ep_get_features, +}; int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) { @@ -681,10 +809,13 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) if (!ep->ob_addr) return -ENOMEM; - /* Disable all but function 0 (anyway BIT(0) is hardwired to 1). */ - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0)); - - epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops); + if (pcie->is_hpa) { + epc = devm_pci_epc_create(dev, &cdns_pcie_hpa_epc_ops); + } else { + /* Disable all but function 0 (anyway BIT(0) is hardwired to 1) */ + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0)); + epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops); + } if (IS_ERR(epc)) { dev_err(dev, "failed to create epc device\n"); return PTR_ERR(epc); -- 2.47.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang ` (3 preceding siblings ...) 2025-04-11 10:36 ` [PATCH v3 4/6] PCI: cadence: Add support for PCIe Endpoint " hans.zhang @ 2025-04-11 10:36 ` hans.zhang 2025-04-11 20:24 ` Rob Herring ` (3 more replies) 2025-04-11 10:36 ` [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards hans.zhang 5 siblings, 4 replies; 23+ messages in thread From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw) To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang From: Manikandan K Pillai <mpillai@cadence.com> Add support for the Cadence PCIe HPA controller by adding the required callback functions. Update the common functions for RP and EP configuration. Invoke the relevant callback functions for platform probe of PCIe controller using the callback function. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> Co-developed-by: Hans Zhang <hans.zhang@cixtech.com> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> --- .../pci/controller/cadence/pcie-cadence-ep.c | 29 +- .../controller/cadence/pcie-cadence-host.c | 271 ++++++++++++++++-- .../controller/cadence/pcie-cadence-plat.c | 23 ++ drivers/pci/controller/cadence/pcie-cadence.c | 196 ++++++++++++- drivers/pci/controller/cadence/pcie-cadence.h | 12 + 5 files changed, 488 insertions(+), 43 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index f3f956fa116b..f4961c760434 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -192,7 +192,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, } fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); - cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); + pcie->ops->set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); set_bit(r, &ep->ob_region_map); ep->ob_addr[r] = addr; @@ -214,7 +214,7 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, if (r == ep->max_regions - 1) return; - cdns_pcie_reset_outbound_region(pcie, r); + pcie->ops->reset_outbound_region(pcie, r); ep->ob_addr[r] = 0; clear_bit(r, &ep->ob_region_map); @@ -329,8 +329,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx, if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || ep->irq_pci_fn != fn)) { /* First region was reserved for IRQ writes. */ - cdns_pcie_set_outbound_region_for_normal_msg(pcie, 0, fn, 0, - ep->irq_phys_addr); + pcie->ops->set_outbound_region_for_normal_msg(pcie, 0, fn, 0, ep->irq_phys_addr); ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; ep->irq_pci_fn = fn; } @@ -411,11 +410,11 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || ep->irq_pci_fn != fn)) { /* First region was reserved for IRQ writes. 
*/ - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, - false, - ep->irq_phys_addr, - pci_addr & ~pci_addr_mask, - pci_addr_mask + 1); + pcie->ops->set_outbound_region(pcie, 0, fn, 0, + false, + ep->irq_phys_addr, + pci_addr & ~pci_addr_mask, + pci_addr_mask + 1); ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); ep->irq_pci_fn = fn; } @@ -514,11 +513,11 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, if (ep->irq_pci_addr != (msg_addr & ~pci_addr_mask) || ep->irq_pci_fn != fn) { /* First region was reserved for IRQ writes. */ - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, - false, - ep->irq_phys_addr, - msg_addr & ~pci_addr_mask, - pci_addr_mask + 1); + pcie->ops->set_outbound_region(pcie, 0, fn, 0, + false, + ep->irq_phys_addr, + msg_addr & ~pci_addr_mask, + pci_addr_mask + 1); ep->irq_pci_addr = (msg_addr & ~pci_addr_mask); ep->irq_pci_fn = fn; } @@ -869,7 +868,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) set_bit(0, &ep->ob_region_map); if (ep->quirk_detect_quiet_flag) - cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); + pcie->ops->detect_quiet_min_delay_set(&ep->pcie); spin_lock_init(&ep->lock); diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c index ce035eef0a5c..c7066ea3b9e8 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-host.c +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c @@ -60,10 +60,7 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, /* Configuration Type 0 or Type 1 access. */ desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID | CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0); - /* - * The bus number was already set once for all in desc1 by - * cdns_pcie_host_init_address_translation(). - */ + if (busn == bridge->busnr + 1) desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; else @@ -73,12 +70,81 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, return rc->cfg_base + (where & 0xfff); } +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, + int where) +{ + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); + struct cdns_pcie *pcie = &rc->pcie; + unsigned int busn = bus->number; + u32 addr0, desc0, desc1, ctrl0; + u32 regval; + + if (pci_is_root_bus(bus)) { + /* + * Only the root port (devfn == 0) is connected to this bus. + * All other PCI devices are behind some bridge hence on another + * bus. + */ + if (devfn) + return NULL; + + return pcie->reg_base + (where & 0xfff); + } + + /* + * Clear AXI link-down status + */ + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, + (regval & GENMASK(0, 0))); + + desc1 = 0; + ctrl0 = 0; + + /* + * Update Output registers for AXI region 0. 
+ */ + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0); + + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + + if (busn == bridge->busnr + 1) + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; + else + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); + + return rc->cfg_base + (where & 0xfff); +} + static struct pci_ops cdns_pcie_host_ops = { .map_bus = cdns_pci_map_bus, .read = pci_generic_config_read, .write = pci_generic_config_write, }; +static struct pci_ops cdns_pcie_hpa_host_ops = { + .map_bus = cdns_pci_hpa_map_bus, + .read = pci_generic_config_read, + .write = pci_generic_config_write, +}; + static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie) { u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET; @@ -154,8 +220,14 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie) { u32 val; - val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL); - cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN); + if (!pcie->is_hpa) { + val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL); + cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN); + } else { + val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL, + val | CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN); + } } static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc) @@ -340,8 +412,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, */ bar = cdns_pcie_host_find_min_bar(rc, size); if (bar != RP_BAR_UNDEFINED) { - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, - size, flags); + ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, + size, flags); if (ret) dev_err(dev, "IB BAR: %d config failed\n", bar); return ret; @@ -366,8 +438,7 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, } winsize = bar_max_size[bar]; - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize, - flags); + ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, winsize, flags); if (ret) { dev_err(dev, "IB BAR: %d config failed\n", bar); return ret; @@ -408,8 +479,8 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) if (list_empty(&bridge->dma_ranges)) { of_property_read_u32(np, "cdns,no-bar-match-nbits", &no_bar_nbits); - err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0, - (u64)1 << no_bar_nbits, 0); + err = pcie->ops->host_bar_ib_config(rc, RP_NO_BAR, 0x0, + (u64)1 << no_bar_nbits, 0); if (err) dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR); return err; @@ -467,17 +538,159 @@ int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) u64 pci_addr = res->start - entry->offset; if (resource_type(res) == IORESOURCE_IO) - 
cdns_pcie_set_outbound_region(pcie, busnr, 0, r, - true, - pci_pio_to_address(res->start), - pci_addr, - resource_size(res)); + pcie->ops->set_outbound_region(pcie, busnr, 0, r, + true, + pci_pio_to_address(res->start), + pci_addr, + resource_size(res)); + else + pcie->ops->set_outbound_region(pcie, busnr, 0, r, + false, + res->start, + pci_addr, + resource_size(res)); + + r++; + } + + return cdns_pcie_host_map_dma_ranges(rc); +} + +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc) +{ + struct cdns_pcie *pcie = &rc->pcie; + u32 value, ctrl; + + /* + * Set the root complex BAR configuration register: + * - disable both BAR0 and BAR1. + * - enable Prefetchable Memory Base and Limit registers in type 1 + * config space (64 bits). + * - enable IO Base and Limit registers in type 1 config + * space (32 bits). + */ + + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED; + value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, + CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); + + if (rc->vendor_id != 0xffff) + cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id); + + if (rc->device_id != 0xffff) + cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id); + + cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0); + cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0); + cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI); + + return 0; +} + +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags) +{ + struct cdns_pcie *pcie = &rc->pcie; + u32 addr0, addr1, aperture, value; + + if (!rc->avail_ib_bar[bar]) + return -EBUSY; + + rc->avail_ib_bar[bar] = false; + + aperture = ilog2(size); + addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1); + + if (bar == RP_NO_BAR) + return 0; + + value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG); + value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) | + HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2)); + if (size + cpu_addr >= SZ_4G) { + if (!(flags & IORESOURCE_PREFETCH)) + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar); + value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar); + } else { + if (!(flags & IORESOURCE_PREFETCH)) + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar); + value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar); + } + + value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); + + return 0; +} + +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc) +{ + struct cdns_pcie *pcie = &rc->pcie; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); + struct resource *cfg_res = rc->cfg_res; + struct resource_entry *entry; + u64 cpu_addr = cfg_res->start; + u32 addr0, addr1, desc1; + int r, busnr = 0; + + entry 
= resource_list_first_type(&bridge->windows, IORESOURCE_BUS); + if (entry) + busnr = entry->res->start; + + /* + * Reserve region 0 for PCI configure space accesses: + * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by + * cdns_pci_map_bus(), other region registers are set here once for all. + */ + addr1 = 0; + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1); + + r = 1; + resource_list_for_each_entry(entry, &bridge->windows) { + struct resource *res = entry->res; + u64 pci_addr = res->start - entry->offset; + + if (resource_type(res) == IORESOURCE_IO) + pcie->ops->set_outbound_region(pcie, busnr, 0, r, + true, + pci_pio_to_address(res->start), + pci_addr, + resource_size(res)); else - cdns_pcie_set_outbound_region(pcie, busnr, 0, r, - false, - res->start, - pci_addr, - resource_size(res)); + pcie->ops->set_outbound_region(pcie, busnr, 0, r, + false, + res->start, + pci_addr, + resource_size(res)); r++; } @@ -489,11 +702,11 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc) { int err; - err = cdns_pcie_host_init_root_port(rc); + err = rc->pcie.ops->host_init_root_port(rc); if (err) return err; - return cdns_pcie_host_init_address_translation(rc); + return rc->pcie.ops->host_init_address_translation(rc); } int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) @@ -503,7 +716,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) int ret; if (rc->quirk_detect_quiet_flag) - cdns_pcie_detect_quiet_min_delay_set(&rc->pcie); + pcie->ops->detect_quiet_min_delay_set(&rc->pcie); cdns_pcie_host_enable_ptm_response(pcie); @@ -566,8 +779,12 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) if (ret) return ret; - if (!bridge->ops) - bridge->ops = &cdns_pcie_host_ops; + if (!bridge->ops) { + if (pcie->is_hpa) + bridge->ops = &cdns_pcie_hpa_host_ops; + else + bridge->ops = &cdns_pcie_host_ops; + } ret = pci_host_probe(bridge); if (ret < 0) diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c index b24176d4df1f..8d5fbaef0a3c 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c @@ -43,7 +43,30 @@ static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr) } static const struct cdns_pcie_ops cdns_plat_ops = { + .link_up = cdns_pcie_linkup, .cpu_addr_fixup = cdns_plat_cpu_addr_fixup, + .host_init_root_port = cdns_pcie_host_init_root_port, + .host_bar_ib_config = cdns_pcie_host_bar_ib_config, + .host_init_address_translation = cdns_pcie_host_init_address_translation, + .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set, + .set_outbound_region = cdns_pcie_set_outbound_region, + .set_outbound_region_for_normal_msg = + cdns_pcie_set_outbound_region_for_normal_msg, + .reset_outbound_region = cdns_pcie_reset_outbound_region, +}; + +static const struct cdns_pcie_ops cdns_hpa_plat_ops = { + .start_link = cdns_pcie_hpa_startlink, + .stop_link = cdns_pcie_hpa_stop_link, + .link_up = 
cdns_pcie_hpa_linkup, + .host_init_root_port = cdns_pcie_hpa_host_init_root_port, + .host_bar_ib_config = cdns_pcie_hpa_host_bar_ib_config, + .host_init_address_translation = cdns_pcie_hpa_host_init_address_translation, + .detect_quiet_min_delay_set = cdns_pcie_hpa_detect_quiet_min_delay_set, + .set_outbound_region = cdns_pcie_hpa_set_outbound_region, + .set_outbound_region_for_normal_msg = + cdns_pcie_hpa_set_outbound_region_for_normal_msg, + .reset_outbound_region = cdns_pcie_hpa_reset_outbound_region, }; static int cdns_plat_pcie_probe(struct platform_device *pdev) diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c index 204e045aed8c..c0d3ab3c363f 100644 --- a/drivers/pci/controller/cadence/pcie-cadence.c +++ b/drivers/pci/controller/cadence/pcie-cadence.c @@ -8,6 +8,45 @@ #include "pcie-cadence.h" +bool cdns_pcie_linkup(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE); + if (pl_reg_val & GENMASK(0, 0)) + return true; + return false; +} + +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0); + if (pl_reg_val & GENMASK(0, 0)) + return true; + return false; +} + +int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0); + pl_reg_val |= CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); + return 0; +} + +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0); + pl_reg_val &= ~CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); +} + void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie) { u32 delay = 0x3; @@ -55,7 +94,7 @@ void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, desc1 = 0; /* - * Whatever Bit [23] is set or not inside DESC0 register of the outbound + * Whether Bit [23] is set or not inside DESC0 register of the outbound * PCIe descriptor, the PCI function number must be set into * Bits [26:24] of DESC0 anyway. * @@ -147,6 +186,161 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r) cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0); } +void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie) +{ + u32 delay = 0x3; + u32 ltssm_control_cap; + + /* Set the LTSSM Detect Quiet state min. delay to 2ms. */ + ltssm_control_cap = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_PHY_LAYER_CFG0); + ltssm_control_cap = ((ltssm_control_cap & + ~CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK) | + CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay)); + + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_PHY_LAYER_CFG0, ltssm_control_cap); +} + +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, + u32 r, bool is_io, + u64 cpu_addr, u64 pci_addr, size_t size) +{ + /* + * roundup_pow_of_two() returns an unsigned long, which is not suited + * for 64bit values. 
+ */ + u64 sz = 1ULL << fls64(size - 1); + int nbits = ilog2(sz); + u32 addr0, addr1, desc0, desc1, ctrl0; + + if (nbits < 8) + nbits = 8; + + /* Set the PCI address */ + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) | + (lower_32_bits(pci_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(pci_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1); + + /* Set the PCIe header descriptor */ + if (is_io) + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO; + else + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM; + desc1 = 0; + + /* + * Whether Bit [26] is set or not inside DESC0 register of the outbound + * PCIe descriptor, the PCI function number must be set into + * Bits [31:24] of DESC1 anyway. + * + * In Root Complex mode, the function number is always 0 but in Endpoint + * mode, the PCIe controller may support more than one function. This + * function number needs to be set properly into the outbound PCIe + * descriptor. + * + * Besides, setting Bit [26] is mandatory when in Root Complex mode: + * then the driver must provide the bus, resp. device, number in + * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function + * number, the device number is always 0 in Root Complex mode. + * + * However when in Endpoint mode, we can clear Bit [26] of DESC0, hence + * the PCIe controller will use the captured values for the bus and + * device numbers. + */ + if (pcie->is_rc) { + /* The device and function numbers are always 0. */ + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + } else { + /* + * Use captured values for bus and device numbers but still + * need to set the function number. + */ + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); + } + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); +} + +void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, + u8 busnr, u8 fn, + u32 r, u64 cpu_addr) +{ + u32 addr0, addr1, desc0, desc1, ctrl0; + + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG; + desc1 = 0; + + /* See cdns_pcie_set_outbound_region() comments above. 
*/ + if (pcie->is_rc) { + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + } else { + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); + } + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); +} + +void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r) +{ + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), 0); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), 0); +} + void cdns_pcie_disable_phy(struct cdns_pcie *pcie) { int i = pcie->phy_count; diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h index a39077d64d1d..c317fc10f5d1 100644 --- a/drivers/pci/controller/cadence/pcie-cadence.h +++ b/drivers/pci/controller/cadence/pcie-cadence.h @@ -816,6 +816,14 @@ int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, enum cdns_pcie_rp_bar bar, u64 cpu_addr, u64 size, unsigned long flags); +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, + int where); +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc); +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc); #else static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) { @@ -865,6 +873,10 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) } #endif +bool cdns_pcie_linkup(struct cdns_pcie *pcie); +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie); +int cdns_pcie_hpa_start_link(struct cdns_pcie *pcie); +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie); void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, -- 2.47.1 ^ permalink raw reply related [flat|nested] 23+ messages in thread
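For orientation before the review that follows: the series selects one of the two new ops tables at probe time. Below is a minimal sketch of how that selection can be wired through OF match data. The .data plumbing and the probe excerpt are assumptions for illustration; the two ops tables are the ones added by this patch, and the compatible strings follow the binding patches earlier in this series.

/*
 * Sketch only: pick the ops table from the compatible string.
 * of_device_get_match_data() is the standard helper for this.
 */
static const struct of_device_id cdns_plat_pcie_of_match[] = {
        { .compatible = "cdns,cdns-pcie-host", .data = &cdns_plat_ops },
        { .compatible = "cdns,cdns-pcie-hpa-host", .data = &cdns_hpa_plat_ops },
        { },
};

static int cdns_plat_pcie_probe_sketch(struct platform_device *pdev)
{
        const struct cdns_pcie_ops *ops = of_device_get_match_data(&pdev->dev);

        if (!ops)
                return -EINVAL;

        /* ... allocate the cdns_pcie instance, then: pcie->ops = ops; */
        return 0;
}

Every call site changed by this patch then dereferences pcie->ops, so the legacy/HPA decision is made exactly once, at probe.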
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang @ 2025-04-11 20:24 ` Rob Herring 2025-04-12 15:45 ` Hans Zhang 2025-04-14 3:52 ` Manikandan Karunakaran Pillai 2025-04-14 4:13 ` kernel test robot ` (2 subsequent siblings) 3 siblings, 2 replies; 23+ messages in thread From: Rob Herring @ 2025-04-11 20:24 UTC (permalink / raw) To: hans.zhang Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On Fri, Apr 11, 2025 at 06:36:55PM +0800, hans.zhang@cixtech.com wrote: > From: Manikandan K Pillai <mpillai@cadence.com> > > Add support for the Cadence PCIe HPA controller by adding > the required callback functions. Update the common functions for > RP and EP configuration. Invoke the relevant callback functions > for platform probe of PCIe controller using the callback function. > > Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> > Co-developed-by: Hans Zhang <hans.zhang@cixtech.com> > Signed-off-by: Hans Zhang <hans.zhang@cixtech.com> > --- > .../pci/controller/cadence/pcie-cadence-ep.c | 29 +- > .../controller/cadence/pcie-cadence-host.c | 271 ++++++++++++++++-- > .../controller/cadence/pcie-cadence-plat.c | 23 ++ > drivers/pci/controller/cadence/pcie-cadence.c | 196 ++++++++++++- > drivers/pci/controller/cadence/pcie-cadence.h | 12 + > 5 files changed, 488 insertions(+), 43 deletions(-) > > diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c > index f3f956fa116b..f4961c760434 100644 > --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c > +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c > @@ -192,7 +192,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, > } > > fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); > - cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); > + pcie->ops->set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); > > set_bit(r, &ep->ob_region_map); > ep->ob_addr[r] = addr; > @@ -214,7 +214,7 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, > if (r == ep->max_regions - 1) > return; > > - cdns_pcie_reset_outbound_region(pcie, r); > + pcie->ops->reset_outbound_region(pcie, r); > > ep->ob_addr[r] = 0; > clear_bit(r, &ep->ob_region_map); > @@ -329,8 +329,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx, > if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || > ep->irq_pci_fn != fn)) { > /* First region was reserved for IRQ writes. */ > - cdns_pcie_set_outbound_region_for_normal_msg(pcie, 0, fn, 0, > - ep->irq_phys_addr); > + pcie->ops->set_outbound_region_for_normal_msg(pcie, 0, fn, 0, ep->irq_phys_addr); > ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; > ep->irq_pci_fn = fn; > } > @@ -411,11 +410,11 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, > if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || > ep->irq_pci_fn != fn)) { > /* First region was reserved for IRQ writes. 
*/ > - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, > - false, > - ep->irq_phys_addr, > - pci_addr & ~pci_addr_mask, > - pci_addr_mask + 1); > + pcie->ops->set_outbound_region(pcie, 0, fn, 0, > + false, > + ep->irq_phys_addr, > + pci_addr & ~pci_addr_mask, > + pci_addr_mask + 1); > ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); > ep->irq_pci_fn = fn; > } > @@ -514,11 +513,11 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, > if (ep->irq_pci_addr != (msg_addr & ~pci_addr_mask) || > ep->irq_pci_fn != fn) { > /* First region was reserved for IRQ writes. */ > - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, > - false, > - ep->irq_phys_addr, > - msg_addr & ~pci_addr_mask, > - pci_addr_mask + 1); > + pcie->ops->set_outbound_region(pcie, 0, fn, 0, > + false, > + ep->irq_phys_addr, > + msg_addr & ~pci_addr_mask, > + pci_addr_mask + 1); > ep->irq_pci_addr = (msg_addr & ~pci_addr_mask); > ep->irq_pci_fn = fn; > } > @@ -869,7 +868,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) > set_bit(0, &ep->ob_region_map); > > if (ep->quirk_detect_quiet_flag) > - cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); > + pcie->ops->detect_quiet_min_delay_set(&ep->pcie); > > spin_lock_init(&ep->lock); > > diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c > index ce035eef0a5c..c7066ea3b9e8 100644 > --- a/drivers/pci/controller/cadence/pcie-cadence-host.c > +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c > @@ -60,10 +60,7 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, > /* Configuration Type 0 or Type 1 access. */ > desc0 = CDNS_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID | > CDNS_PCIE_AT_OB_REGION_DESC0_DEVFN(0); > - /* > - * The bus number was already set once for all in desc1 by > - * cdns_pcie_host_init_address_translation(). > - */ > + > if (busn == bridge->busnr + 1) > desc0 |= CDNS_PCIE_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; > else > @@ -73,12 +70,81 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, > return rc->cfg_base + (where & 0xfff); > } > > +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, > + int where) > +{ > + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); > + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); > + struct cdns_pcie *pcie = &rc->pcie; > + unsigned int busn = bus->number; > + u32 addr0, desc0, desc1, ctrl0; > + u32 regval; > + > + if (pci_is_root_bus(bus)) { > + /* > + * Only the root port (devfn == 0) is connected to this bus. > + * All other PCI devices are behind some bridge hence on another > + * bus. > + */ > + if (devfn) > + return NULL; > + > + return pcie->reg_base + (where & 0xfff); > + } > + > + /* > + * Clear AXI link-down status > + */ That is an odd thing to do in map_bus. Also, it is completely racy because... > + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, > + (regval & GENMASK(0, 0))); > + What if the link goes down again here. > + desc1 = 0; > + ctrl0 = 0; > + > + /* > + * Update Output registers for AXI region 0. 
> + */ > + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0); > + > + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); > + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; > + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); > + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | > + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; > + > + if (busn == bridge->busnr + 1) > + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; > + else > + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; > + > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); This is also racy with the read and write functions. Don't worry, lots of other broken h/w like this... Surely this new h/w supports ECAM style config accesses? If so, use and support that mode instead. > + > + return rc->cfg_base + (where & 0xfff); > +} > + > static struct pci_ops cdns_pcie_host_ops = { > .map_bus = cdns_pci_map_bus, > .read = pci_generic_config_read, > .write = pci_generic_config_write, > }; > > +static struct pci_ops cdns_pcie_hpa_host_ops = { > + .map_bus = cdns_pci_hpa_map_bus, > + .read = pci_generic_config_read, > + .write = pci_generic_config_write, > +}; > + > static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie) > { > u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET; > @@ -154,8 +220,14 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie) > { > u32 val; > > - val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL); > - cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN); > + if (!pcie->is_hpa) { > + val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL); > + cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN); > + } else { > + val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL); > + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL, > + val | CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN); > + } > } > > static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc) > @@ -340,8 +412,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, > */ > bar = cdns_pcie_host_find_min_bar(rc, size); > if (bar != RP_BAR_UNDEFINED) { > - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, > - size, flags); > + ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, > + size, flags); > if (ret) > dev_err(dev, "IB BAR: %d config failed\n", bar); > return ret; > @@ -366,8 +438,7 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, > } > > winsize = bar_max_size[bar]; > - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize, > - flags); > + ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, winsize, flags); > if (ret) { > dev_err(dev, "IB BAR: %d config failed\n", bar); > return ret; > @@ -408,8 +479,8 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) > if (list_empty(&bridge->dma_ranges)) { > of_property_read_u32(np, "cdns,no-bar-match-nbits", > &no_bar_nbits); > - err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0, > - (u64)1 << no_bar_nbits, 
0); > + err = pcie->ops->host_bar_ib_config(rc, RP_NO_BAR, 0x0, > + (u64)1 << no_bar_nbits, 0); > if (err) > dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR); > return err; > @@ -467,17 +538,159 @@ int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) > u64 pci_addr = res->start - entry->offset; > > if (resource_type(res) == IORESOURCE_IO) > - cdns_pcie_set_outbound_region(pcie, busnr, 0, r, > - true, > - pci_pio_to_address(res->start), > - pci_addr, > - resource_size(res)); > + pcie->ops->set_outbound_region(pcie, busnr, 0, r, > + true, > + pci_pio_to_address(res->start), > + pci_addr, > + resource_size(res)); > + else > + pcie->ops->set_outbound_region(pcie, busnr, 0, r, > + false, > + res->start, > + pci_addr, > + resource_size(res)); > + > + r++; > + } > + > + return cdns_pcie_host_map_dma_ranges(rc); > +} > + > +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc) > +{ > + struct cdns_pcie *pcie = &rc->pcie; > + u32 value, ctrl; > + > + /* > + * Set the root complex BAR configuration register: > + * - disable both BAR0 and BAR1. > + * - enable Prefetchable Memory Base and Limit registers in type 1 > + * config space (64 bits). > + * - enable IO Base and Limit registers in type 1 config > + * space (32 bits). > + */ > + > + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED; > + value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) | > + CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) | > + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE | > + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS | > + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE | > + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS; > + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, > + CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); > + > + if (rc->vendor_id != 0xffff) > + cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id); > + > + if (rc->device_id != 0xffff) > + cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id); > + > + cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0); > + cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0); > + cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI); > + > + return 0; > +} > + > +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, > + enum cdns_pcie_rp_bar bar, > + u64 cpu_addr, u64 size, > + unsigned long flags) > +{ > + struct cdns_pcie *pcie = &rc->pcie; > + u32 addr0, addr1, aperture, value; > + > + if (!rc->avail_ib_bar[bar]) > + return -EBUSY; > + > + rc->avail_ib_bar[bar] = false; > + > + aperture = ilog2(size); > + addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) | > + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); > + addr1 = upper_32_bits(cpu_addr); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, > + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, > + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1); > + > + if (bar == RP_NO_BAR) > + return 0; > + > + value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG); > + value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) | > + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) | > + HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) | > + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) | > + HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2)); > + if (size + cpu_addr >= SZ_4G) { > + if (!(flags & IORESOURCE_PREFETCH)) > + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar); > + value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar); > + } else { > + if (!(flags & IORESOURCE_PREFETCH)) > + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar); > + value 
|= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar); > + } > + > + value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture); > + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); > + > + return 0; > +} > + > +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc) > +{ > + struct cdns_pcie *pcie = &rc->pcie; > + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); > + struct resource *cfg_res = rc->cfg_res; > + struct resource_entry *entry; > + u64 cpu_addr = cfg_res->start; > + u32 addr0, addr1, desc1; > + int r, busnr = 0; > + > + entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS); > + if (entry) > + busnr = entry->res->start; > + > + /* > + * Reserve region 0 for PCI configure space accesses: > + * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by > + * cdns_pci_map_bus(), other region registers are set here once for all. > + */ > + addr1 = 0; > + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); > + > + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) | > + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); > + addr1 = upper_32_bits(cpu_addr); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1); > + > + r = 1; > + resource_list_for_each_entry(entry, &bridge->windows) { > + struct resource *res = entry->res; > + u64 pci_addr = res->start - entry->offset; > + > + if (resource_type(res) == IORESOURCE_IO) > + pcie->ops->set_outbound_region(pcie, busnr, 0, r, > + true, > + pci_pio_to_address(res->start), > + pci_addr, > + resource_size(res)); > else > - cdns_pcie_set_outbound_region(pcie, busnr, 0, r, > - false, > - res->start, > - pci_addr, > - resource_size(res)); > + pcie->ops->set_outbound_region(pcie, busnr, 0, r, > + false, > + res->start, > + pci_addr, > + resource_size(res)); > > r++; > } > @@ -489,11 +702,11 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc) > { > int err; > > - err = cdns_pcie_host_init_root_port(rc); > + err = rc->pcie.ops->host_init_root_port(rc); > if (err) > return err; > > - return cdns_pcie_host_init_address_translation(rc); > + return rc->pcie.ops->host_init_address_translation(rc); > } > > int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) > @@ -503,7 +716,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) > int ret; > > if (rc->quirk_detect_quiet_flag) > - cdns_pcie_detect_quiet_min_delay_set(&rc->pcie); > + pcie->ops->detect_quiet_min_delay_set(&rc->pcie); > > cdns_pcie_host_enable_ptm_response(pcie); > > @@ -566,8 +779,12 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) > if (ret) > return ret; > > - if (!bridge->ops) > - bridge->ops = &cdns_pcie_host_ops; > + if (!bridge->ops) { > + if (pcie->is_hpa) > + bridge->ops = &cdns_pcie_hpa_host_ops; > + else > + bridge->ops = &cdns_pcie_host_ops; > + } > > ret = pci_host_probe(bridge); > if (ret < 0) > diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c > index b24176d4df1f..8d5fbaef0a3c 100644 > --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c > +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c > @@ -43,7 +43,30 @@ static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie 
*pcie, u64 cpu_addr) > } > > static const struct cdns_pcie_ops cdns_plat_ops = { > + .link_up = cdns_pcie_linkup, > .cpu_addr_fixup = cdns_plat_cpu_addr_fixup, > + .host_init_root_port = cdns_pcie_host_init_root_port, > + .host_bar_ib_config = cdns_pcie_host_bar_ib_config, > + .host_init_address_translation = cdns_pcie_host_init_address_translation, > + .detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set, > + .set_outbound_region = cdns_pcie_set_outbound_region, > + .set_outbound_region_for_normal_msg = > + cdns_pcie_set_outbound_region_for_normal_msg, > + .reset_outbound_region = cdns_pcie_reset_outbound_region, > +}; > + > +static const struct cdns_pcie_ops cdns_hpa_plat_ops = { > + .start_link = cdns_pcie_hpa_startlink, > + .stop_link = cdns_pcie_hpa_stop_link, > + .link_up = cdns_pcie_hpa_linkup, > + .host_init_root_port = cdns_pcie_hpa_host_init_root_port, > + .host_bar_ib_config = cdns_pcie_hpa_host_bar_ib_config, > + .host_init_address_translation = cdns_pcie_hpa_host_init_address_translation, > + .detect_quiet_min_delay_set = cdns_pcie_hpa_detect_quiet_min_delay_set, > + .set_outbound_region = cdns_pcie_hpa_set_outbound_region, > + .set_outbound_region_for_normal_msg = > + cdns_pcie_hpa_set_outbound_region_for_normal_msg, > + .reset_outbound_region = cdns_pcie_hpa_reset_outbound_region, What exactly is shared between these 2 implementations? Link handling, config space accesses, address translation, and host init are all different. What's left to share? MSIs (if not passed thru) and interrupts? I think it's questionable that this be the same driver. A bunch of driver specific 'ops' is not the right direction despite other drivers (DWC) having that. If there are common parts, then make them library functions multiple drivers can call. Rob ^ permalink raw reply [flat|nested] 23+ messages in thread
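For contrast, here is a minimal sketch of the library-function structure suggested above: separate per-architecture glue calling shared helpers directly instead of indirecting through an ops table. The wrapper name is hypothetical; the two helpers are the ones added by this series.

/* Hypothetical HPA-only host init built from exported library helpers. */
static int cdns_hpa_host_init_sketch(struct cdns_pcie_rc *rc)
{
        int err;

        /* Direct calls: no ops pointer, no runtime dispatch. */
        err = cdns_pcie_hpa_host_init_root_port(rc);
        if (err)
                return err;

        return cdns_pcie_hpa_host_init_address_translation(rc);
}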
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 20:24 ` Rob Herring @ 2025-04-12 15:45 ` Hans Zhang 2025-04-12 16:02 ` Hans Zhang 0 siblings, 1 reply; 23+ messages in thread From: Hans Zhang @ 2025-04-12 15:45 UTC (permalink / raw) To: Rob Herring Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On 2025/4/12 04:24, Rob Herring wrote: >> +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, >> + int where) >> +{ >> + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); >> + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); >> + struct cdns_pcie *pcie = &rc->pcie; >> + unsigned int busn = bus->number; >> + u32 addr0, desc0, desc1, ctrl0; >> + u32 regval; >> + >> + if (pci_is_root_bus(bus)) { >> + /* >> + * Only the root port (devfn == 0) is connected to this bus. >> + * All other PCI devices are behind some bridge hence on another >> + * bus. >> + */ >> + if (devfn) >> + return NULL; >> + >> + return pcie->reg_base + (where & 0xfff); >> + } >> + >> + /* >> + * Clear AXI link-down status >> + */ > > That is an odd thing to do in map_bus. Also, it is completely racy > because... > >> + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, >> + (regval & GENMASK(0, 0))); >> + > > What if the link goes down again here. > Hi Rob, Thanks for your reply. Compared to the Synopsys PCIe IP, the Cadence PCIe IP has one extra register, CDNS_PCIE_HPA_AT_LINKDOWN. When the PCIe link goes down, bit 0 of the CDNS_PCIE_HPA_AT_LINKDOWN register is set to 1, and ECAM accesses to config space then fail. Bit 0 of CDNS_PCIE_HPA_AT_LINKDOWN has to be cleared before the accesses can continue. In my opinion, this is where the Cadence PCIe IP doesn't make sense. As Cadence users we had no choice, and the chip had already gone to silicon. Therefore, in the second-generation SoC we will add an SPI interrupt: when a link down occurs, the SPI interrupt is raised, and we set bit 0 of CDNS_PCIE_HPA_AT_LINKDOWN back to 0 in the interrupt handler. If there are other reasons, Manikandan, please add them. In addition, config space is not accessible through ECAM while the link is down. For example, if the RP is put into hot reset, the hot reset cannot be de-asserted through ECAM; the APB interface has to be used instead. We reported an RTL bug to Cadence that we currently cannot fix in our first- or second-generation chips, and Cadence has not released an RTL fix to us so far. This software workaround will also appear later in the Cixtech PCIe controller patch series. Best regards, Hans ^ permalink raw reply [flat|nested] 23+ messages in thread
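For reference, a minimal sketch of the interrupt-driven clearing described in this message. The handler name and the IRQ registration are hypothetical; the accessors, the register bank, and CDNS_PCIE_HPA_AT_LINKDOWN are the ones used by this series.

/*
 * Hypothetical SPI interrupt handler: clear the latched AXI link-down
 * status once per link-down event, instead of on every config access
 * in map_bus().  Per the description above, bit 0 is written back to 0.
 */
static irqreturn_t cdns_pcie_hpa_linkdown_irq(int irq, void *dev_id)
{
        struct cdns_pcie *pcie = dev_id;
        u32 regval;

        regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE,
                                     CDNS_PCIE_HPA_AT_LINKDOWN);
        cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
                             CDNS_PCIE_HPA_AT_LINKDOWN,
                             regval & ~BIT(0));

        return IRQ_HANDLED;
}

Registration (platform_get_irq() plus devm_request_irq() at probe time) is omitted here; the point is only that the clear moves out of the config-access path, which also narrows the race window noted above.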
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-12 15:45 ` Hans Zhang @ 2025-04-12 16:02 ` Hans Zhang 0 siblings, 0 replies; 23+ messages in thread From: Hans Zhang @ 2025-04-12 16:02 UTC (permalink / raw) To: Rob Herring Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt, linux-pci, devicetree, linux-kernel, Manikandan K Pillai On 2025/4/12 23:45, Hans Zhang wrote: >>> + /* >>> + * Clear AXI link-down status >>> + */ >> >> That is an odd thing to do in map_bus. Also, it is completely racy >> because... >> >>> + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, >>> CDNS_PCIE_HPA_AT_LINKDOWN); >>> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >>> CDNS_PCIE_HPA_AT_LINKDOWN, >>> + (regval & GENMASK(0, 0))); >>> + >> >> What if the link goes down again here. >> > Hi Rob, > Thanks for your reply. Compared to the Synopsys PCIe IP, the Cadence PCIe IP has > one extra register, CDNS_PCIE_HPA_AT_LINKDOWN. When the PCIe link > goes down, bit 0 of the CDNS_PCIE_HPA_AT_LINKDOWN register is set > to 1, and ECAM accesses to config space then fail. Bit 0 of > CDNS_PCIE_HPA_AT_LINKDOWN has to be cleared before the accesses can continue. > > In my opinion, this is where the Cadence PCIe IP doesn't make sense. As > Cadence users we had no choice, and the chip had already gone to > silicon. Supplement: prior to this patch series, in the current Linux master branch, the Cadence first-generation PCIe IP already has the following code: void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where) { ...... /* Clear AXI link-down status */ cdns_pcie_writel(pcie, CDNS_PCIE_AT_LINKDOWN, 0x0); ...... } It seems that all Cadence PCIe IPs have this problem. > > Therefore, in the second-generation SoC we will add an SPI interrupt: when a > link down occurs, the SPI interrupt is raised, and we set bit 0 of > CDNS_PCIE_HPA_AT_LINKDOWN back to 0 in the interrupt handler. > > If there are other reasons, Manikandan, please add them. > > > > In addition, config space is not accessible through ECAM while the link is down. > For example, if the RP is put into hot reset, the hot reset cannot be > de-asserted through ECAM; the APB interface has to be used instead. We reported > an RTL bug to Cadence that we currently cannot fix in our first- or > second-generation chips, and Cadence has not released an RTL fix to us so far. > > This software workaround will also appear later in the Cixtech > PCIe controller patch series. > > > Best regards, > Hans ^ permalink raw reply [flat|nested] 23+ messages in thread
* RE: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 20:24 ` Rob Herring 2025-04-12 15:45 ` Hans Zhang @ 2025-04-14 3:52 ` Manikandan Karunakaran Pillai 2025-04-24 2:58 ` Rob Herring 1 sibling, 1 reply; 23+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-14 3:52 UTC (permalink / raw) To: Rob Herring, hans.zhang@cixtech.com Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org, conor+dt@kernel.org, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org >> +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int >devfn, >> + int where) >> +{ >> + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); >> + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); >> + struct cdns_pcie *pcie = &rc->pcie; >> + unsigned int busn = bus->number; >> + u32 addr0, desc0, desc1, ctrl0; >> + u32 regval; >> + >> + if (pci_is_root_bus(bus)) { >> + /* >> + * Only the root port (devfn == 0) is connected to this bus. >> + * All other PCI devices are behind some bridge hence on >another >> + * bus. >> + */ >> + if (devfn) >> + return NULL; >> + >> + return pcie->reg_base + (where & 0xfff); >> + } >> + >> + /* >> + * Clear AXI link-down status >> + */ > >That is an odd thing to do in map_bus. Also, it is completely racy >because... > >> + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, >CDNS_PCIE_HPA_AT_LINKDOWN); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >CDNS_PCIE_HPA_AT_LINKDOWN, >> + (regval & GENMASK(0, 0))); >> + > >What if the link goes down again here. > >> + desc1 = 0; >> + ctrl0 = 0; >> + >> + /* >> + * Update Output registers for AXI region 0. >> + */ >> + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), >addr0); >> + >> + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, >> + >CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); >> + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; >> + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); >> + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | >> + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; >> + >> + if (busn == bridge->busnr + 1) >> + desc0 |= >CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; >> + else >> + desc0 |= >CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; >> + >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); > >This is also racy with the read and write functions. Don't worry, lots >of other broken h/w like this... > >Surely this new h/w supports ECAM style config accesses? If so, use >and support that mode instead. > Since this is an IP-level driver, the SoC's ECAM address is not available here. An SoC vendor can override this function in their own code with the ECAM address. 
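To illustrate such an override, a vendor glue driver whose SoC maps config space as ECAM could install an ECAM-style map_bus in place of cdns_pci_hpa_map_bus(). A sketch under that assumption follows (the soc_ name is hypothetical, the offset math is the standard ECAM layout, and the adjustment for a non-zero starting bus number is omitted); the quoted patch resumes after it.

static void __iomem *soc_pcie_ecam_map_bus(struct pci_bus *bus,
                                           unsigned int devfn, int where)
{
        struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
        struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);

        /* ECAM: bus in bits [27:20], devfn in [19:12], register in [11:0] */
        return rc->cfg_base + (bus->number << 20) + (devfn << 12) + where;
}

With a fixed ECAM window no per-access register programming is needed, which removes the races discussed above.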
>> +
>> +	return rc->cfg_base + (where & 0xfff);
>> +}
>> +
>>  static struct pci_ops cdns_pcie_host_ops = {
>>  	.map_bus	= cdns_pci_map_bus,
>>  	.read		= pci_generic_config_read,
>>  	.write		= pci_generic_config_write,
>>  };
>>
>> +static struct pci_ops cdns_pcie_hpa_host_ops = {
>> +	.map_bus	= cdns_pci_hpa_map_bus,
>> +	.read		= pci_generic_config_read,
>> +	.write		= pci_generic_config_write,
>> +};
>> +
>>  static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
>>  {
>>  	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
>> @@ -154,8 +220,14 @@ static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie)
>>  {
>>  	u32 val;
>>
>> -	val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL);
>> -	cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
>> +	if (!pcie->is_hpa) {
>> +		val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL);
>> +		cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val | CDNS_PCIE_LM_TPM_CTRL_PTMRSEN);
>> +	} else {
>> +		val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL);
>> +		cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_LM_PTM_CTRL,
>> +				     val | CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN);
>> +	}
>>  }
>>
>>  static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
>> @@ -340,8 +412,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
>>  	 */
>>  	bar = cdns_pcie_host_find_min_bar(rc, size);
>>  	if (bar != RP_BAR_UNDEFINED) {
>> -		ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr,
>> -						   size, flags);
>> +		ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr,
>> +						    size, flags);
>>  		if (ret)
>>  			dev_err(dev, "IB BAR: %d config failed\n", bar);
>>  		return ret;
>> @@ -366,8 +438,7 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
>>  	}
>>
>>  	winsize = bar_max_size[bar];
>> -	ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize,
>> -					   flags);
>> +	ret = pcie->ops->host_bar_ib_config(rc, bar, cpu_addr, winsize, flags);
>>  	if (ret) {
>>  		dev_err(dev, "IB BAR: %d config failed\n", bar);
>>  		return ret;
>> @@ -408,8 +479,8 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
>>  	if (list_empty(&bridge->dma_ranges)) {
>>  		of_property_read_u32(np, "cdns,no-bar-match-nbits",
>>  				     &no_bar_nbits);
>> -		err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0,
>> -						   (u64)1 << no_bar_nbits, 0);
>> +		err = pcie->ops->host_bar_ib_config(rc, RP_NO_BAR, 0x0,
>> +						    (u64)1 << no_bar_nbits, 0);
>>  		if (err)
>>  			dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR);
>>  		return err;
>> @@ -467,17 +538,159 @@ int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
>>  		u64 pci_addr = res->start - entry->offset;
>>
>>  		if (resource_type(res) == IORESOURCE_IO)
>> -			cdns_pcie_set_outbound_region(pcie, busnr, 0, r,
>> -						      true,
>> -						      pci_pio_to_address(res->start),
>> -						      pci_addr,
>> -						      resource_size(res));
>> +			pcie->ops->set_outbound_region(pcie, busnr, 0, r,
>> +						       true,
>> +						       pci_pio_to_address(res->start),
>> +						       pci_addr,
>> +						       resource_size(res));
>> +		else
>> +			pcie->ops->set_outbound_region(pcie, busnr, 0, r,
>> +						       false,
>> +						       res->start,
>> +						       pci_addr,
>> +						       resource_size(res));
>> +
>> +		r++;
>> +	}
>> +
>> +	return cdns_pcie_host_map_dma_ranges(rc);
>> +}
>> +
>> +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc)
>> +{
>> +	struct cdns_pcie *pcie = &rc->pcie;
>> +	u32 value, ctrl;
>> +
>> +	/*
>> +	 * Set the root complex BAR configuration register:
>> +	 * - disable both BAR0 and BAR1.
>> +	 * - enable Prefetchable Memory Base and Limit registers in type 1
>> +	 *   config space (64 bits).
>> +	 * - enable IO Base and Limit registers in type 1 config
>> +	 *   space (32 bits).
>> +	 */
>> +
>> +	ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED;
>> +	value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) |
>> +		CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) |
>> +		CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE |
>> +		CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS |
>> +		CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE |
>> +		CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS;
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG,
>> +			     CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
>> +
>> +	if (rc->vendor_id != 0xffff)
>> +		cdns_pcie_rp_writew(pcie, PCI_VENDOR_ID, rc->vendor_id);
>> +
>> +	if (rc->device_id != 0xffff)
>> +		cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id);
>> +
>> +	cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0);
>> +	cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0);
>> +	cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI);
>> +
>> +	return 0;
>> +}
>> +
>> +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc,
>> +				     enum cdns_pcie_rp_bar bar,
>> +				     u64 cpu_addr, u64 size,
>> +				     unsigned long flags)
>> +{
>> +	struct cdns_pcie *pcie = &rc->pcie;
>> +	u32 addr0, addr1, aperture, value;
>> +
>> +	if (!rc->avail_ib_bar[bar])
>> +		return -EBUSY;
>> +
>> +	rc->avail_ib_bar[bar] = false;
>> +
>> +	aperture = ilog2(size);
>> +	addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) |
>> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
>> +	addr1 = upper_32_bits(cpu_addr);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
>> +			     CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER,
>> +			     CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1);
>> +
>> +	if (bar == RP_NO_BAR)
>> +		return 0;
>> +
>> +	value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG);
>> +	value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) |
>> +		   HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) |
>> +		   HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) |
>> +		   HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) |
>> +		   HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2));
>> +	if (size + cpu_addr >= SZ_4G) {
>> +		if (!(flags & IORESOURCE_PREFETCH))
>> +			value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar);
>> +		value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar);
>> +	} else {
>> +		if (!(flags & IORESOURCE_PREFETCH))
>> +			value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar);
>> +		value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar);
>> +	}
>> +
>> +	value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value);
>> +
>> +	return 0;
>> +}
>> +
>> +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc)
>> +{
>> +	struct cdns_pcie *pcie = &rc->pcie;
>> +	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);
>> +	struct resource *cfg_res = rc->cfg_res;
>> +	struct resource_entry *entry;
>> +	u64 cpu_addr = cfg_res->start;
>> +	u32 addr0, addr1, desc1;
>> +	int r, busnr = 0;
>> +
>> +	entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
>> +	if (entry)
>> +		busnr = entry->res->start;
>> +
>> +	/*
>> +	 * Reserve region 0 for PCI configure space accesses:
>> +	 * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by
>> +	 * cdns_pci_map_bus(), other region registers are set here once for all.
>> +	 */
>> +	addr1 = 0;
>> +	desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
>> +			     CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
>> +			     CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
>> +
>> +	addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
>> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
>> +	addr1 = upper_32_bits(cpu_addr);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
>> +			     CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0);
>> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
>> +			     CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1);
>> +
>> +	r = 1;
>> +	resource_list_for_each_entry(entry, &bridge->windows) {
>> +		struct resource *res = entry->res;
>> +		u64 pci_addr = res->start - entry->offset;
>> +
>> +		if (resource_type(res) == IORESOURCE_IO)
>> +			pcie->ops->set_outbound_region(pcie, busnr, 0, r,
>> +						       true,
>> +						       pci_pio_to_address(res->start),
>> +						       pci_addr,
>> +						       resource_size(res));
>>  		else
>> -			cdns_pcie_set_outbound_region(pcie, busnr, 0, r,
>> -						      false,
>> -						      res->start,
>> -						      pci_addr,
>> -						      resource_size(res));
>> +			pcie->ops->set_outbound_region(pcie, busnr, 0, r,
>> +						       false,
>> +						       res->start,
>> +						       pci_addr,
>> +						       resource_size(res));
>>
>>  		r++;
>>  	}
>> @@ -489,11 +702,11 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
>>  {
>>  	int err;
>>
>> -	err = cdns_pcie_host_init_root_port(rc);
>> +	err = rc->pcie.ops->host_init_root_port(rc);
>>  	if (err)
>>  		return err;
>>
>> -	return cdns_pcie_host_init_address_translation(rc);
>> +	return rc->pcie.ops->host_init_address_translation(rc);
>>  }
>>
>>  int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
>> @@ -503,7 +716,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
>>  	int ret;
>>
>>  	if (rc->quirk_detect_quiet_flag)
>> -		cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
>> +		pcie->ops->detect_quiet_min_delay_set(&rc->pcie);
>>
>>  	cdns_pcie_host_enable_ptm_response(pcie);
>>
>> @@ -566,8 +779,12 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>  	if (ret)
>>  		return ret;
>>
>> -	if (!bridge->ops)
>> -		bridge->ops = &cdns_pcie_host_ops;
>> +	if (!bridge->ops) {
>> +		if (pcie->is_hpa)
>> +			bridge->ops = &cdns_pcie_hpa_host_ops;
>> +		else
>> +			bridge->ops = &cdns_pcie_host_ops;
>> +	}
>>
>>  	ret = pci_host_probe(bridge);
>>  	if (ret < 0)
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c
>> index b24176d4df1f..8d5fbaef0a3c 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c
>> @@ -43,7 +43,30 @@ static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr)
>>  }
>>
>>  static const struct cdns_pcie_ops cdns_plat_ops = {
>> +	.link_up = cdns_pcie_linkup,
>>  	.cpu_addr_fixup = cdns_plat_cpu_addr_fixup,
>> +	.host_init_root_port = cdns_pcie_host_init_root_port,
>> +	.host_bar_ib_config = cdns_pcie_host_bar_ib_config,
>> +	.host_init_address_translation = cdns_pcie_host_init_address_translation,
>> +	.detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
>> +	.set_outbound_region = cdns_pcie_set_outbound_region,
>> +	.set_outbound_region_for_normal_msg =
>> +		cdns_pcie_set_outbound_region_for_normal_msg,
>> +	.reset_outbound_region = cdns_pcie_reset_outbound_region,
>> +};
>> +
>> +static const struct cdns_pcie_ops cdns_hpa_plat_ops = {
>> +	.start_link = cdns_pcie_hpa_startlink,
>> +	.stop_link = cdns_pcie_hpa_stop_link,
>> +	.link_up = cdns_pcie_hpa_linkup,
>> +	.host_init_root_port = cdns_pcie_hpa_host_init_root_port,
>> +	.host_bar_ib_config = cdns_pcie_hpa_host_bar_ib_config,
>> +	.host_init_address_translation = cdns_pcie_hpa_host_init_address_translation,
>> +	.detect_quiet_min_delay_set = cdns_pcie_hpa_detect_quiet_min_delay_set,
>> +	.set_outbound_region = cdns_pcie_hpa_set_outbound_region,
>> +	.set_outbound_region_for_normal_msg =
>> +		cdns_pcie_hpa_set_outbound_region_for_normal_msg,
>> +	.reset_outbound_region = cdns_pcie_hpa_reset_outbound_region,
>
>What exactly is shared between these 2 implementations. Link handling,
>config space accesses, address translation, and host init are all
>different. What's left to share? MSIs (if not passed thru) and
>interrupts? I think it's questionable that this be the same driver.
>
The addresses of both of these have changed because the controller
architecture has changed. If these had to be the same driver, there would
be a lot of sprinkled "if (is_hpa)" checks, and that approach was already
rejected in an earlier version of the code. Hence it was done the way
other related drivers do it, with architecture-specific "ops". The
"if (is_hpa)" checks are now very limited, used only where a dedicated
ops function does not make any sense.

>A bunch of driver specific 'ops' is not the right direction despite
>other drivers (DWC) having that. If there are common parts, then make
>them library functions multiple drivers can call.
>
>Rob

^ permalink raw reply	[flat|nested] 23+ messages in thread
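[For readers following the ops-versus-library debate, the mechanism in the patch reduces to a table of function pointers chosen once at probe time, so shared call sites stay branch-free. A minimal sketch in C; the names follow the patch, but the struct is trimmed to the two members used below and everything else is elided, so this is illustrative rather than the actual driver source:

	/* Sketch of the dispatch pattern under discussion (abridged). */
	struct cdns_pcie_ops {
		int (*host_init_root_port)(struct cdns_pcie_rc *rc);
		int (*host_init_address_translation)(struct cdns_pcie_rc *rc);
	};

	int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
	{
		int err;

		/* One call site serves both register layouts: the table
		 * installed at probe time (cdns_plat_ops or
		 * cdns_hpa_plat_ops) decides which generation's registers
		 * get programmed. */
		err = rc->pcie.ops->host_init_root_port(rc);
		if (err)
			return err;

		return rc->pcie.ops->host_init_address_translation(rc);
	}

The trade-off Rob raises below is that when nearly every member of the table differs, the indirection no longer buys any sharing.]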
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller
  2025-04-14  3:52 ` Manikandan Karunakaran Pillai
@ 2025-04-24  2:58 ` Rob Herring
  2025-04-24  3:53 ` Manikandan Karunakaran Pillai
  0 siblings, 1 reply; 23+ messages in thread
From: Rob Herring @ 2025-04-24 2:58 UTC (permalink / raw)
  To: Manikandan Karunakaran Pillai
  Cc: hans.zhang@cixtech.com, bhelgaas@google.com, lpieralisi@kernel.org,
	kw@linux.com, manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org,
	conor+dt@kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org

On Sun, Apr 13, 2025 at 10:52 PM Manikandan Karunakaran Pillai
<mpillai@cadence.com> wrote:
>
> >> +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn,
> >> +				   int where)
> >> +{
> >> +	struct pci_host_bridge *bridge = pci_find_host_bridge(bus);
> >> +	struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge);
> >> +	struct cdns_pcie *pcie = &rc->pcie;
> >> +	unsigned int busn = bus->number;
> >> +	u32 addr0, desc0, desc1, ctrl0;
> >> +	u32 regval;
> >> +
> >> +	if (pci_is_root_bus(bus)) {
> >> +		/*
> >> +		 * Only the root port (devfn == 0) is connected to this bus.
> >> +		 * All other PCI devices are behind some bridge hence on another
> >> +		 * bus.
> >> +		 */
> >> +		if (devfn)
> >> +			return NULL;
> >> +
> >> +		return pcie->reg_base + (where & 0xfff);
> >> +	}
> >> +
> >> +	/*
> >> +	 * Clear AXI link-down status
> >> +	 */
>
> >That is an odd thing to do in map_bus. Also, it is completely racy
> >because...
>
> >> +	regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN);
> >> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN,
> >> +			     (regval & GENMASK(0, 0)));
> >> +
>
> >What if the link goes down again here.
>
> >> +	desc1 = 0;
> >> +	ctrl0 = 0;
> >> +
> >> +	/*
> >> +	 * Update Output registers for AXI region 0.
> >> +	 */
> >> +	addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) |
> >> +		CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) |
> >> +		CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn);
> >> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
> >> +			     CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0);
> >> +
> >> +	desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE,
> >> +				    CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0));
> >> +	desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK;
> >> +	desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0);
> >> +	ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS |
> >> +		CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN;
> >> +
> >> +	if (busn == bridge->busnr + 1)
> >> +		desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0;
> >> +	else
> >> +		desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1;
> >> +
> >> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
> >> +			     CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0);
> >> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
> >> +			     CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1);
> >> +	cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE,
> >> +			     CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0);
>
> >This is also racy with the read and write functions. Don't worry, lots
> >of other broken h/w like this...
>
> >Surely this new h/w supports ECAM style config accesses? If so, use
> >and support that mode instead.
>
> As an IP driver, the ECAM address in the SoC is not available here. For an
> SoC, the vendor can override this function in their own code with the ECAM
> address.
> >> [remainder of the patch, quoted in full in the previous message, trimmed]
> >> +	.reset_outbound_region = cdns_pcie_hpa_reset_outbound_region,
>
> >What exactly is shared between these 2 implementations. Link handling,
> >config space accesses, address translation, and host init are all
> >different. What's left to share? MSIs (if not passed thru) and
> >interrupts? I think it's questionable that this be the same driver.
> >
> The addresses of both of these have changed because the controller
> architecture has changed. If these had to be the same driver, there would
> be a lot of sprinkled "if (is_hpa)" checks, and that approach was already
> rejected in an earlier version of the code.

I'm saying they should *not* be the same driver because you don't
share hardly anything. Again, what is really common here?

> Hence it was done the way other related drivers do it, with
> architecture-specific "ops".

Yes, and IMO driver private/custom ops is the wrong direction to go.
Read my prior reply below again.

> The "if (is_hpa)" checks are now very limited, used only where a dedicated
> ops function does not make any sense.

But you still have them. In a separate driver, you would have 0.

> >A bunch of driver specific 'ops' is not the right direction despite
> >other drivers (DWC) having that. If there are common parts, then make
> >them library functions multiple drivers can call.

^ permalink raw reply	[flat|nested] 23+ messages in thread
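[On the ECAM question raised above: with ECAM, the bus number, device/function, and register offset are encoded directly in the memory-mapped address, so map_bus() never has to rewrite outbound-region registers and the race under discussion disappears. A minimal sketch of that style of accessor in C; it only illustrates the generic ECAM address math (the kernel already provides pci_ecam_map_bus() and struct pci_ecam_ops in drivers/pci/ecam.c), and is not Cadence HPA code:

	/* ECAM: offset = bus << 20 | devfn << 12 | register, so every
	 * (bus, devfn, where) tuple maps to a unique, stable address and
	 * no per-access region reprogramming or locking is needed. */
	static void __iomem *ecam_map_bus_sketch(struct pci_bus *bus,
						 unsigned int devfn, int where)
	{
		struct pci_config_window *cfg = bus->sysdata;

		return cfg->win + (((bus->number - cfg->busr.start) << 20) |
				   (devfn << 12) | (where & 0xfff));
	}

Whether the HPA controller can expose such a window is the open hardware question in this subthread.]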
* RE: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller
  2025-04-24  2:58 ` Rob Herring
@ 2025-04-24  3:53 ` Manikandan Karunakaran Pillai
  2025-04-24 15:07 ` Rob Herring
  0 siblings, 1 reply; 23+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-24 3:53 UTC (permalink / raw)
  To: Rob Herring
  Cc: hans.zhang@cixtech.com, bhelgaas@google.com, lpieralisi@kernel.org,
	kw@linux.com, manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org,
	conor+dt@kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org

>> >What exactly is shared between these 2 implementations. Link handling,
>> >config space accesses, address translation, and host init are all
>> >different. What's left to share? MSIs (if not passed thru) and
>> >interrupts? I think it's questionable that this be the same driver.
>> >
>> The addresses of both of these have changed because the controller
>> architecture has changed. If these had to be the same driver, there would
>> be a lot of sprinkled "if (is_hpa)" checks, and that approach was already
>> rejected in an earlier version of the code.
>
>I'm saying they should *not* be the same driver because you don't
>share hardly anything. Again, what is really common here?

The architecture of the PCIe controller is next generation, but the
software flow and functions are almost the same. The addresses of the
registers accessed by the newly added functions have changed, and to keep
the "if (is_hpa)" checks to a minimum, the ops method was adopted, as in
other existing drivers.

>
>> Hence it was done the way other related drivers do it, with
>> architecture-specific "ops".
>
>Yes, and IMO driver private/custom ops is the wrong direction to go.
>Read my prior reply below again.
>
>> The "if (is_hpa)" checks are now very limited, used only where a dedicated
>> ops function does not make any sense.
>
>But you still have them. In a separate driver, you would have 0.

Only the architecture has changed; from a driver viewpoint, that means the
addresses of the registers have changed. The logic within each function
remains the same, but it now accesses a different set of registers. There
are still a lot of common functions in pcie-cadence-host.c and
pcie-cadence-ep.c. With a separate driver, the entire code would have to be
split across two sets of files: the "is_hpa" checks would be 0, but there
would be a lot more replicated code.

>
>> >A bunch of driver specific 'ops' is not the right direction despite
>> >other drivers (DWC) having that. If there are common parts, then make
>> >them library functions multiple drivers can call.

^ permalink raw reply	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller
  2025-04-24  3:53 ` Manikandan Karunakaran Pillai
@ 2025-04-24 15:07 ` Rob Herring
  0 siblings, 0 replies; 23+ messages in thread
From: Rob Herring @ 2025-04-24 15:07 UTC (permalink / raw)
  To: Manikandan Karunakaran Pillai
  Cc: hans.zhang@cixtech.com, bhelgaas@google.com, lpieralisi@kernel.org,
	kw@linux.com, manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org,
	conor+dt@kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org

On Wed, Apr 23, 2025 at 10:54 PM Manikandan Karunakaran Pillai
<mpillai@cadence.com> wrote:
>
> >> >What exactly is shared between these 2 implementations. Link handling,
> >> >config space accesses, address translation, and host init are all
> >> >different. What's left to share? MSIs (if not passed thru) and
> >> >interrupts? I think it's questionable that this be the same driver.
> >> >
> >> The addresses of both of these have changed because the controller
> >> architecture has changed. If these had to be the same driver, there would
> >> be a lot of sprinkled "if (is_hpa)" checks, and that approach was already
> >> rejected in an earlier version of the code.
> >
> >I'm saying they should *not* be the same driver because you don't
> >share hardly anything. Again, what is really common here?
>
> The architecture of the PCIe controller is next generation, but the
> software flow and functions are almost the same. The addresses of the
> registers accessed by the newly added functions have changed, and to keep
> the "if (is_hpa)" checks to a minimum, the ops method was adopted, as in
> other existing drivers.

Please listen when I say we do not want the ops method used in other
drivers. That's called a midlayer and is an anti-pattern. Here's some
background reading for you:

https://lwn.net/Articles/708891/
https://blog.ffwll.ch/2016/12/midlayers-once-more-with-feeling.html

So what are you supposed to do with the common parts? Make them a
library that each driver can call into as I already suggested. If you
want an example, see SDHCI drivers and library (sdhci.c). Actually, the
current Cadence support is also an example of this. It's 2 different
drivers (pcie-cadence-plat.c and pci-j721e.c) with a library of
functions (pcie-cadence.c). We probably had this same discussion when
all that was upstreamed. Sigh.

Now, where there should be more ops is in struct pci_host_bridge for
things like link state and PERST# control. Then the PCI core could
manage the link state and drivers only have to provide
start/stop/status.

Rob

^ permalink raw reply	[flat|nested] 23+ messages in thread
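[To make the library suggestion concrete: in that model, an HPA-specific platform driver links against the shared Cadence code and calls exported helpers directly, with no cdns_pcie_ops table in between. A hypothetical sketch in C; the file name and cdns_hpa_plat_probe() are illustrative, while the cdns_pcie_hpa_* helpers are the ones this series adds, and the probe body is heavily abridged (resource mapping, clocks, and link bring-up omitted):

	/* pcie-cadence-hpa-plat.c (hypothetical): separate driver,
	 * shared library, mirroring how pci-j721e.c reuses
	 * pcie-cadence.c today. */
	#include <linux/module.h>
	#include <linux/platform_device.h>

	#include "pcie-cadence.h"

	static int cdns_hpa_plat_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;
		struct pci_host_bridge *bridge;
		struct cdns_pcie_rc *rc;
		int ret;

		bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
		if (!bridge)
			return -ENOMEM;

		rc = pci_host_bridge_priv(bridge);
		rc->pcie.dev = dev;

		/* Direct calls into the HPA half of the library: no runtime
		 * dispatch and no is_hpa flag. The legacy driver keeps
		 * calling the legacy cdns_pcie_host_* helpers instead. */
		ret = cdns_pcie_hpa_host_init_root_port(rc);
		if (ret)
			return ret;

		return cdns_pcie_hpa_host_init_address_translation(rc);
	}

The cost, as noted in the reply above this one, is that genuinely common code must be split cleanly into the library rather than duplicated per driver.]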
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang 2025-04-11 20:24 ` Rob Herring @ 2025-04-14 4:13 ` kernel test robot 2025-04-14 7:47 ` kernel test robot 2025-04-14 9:20 ` kernel test robot 3 siblings, 0 replies; 23+ messages in thread From: kernel test robot @ 2025-04-14 4:13 UTC (permalink / raw) To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: oe-kbuild-all, linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang Hi, kernel test robot noticed the following build errors: [auto build test ERROR on a24588245776dafc227243a01bfbeb8a59bafba9] url: https://github.com/intel-lab-lkp/linux/commits/hans-zhang-cixtech-com/dt-bindings-pci-cadence-Extend-compatible-for-new-RP-configuration/20250414-094836 base: a24588245776dafc227243a01bfbeb8a59bafba9 patch link: https://lore.kernel.org/r/20250411103656.2740517-6-hans.zhang%40cixtech.com patch subject: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller config: arc-randconfig-001-20250414 (https://download.01.org/0day-ci/archive/20250414/202504141101.J2GJGhRZ-lkp@intel.com/config) compiler: arc-linux-gcc (GCC) 14.2.0 reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250414/202504141101.J2GJGhRZ-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202504141101.J2GJGhRZ-lkp@intel.com/ All error/warnings (new ones prefixed by >>): In file included from drivers/pci/controller/cadence/pcie-cadence-host.c:13: drivers/pci/controller/cadence/pcie-cadence-host.c: In function 'cdns_pci_hpa_map_bus': >> drivers/pci/controller/cadence/pcie-cadence.h:309:9: error: implicit declaration of function 'FIELD_PREP' [-Wimplicit-function-declaration] 309 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, \ | ^~~~~~~~~~ drivers/pci/controller/cadence/pcie-cadence-host.c:108:17: note: in expansion of macro 'CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS' 108 | addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- >> drivers/pci/controller/cadence/pcie-cadence-plat.c:59:23: error: 'cdns_pcie_hpa_startlink' undeclared here (not in a function); did you mean 'cdns_pcie_hpa_start_link'? 
59 | .start_link = cdns_pcie_hpa_startlink, | ^~~~~~~~~~~~~~~~~~~~~~~ | cdns_pcie_hpa_start_link >> drivers/pci/controller/cadence/pcie-cadence-plat.c:58:35: warning: 'cdns_hpa_plat_ops' defined but not used [-Wunused-const-variable=] 58 | static const struct cdns_pcie_ops cdns_hpa_plat_ops = { | ^~~~~~~~~~~~~~~~~ vim +/FIELD_PREP +309 drivers/pci/controller/cadence/pcie-cadence.h fc9e872310321c Manikandan K Pillai 2025-04-11 304 fc9e872310321c Manikandan K Pillai 2025-04-11 305 /* Region r Outbound AXI to PCIe Address Translation Register 0 */ fc9e872310321c Manikandan K Pillai 2025-04-11 306 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) (0x1010 + ((r) & 0x1f) * 0x0080) fc9e872310321c Manikandan K Pillai 2025-04-11 307 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK GENMASK(5, 0) fc9e872310321c Manikandan K Pillai 2025-04-11 308 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \ fc9e872310321c Manikandan K Pillai 2025-04-11 @309 FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, \ fc9e872310321c Manikandan K Pillai 2025-04-11 310 ((nbits) - 1)) fc9e872310321c Manikandan K Pillai 2025-04-11 311 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(23, 16) fc9e872310321c Manikandan K Pillai 2025-04-11 312 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \ fc9e872310321c Manikandan K Pillai 2025-04-11 313 FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn) fc9e872310321c Manikandan K Pillai 2025-04-11 314 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(31, 24) fc9e872310321c Manikandan K Pillai 2025-04-11 315 #define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \ fc9e872310321c Manikandan K Pillai 2025-04-11 316 FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus) fc9e872310321c Manikandan K Pillai 2025-04-11 317 -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang 2025-04-11 20:24 ` Rob Herring 2025-04-14 4:13 ` kernel test robot @ 2025-04-14 7:47 ` kernel test robot 2025-04-14 9:20 ` kernel test robot 3 siblings, 0 replies; 23+ messages in thread From: kernel test robot @ 2025-04-14 7:47 UTC (permalink / raw) To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: llvm, oe-kbuild-all, linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang Hi, kernel test robot noticed the following build errors: [auto build test ERROR on a24588245776dafc227243a01bfbeb8a59bafba9] url: https://github.com/intel-lab-lkp/linux/commits/hans-zhang-cixtech-com/dt-bindings-pci-cadence-Extend-compatible-for-new-RP-configuration/20250414-094836 base: a24588245776dafc227243a01bfbeb8a59bafba9 patch link: https://lore.kernel.org/r/20250411103656.2740517-6-hans.zhang%40cixtech.com patch subject: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller config: x86_64-buildonly-randconfig-002-20250414 (https://download.01.org/0day-ci/archive/20250414/202504141523.v9N9MrDJ-lkp@intel.com/config) compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250414/202504141523.v9N9MrDJ-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202504141523.v9N9MrDJ-lkp@intel.com/ All errors (new ones prefixed by >>): >> drivers/pci/controller/cadence/pcie-cadence-host.c:108:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 108 | addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | | ^ drivers/pci/controller/cadence/pcie-cadence.h:309:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS' 309 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, \ | ^ drivers/pci/controller/cadence/pcie-cadence-host.c:574:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 574 | value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) | | ^ drivers/pci/controller/cadence/pcie-cadence.h:263:2: note: expanded from macro 'CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL' 263 | FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c) | ^ drivers/pci/controller/cadence/pcie-cadence-host.c:610:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 610 | addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) | | ^ drivers/pci/controller/cadence/pcie-cadence.h:361:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS' 361 | FIELD_PREP(CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK, ((nbits) - 1)) | ^ drivers/pci/controller/cadence/pcie-cadence-host.c:663:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 663 | desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr); | ^ drivers/pci/controller/cadence/pcie-cadence.h:339:2: note: 
expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS' 339 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus) | ^ 4 errors generated. -- >> drivers/pci/controller/cadence/pcie-cadence.c:199:8: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 199 | CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay)); | ^ drivers/pci/controller/cadence/pcie-cadence.h:375:2: note: expanded from macro 'CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY' 375 | FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay) | ^ drivers/pci/controller/cadence/pcie-cadence.c:221:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 221 | addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) | | ^ drivers/pci/controller/cadence/pcie-cadence.h:309:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS' 309 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, \ | ^ drivers/pci/controller/cadence/pcie-cadence.c:293:10: error: call to undeclared function 'FIELD_PREP'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] 293 | desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG; | ^ drivers/pci/controller/cadence/pcie-cadence.h:333:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG' 333 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10) | ^ 3 errors generated. vim +/FIELD_PREP +108 drivers/pci/controller/cadence/pcie-cadence-host.c 72 73 void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, 74 int where) 75 { 76 struct pci_host_bridge *bridge = pci_find_host_bridge(bus); 77 struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); 78 struct cdns_pcie *pcie = &rc->pcie; 79 unsigned int busn = bus->number; 80 u32 addr0, desc0, desc1, ctrl0; 81 u32 regval; 82 83 if (pci_is_root_bus(bus)) { 84 /* 85 * Only the root port (devfn == 0) is connected to this bus. 86 * All other PCI devices are behind some bridge hence on another 87 * bus. 88 */ 89 if (devfn) 90 return NULL; 91 92 return pcie->reg_base + (where & 0xfff); 93 } 94 95 /* 96 * Clear AXI link-down status 97 */ 98 regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); 99 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, 100 (regval & GENMASK(0, 0))); 101 102 desc1 = 0; 103 ctrl0 = 0; 104 105 /* 106 * Update Output registers for AXI region 0. 
107 */ > 108 addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | 109 CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | 110 CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); 111 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 112 CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0); 113 114 desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, 115 CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); 116 desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; 117 desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); 118 ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | 119 CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; 120 121 if (busn == bridge->busnr + 1) 122 desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; 123 else 124 desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; 125 126 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 127 CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); 128 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 129 CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); 130 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 131 CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); 132 133 return rc->cfg_base + (where & 0xfff); 134 } 135 -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 23+ messages in thread
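[For anyone tracking the fix: FIELD_PREP() and FIELD_GET() are defined in include/linux/bitfield.h, so the implicit-declaration errors above come from pcie-cadence.h using those macros without pulling in that header. The usual one-line remedy, sketched against this patch rather than confirmed in a later revision, is:

	/* drivers/pci/controller/cadence/pcie-cadence.h, near the top with
	 * the other includes, before any macro that expands to FIELD_PREP(): */
	#include <linux/bitfield.h>	/* FIELD_PREP(), FIELD_GET() */
	#include <linux/bits.h>		/* GENMASK(), used by the mask macros */

The second robot error, 'cdns_pcie_hpa_startlink' undeclared, is a plain spelling mismatch with the declared cdns_pcie_hpa_start_link() and is fixed by renaming the .start_link initializer to match.]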
* Re: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller 2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang ` (2 preceding siblings ...) 2025-04-14 7:47 ` kernel test robot @ 2025-04-14 9:20 ` kernel test robot 3 siblings, 0 replies; 23+ messages in thread From: kernel test robot @ 2025-04-14 9:20 UTC (permalink / raw) To: hans.zhang, bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt Cc: llvm, oe-kbuild-all, linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang Hi, kernel test robot noticed the following build warnings: [auto build test WARNING on a24588245776dafc227243a01bfbeb8a59bafba9] url: https://github.com/intel-lab-lkp/linux/commits/hans-zhang-cixtech-com/dt-bindings-pci-cadence-Extend-compatible-for-new-RP-configuration/20250414-094836 base: a24588245776dafc227243a01bfbeb8a59bafba9 patch link: https://lore.kernel.org/r/20250411103656.2740517-6-hans.zhang%40cixtech.com patch subject: [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller config: powerpc64-randconfig-001-20250414 (https://download.01.org/0day-ci/archive/20250414/202504141719.svx3rf5x-lkp@intel.com/config) compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250414/202504141719.svx3rf5x-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202504141719.svx3rf5x-lkp@intel.com/ All warnings (new ones prefixed by >>): >> drivers/pci/controller/cadence/pcie-cadence.c:303:12: warning: variable 'ctrl0' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized] 303 | desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/pci/controller/cadence/pcie-cadence.h:342:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN' 342 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ include/linux/bitfield.h:115:3: note: expanded from macro 'FIELD_PREP' 115 | __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ include/linux/bitfield.h:68:3: note: expanded from macro '__BF_FIELD_CHECK' 68 | BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? 
\ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 69 | ~((_mask) >> __bf_shf(_mask)) & \ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 70 | (0 + (_val)) : 0, \ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 71 | _pfx "value too large for the field"); \ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ note: (skipping 1 expansions in backtrace; use -fmacro-backtrace-limit=0 to see all) include/linux/compiler_types.h:557:2: note: expanded from macro 'compiletime_assert' 557 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ include/linux/compiler_types.h:545:2: note: expanded from macro '_compiletime_assert' 545 | __compiletime_assert(condition, msg, prefix, suffix) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ include/linux/compiler_types.h:537:7: note: expanded from macro '__compiletime_assert' 537 | if (!(condition)) \ | ^~~~~~~~~~~~ drivers/pci/controller/cadence/pcie-cadence.c:323:46: note: uninitialized use occurs here 323 | CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); | ^~~~~ drivers/pci/controller/cadence/pcie-cadence.c:303:12: note: remove the 'if' if its condition is always true 303 | desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); | ^ drivers/pci/controller/cadence/pcie-cadence.h:342:2: note: expanded from macro 'CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN' 342 | FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn) | ^ include/linux/bitfield.h:115:3: note: expanded from macro 'FIELD_PREP' 115 | __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \ | ^ include/linux/bitfield.h:68:3: note: expanded from macro '__BF_FIELD_CHECK' 68 | BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \ | ^ note: (skipping 1 expansions in backtrace; use -fmacro-backtrace-limit=0 to see all) include/linux/compiler_types.h:557:2: note: expanded from macro 'compiletime_assert' 557 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__) | ^ include/linux/compiler_types.h:545:2: note: expanded from macro '_compiletime_assert' 545 | __compiletime_assert(condition, msg, prefix, suffix) | ^ include/linux/compiler_types.h:537:3: note: expanded from macro '__compiletime_assert' 537 | if (!(condition)) \ | ^ drivers/pci/controller/cadence/pcie-cadence.c:291:39: note: initialize the variable 'ctrl0' to silence this warning 291 | u32 addr0, addr1, desc0, desc1, ctrl0; | ^ | = 0 1 warning generated. -- >> drivers/pci/controller/cadence/pcie-cadence-host.c:122:3: warning: variable 'desc0' is uninitialized when used here [-Wuninitialized] 122 | desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; | ^~~~~ drivers/pci/controller/cadence/pcie-cadence-host.c:80:18: note: initialize the variable 'desc0' to silence this warning 80 | u32 addr0, desc0, desc1, ctrl0; | ^ | = 0 1 warning generated. vim +303 drivers/pci/controller/cadence/pcie-cadence.c 286 287 void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, 288 u8 busnr, u8 fn, 289 u32 r, u64 cpu_addr) 290 { 291 u32 addr0, addr1, desc0, desc1, ctrl0; 292 293 desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG; 294 desc1 = 0; 295 296 /* See cdns_pcie_set_outbound_region() comments above. 
*/ 297 if (pcie->is_rc) { 298 desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | 299 CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); 300 ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | 301 CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; 302 } else { > 303 desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); 304 } 305 306 addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) | 307 (lower_32_bits(cpu_addr) & GENMASK(31, 8)); 308 addr1 = upper_32_bits(cpu_addr); 309 310 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 311 CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0); 312 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 313 CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0); 314 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 315 CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); 316 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 317 CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); 318 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 319 CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0); 320 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 321 CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1); 322 cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, 323 CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); 324 } 325 -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 23+ messages in thread
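[Both robot reports point at the same pattern: a local is assigned on only one branch and then written to hardware unconditionally. The minimal fix, which the diagnostics themselves suggest, is to give the locals a defined starting value. A sketch against the two quoted excerpts:

	/* pcie-cadence.c, cdns_pcie_hpa_set_outbound_region_for_normal_msg():
	 * ctrl0 is only assigned on the is_rc branch, yet it is written to
	 * CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) on both paths. */
	u32 addr0, addr1, desc0, desc1;
	u32 ctrl0 = 0;

	/* pcie-cadence-host.c, cdns_pci_hpa_map_bus(): desc0 is OR-assigned
	 * without ever being initialized. */
	u32 addr0, desc1, ctrl0;
	u32 desc0 = 0;

Whether zero is the correct reset value for the EP path of the CTRL0 register is a hardware question the driver authors would need to confirm.]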
* [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards
  2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang
  ` (4 preceding siblings ...)
  2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
@ 2025-04-11 10:36 ` hans.zhang
  2025-04-11 20:25 ` Rob Herring
  5 siblings, 1 reply; 23+ messages in thread
From: hans.zhang @ 2025-04-11 10:36 UTC (permalink / raw)
  To: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, robh, krzk+dt, conor+dt
  Cc: linux-pci, devicetree, linux-kernel, Manikandan K Pillai, Hans Zhang

From: Manikandan K Pillai <mpillai@cadence.com>

Update the support for TI J721e boards to use the updated Cadence
PCIe controller code.

Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
---
 drivers/pci/controller/cadence/pci-j721e.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index ef1cfdae33bb..154b36c30101 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = {
 	.start_link = j721e_pcie_start_link,
 	.stop_link = j721e_pcie_stop_link,
 	.link_up = j721e_pcie_link_up,
+	.host_init_root_port = cdns_pcie_host_init_root_port,
+	.host_bar_ib_config = cdns_pcie_host_bar_ib_config,
+	.host_init_address_translation = cdns_pcie_host_init_address_translation,
+	.detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set,
+	.set_outbound_region = cdns_pcie_set_outbound_region,
+	.set_outbound_region_for_normal_msg =
+		cdns_pcie_set_outbound_region_for_normal_msg,
+	.reset_outbound_region = cdns_pcie_reset_outbound_region,
 };

 static int j721e_pcie_set_mode(struct j721e_pcie *pcie, struct regmap *syscon,
@@ -479,6 +487,8 @@ static int j721e_pcie_probe(struct platform_device *pdev)

 		cdns_pcie = &rc->pcie;
 		cdns_pcie->dev = dev;
+		cdns_pcie->is_rc = true;
+		cdns_pcie->is_hpa = false;
 		cdns_pcie->ops = &j721e_pcie_ops;
 		pcie->cdns_pcie = cdns_pcie;
 		break;
@@ -495,6 +505,8 @@ static int j721e_pcie_probe(struct platform_device *pdev)

 		cdns_pcie = &ep->pcie;
 		cdns_pcie->dev = dev;
+		cdns_pcie->is_rc = false;
+		cdns_pcie->is_hpa = false;
 		cdns_pcie->ops = &j721e_pcie_ops;
 		pcie->cdns_pcie = cdns_pcie;
 		break;
--
2.47.1

^ permalink raw reply related	[flat|nested] 23+ messages in thread
* Re: [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards
  2025-04-11 10:36 ` [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards hans.zhang
@ 2025-04-11 20:25 ` Rob Herring
  2025-04-14  3:12 ` Manikandan Karunakaran Pillai
  0 siblings, 1 reply; 23+ messages in thread
From: Rob Herring @ 2025-04-11 20:25 UTC (permalink / raw)
  To: hans.zhang
  Cc: bhelgaas, lpieralisi, kw, manivannan.sadhasivam, krzk+dt, conor+dt,
	linux-pci, devicetree, linux-kernel, Manikandan K Pillai

On Fri, Apr 11, 2025 at 06:36:56PM +0800, hans.zhang@cixtech.com wrote:
> From: Manikandan K Pillai <mpillai@cadence.com>
>
> Update the support for TI J721e boards to use the updated Cadence
> PCIe controller code.

Without this patch, you just broke TI. That's not bisectable.

>
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
> ---
>  drivers/pci/controller/cadence/pci-j721e.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)

^ permalink raw reply	[flat|nested] 23+ messages in thread
* RE: [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards
  2025-04-11 20:25 ` Rob Herring
@ 2025-04-14  3:12 ` Manikandan Karunakaran Pillai
  0 siblings, 0 replies; 23+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-04-14 3:12 UTC (permalink / raw)
  To: Rob Herring, hans.zhang@cixtech.com
  Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, krzk+dt@kernel.org,
	conor+dt@kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org

>Subject: Re: [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards
>
>EXTERNAL MAIL
>
>
>On Fri, Apr 11, 2025 at 06:36:56PM +0800, hans.zhang@cixtech.com wrote:
>> From: Manikandan K Pillai <mpillai@cadence.com>
>>
>> Update the support for TI J721e boards to use the updated Cadence
>> PCIe controller code.
>
>Without this patch, you just broke TI. That's not bisectable.
>
OK, will merge this patch with the earlier one.

>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> Signed-off-by: Hans Zhang <hans.zhang@cixtech.com>
>> ---
>>  drivers/pci/controller/cadence/pci-j721e.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)

^ permalink raw reply	[flat|nested] 23+ messages in thread
end of thread, other threads:[~2025-04-24 15:07 UTC | newest]

Thread overview: 23+ messages
2025-04-11 10:36 [PATCH v3 0/6] Enhance the PCIe controller driver hans.zhang
2025-04-11 10:36 ` [PATCH v3 1/6] dt-bindings: pci: cadence: Extend compatible for new RP configuration hans.zhang
2025-04-11 19:56   ` Rob Herring
2025-04-14  3:05     ` Manikandan Karunakaran Pillai
2025-04-11 10:36 ` [PATCH v3 2/6] dt-bindings: pci: cadence: Extend compatible for new EP configurations hans.zhang
2025-04-11 10:36 ` [PATCH v3 3/6] PCI: cadence: Add header support for PCIe HPA controller hans.zhang
2025-04-11 20:31   ` Rob Herring
2025-04-12 15:19     ` Hans Zhang
2025-04-11 10:36 ` [PATCH v3 4/6] PCI: cadence: Add support for PCIe Endpoint " hans.zhang
2025-04-11 10:36 ` [PATCH v3 5/6] PCI: cadence: Add callback functions for RP and EP controller hans.zhang
2025-04-11 20:24   ` Rob Herring
2025-04-12 15:45     ` Hans Zhang
2025-04-12 16:02       ` Hans Zhang
2025-04-14  3:52     ` Manikandan Karunakaran Pillai
2025-04-24  2:58       ` Rob Herring
2025-04-24  3:53         ` Manikandan Karunakaran Pillai
2025-04-24 15:07           ` Rob Herring
2025-04-14  4:13   ` kernel test robot
2025-04-14  7:47   ` kernel test robot
2025-04-14  9:20   ` kernel test robot
2025-04-11 10:36 ` [PATCH v3 6/6] PCI: cadence: Update support for TI J721e boards hans.zhang
2025-04-11 20:25   ` Rob Herring
2025-04-14  3:12     ` Manikandan Karunakaran Pillai