* [PATCH 0/7] Enhance the PCIe controller driver
  [not found] <20250327105429.2947013-1-mpillai@cadence.com>
@ 2025-03-27 10:59 ` Manikandan Karunakaran Pillai
  [not found] ` <20250327111106.2947888-1-mpillai@cadence.com>
  ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-03-27 10:59 UTC (permalink / raw)
  To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org,
	conor+dt@kernel.org, Manikandan Karunakaran Pillai, Milind Parab
  Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

Enhance the existing Cadence PCIe controller drivers to support the
second generation PCIe controllers, also referred to as HPA (High
Performance Architecture) controllers. The patch set extends the Cadence
PCIe driver for the new high performance architecture. The "compatible"
property in DTS gains additional strings to support the new platform
architecture and the register maps that change with it. The driver's
register read and write functions take the updated offset stored by the
platform driver to access the registers. The driver now supports both the
legacy and HPA architectures, with minimal changes to the legacy code.
The TI SoCs continue to be supported with the changes incorporated. The
changes also follow how multiple platforms are supported in related
drivers.
Patch 1/7 - DTS related changes for property "compatible"
Patch 2/7 - Update the header file with relevant register offsets and
            bit definitions
Patch 3/7 - Platform related code changes
Patch 4/7 - PCIe EP related code changes
Patch 5/7 - Update the header file with register offsets and updated
            read and write register functions
Patch 6/7 - Support for multiple architectures by using registered
            callbacks
Patch 7/7 - Update the TI J721e boards to use the new approach

Comments from the earlier patch submission on the same enhancements have
been taken into consideration. The previously submitted patch series is
at
https://lore.kernel.org/lkml/CH2PPF4D26F8E1C205166209F012D4F3A81A2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/

scripts/checkpatch.pl has been run on the patches with and without
--strict. With the --strict option, 4 checks are generated on one patch
(patch 0002 of the series), which can be ignored; no code fixes are
required for these checks. The rest of the scripts/checkpatch.pl output
is clean.

The changes have been tested on TI platforms. The legacy controller
changes were tested on a TI J7200 EVM, and testing of the HPA changes is
planned on an FPGA platform available within Cadence.
Manikandan K Pillai (7):
  dt-bindings: pci: cadence: Extend compatible for new platform
    configurations
  PCI: cadence: Add header support for PCIe next generation controllers
  PCI: cadence: Add platform related architecture and register
    information
  PCI: cadence: Add support for PCIe Endpoint HPA controllers
  PCI: cadence: Update the PCIe controller register address offsets
  PCI: cadence: Add callback functions for Root Port and EP controller
  PCI: cadence: Update support for TI J721e boards

 .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
 .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++-
 drivers/pci/controller/cadence/pci-j721e.c    |   8 +
 .../pci/controller/cadence/pcie-cadence-ep.c  | 184 +++++++--
 .../controller/cadence/pcie-cadence-host.c    | 264 ++++++++++--
 .../controller/cadence/pcie-cadence-plat.c    | 145 +++++++
 drivers/pci/controller/cadence/pcie-cadence.c | 217 +++++++++-
 drivers/pci/controller/cadence/pcie-cadence.h | 380 +++++++++++++++++-
 8 files changed, 1259 insertions(+), 70 deletions(-)

-- 
2.27.0

^ permalink raw reply	[flat|nested] 28+ messages in thread
[parent not found: <20250327111106.2947888-1-mpillai@cadence.com>]
* [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  [not found] ` <20250327111106.2947888-1-mpillai@cadence.com>
@ 2025-03-27 11:19 ` Manikandan Karunakaran Pillai
  2025-03-27 14:15   ` Krzysztof Kozlowski
  2025-03-28  8:22   ` Krzysztof Kozlowski
  0 siblings, 2 replies; 28+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-03-27 11:19 UTC (permalink / raw)
  To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org,
	conor+dt@kernel.org
  Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

Document the compatible property for the newly added values for PCIe EP
and RP configurations. Fix the compilation issues that came up for the
existing Cadence bindings

Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
---
 .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
 .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
 2 files changed, 110 insertions(+), 21 deletions(-)

diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
index 98651ab22103..aa4ad69a9b71 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
@@ -7,14 +7,22 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Cadence PCIe EP Controller
 
 maintainers:
-  - Tom Joseph <tjoseph@cadence.com>
+  - Manikandan K Pillai <mpillai@cadence.com>
 
 allOf:
   - $ref: cdns-pcie-ep.yaml#
 
 properties:
   compatible:
-    const: cdns,cdns-pcie-ep
+    oneOf:
+      - const: cdns,cdns-pcie-ep
+      - const: cdns,cdns-pcie-hpa-ep
+      - const: cdns,cdns-cix-pcie-hpa-ep
+      - description: PCIe EP controller from cadence
+        items:
+          - const: cdns,cdns-pcie-ep
+          - const: cdns,cdns-pcie-hpa-ep
+          - const: cdns,cdns-cix-pcie-hpa-ep
 
   reg:
     maxItems: 2

diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
index a8190d9b100f..bb7ffb9ddaf9 100644
--- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
+++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
@@ -7,16 +7,30 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Cadence PCIe host controller
 
 maintainers:
-  - Tom Joseph <tjoseph@cadence.com>
+  - Manikandan K Pillai <mpillai@cadence.com>
 
 allOf:
-  - $ref: cdns-pcie-host.yaml#
+  - $ref: cdns-pcie.yaml#
 
 properties:
+  "#size-cells":
+    const: 2
+  "#address-cells":
+    const: 3
+
   compatible:
-    const: cdns,cdns-pcie-host
+    oneOf:
+      - const: cdns,cdns-pcie-host
+      - const: cdns,cdns-pcie-hpa-host
+      - const: cdns,cdns-cix-pcie-hpa-host
+      - description: PCIe RP controller from cadence
+        items:
+          - const: cdns,cdns-pcie-host
+          - const: cdns,cdns-pcie-hpa-host
+          - const: cdns,cdns-cix-pcie-hpa-host
 
   reg:
+    minItems: 1
     maxItems: 2
 
   reg-names:
@@ -24,6 +38,74 @@ properties:
       - const: reg
       - const: cfg
 
+  device_type:
+    const: pci
+
+  vendor-id:
+    const: 0x17cd
+
+  device-id:
+    enum:
+      - 0x0200
+
+  "#interrupt-cells": true
+
+  interrupt-map:
+    minItems: 1
+    maxItems: 8
+
+  interrupt-map-mask:
+    items:
+      - const: 0
+      - const: 0
+      - const: 0
+      - const: 7
+
+  interrupts:
+    minItems: 1
+    maxItems: 8
+
+  interrupt-names:
+    items:
+      - const: msi1
+      - const: msi0
+
+  linux,pci-domain:
+    description:
+      If present this property assigns a fixed PCI domain number to a PCI
+      Endpoint Controller, otherwise an unstable (across boots) unique number
+      will be assigned. It is required to either not set this property at all
+      or set it for all PCI endpoint controllers in the system, otherwise
+      potentially conflicting domain numbers may be assigned to endpoint
+      controllers. The domain number for each endpoint controller in the
+      system must be unique.
+    $ref: /schemas/types.yaml#/definitions/uint32
+
+  ranges:
+    minItems: 1
+    maxItems: 8
+
+  bus-range:
+    description: |
+      The PCI bus number range; as this is a single bus, the range
+      should be specified as the same value twice.
+
+  dma-ranges:
+    description: |
+      A single range for the inbound memory region. If not supplied,
+      defaults to 1GiB at 0x40000000. Note there are hardware restrictions on
+      the allowed combinations of address and size.
+    maxItems: 1
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    items:
+      - const: pcie-phy
+
+  msi-parent: true
+
 required:
   - reg
   - reg-names
@@ -33,37 +115,36 @@ unevaluatedProperties: false
 examples:
   - |
     bus {
-        #address-cells = <2>;
-        #size-cells = <2>;
+      #address-cells = <2>;
+      #size-cells = <2>;
 
         pcie@fb000000 {
             compatible = "cdns,cdns-pcie-host";
-            device_type = "pci";
             #address-cells = <3>;
             #size-cells = <2>;
+            device_type = "pci";
             bus-range = <0x0 0xff>;
             linux,pci-domain = <0>;
             vendor-id = <0x17cd>;
             device-id = <0x0200>;
 
-            reg = <0x0 0xfb000000 0x0 0x01000000>,
-                  <0x0 0x41000000 0x0 0x00001000>;
+            reg = <0xfb000000 0x01000000>,<0x41000000 0x00001000>;
             reg-names = "reg", "cfg";
 
-            ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
-                     <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;
-            dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x1 0x00000000>;
-
-            #interrupt-cells = <0x1>;
+            ranges = <0x02000000 0x0 0x42000000 0x42000000 0x0 0x1000000 0x0>;
 
-            interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1>,
-                            <0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1>,
-                            <0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1>,
-                            <0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;
+            dma-ranges = <0x02000000 0x0 0x0 0x0 0x1 0x00000000 0x0>;
 
-            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+            #interrupt-cells = <1>;
 
-            msi-parent = <&its_pci>;
+            interrupt-parent = <&gic>;
+            interrupts = <0 118 4>, <0 116 1>;
+            interrupt-names = "msi1", "msi0";
+            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+            interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc 0x1>,
+                            <0x0 0x0 0x0 0x2 &pcie_intc 0x2>,
+                            <0x0 0x0 0x0 0x3 &pcie_intc 0x3>,
+                            <0x0 0x0 0x0 0x4 &pcie_intc 0x4>;
 
             phys = <&pcie_phy0>;
             phy-names = "pcie-phy";

-- 
2.27.0

^ permalink raw reply related	[flat|nested] 28+ messages in thread
* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-27 11:19 ` [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations Manikandan Karunakaran Pillai
@ 2025-03-27 14:15   ` Krzysztof Kozlowski
  2025-03-28  5:07     ` Manikandan Karunakaran Pillai
  2025-03-28  8:22   ` Krzysztof Kozlowski
  1 sibling, 1 reply; 28+ messages in thread
From: Krzysztof Kozlowski @ 2025-03-27 14:15 UTC (permalink / raw)
  To: Manikandan Karunakaran Pillai, bhelgaas@google.com,
	lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org
  Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On 27/03/2025 12:19, Manikandan Karunakaran Pillai wrote:
> Document the compatible property for the newly added values for PCIe EP and
> RP configurations. Fix the compilation issues that came up for the existing
> Cadence bindings

These are two different commits.

> 
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> ---
>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>  2 files changed, 110 insertions(+), 21 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> index 98651ab22103..aa4ad69a9b71 100644
> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
> @@ -7,14 +7,22 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
>  title: Cadence PCIe EP Controller
>  
>  maintainers:
> -  - Tom Joseph <tjoseph@cadence.com>
> +  - Manikandan K Pillai <mpillai@cadence.com>
>  
>  allOf:
>    - $ref: cdns-pcie-ep.yaml#
>  
>  properties:
>    compatible:
> -    const: cdns,cdns-pcie-ep
> +    oneOf:
> +      - const: cdns,cdns-pcie-ep
> +      - const: cdns,cdns-pcie-hpa-ep

What is hpa? Which soc is that?

I don't think this should keep growing, but instead use SoC based
compatibles.

Anyway, that's enum.

> +      - const: cdns,cdns-cix-pcie-hpa-ep

What is cix? If you want to stuff here soc in the middle, then no, no
no. Please read devicetree spec and writing bindings how the compatibles
are created.

> +      - description: PCIe EP controller from cadence
> +        items:
> +          - const: cdns,cdns-pcie-ep
> +          - const: cdns,cdns-pcie-hpa-ep
> +          - const: cdns,cdns-cix-pcie-hpa-ep

This makes no sense.

> 
>   reg:
>     maxItems: 2
> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
> index a8190d9b100f..bb7ffb9ddaf9 100644
> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
> @@ -7,16 +7,30 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
>  title: Cadence PCIe host controller
>  
>  maintainers:
> -  - Tom Joseph <tjoseph@cadence.com>
> +  - Manikandan K Pillai <mpillai@cadence.com>
>  
>  allOf:
> -  - $ref: cdns-pcie-host.yaml#
> +  - $ref: cdns-pcie.yaml#

Why?

> 
>  properties:
> +  "#size-cells":
> +    const: 2
> +  "#address-cells":
> +    const: 3

Huh? Why? Nothing here makes sense.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 28+ messages in thread
* RE: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-27 14:15 ` Krzysztof Kozlowski
@ 2025-03-28  5:07   ` Manikandan Karunakaran Pillai
  2025-03-28  7:20     ` Krzysztof Kozlowski
  0 siblings, 1 reply; 28+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-03-28 5:07 UTC (permalink / raw)
  To: Krzysztof Kozlowski, bhelgaas@google.com, lpieralisi@kernel.org,
	kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab
  Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

> On 27/03/2025 12:19, Manikandan Karunakaran Pillai wrote:
>> Document the compatible property for the newly added values for PCIe EP and
>> RP configurations. Fix the compilation issues that came up for the existing
>> Cadence bindings
>
> These are two different commits.

Ok

>
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> ---
>>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>>  2 files changed, 110 insertions(+), 21 deletions(-)
>>
>> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> index 98651ab22103..aa4ad69a9b71 100644
>> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>> @@ -7,14 +7,22 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
>>  title: Cadence PCIe EP Controller
>>
>>  maintainers:
>>  -  - Tom Joseph <tjoseph@cadence.com>
>>  +  - Manikandan K Pillai <mpillai@cadence.com>
>>
>>  allOf:
>>    - $ref: cdns-pcie-ep.yaml#
>>
>>  properties:
>>    compatible:
>>  -    const: cdns,cdns-pcie-ep
>>  +    oneOf:
>>  +      - const: cdns,cdns-pcie-ep
>>  +      - const: cdns,cdns-pcie-hpa-ep
>
> What is hpa? Which soc is that?
>
> I don't think this should keep growing, but instead use SoC based
> compatibles.
>
> Anyway, that's enum.
>

HPA means High Performance Architecture based controllers. The major
difference in these PCIe controllers is that the address map changes.
Each of the compatibles defined here has a different address map, which
allows the driver to support them using the compatible property; the
related data comes from "struct of_device_id" in the driver.

>>  +      - const: cdns,cdns-cix-pcie-hpa-ep
>
> What is cix? If you want to stuff here soc in the middle, then no, no
> no. Please read devicetree spec and writing bindings how the compatibles
> are created.
>

As mentioned in the earlier sections, cix is another implementation of
the PCIe controller, where the address map was changed by our customer.

>>  +      - description: PCIe EP controller from cadence
>>  +        items:
>>  +          - const: cdns,cdns-pcie-ep
>>  +          - const: cdns,cdns-pcie-hpa-ep
>>  +          - const: cdns,cdns-cix-pcie-hpa-ep
>
> This makes no sense.
>

Only one of the above compatibles is valid for a PCIe controller, which
will be defined in the SoC related binding.

>>
>>  reg:
>> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>> index a8190d9b100f..bb7ffb9ddaf9 100644
>> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>>
>>  maintainers:
>>  -  - Tom Joseph <tjoseph@cadence.com>
>>  +  - Manikandan K Pillai <mpillai@cadence.com>
>>
>>  allOf:
>>  -  - $ref: cdns-pcie-host.yaml#
>>  +  - $ref: cdns-pcie.yaml#
>
> Why?
>

The existing YAML files were throwing errors, and the changes in these
files fix them.

>>
>>  properties:
>>  +  "#size-cells":
>>  +    const: 2
>>  +  "#address-cells":
>>  +    const: 3
>
> Huh? Why? Nothing here makes sense.
>
>

These are compilation error related fixes.

> Best regards,
> Krzysztof

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-28  5:07 ` Manikandan Karunakaran Pillai
@ 2025-03-28  7:20   ` Krzysztof Kozlowski
  0 siblings, 0 replies; 28+ messages in thread
From: Krzysztof Kozlowski @ 2025-03-28 7:20 UTC (permalink / raw)
  To: Manikandan Karunakaran Pillai, bhelgaas@google.com,
	lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab
  Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On 28/03/2025 06:07, Manikandan Karunakaran Pillai wrote:
>> On 27/03/2025 12:19, Manikandan Karunakaran Pillai wrote:
>>> Document the compatible property for the newly added values for PCIe EP and
>>> RP configurations. Fix the compilation issues that came up for the existing
>>> Cadence bindings
>>
>> These are two different commits.
> 
> Ok
> 
>>
>>>
>>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>>> ---
>>>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>>>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>>>  2 files changed, 110 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>>> index 98651ab22103..aa4ad69a9b71 100644
>>> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>>> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
>>> @@ -7,14 +7,22 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
>>>  title: Cadence PCIe EP Controller
>>>
>>>  maintainers:
>>>  -  - Tom Joseph <tjoseph@cadence.com>
>>>  +  - Manikandan K Pillai <mpillai@cadence.com>
>>>
>>>  allOf:
>>>    - $ref: cdns-pcie-ep.yaml#
>>>
>>>  properties:
>>>    compatible:
>>>  -    const: cdns,cdns-pcie-ep
>>>  +    oneOf:
>>>  +      - const: cdns,cdns-pcie-ep
>>>  +      - const: cdns,cdns-pcie-hpa-ep
>>
>> What is hpa? Which soc is that?
>>
>> I don't think this should keep growing, but instead use SoC based
>> compatibles.
>>
>> Anyway, that's enum.
>>
> 
> HPA is high performance architecture based controllers. The major difference here in PCIe controllers is that
> the address map changes. Each of the compatibles defined here have different address maps that allow the driver
> to support them from the driver using compatible property that provides the info from related data "struct of_device_id" in the driver.

Just switch to SoC specific compatibles.

> 
>>> +      - const: cdns,cdns-cix-pcie-hpa-ep
>>
>> What is cix? If you want to stuff here soc in the middle, then no, no
>> no. Please read devicetree spec and writing bindings how the compatibles
>> are created.
>>
> 
> As mentioned in the earlier sections, cix is another implementation of the PCIe controller where
> the address map is changed by our customer

So a SoC. Use SoC compatibles and follow every other recent binding.

> 
>>> +      - description: PCIe EP controller from cadence
>>> +        items:
>>> +          - const: cdns,cdns-pcie-ep
>>> +          - const: cdns,cdns-pcie-hpa-ep
>>> +          - const: cdns,cdns-cix-pcie-hpa-ep
>>
>> This makes no sense.
>>
> Only one of the above compatible is valid for PCIe controllers, which will be defined in the SoC related binding.

That's not how lists work. Don't explain to me what it does, because I
know that it does nothing good: it's broken code. You can explain what
you wanted to achieve, but still this part is just wrong and makes no
sense. Drop.

> 
>>>
>>>  reg:
>>> diff --git a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>>> index a8190d9b100f..bb7ffb9ddaf9 100644
>>> --- a/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>>> +++ b/Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
>>>
>>>  maintainers:
>>>  -  - Tom Joseph <tjoseph@cadence.com>
>>>  +  - Manikandan K Pillai <mpillai@cadence.com>
>>>
>>>  allOf:
>>>  -  - $ref: cdns-pcie-host.yaml#
>>>  +  - $ref: cdns-pcie.yaml#
>>
>> Why?
>>
> 
> The existing yaml files were throwing out errors and the changes in these files are for fixing them.

Then rather investigate the errors instead of doing random changes.

> 
>>>
>>>  properties:
>>>  +  "#size-cells":
>>>  +    const: 2
>>>  +  "#address-cells":
>>>  +    const: 3
>>
>> Huh? Why? Nothing here makes sense.
>>
>>
> Compilation error related fixes.

NAK, no point at all.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 28+ messages in thread
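For illustration, a binding following Krzysztof's suggestion (a
SoC-specific compatible first, with the generic Cadence IP string as a
fallback) might look like the sketch below. The SoC compatible names here
are assumptions for the sake of the example, not strings from the
submitted series:

```yaml
properties:
  compatible:
    items:
      - enum:
          - cix,sky1-pcie-ep     # hypothetical SoC-specific compatible
          - ti,j7200-pcie-ep     # hypothetical SoC-specific compatible
      - const: cdns,cdns-pcie-hpa-ep   # generic Cadence IP fallback
```

With this shape, the driver matches on the SoC string to pick its
register map, while the generic fallback still documents which IP the
block is.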
* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-27 11:19 ` [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations Manikandan Karunakaran Pillai
  2025-03-27 14:15 ` Krzysztof Kozlowski
@ 2025-03-28  8:22 ` Krzysztof Kozlowski
  2025-03-28  8:48   ` Hans Zhang
  1 sibling, 1 reply; 28+ messages in thread
From: Krzysztof Kozlowski @ 2025-03-28 8:22 UTC (permalink / raw)
  To: Manikandan Karunakaran Pillai
  Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On Thu, Mar 27, 2025 at 11:19:47AM +0000, Manikandan Karunakaran Pillai wrote:
> Document the compatible property for the newly added values for PCIe EP and
> RP configurations. Fix the compilation issues that came up for the existing
> Cadence bindings
> 
> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
> ---
>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>  2 files changed, 110 insertions(+), 21 deletions(-)

One more thing: SoB mismatch. Maybe got corrupted by Microsoft (it is
known), so you really need to fix your mailing setup or use b4 relay.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-28  8:22 ` Krzysztof Kozlowski
@ 2025-03-28  8:48   ` Hans Zhang
  2025-03-28  9:17     ` Krzysztof Kozlowski
  0 siblings, 1 reply; 28+ messages in thread
From: Hans Zhang @ 2025-03-28 8:48 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Manikandan Karunakaran Pillai
  Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On 2025/3/28 16:22, Krzysztof Kozlowski wrote:
> On Thu, Mar 27, 2025 at 11:19:47AM +0000, Manikandan Karunakaran Pillai wrote:
>> Document the compatible property for the newly added values for PCIe EP and
>> RP configurations. Fix the compilation issues that came up for the existing
>> Cadence bindings
>>
>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>> ---
>>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>>  2 files changed, 110 insertions(+), 21 deletions(-)
> 
> One more thing: SoB mismatch. Maybe got corrupted by Microsoft (it is
> known), so you really need to fix your mailing setup or use b4 relay.
> 

Hi Krzysztof,

I have obtained Manikandan's consent and we will collaborate to submit
the patch series. Our Cixtech P1 (internal name sky1) is currently being
upstreamed, and I need to upstream the Cadence Root Port driver for it.
However, the Cadence common code in the current Linux master does not
support HPA (High Performance Architecture), the second generation of
the Cadence PCIe IP. I will subsequently send the patches with git
send-email to the PCI mailing list.

Peter Chen patches:
https://patchwork.kernel.org/project/linux-arm-kernel/cover/20250324062420.360289-1-peter.chen@cixtech.com/

Best regards,
Hans

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-28  8:48 ` Hans Zhang
@ 2025-03-28  9:17   ` Krzysztof Kozlowski
  2025-03-30 14:59     ` Hans Zhang
  0 siblings, 1 reply; 28+ messages in thread
From: Krzysztof Kozlowski @ 2025-03-28 9:17 UTC (permalink / raw)
  To: Hans Zhang, Manikandan Karunakaran Pillai
  Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On 28/03/2025 09:48, Hans Zhang wrote:
> On 2025/3/28 16:22, Krzysztof Kozlowski wrote:
>> On Thu, Mar 27, 2025 at 11:19:47AM +0000, Manikandan Karunakaran Pillai wrote:
>>> Document the compatible property for the newly added values for PCIe EP and
>>> RP configurations. Fix the compilation issues that came up for the existing
>>> Cadence bindings
>>>
>>> Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
>>> ---
>>>  .../bindings/pci/cdns,cdns-pcie-ep.yaml       |  12 +-
>>>  .../bindings/pci/cdns,cdns-pcie-host.yaml     | 119 +++++++++++++++---
>>>  2 files changed, 110 insertions(+), 21 deletions(-)
>>
>> One more thing: SoB mismatch. Maybe got corrupted by Microsoft (it is
>> known), so you really need to fix your mailing setup or use b4 relay.
>>
> 
> Hi Krzysztof,
> 
> I have obtained Manikandan's consent and we will collaborate to submit

It does not matter. You still need a proper SoB / DCO chain. Please
follow submitting-patches.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations
  2025-03-28  9:17 ` Krzysztof Kozlowski
@ 2025-03-30 14:59   ` Hans Zhang
  0 siblings, 0 replies; 28+ messages in thread
From: Hans Zhang @ 2025-03-30 14:59 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Manikandan Karunakaran Pillai
  Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org

On 2025/3/28 17:17, Krzysztof Kozlowski wrote:
> On 28/03/2025 09:48, Hans Zhang wrote:
>> I have obtained Manikandan's consent and we will collaborate to submit
> 
> It does not matter. You still need a proper SoB / DCO chain. Please
> follow submitting-patches.
> 

Hi Krzysztof,

Thank you very much for reminding me. I will pay attention to it.

Thanks,
Hans

^ permalink raw reply	[flat|nested] 28+ messages in thread
[parent not found: <20250327111127.2947944-1-mpillai@cadence.com>]
* [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers
  [not found] ` <20250327111127.2947944-1-mpillai@cadence.com>
@ 2025-03-27 11:26 ` Manikandan Karunakaran Pillai
  2025-03-27 12:01   ` Hans Zhang
  2025-04-09 20:39   ` Bjorn Helgaas
  0 siblings, 2 replies; 28+ messages in thread
From: Manikandan Karunakaran Pillai @ 2025-03-27 11:26 UTC (permalink / raw)
  To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com,
	manivannan.sadhasivam@linaro.org, robh@kernel.org
  Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org

Add the required definitions for register addresses and register bits
for the next generation Cadence PCIe controllers - High Performance
Architecture (HPA) controllers. Define register access functions for SoC
platforms with different base addresses.

Signed-off-by: Manikandan K Pillai <mpillai@cadence.com>
---
 .../controller/cadence/pcie-cadence-host.c    |  12 +-
 drivers/pci/controller/cadence/pcie-cadence.h | 365 +++++++++++++++++-
 2 files changed, 370 insertions(+), 7 deletions(-)

diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 8af95e9da7ce..1e2df49e40c6 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -175,7 +175,7 @@ static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
 	return ret;
 }
 
-static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
+int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
 {
 	struct cdns_pcie *pcie = &rc->pcie;
 	u32 value, ctrl;
@@ -215,10 +215,10 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
 	return 0;
 }
 
-static int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
-					enum cdns_pcie_rp_bar bar,
-					u64 cpu_addr, u64 size,
-					unsigned long flags)
+int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
+				 enum cdns_pcie_rp_bar bar,
+				 u64 cpu_addr, u64 size,
+				 unsigned long flags)
 {
 	struct cdns_pcie *pcie = &rc->pcie;
 	u32 addr0, addr1, aperture, value;
@@ -428,7 +428,7 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
 	return 0;
 }
 
-static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
+int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
 {
 	struct cdns_pcie *pcie = &rc->pcie;
 	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc);

diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index f5eeff834ec1..69c59c10808e 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -218,6 +218,203 @@
 	(((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
 	 CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
 
+/*
+ * High Performance Architecture(HPA) PCIe controller register
+ */
+#define CDNS_PCIE_HPA_IP_REG_BANK			0x01000000
+#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK		0x01003C00
+#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON		0x01020000
+/*
+ * Address Translation Registers(HPA)
+ */
+#define CDNS_PCIE_HPA_AXI_SLAVE				0x03000000
+#define CDNS_PCIE_HPA_AXI_MASTER			0x03002000
+/*
+ * Root port register base address
+ */
+#define CDNS_PCIE_HPA_RP_BASE				0x0
+
+#define CDNS_PCIE_HPA_LM_ID				0x1420
+
+/*
+ * Endpoint Function BARs(HPA) Configuration Registers
+ */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \
+	(((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \
+	 CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn)		(0x4000 * (pfn))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn)		((0x4000 * (pfn)) + 0x04)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \
+	(((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \
+	 CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn))
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn)		((0x4000 * (vfn)) + 0x08)
+#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn)		((0x4000 * (vfn)) + 0x0C)
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \
+	(GENMASK(9, 4) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
+	(((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b)))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \
+	(GENMASK(3, 0) << ((f) * 10))
+#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
+	(((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)))
+
+/*
+ * Endpoint Function Configuration Register
+ */
+#define CDNS_PCIE_HPA_LM_EP_FUNC_CFG			0x02c0
+
+/*
+ * Root Complex BAR Configuration Register
+ */
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG			0x14
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK	GENMASK(9, 4)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE(a) \
+	FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK	GENMASK(3, 0)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(c) \
+	FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL_MASK, c)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK	GENMASK(19, 14)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE(a) \
+	FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_APERTURE_MASK, a)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK	GENMASK(13, 10)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(c) \
+	FIELD_PREP(CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL_MASK, c)
+
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE	BIT(20)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS	BIT(21)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE		BIT(22)
+#define CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS		BIT(23)
+
+/* BAR control values applicable to both Endpoint Function and Root Complex */
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED			0x0
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS			0x3
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS		0x1
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS	0x9
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS		0x5
+#define CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS	0xD
+
+#define HPA_LM_RC_BAR_CFG_CTRL_DISABLED(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_IO_32BITS(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) \
+	(CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS << ((bar) * 10))
+#define HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture) \
+	(((aperture) - 7) << ((bar) * 10))
+
+#define CDNS_PCIE_HPA_LM_PTM_CTRL			0x0520
+#define CDNS_PCIE_HPA_LM_TPM_CTRL_PTMRSEN		BIT(17)
+
+/*
+ * Root Port Registers PCI config space(HPA) for root port function
+ */
+#define CDNS_PCIE_HPA_RP_CAP_OFFSET			0xC0
+
+/*
+ * Region r Outbound AXI to PCIe Address Translation Register 0
+ */
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r) \
+	(0x1010 + ((r) & 0x1F) * 0x0080)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK	GENMASK(5, 0)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) \
+	FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS_MASK, ((nbits) - 1))
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK	GENMASK(23, 16)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
+	FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK, devfn)
+#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK	GENMASK(31, 24)
+#define
CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(bus) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS_MASK, bus) + +/* + * Region r Outbound AXI to PCIe Address Translation Register 1 + */ +#define CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r) (0x1014 + ((r) & 0x1F) * 0x0080) + +/* + * Region r Outbound PCIe Descriptor Register 0 + */ +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r) (0x1008 + ((r) & 0x1F) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK GENMASK(28, 24) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x0) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x2) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0 \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x4) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1 \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x5) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MASK, 0x10) + +/* + * Region r Outbound PCIe Descriptor Register 1 + */ +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r) (0x100C + ((r) & 0x1F) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK GENMASK(31, 24) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(bus) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS_MASK, bus) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK GENMASK(23, 16) +#define CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(devfn) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK, devfn) + +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r) (0x1018 + ((r) & 0x1F) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS BIT(26) +#define CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN BIT(25) + +/* + * Region r AXI Region Base Address Register 0 + */ +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r) (0x1000 + ((r) & 0x1F) * 0x0080) +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK GENMASK(5, 0) 
+#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS_MASK, ((nbits) - 1)) + +/* + * Region r AXI Region Base Address Register 1 + */ +#define CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r) (0x1004 + ((r) & 0x1F) * 0x0080) + +/* + * Root Port BAR Inbound PCIe to AXI Address Translation Register + */ +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar) (((bar) * 0x0008)) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK GENMASK(5, 0) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(nbits) \ + FIELD_PREP(CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS_MASK, ((nbits) - 1)) +#define CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar) (0x04 + ((bar) * 0x0008)) + +/* + * AXI link down register + */ +#define CDNS_PCIE_HPA_AT_LINKDOWN 0x04 + +/* + * Physical Layer Configuration Register 0 + * This register contains the parameters required for functional setup + * of Physical Layer. + */ +#define CDNS_PCIE_HPA_PHY_LAYER_CFG0 0x0400 +#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK GENMASK(26, 24) +#define CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay) \ + FIELD_PREP(CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK, delay) +#define CDNS_PCIE_HPA_LINK_TRNG_EN_MASK GENMASK(27, 27) + +#define CDNS_PCIE_HPA_PHY_DBG_STS_REG0 0x0420 + +#define CDNS_PCIE_HPA_RP_MAX_IB 0x3 +#define CDNS_PCIE_HPA_MAX_OB 15 + +/* + * Endpoint Function BAR Inbound PCIe to AXI Address Translation Register(HPA) + */ +#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) (((fn) * 0x0040) + ((bar) * 0x0008)) +#define CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) (0x4 + ((fn) * 0x0040) + ((bar) * 0x0008)) + enum cdns_pcie_rp_bar { RP_BAR_UNDEFINED = -1, RP_BAR0, @@ -249,6 +446,7 @@ struct cdns_pcie_rp_ib_bar { #define CDNS_PCIE_MSG_NO_DATA BIT(16) struct cdns_pcie; +struct cdns_pcie_rc; enum cdns_pcie_msg_code { MSG_CODE_ASSERT_INTA = 0x20, @@ -281,11 +479,59 @@ enum cdns_pcie_msg_routing { MSG_ROUTING_GATHER, }; +enum cdns_pcie_reg_bank { + REG_BANK_IP_REG, + REG_BANK_IP_CFG_CTRL_REG, 
+ REG_BANK_AXI_MASTER_COMMON, + REG_BANK_AXI_MASTER, + REG_BANK_AXI_SLAVE, + REG_BANK_AXI_HLS, + REG_BANK_AXI_RAS, + REG_BANK_AXI_DTI, + REG_BANKS_MAX, +}; + struct cdns_pcie_ops { int (*start_link)(struct cdns_pcie *pcie); void (*stop_link)(struct cdns_pcie *pcie); bool (*link_up)(struct cdns_pcie *pcie); u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr); + int (*pcie_host_init_root_port)(struct cdns_pcie_rc *rc); + int (*pcie_host_bar_ib_config)(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); + int (*pcie_host_init_address_translation)(struct cdns_pcie_rc *rc); + void (*pcie_detect_quiet_min_delay_set)(struct cdns_pcie *pcie); + void (*pcie_set_outbound_region)(struct cdns_pcie *pcie, u8 busnr, u8 fn, + u32 r, bool is_io, u64 cpu_addr, + u64 pci_addr, size_t size); + void (*pcie_set_outbound_region_for_normal_msg)(struct cdns_pcie *pcie, + u8 busnr, u8 fn, u32 r, + u64 cpu_addr); + void (*pcie_reset_outbound_region)(struct cdns_pcie *pcie, u32 r); +}; + +/** + * struct cdns_pcie_reg_offset - Register bank offset for a platform + * @ip_reg_bank_off - ip register bank start offset + * @iP_cfg_ctrl_reg_off - ip config contrl register start offset + * @axi_mstr_common_off - AXI master common register start + * @axi_slave_off - AXI skave offset start + * @axi_master_off - AXI master offset start + * @axi_hls_off - AXI HLS offset start + * @axi_ras_off - AXI RAS offset + * @axi_dti_off - AXI DTI offset + */ +struct cdns_pcie_reg_offset { + u32 ip_reg_bank_off; + u32 ip_cfg_ctrl_reg_off; + u32 axi_mstr_common_off; + u32 axi_slave_off; + u32 axi_master_off; + u32 axi_hls_off; + u32 axi_ras_off; + u32 axi_dti_off; }; /** @@ -305,10 +551,12 @@ struct cdns_pcie { struct resource *mem_res; struct device *dev; bool is_rc; + bool is_hpa; int phy_count; struct phy **phy; struct device_link **link; const struct cdns_pcie_ops *ops; + struct cdns_pcie_reg_offset cdns_pcie_reg_offsets; }; /** @@ -386,6 +634,40 @@ 
struct cdns_pcie_ep { unsigned int quirk_disable_flr:1; }; +static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum cdns_pcie_reg_bank bank) +{ + u32 offset; + + switch (bank) { + case REG_BANK_IP_REG: + offset = pcie->cdns_pcie_reg_offsets.ip_reg_bank_off; + break; + case REG_BANK_IP_CFG_CTRL_REG: + offset = pcie->cdns_pcie_reg_offsets.ip_cfg_ctrl_reg_off; + break; + case REG_BANK_AXI_MASTER_COMMON: + offset = pcie->cdns_pcie_reg_offsets.axi_mstr_common_off; + break; + case REG_BANK_AXI_MASTER: + offset = pcie->cdns_pcie_reg_offsets.axi_master_off; + break; + case REG_BANK_AXI_SLAVE: + offset = pcie->cdns_pcie_reg_offsets.axi_slave_off; + break; + case REG_BANK_AXI_HLS: + offset = pcie->cdns_pcie_reg_offsets.axi_hls_off; + break; + case REG_BANK_AXI_RAS: + offset = pcie->cdns_pcie_reg_offsets.axi_ras_off; + break; + case REG_BANK_AXI_DTI: + offset = pcie->cdns_pcie_reg_offsets.axi_dti_off; + break; + default: + break; + }; + return offset; +} /* Register access */ static inline void cdns_pcie_writel(struct cdns_pcie *pcie, u32 reg, u32 value) @@ -398,6 +680,27 @@ static inline u32 cdns_pcie_readl(struct cdns_pcie *pcie, u32 reg) return readl(pcie->reg_base + reg); } +static inline void cdns_pcie_hpa_writel(struct cdns_pcie *pcie, + enum cdns_pcie_reg_bank bank, + u32 reg, + u32 value) +{ + u32 offset = cdns_reg_bank_to_off(pcie, bank); + + reg += offset; + writel(value, pcie->reg_base + reg); +} + +static inline u32 cdns_pcie_hpa_readl(struct cdns_pcie *pcie, + enum cdns_pcie_reg_bank bank, + u32 reg) +{ + u32 offset = cdns_reg_bank_to_off(pcie, bank); + + reg += offset; + return readl(pcie->reg_base + reg); +} + static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) { void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); @@ -444,6 +747,8 @@ static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie, { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + 
reg; cdns_pcie_write_sz(addr, 0x1, value); } @@ -452,6 +757,8 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie, { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg; cdns_pcie_write_sz(addr, 0x2, value); } @@ -459,6 +766,8 @@ static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg) { void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; + if (pcie->is_hpa) + addr = pcie->reg_base + CDNS_PCIE_HPA_RP_BASE + reg; return cdns_pcie_read_sz(addr, 0x2); } @@ -525,6 +834,22 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc); int cdns_pcie_host_setup(struct cdns_pcie_rc *rc); void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, int where); +int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc); +int cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); +int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc); +int cdns_pcie_host_init(struct cdns_pcie_rc *rc); +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, int where); +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc); +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags); +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc); +int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc); + #else static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) { @@ -546,6 +871,34 @@ static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int d { return NULL; } + +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, int where) +{ + return NULL; +} + +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc) +{ + return 0; +} + +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum 
cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags) +{ + return 0; +} + +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc) +{ + return 0; +} + +int cdns_pcie_hpa_host_init(struct cdns_pcie_rc *rc) +{ + return 0; +} #endif #ifdef CONFIG_PCIE_CADENCE_EP @@ -556,7 +909,10 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) return 0; } #endif - +bool cdns_pcie_linkup(struct cdns_pcie *pcie); +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie); +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie); +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie); void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, @@ -571,6 +927,13 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r); void cdns_pcie_disable_phy(struct cdns_pcie *pcie); int cdns_pcie_enable_phy(struct cdns_pcie *pcie); int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie); +void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie); +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, + u32 r, bool is_io, u64 cpu_addr, u64 pci_addr, size_t size); +void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, + u8 busnr, u8 fn, u32 r, u64 cpu_addr); +void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r); + extern const struct dev_pm_ops cdns_pcie_pm_ops; #endif /* _PCIE_CADENCE_H */ -- 2.27.0 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers 2025-03-27 11:26 ` [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers Manikandan Karunakaran Pillai @ 2025-03-27 12:01 ` Hans Zhang 2025-04-09 20:39 ` Bjorn Helgaas 1 sibling, 0 replies; 28+ messages in thread From: Hans Zhang @ 2025-03-27 12:01 UTC (permalink / raw) To: Manikandan Karunakaran Pillai, bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org Cc: linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org On 2025/3/27 19:26, Manikandan Karunakaran Pillai wrote: > +/* > + * High Performance Architecture(HPA) PCIe controller register > + */ > +#define CDNS_PCIE_HPA_IP_REG_BANK 0x01000000 > +#define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x01003C00 > +#define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0x01020000 > +/* > + * Address Translation Registers(HPA) > + */ > +#define CDNS_PCIE_HPA_AXI_SLAVE 0x03000000 > +#define CDNS_PCIE_HPA_AXI_MASTER 0x03002000 Hi Manikandan, I have already replied to your email in the Cadence case, and our design engineer has also explained it. Please provide a way for us, as Cadence customers, to modify these offsets ourselves. Please think about it. Please kindly CC my email address on follow-ups to the HPA patches, thank you very much. V1 patch: https://patchwork.kernel.org/project/linux-pci/patch/CH2PPF4D26F8E1CDE19710828C0186B13EEA2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ Communication history: Can you change this part of the code to look like this?
#define CDNS_PCIE_HPA_IP_REG_BANK(a) (a) #define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK(a) (a) #define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON(a) (a) #define CDNS_PCIE_HPA_AXI_SLAVE(a) (a) #define CDNS_PCIE_HPA_AXI_MASTER(a) (a) The offset we designed is: (Cixtech) #define CDNS_PCIE_HPA_IP_REG_BANK 0x1000 #define CDNS_PCIE_HPA_IP_CFG_CTRL_REG_BANK 0x4c00 #define CDNS_PCIE_HPA_IP_AXI_MASTER_COMMON 0xf000 #define CDNS_PCIE_HPA_AXI_SLAVE 0x9000 #define CDNS_PCIE_HPA_AXI_MASTER 0xb000 #define CDNS_PCIE_HPA_AXI_HLS_REGISTERS 0xc000 #define CDNS_PCIE_HPA_DTI_REGISTERS 0xd000 #define CDNS_PCIE_HPA_AXI_RAS_REGISTERS 0xe000 #define CDNS_PCIE_HPA_DMA_BASE 0xf400 #define CDNS_PCIE_HPA_DMA_COMMON_BASE 0xf800 The original register banks consume at least 48 MB of address space, running from 0x0000_0000 to 0x0302_0000. Because there is unoccupied address space between every two register banks, our hardware remaps the registers into a smaller address space, which means the register bank offsets are changed by the customer. So we cannot use the common code directly without rewriting the functions. We submitted a Cadence case: #46872873 Reply from Cadence case Manikandan: Another option I can propose is to pass these values through the DTS file … (Hopefully that would be fewer changes) Hans: I agree with getting it through a DTS attribute; please modify it so as to be more flexible. These offset values may be modified when the RTL is integrated. Best regards, Hans ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers 2025-03-27 11:26 ` [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers Manikandan Karunakaran Pillai 2025-03-27 12:01 ` Hans Zhang @ 2025-04-09 20:39 ` Bjorn Helgaas 2025-04-11 4:16 ` Manikandan Karunakaran Pillai 1 sibling, 1 reply; 28+ messages in thread From: Bjorn Helgaas @ 2025-04-09 20:39 UTC (permalink / raw) To: Manikandan Karunakaran Pillai Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org On Thu, Mar 27, 2025 at 11:26:21AM +0000, Manikandan Karunakaran Pillai wrote: > Add the required definitions for register addresses and register bits > for the next generation Cadence PCIe controllers - High performance > rchitecture(HPA) controllers. Define register access functions for > SoC platforms with different base address "Next generation" is not really meaningful since there will probably be a next-next generation, and "high performance architecture" is probably not much better because the next-next generation will presumably be "higher than high performance." I would just use the codename or marketing name and omit "next generation." Maybe that's "HPA" and we can look forward to another superlative name for the next generation after this :) s/High performance/High Performance/ s/rchitecture/Architecture/ Add period at end of sentence. > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \ > + (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \ > + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn)) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn)) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + 0x04) > +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \ > + (((bar) < BAR_3) ? 
CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \ > + CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn)) > +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + 0x08) > +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + 0x0C) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \ > + (GENMASK(9, 4) << ((f) * 10)) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \ > + (((a) << (4 + ((b) * 10))) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \ > + (GENMASK(3, 0) << ((f) * 10)) > +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \ > + (((c) << ((b) * 10)) & (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))) Wow, these names are ... loooong. This would be more readable if they could be abbreviated a bit. "PCIE" could be dropped with no loss. There are probably other words that could be dropped too. > struct cdns_pcie_ops { > int (*start_link)(struct cdns_pcie *pcie); > void (*stop_link)(struct cdns_pcie *pcie); > bool (*link_up)(struct cdns_pcie *pcie); > u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr); > + int (*pcie_host_init_root_port)(struct cdns_pcie_rc *rc); > + int (*pcie_host_bar_ib_config)(struct cdns_pcie_rc *rc, > + enum cdns_pcie_rp_bar bar, > + u64 cpu_addr, u64 size, > + unsigned long flags); > + int (*pcie_host_init_address_translation)(struct cdns_pcie_rc *rc); > + void (*pcie_detect_quiet_min_delay_set)(struct cdns_pcie *pcie); > + void (*pcie_set_outbound_region)(struct cdns_pcie *pcie, u8 busnr, u8 fn, > + u32 r, bool is_io, u64 cpu_addr, > + u64 pci_addr, size_t size); > + void (*pcie_set_outbound_region_for_normal_msg)(struct cdns_pcie *pcie, > + u8 busnr, u8 fn, u32 r, > + u64 cpu_addr); > + void (*pcie_reset_outbound_region)(struct cdns_pcie *pcie, u32 r); Also some pretty long names here that don't fit the style of the existing members (none of the others have the "pcie_" prefix). 
> + * struct cdns_pcie_reg_offset - Register bank offset for a platform > + * @ip_reg_bank_off - ip register bank start offset > + * @iP_cfg_ctrl_reg_off - ip config contrl register start offset s/@iP_cfg_ctrl_reg_off/@ip_cfg_ctrl_reg_off/ "scripts/kernel-doc -none <file>" should find errors like this for you. s/contrl/control/ > + * @axi_mstr_common_off - AXI master common register start > + * @axi_slave_off - AXI skave offset start s/skave/slave/ > +struct cdns_pcie_reg_offset { > + u32 ip_reg_bank_off; > + u32 ip_cfg_ctrl_reg_off; > + u32 axi_mstr_common_off; > + u32 axi_slave_off; > + u32 axi_master_off; > + u32 axi_hls_off; > + u32 axi_ras_off; > + u32 axi_dti_off; > }; > > /** > @@ -305,10 +551,12 @@ struct cdns_pcie { > struct resource *mem_res; > struct device *dev; > bool is_rc; > + bool is_hpa; > int phy_count; > struct phy **phy; > struct device_link **link; > const struct cdns_pcie_ops *ops; > + struct cdns_pcie_reg_offset cdns_pcie_reg_offsets; Why does struct cdns_pcie need to contain an entire struct cdns_pcie_reg_offset instead of just a pointer to it? > +static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum cdns_pcie_reg_bank bank) > +{ > + u32 offset; > + > + switch (bank) { > + case REG_BANK_IP_REG: > + offset = pcie->cdns_pcie_reg_offsets.ip_reg_bank_off; It's a little hard to untangle this without being able to apply the series, but normally we would add the struct cdns_pcie_reg_offset definition, the inclusion in struct cdns_pcie, this use of it, and the setting of it in the same patch. > #ifdef CONFIG_PCIE_CADENCE_EP > @@ -556,7 +909,10 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) > return 0; > } > #endif > - Probably spurious change? Looks like we would want the blank line here. 
> +bool cdns_pcie_linkup(struct cdns_pcie *pcie); > +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie); > +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie); > +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie); > void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers 2025-04-09 20:39 ` Bjorn Helgaas @ 2025-04-11 4:16 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-11 4:16 UTC (permalink / raw) To: Bjorn Helgaas Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org >On Thu, Mar 27, 2025 at 11:26:21AM +0000, Manikandan Karunakaran Pillai >wrote: >> Add the required definitions for register addresses and register bits >> for the next generation Cadence PCIe controllers - High performance >> rchitecture(HPA) controllers. Define register access functions for >> SoC platforms with different base address > >"Next generation" is not really meaningful since there will probably >be a next-next generation, and "high performance architecture" is >probably not much better because the next-next generation will >presumably be "higher than high performance." > >I would just use the codename or marketing name and omit "next >generation." Maybe that's "HPA" and we can look forward to another >superlative name for the next generation after this :) > "HPA" will be used from the next patch series >s/High performance/High Performance/ >s/rchitecture/Architecture/ > >Add period at end of sentence. > OK >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn) \ >> + (((bar) < BAR_3) ? CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(fn) : \ >> + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(fn)) >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG0(pfn) (0x4000 * (pfn)) >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG1(pfn) ((0x4000 * (pfn)) + >0x04) >> +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn) \ >> + (((bar) < BAR_3) ? 
CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(fn) : \ >> + CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(fn)) >> +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG0(vfn) ((0x4000 * (vfn)) + >0x08) >> +#define CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG1(vfn) ((0x4000 * (vfn)) + >0x0C) >> +#define >CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(f) \ >> + (GENMASK(9, 4) << ((f) * 10)) >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \ >> + (((a) << (4 + ((b) * 10))) & >(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))) >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(f) \ >> + (GENMASK(3, 0) << ((f) * 10)) >> +#define CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \ >> + (((c) << ((b) * 10)) & >(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))) > >Wow, these names are ... loooong. This would be more readable if they >could be abbreviated a bit. "PCIE" could be dropped with no loss. >There are probably other words that could be dropped too. > The names are in sync with the hardware register specification and also with the existing code for legacy Cadence PCIe controller. Hence would like to retain them for readability. 
>> struct cdns_pcie_ops { >> int (*start_link)(struct cdns_pcie *pcie); >> void (*stop_link)(struct cdns_pcie *pcie); >> bool (*link_up)(struct cdns_pcie *pcie); >> u64 (*cpu_addr_fixup)(struct cdns_pcie *pcie, u64 cpu_addr); >> + int (*pcie_host_init_root_port)(struct cdns_pcie_rc *rc); >> + int (*pcie_host_bar_ib_config)(struct cdns_pcie_rc *rc, >> + enum cdns_pcie_rp_bar bar, >> + u64 cpu_addr, u64 size, >> + unsigned long flags); >> + int (*pcie_host_init_address_translation)(struct cdns_pcie_rc *rc); >> + void (*pcie_detect_quiet_min_delay_set)(struct cdns_pcie *pcie); >> + void (*pcie_set_outbound_region)(struct cdns_pcie *pcie, u8 busnr, >u8 fn, >> + u32 r, bool is_io, u64 cpu_addr, >> + u64 pci_addr, size_t size); >> + void (*pcie_set_outbound_region_for_normal_msg)(struct >cdns_pcie *pcie, >> + u8 busnr, u8 fn, u32 r, >> + u64 cpu_addr); >> + void (*pcie_reset_outbound_region)(struct cdns_pcie *pcie, u32 r); > >Also some pretty long names here that don't fit the style of the >existing members (none of the others have the "pcie_" prefix). "pcie" is removed from the function names to be in sync with other function pointer naming > >> + * struct cdns_pcie_reg_offset - Register bank offset for a platform >> + * @ip_reg_bank_off - ip register bank start offset >> + * @iP_cfg_ctrl_reg_off - ip config contrl register start offset > >s/@iP_cfg_ctrl_reg_off/@ip_cfg_ctrl_reg_off/ > >"scripts/kernel-doc -none <file>" should find errors like this for you. 
'scripts/kernel-doc -none' has been run on all the files for the next patch series > >s/contrl/control/ > >> + * @axi_mstr_common_off - AXI master common register start >> + * @axi_slave_off - AXI skave offset start > >s/skave/slave/ > >> +struct cdns_pcie_reg_offset { >> + u32 ip_reg_bank_off; >> + u32 ip_cfg_ctrl_reg_off; >> + u32 axi_mstr_common_off; >> + u32 axi_slave_off; >> + u32 axi_master_off; >> + u32 axi_hls_off; >> + u32 axi_ras_off; >> + u32 axi_dti_off; >> }; >> >> /** >> @@ -305,10 +551,12 @@ struct cdns_pcie { >> struct resource *mem_res; >> struct device *dev; >> bool is_rc; >> + bool is_hpa; >> int phy_count; >> struct phy **phy; >> struct device_link **link; >> const struct cdns_pcie_ops *ops; >> + struct cdns_pcie_reg_offset cdns_pcie_reg_offsets; > >Why does struct cdns_pcie need to contain an entire struct >cdns_pcie_reg_offset instead of just a pointer to it? The cdns_pcie_reg_offset data is kept in this global store for later use by the driver. There is only one such struct defined per PCIe controller, so it made sense to define it inside this global context struct for controllers. > >> +static inline u32 cdns_reg_bank_to_off(struct cdns_pcie *pcie, enum >cdns_pcie_reg_bank bank) >> +{ >> + u32 offset; >> + >> + switch (bank) { >> + case REG_BANK_IP_REG: >> + offset = pcie->cdns_pcie_reg_offsets.ip_reg_bank_off; > >It's a little hard to untangle this without being able to apply the >series, but normally we would add the struct cdns_pcie_reg_offset >definition, the inclusion in struct cdns_pcie, this use of it, and the >setting of it in the same patch. > Ok >> #ifdef CONFIG_PCIE_CADENCE_EP >> @@ -556,7 +909,10 @@ static inline int cdns_pcie_ep_setup(struct >cdns_pcie_ep *ep) >> return 0; >> } >> #endif >> - > >Probably spurious change? Looks like we would want the blank line >here.
> Ok >> +bool cdns_pcie_linkup(struct cdns_pcie *pcie); >> +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie); >> +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie); >> +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie); >> void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); ^ permalink raw reply [flat|nested] 28+ messages in thread
[parent not found: <20250327111200.2948071-1-mpillai@cadence.com>]
* [PATCH 4/7] PCI: cadence: Add support for PCIe Endpoint HPA controllers [not found] ` <20250327111200.2948071-1-mpillai@cadence.com> @ 2025-03-27 11:40 ` Manikandan Karunakaran Pillai 2025-04-09 22:15 ` Bjorn Helgaas 0 siblings, 1 reply; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-03-27 11:40 UTC (permalink / raw) To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org Add support for the second generation(HPA) Cadence PCIe endpoint controller by adding the required functions based on the HPA registers and register bit definitions Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> --- .../pci/controller/cadence/pcie-cadence-ep.c | 154 +++++++++++++++++- 1 file changed, 146 insertions(+), 8 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index e0cc4560dfde..1dc13e403473 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -93,7 +93,10 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, * for 64bit values. */ sz = 1ULL << fls64(sz - 1); - aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */ + /* + * 128B -> 0, 256B -> 1, 512B -> 2, ... + */ + aperture = ilog2(sz) - 7; if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) { ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_IO_32BITS; @@ -121,7 +124,7 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); else reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); - b = (bar < BAR_4) ? bar : bar - BAR_4; + b = (bar < BAR_3) ? 
bar : bar - BAR_3; if (vfn == 0 || vfn == 1) { cfg = cdns_pcie_readl(pcie, reg); @@ -158,7 +161,7 @@ static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); else reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); - b = (bar < BAR_4) ? bar : bar - BAR_4; + b = (bar < BAR_3) ? bar : bar - BAR_3; if (vfn == 0 || vfn == 1) { ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED; @@ -569,7 +572,11 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) * BIT(0) is hardwired to 1, hence function 0 is always enabled * and can't be disabled anyway. */ - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); + if (pcie->is_hpa) + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc->function_num_map); + else + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); /* * Next function field in ARI_CAP_AND_CTR register for last function @@ -606,6 +613,117 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) return 0; } +static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, + struct pci_epf_bar *epf_bar) +{ + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); + struct cdns_pcie_epf *epf = &ep->epf[fn]; + struct cdns_pcie *pcie = &ep->pcie; + dma_addr_t bar_phys = epf_bar->phys_addr; + enum pci_barno bar = epf_bar->barno; + int flags = epf_bar->flags; + u32 addr0, addr1, reg, cfg, b, aperture, ctrl; + u64 sz; + + /* + * BAR size is 2^(aperture + 7) + */ + sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE); + /* + * roundup_pow_of_two() returns an unsigned long, which is not suited + * for 64bit values. + */ + sz = 1ULL << fls64(sz - 1); + /* + * 128B -> 0, 256B -> 1, 512B -> 2, ... 
+ */ + aperture = ilog2(sz) - 7; + + if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) { + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_IO_32BITS; + } else { + bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH); + bool is_64bits = !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64); + + if (is_64bits && (bar & 1)) + return -EINVAL; + + if (is_64bits && is_prefetch) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS; + else if (is_prefetch) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_PREFETCH_MEM_32BITS; + else if (is_64bits) + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_64BITS; + else + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_MEM_32BITS; + } + + addr0 = lower_32_bits(bar_phys); + addr1 = upper_32_bits(bar_phys); + + if (vfn == 1) + reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn); + else + reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn); + b = (bar < BAR_4) ? bar : bar - BAR_4; + + if (vfn == 0 || vfn == 1) { + cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg); + cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)); + cfg |= (CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl)); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg); + } + + fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), addr1); + + if (vfn > 0) + epf = &epf->epf[vfn - 1]; + epf->epf_bar[bar] = epf_bar; + + return 0; +} + +static void cdns_pcie_hpa_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, + struct pci_epf_bar *epf_bar) +{ + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); + struct cdns_pcie_epf *epf = &ep->epf[fn]; + struct cdns_pcie *pcie = &ep->pcie; + enum pci_barno bar = epf_bar->barno; + u32 reg, cfg, b, ctrl; + + if 
(vfn == 1) + reg = CDNS_PCIE_HPA_LM_EP_VFUNC_BAR_CFG(bar, fn); + else + reg = CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG(bar, fn); + b = (bar < BAR_4) ? bar : bar - BAR_4; + + if (vfn == 0 || vfn == 1) { + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED; + cfg = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, reg); + cfg &= ~(CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) | + CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)); + cfg |= CDNS_PCIE_HPA_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, reg, cfg); + } + + fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER_COMMON, + CDNS_PCIE_HPA_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0); + + if (vfn > 0) + epf = &epf->epf[vfn - 1]; + epf->epf_bar[bar] = NULL; +} + static const struct pci_epc_features cdns_pcie_epc_vf_features = { .linkup_notifier = false, .msi_capable = true, @@ -645,6 +763,21 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = { .get_features = cdns_pcie_ep_get_features, }; +static const struct pci_epc_ops cdns_pcie_hpa_epc_ops = { + .write_header = cdns_pcie_ep_write_header, + .set_bar = cdns_pcie_hpa_ep_set_bar, + .clear_bar = cdns_pcie_hpa_ep_clear_bar, + .map_addr = cdns_pcie_ep_map_addr, + .unmap_addr = cdns_pcie_ep_unmap_addr, + .set_msi = cdns_pcie_ep_set_msi, + .get_msi = cdns_pcie_ep_get_msi, + .set_msix = cdns_pcie_ep_set_msix, + .get_msix = cdns_pcie_ep_get_msix, + .raise_irq = cdns_pcie_ep_raise_irq, + .map_msi_irq = cdns_pcie_ep_map_msi_irq, + .start = cdns_pcie_ep_start, + .get_features = cdns_pcie_ep_get_features, +}; int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) { @@ -682,10 +815,15 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) if (!ep->ob_addr) return -ENOMEM; - /* Disable all but function 0 (anyway BIT(0) is hardwired to 1). 
*/ - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0)); - - epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops); + if (pcie->is_hpa) { + epc = devm_pci_epc_create(dev, &cdns_pcie_hpa_epc_ops); + } else { + /* + * Disable all but function 0 (anyway BIT(0) is hardwired to 1) + */ + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, BIT(0)); + epc = devm_pci_epc_create(dev, &cdns_pcie_epc_ops); + } if (IS_ERR(epc)) { dev_err(dev, "failed to create epc device\n"); return PTR_ERR(epc); -- 2.27.0 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 4/7] PCI: cadence: Add support for PCIe Endpoint HPA controllers 2025-03-27 11:40 ` [PATCH 4/7] PCI: cadence: Add support for PCIe Endpoint HPA controllers Manikandan Karunakaran Pillai @ 2025-04-09 22:15 ` Bjorn Helgaas 2025-04-11 4:23 ` Manikandan Karunakaran Pillai 0 siblings, 1 reply; 28+ messages in thread From: Bjorn Helgaas @ 2025-04-09 22:15 UTC (permalink / raw) To: Manikandan Karunakaran Pillai Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org On Thu, Mar 27, 2025 at 11:40:36AM +0000, Manikandan Karunakaran Pillai wrote: > Add support for the second generation(HPA) Cadence PCIe endpoint > controller by adding the required functions based on the HPA registers > and register bit definitions Add period. > @@ -93,7 +93,10 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, > * for 64bit values. > */ > sz = 1ULL << fls64(sz - 1); > - aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */ > + /* > + * 128B -> 0, 256B -> 1, 512B -> 2, ... > + */ > + aperture = ilog2(sz) - 7; Unclear exactly how this is related to HPA and whether it affects non-HPA. > @@ -121,7 +124,7 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, > reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); > else > reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); > - b = (bar < BAR_4) ? bar : bar - BAR_4; > + b = (bar < BAR_3) ? bar : bar - BAR_3; Unclear what's going on here because this doesn't look specific to HPA. Should this be a separate patch that fixes an existing defect? > if (vfn == 0 || vfn == 1) { > cfg = cdns_pcie_readl(pcie, reg); > @@ -158,7 +161,7 @@ static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn, > reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); > else > reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); > - b = (bar < BAR_4) ? bar : bar - BAR_4; > + b = (bar < BAR_3) ? 
bar : bar - BAR_3; And here. > @@ -569,7 +572,11 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) > * BIT(0) is hardwired to 1, hence function 0 is always enabled > * and can't be disabled anyway. > */ > - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); > + if (pcie->is_hpa) > + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, > + CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc->function_num_map); > + else > + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map); Sprinkling tests of "is_hpa" around is not very extensible. When the next generation after HPA shows up, then it gets really messy. Sometimes generation-specific function pointers can make this simpler. > +static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, > + struct pci_epf_bar *epf_bar) > +{ > + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); > + struct cdns_pcie_epf *epf = &ep->epf[fn]; > + struct cdns_pcie *pcie = &ep->pcie; > + dma_addr_t bar_phys = epf_bar->phys_addr; > + enum pci_barno bar = epf_bar->barno; > + int flags = epf_bar->flags; > + u32 addr0, addr1, reg, cfg, b, aperture, ctrl; > + u64 sz; > + > + /* > + * BAR size is 2^(aperture + 7) > + */ > + sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE); > + /* Add blank line between code and comment. > + * roundup_pow_of_two() returns an unsigned long, which is not suited > + * for 64bit values. > + */ > + sz = 1ULL << fls64(sz - 1); > + /* Again. Check for other places in this series. > + * 128B -> 0, 256B -> 1, 512B -> 2, ... > + */ > + aperture = ilog2(sz) - 7; ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 4/7] PCI: cadence: Add support for PCIe Endpoint HPA controllers 2025-04-09 22:15 ` Bjorn Helgaas @ 2025-04-11 4:23 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-11 4:23 UTC (permalink / raw) To: Bjorn Helgaas Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org > >On Thu, Mar 27, 2025 at 11:40:36AM +0000, Manikandan Karunakaran Pillai >wrote: >> Add support for the second generation(HPA) Cadence PCIe endpoint >> controller by adding the required functions based on the HPA registers >> and register bit definitions > >Add period. > Ok >> @@ -93,7 +93,10 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, >u8 fn, u8 vfn, >> * for 64bit values. >> */ >> sz = 1ULL << fls64(sz - 1); >> - aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */ >> + /* >> + * 128B -> 0, 256B -> 1, 512B -> 2, ... >> + */ >> + aperture = ilog2(sz) - 7; > >Unclear exactly how this is related to HPA and whether it affects >non-HPA. > Removed in the next patch series >> @@ -121,7 +124,7 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, >u8 fn, u8 vfn, >> reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); >> else >> reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); >> - b = (bar < BAR_4) ? bar : bar - BAR_4; >> + b = (bar < BAR_3) ? bar : bar - BAR_3; > >Unclear what's going on here because this doesn't look specific to >HPA. Should this be a separate patch that fixes an existing defect? > >> if (vfn == 0 || vfn == 1) { >> cfg = cdns_pcie_readl(pcie, reg); >> @@ -158,7 +161,7 @@ static void cdns_pcie_ep_clear_bar(struct pci_epc >*epc, u8 fn, u8 vfn, >> reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn); >> else >> reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn); >> - b = (bar < BAR_4) ? bar : bar - BAR_4; >> + b = (bar < BAR_3) ? bar : bar - BAR_3; > >And here. 
Removed in the next patch series and will be submitted as a separate bug-fix patch > >> @@ -569,7 +572,11 @@ static int cdns_pcie_ep_start(struct pci_epc *epc) >> * BIT(0) is hardwired to 1, hence function 0 is always enabled >> * and can't be disabled anyway. >> */ >> - cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc- >>function_num_map); >> + if (pcie->is_hpa) >> + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, >> + CDNS_PCIE_HPA_LM_EP_FUNC_CFG, epc- >>function_num_map); >> + else >> + cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc- >>function_num_map); > >Sprinkling tests of "is_hpa" around is not very extensible. When the >next generation after HPA shows up, then it gets really messy. >Sometimes generation-specific function pointers can make this simpler. > The "is_hpa" tests are used in very few places in the code, where adding generation-specific functions might be overkill >> +static int cdns_pcie_hpa_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn, >> + struct pci_epf_bar *epf_bar) >> +{ >> + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); >> + struct cdns_pcie_epf *epf = &ep->epf[fn]; >> + struct cdns_pcie *pcie = &ep->pcie; >> + dma_addr_t bar_phys = epf_bar->phys_addr; >> + enum pci_barno bar = epf_bar->barno; >> + int flags = epf_bar->flags; >> + u32 addr0, addr1, reg, cfg, b, aperture, ctrl; >> + u64 sz; >> + >> + /* >> + * BAR size is 2^(aperture + 7) >> + */ >> + sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE); >> + /* > >Add blank line between code and comment. Ok > >> + * roundup_pow_of_two() returns an unsigned long, which is not >suited >> + * for 64bit values. >> + */ >> + sz = 1ULL << fls64(sz - 1); >> + /* > >Again. Check for other places in this series. > >> + * 128B -> 0, 256B -> 1, 512B -> 2, ... >> + */ >> + aperture = ilog2(sz) - 7; ^ permalink raw reply [flat|nested] 28+ messages in thread
[parent not found: <20250327111241.2948184-1-mpillai@cadence.com>]
* [PATCH 6/7] PCI: cadence: Add callback functions for Root Port and EP controller [not found] ` <20250327111241.2948184-1-mpillai@cadence.com> @ 2025-03-27 11:42 ` Manikandan Karunakaran Pillai 2025-04-09 22:45 ` Bjorn Helgaas 0 siblings, 1 reply; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-03-27 11:42 UTC (permalink / raw) To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org Add support for the second generation PCIe controller by adding the required callback functions. Update the common functions for endpoint and Root port modes. Invoke the relevant callback functions for platform probe of PCIe controller using the callback functions Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> --- .../pci/controller/cadence/pcie-cadence-ep.c | 30 +-- .../controller/cadence/pcie-cadence-host.c | 252 ++++++++++++++++-- .../controller/cadence/pcie-cadence-plat.c | 24 ++ drivers/pci/controller/cadence/pcie-cadence.c | 217 ++++++++++++++- 4 files changed, 490 insertions(+), 33 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index 1dc13e403473..d86d00ab475d 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -195,7 +195,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn, } fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); - cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); + pcie->ops->pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size); set_bit(r, &ep->ob_region_map); ep->ob_addr[r] = addr; @@ -217,7 +217,7 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn, if (r == ep->max_regions - 1) return; - cdns_pcie_reset_outbound_region(pcie, r); + 
pcie->ops->pcie_reset_outbound_region(pcie, r); ep->ob_addr[r] = 0; clear_bit(r, &ep->ob_region_map); @@ -332,8 +332,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx, if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || ep->irq_pci_fn != fn)) { /* First region was reserved for IRQ writes. */ - cdns_pcie_set_outbound_region_for_normal_msg(pcie, 0, fn, 0, - ep->irq_phys_addr); + pcie->ops->pcie_set_outbound_region_for_normal_msg(pcie, 0, fn, 0, + ep->irq_phys_addr); ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; ep->irq_pci_fn = fn; } @@ -415,11 +415,11 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || ep->irq_pci_fn != fn)) { /* First region was reserved for IRQ writes. */ - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, - false, - ep->irq_phys_addr, - pci_addr & ~pci_addr_mask, - pci_addr_mask + 1); + pcie->ops->pcie_set_outbound_region(pcie, 0, fn, 0, + false, + ep->irq_phys_addr, + pci_addr & ~pci_addr_mask, + pci_addr_mask + 1); ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); ep->irq_pci_fn = fn; } @@ -518,11 +518,11 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, if (ep->irq_pci_addr != (msg_addr & ~pci_addr_mask) || ep->irq_pci_fn != fn) { /* First region was reserved for IRQ writes. 
*/ - cdns_pcie_set_outbound_region(pcie, 0, fn, 0, - false, - ep->irq_phys_addr, - msg_addr & ~pci_addr_mask, - pci_addr_mask + 1); + pcie->ops->pcie_set_outbound_region(pcie, 0, fn, 0, + false, + ep->irq_phys_addr, + msg_addr & ~pci_addr_mask, + pci_addr_mask + 1); ep->irq_pci_addr = (msg_addr & ~pci_addr_mask); ep->irq_pci_fn = fn; } @@ -877,7 +877,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) set_bit(0, &ep->ob_region_map); if (ep->quirk_detect_quiet_flag) - cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); + pcie->ops->pcie_detect_quiet_min_delay_set(&ep->pcie); spin_lock_init(&ep->lock); diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c index 1e2df49e40c6..0aadc194014e 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-host.c +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c @@ -73,12 +73,83 @@ void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn, return rc->cfg_base + (where & 0xfff); } +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, + int where) +{ + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); + struct cdns_pcie *pcie = &rc->pcie; + unsigned int busn = bus->number; + u32 addr0, desc0, desc1, ctrl0; + u32 regval; + + if (pci_is_root_bus(bus)) { + /* + * Only the root port (devfn == 0) is connected to this bus. + * All other PCI devices are behind some bridge hence on another + * bus. + */ + if (devfn) + return NULL; + + return pcie->reg_base + (where & 0xfff); + } + + /* + * Clear AXI link-down status + */ + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, + (regval & GENMASK(0, 0))); + + desc1 = 0; + ctrl0 = 0; + /* + * Update Output registers for AXI region 0. 
+ */ + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0); + + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + /* + * The bus number was already set once for all in desc1 by + * cdns_pcie_host_init_address_translation(). + */ + if (busn == bridge->busnr + 1) + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; + else + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); + + return rc->cfg_base + (where & 0xfff); +} + static struct pci_ops cdns_pcie_host_ops = { .map_bus = cdns_pci_map_bus, .read = pci_generic_config_read, .write = pci_generic_config_write, }; +static struct pci_ops cdns_pcie_hpa_host_ops = { + .map_bus = cdns_pci_hpa_map_bus, + .read = pci_generic_config_read, + .write = pci_generic_config_write, +}; + static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie) { u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET; @@ -340,8 +411,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, */ bar = cdns_pcie_host_find_min_bar(rc, size); if (bar != RP_BAR_UNDEFINED) { - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, - size, flags); + ret = pcie->ops->pcie_host_bar_ib_config(rc, bar, cpu_addr, + size, flags); if (ret) dev_err(dev, "IB BAR: %d config failed\n", bar); return 
ret; @@ -366,8 +437,8 @@ static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc, } winsize = bar_max_size[bar]; - ret = cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize, - flags); + ret = pcie->ops->pcie_host_bar_ib_config(rc, bar, cpu_addr, winsize, + flags); if (ret) { dev_err(dev, "IB BAR: %d config failed\n", bar); return ret; @@ -408,8 +479,8 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) if (list_empty(&bridge->dma_ranges)) { of_property_read_u32(np, "cdns,no-bar-match-nbits", &no_bar_nbits); - err = cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0, - (u64)1 << no_bar_nbits, 0); + err = pcie->ops->pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0, + (u64)1 << no_bar_nbits, 0); if (err) dev_err(dev, "IB BAR: %d config failed\n", RP_NO_BAR); return err; @@ -467,17 +538,160 @@ int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) u64 pci_addr = res->start - entry->offset; if (resource_type(res) == IORESOURCE_IO) - cdns_pcie_set_outbound_region(pcie, busnr, 0, r, + pcie->ops->pcie_set_outbound_region(pcie, busnr, 0, r, + true, + pci_pio_to_address(res->start), + pci_addr, + resource_size(res)); + else + pcie->ops->pcie_set_outbound_region(pcie, busnr, 0, r, + false, + res->start, + pci_addr, + resource_size(res)); + + r++; + } + + return cdns_pcie_host_map_dma_ranges(rc); +} + +int cdns_pcie_hpa_host_init_root_port(struct cdns_pcie_rc *rc) +{ + struct cdns_pcie *pcie = &rc->pcie; + u32 value, ctrl; + u32 id; + + /* + * Set the root complex BAR configuration register: + * - disable both BAR0 and BAR1. + * - enable Prefetchable Memory Base and Limit registers in type 1 + * config space (64 bits). + * - enable IO Base and Limit registers in type 1 config + * space (32 bits). 
+ */ + + ctrl = CDNS_PCIE_HPA_LM_BAR_CFG_CTRL_DISABLED; + value = CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_ENABLE | + CDNS_PCIE_HPA_LM_RC_BAR_CFG_IO_32BITS; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, + CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); + + if (rc->device_id != 0xffff) + cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, rc->device_id); + + cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0); + cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0); + cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI); + + return 0; +} + +int cdns_pcie_hpa_host_bar_ib_config(struct cdns_pcie_rc *rc, + enum cdns_pcie_rp_bar bar, + u64 cpu_addr, u64 size, + unsigned long flags) +{ + struct cdns_pcie *pcie = &rc->pcie; + u32 addr0, addr1, aperture, value; + + if (!rc->avail_ib_bar[bar]) + return -EBUSY; + + rc->avail_ib_bar[bar] = false; + + aperture = ilog2(size); + addr0 = CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0_NBITS(aperture) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR0(bar), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_MASTER, + CDNS_PCIE_HPA_AT_IB_RP_BAR_ADDR1(bar), addr1); + + if (bar == RP_NO_BAR) + return 0; + + value = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG); + value &= ~(HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) | + HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) | + HPA_LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2)); + if (size + cpu_addr >= SZ_4G) { + if (!(flags & IORESOURCE_PREFETCH)) + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar); + value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar); + } else { + if (!(flags & 
IORESOURCE_PREFETCH)) + value |= HPA_LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar); + value |= HPA_LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar); + } + + value |= HPA_LM_RC_BAR_CFG_APERTURE(bar, aperture); + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_CFG_CTRL_REG, CDNS_PCIE_HPA_LM_RC_BAR_CFG, value); + + return 0; +} + +int cdns_pcie_hpa_host_init_address_translation(struct cdns_pcie_rc *rc) +{ + struct cdns_pcie *pcie = &rc->pcie; + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); + struct resource *cfg_res = rc->cfg_res; + struct resource_entry *entry; + u64 cpu_addr = cfg_res->start; + u32 addr0, addr1, desc1; + int r, busnr = 0; + + entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS); + if (entry) + busnr = entry->res->start; + + /* + * Reserve region 0 for PCI configure space accesses: + * OB_REGION_PCI_ADDR0 and OB_REGION_DESC0 are updated dynamically by + * cdns_pci_map_bus(), other region registers are set here once for all. + */ + addr1 = 0; + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(0), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); + + if (pcie->ops->cpu_addr_fixup) + cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(12) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(0), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(0), addr1); + + r = 1; + resource_list_for_each_entry(entry, &bridge->windows) { + struct resource *res = entry->res; + u64 pci_addr = res->start - entry->offset; + + if (resource_type(res) == IORESOURCE_IO) + pcie->ops->pcie_set_outbound_region(pcie, busnr, 0, r, true, pci_pio_to_address(res->start), pci_addr, resource_size(res)); else - 
cdns_pcie_set_outbound_region(pcie, busnr, 0, r, - false, - res->start, - pci_addr, - resource_size(res)); + pcie->ops->pcie_set_outbound_region(pcie, busnr, 0, r, + false, + res->start, + pci_addr, + resource_size(res)); r++; } @@ -489,11 +703,11 @@ int cdns_pcie_host_init(struct cdns_pcie_rc *rc) { int err; - err = cdns_pcie_host_init_root_port(rc); + err = rc->pcie.ops->pcie_host_init_root_port(rc); if (err) return err; - return cdns_pcie_host_init_address_translation(rc); + return rc->pcie.ops->pcie_host_init_address_translation(rc); } int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) @@ -503,7 +717,7 @@ int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) int ret; if (rc->quirk_detect_quiet_flag) - cdns_pcie_detect_quiet_min_delay_set(&rc->pcie); + pcie->ops->pcie_detect_quiet_min_delay_set(&rc->pcie); cdns_pcie_host_enable_ptm_response(pcie); @@ -567,8 +781,12 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) if (ret) return ret; - if (!bridge->ops) - bridge->ops = &cdns_pcie_host_ops; + if (!bridge->ops) { + if (pcie->is_hpa) + bridge->ops = &cdns_pcie_hpa_host_ops; + else + bridge->ops = &cdns_pcie_host_ops; + } ret = pci_host_probe(bridge); if (ret < 0) diff --git a/drivers/pci/controller/cadence/pcie-cadence-plat.c b/drivers/pci/controller/cadence/pcie-cadence-plat.c index e190a068b305..3f4e0bd68f11 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-plat.c +++ b/drivers/pci/controller/cadence/pcie-cadence-plat.c @@ -43,7 +43,31 @@ static u64 cdns_plat_cpu_addr_fixup(struct cdns_pcie *pcie, u64 cpu_addr) } static const struct cdns_pcie_ops cdns_plat_ops = { + .link_up = cdns_pcie_linkup, .cpu_addr_fixup = cdns_plat_cpu_addr_fixup, + .pcie_host_init_root_port = cdns_pcie_host_init_root_port, + .pcie_host_bar_ib_config = cdns_pcie_host_bar_ib_config, + .pcie_host_init_address_translation = cdns_pcie_host_init_address_translation, + .pcie_detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set, + .pcie_set_outbound_region = 
cdns_pcie_set_outbound_region, + .pcie_set_outbound_region_for_normal_msg = + cdns_pcie_set_outbound_region_for_normal_msg, + .pcie_reset_outbound_region = cdns_pcie_reset_outbound_region, +}; + +static const struct cdns_pcie_ops cdns_hpa_plat_ops = { + .start_link = cdns_pcie_hpa_startlink, + .stop_link = cdns_pcie_hpa_stop_link, + .link_up = cdns_pcie_hpa_linkup, + .cpu_addr_fixup = cdns_plat_cpu_addr_fixup, + .pcie_host_init_root_port = cdns_pcie_hpa_host_init_root_port, + .pcie_host_bar_ib_config = cdns_pcie_hpa_host_bar_ib_config, + .pcie_host_init_address_translation = cdns_pcie_hpa_host_init_address_translation, + .pcie_detect_quiet_min_delay_set = cdns_pcie_hpa_detect_quiet_min_delay_set, + .pcie_set_outbound_region = cdns_pcie_hpa_set_outbound_region, + .pcie_set_outbound_region_for_normal_msg = + cdns_pcie_hpa_set_outbound_region_for_normal_msg, + .pcie_reset_outbound_region = cdns_pcie_hpa_reset_outbound_region, }; static int cdns_plat_pcie_probe(struct platform_device *pdev) diff --git a/drivers/pci/controller/cadence/pcie-cadence.c b/drivers/pci/controller/cadence/pcie-cadence.c index 204e045aed8c..d730429ba20c 100644 --- a/drivers/pci/controller/cadence/pcie-cadence.c +++ b/drivers/pci/controller/cadence/pcie-cadence.c @@ -5,9 +5,49 @@ #include <linux/kernel.h> #include <linux/of.h> - #include "pcie-cadence.h" +bool cdns_pcie_linkup(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE); + if (pl_reg_val & GENMASK(0, 0)) + return true; + else + return false; +} + +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0); + if (pl_reg_val & GENMASK(0, 0)) + return true; + else + return false; +} + +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0); + pl_reg_val |= 
CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); + return 1; +} + +void cdns_pcie_hpa_stop_link(struct cdns_pcie *pcie) +{ + u32 pl_reg_val; + + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0); + pl_reg_val &= ~CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); +} + void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie) { u32 delay = 0x3; @@ -147,6 +187,181 @@ void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r) cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0); } +void cdns_pcie_hpa_detect_quiet_min_delay_set(struct cdns_pcie *pcie) +{ + u32 delay = 0x3; + u32 ltssm_control_cap; + + /* + * Set the LTSSM Detect Quiet state min. delay to 2ms. + */ + ltssm_control_cap = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_PHY_LAYER_CFG0); + ltssm_control_cap = ((ltssm_control_cap & + ~CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY_MASK) | + CDNS_PCIE_HPA_DETECT_QUIET_MIN_DELAY(delay)); + + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, + CDNS_PCIE_HPA_PHY_LAYER_CFG0, ltssm_control_cap); +} + +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, + u32 r, bool is_io, + u64 cpu_addr, u64 pci_addr, size_t size) +{ + /* + * roundup_pow_of_two() returns an unsigned long, which is not suited + * for 64bit values. 
+ */ + u64 sz = 1ULL << fls64(size - 1); + int nbits = ilog2(sz); + u32 addr0, addr1, desc0, desc1, ctrl0; + + if (nbits < 8) + nbits = 8; + + /* + * Set the PCI address + */ + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) | + (lower_32_bits(pci_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(pci_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1); + + /* + * Set the PCIe header descriptor + */ + if (is_io) + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO; + else + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM; + desc1 = 0; + + /* + * Whatever Bit [26] is set or not inside DESC0 register of the outbound + * PCIe descriptor, the PCI function number must be set into + * Bits [31:24] of DESC1 anyway. + * + * In Root Complex mode, the function number is always 0 but in Endpoint + * mode, the PCIe controller may support more than one function. This + * function number needs to be set properly into the outbound PCIe + * descriptor. + * + * Besides, setting Bit [26] is mandatory when in Root Complex mode: + * then the driver must provide the bus, resp. device, number in + * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function + * number, the device number is always 0 in Root Complex mode. + * + * However when in Endpoint mode, we can clear Bit [26] of DESC0, hence + * the PCIe controller will use the captured values for the bus and + * device numbers. + */ + if (pcie->is_rc) { + /* The device and function numbers are always 0. */ + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + } else { + /* + * Use captured values for bus and device numbers but still + * need to set the function number. 
+ */ + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); + } + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); + + /* + * Set the CPU address + */ + if (pcie->ops->cpu_addr_fixup) + cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); +} + +void cdns_pcie_hpa_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, + u8 busnr, u8 fn, + u32 r, u64 cpu_addr) +{ + u32 addr0, addr1, desc0, desc1, ctrl0; + + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_NORMAL_MSG; + desc1 = 0; + + /* + * See cdns_pcie_set_outbound_region() comments above. 
+ */ + if (pcie->is_rc) { + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; + } else { + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); + } + + /* + * Set the CPU address + */ + if (pcie->ops->cpu_addr_fixup) + cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); + + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0_NBITS(17) | + (lower_32_bits(cpu_addr) & GENMASK(31, 8)); + addr1 = upper_32_bits(cpu_addr); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), addr0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), addr1); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(r), ctrl0); +} + +void cdns_pcie_hpa_reset_outbound_region(struct cdns_pcie *pcie, u32 r) +{ + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), 0); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), 0); + + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR0(r), 0); + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, + CDNS_PCIE_HPA_AT_OB_REGION_CPU_ADDR1(r), 0); +} + void cdns_pcie_disable_phy(struct cdns_pcie *pcie) { int i = pcie->phy_count; -- 2.27.0 ^ 
permalink raw reply related [flat|nested] 28+ messages in thread
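[Editor's note] The size handling in cdns_pcie_hpa_set_outbound_region() above can be sanity-checked outside the kernel. The following is a minimal user-space sketch, not driver code: `region_nbits()` is a hypothetical stand-in for the patch's `1ULL << fls64(size - 1)` / `ilog2()` sequence, using the GCC/Clang builtin in place of the kernel helpers.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mirror of the patch's size handling: round the region size up to a
 * power of two and take its log2, clamping to the 8-bit (256-byte)
 * hardware minimum.  The 1ULL << fls64(size - 1) idiom is used because
 * roundup_pow_of_two() operates on unsigned long, which is only 32 bits
 * wide on some platforms.
 */
static int region_nbits(uint64_t size)
{
	if (size < 2)
		return 8;	/* fls64(0) == 0, clamped to the minimum */

	/* 64 - clzll(x) is the kernel's fls64(x) for nonzero x */
	int fls = 64 - __builtin_clzll(size - 1);

	return fls < 8 ? 8 : fls;
}
```

For a 4 KiB window this yields 12, for 4 KiB + 1 byte it rounds up to 13, and anything under 256 bytes is clamped to 8, matching the `if (nbits < 8)` check in the patch.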
* Re: [PATCH 6/7] PCI: cadence: Add callback functions for Root Port and EP controller 2025-03-27 11:42 ` [PATCH 6/7] PCI: cadence: Add callback functions for Root Port and EP controller Manikandan Karunakaran Pillai @ 2025-04-09 22:45 ` Bjorn Helgaas 2025-04-11 4:26 ` Manikandan Karunakaran Pillai 0 siblings, 1 reply; 28+ messages in thread From: Bjorn Helgaas @ 2025-04-09 22:45 UTC (permalink / raw) To: Manikandan Karunakaran Pillai Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Frank Li [+cc Frank for .cpu_addr_fixup()] On Thu, Mar 27, 2025 at 11:42:27AM +0000, Manikandan Karunakaran Pillai wrote: > Add support for the second generation PCIe controller by adding > the required callback functions. Update the common functions for > endpoint and Root port modes. Invoke the relevant callback functions > for platform probe of PCIe controller using the callback functions Pick "second generation" or "HPA" and use it consistently so we can keep this all straight. s/endpoint/Endpoint/ s/Root port/Root Port/ Add period again. > @@ -877,7 +877,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) > set_bit(0, &ep->ob_region_map); > > if (ep->quirk_detect_quiet_flag) > - cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); > + pcie->ops->pcie_detect_quiet_min_delay_set(&ep->pcie); Maybe the quirk check should go inside .pcie_detect_quiet_min_delay()? Just an idea, maybe that wouldn't help. > +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int devfn, > + int where) > +{ > + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); > + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); > + struct cdns_pcie *pcie = &rc->pcie; > + unsigned int busn = bus->number; > + u32 addr0, desc0, desc1, ctrl0; > + u32 regval; > + > + if (pci_is_root_bus(bus)) { > + /* > + * Only the root port (devfn == 0) is connected to this bus. 
> + * All other PCI devices are behind some bridge hence on another > + * bus. > + */ > + if (devfn) > + return NULL; > + > + return pcie->reg_base + (where & 0xfff); > + } > + > + /* > + * Clear AXI link-down status > + */ > + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, CDNS_PCIE_HPA_AT_LINKDOWN, > + (regval & GENMASK(0, 0))); > + > + desc1 = 0; > + ctrl0 = 0; > + /* Blank line before comment. You could make this a single-line comment, e.g., /* Update Output registers for AXI region 0. */ > + * Update Output registers for AXI region 0. > + */ > + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), addr0); > + > + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); > + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; > + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); > + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | > + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; > + /* Again. > + * The bus number was already set once for all in desc1 by > + * cdns_pcie_host_init_address_translation(). This comment sounds like you only support the root bus and a single other bus. But you're not actually setting the *bus number* here; you're setting up either a Type 0 access (for the Root Port's secondary bus) or a Type 1 access (for anything else, e.g. things below a switch). > + */ > + if (busn == bridge->busnr + 1) The Root Port's secondary bus number need not be the Root Port's bus number + 1. It *might* be, and since you said the current design only has a single Root Port, it probably *will* be, but that secondary bus number is writable and can be changed either by the PCI core or by the user via setpci. 
So you shouldn't assume this. If/when a design supports more than one Root Port, that assumption will certainly be broken. > + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; > + else > + desc0 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; > + > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); > + > + return rc->cfg_base + (where & 0xfff); > +} > +++ b/drivers/pci/controller/cadence/pcie-cadence.c > @@ -5,9 +5,49 @@ > > #include <linux/kernel.h> > #include <linux/of.h> > - Spurious change, keep this blank line. > #include "pcie-cadence.h" > > +bool cdns_pcie_linkup(struct cdns_pcie *pcie) Static unless needed elsewhere. I can't tell whether it is because I can't download or apply the whole series. > +{ > + u32 pl_reg_val; > + > + pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE); > + if (pl_reg_val & GENMASK(0, 0)) > + return true; > + else > + return false; Drop the else: if (pl_reg_val & GENMASK(0, 0)) return true; return false; > +} > + > +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie) > +{ > + u32 pl_reg_val; > + > + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_DBG_STS_REG0); > + if (pl_reg_val & GENMASK(0, 0)) > + return true; > + else > + return false; Ditto. > +} > + > +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie) s/cdns_pcie_hpa_startlink/cdns_pcie_hpa_start_link/ > +{ > + u32 pl_reg_val; > + > + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0); > + pl_reg_val |= CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; > + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); > + return 1; This should return 0 for success. 
> +} > +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, > + u32 r, bool is_io, > + u64 cpu_addr, u64 pci_addr, size_t size) > +{ > + /* > + * roundup_pow_of_two() returns an unsigned long, which is not suited > + * for 64bit values. > + */ > + u64 sz = 1ULL << fls64(size - 1); > + int nbits = ilog2(sz); > + u32 addr0, addr1, desc0, desc1, ctrl0; > + > + if (nbits < 8) > + nbits = 8; > + > + /* > + * Set the PCI address > + */ Could be a single line comment: /* Set the PCI address */ like many others in this series. > + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) | > + (lower_32_bits(pci_addr) & GENMASK(31, 8)); > + addr1 = upper_32_bits(pci_addr); > + > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), addr0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), addr1); > + > + /* > + * Set the PCIe header descriptor > + */ > + if (is_io) > + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO; > + else > + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM; > + desc1 = 0; > + > + /* > + * Whatever Bit [26] is set or not inside DESC0 register of the outbound > + * PCIe descriptor, the PCI function number must be set into > + * Bits [31:24] of DESC1 anyway. s/Whatever/Whether/ (I think) > + * In Root Complex mode, the function number is always 0 but in Endpoint > + * mode, the PCIe controller may support more than one function. This > + * function number needs to be set properly into the outbound PCIe > + * descriptor. > + * > + * Besides, setting Bit [26] is mandatory when in Root Complex mode: > + * then the driver must provide the bus, resp. device, number in > + * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function > + * number, the device number is always 0 in Root Complex mode. 
> + * > + * However when in Endpoint mode, we can clear Bit [26] of DESC0, hence > + * the PCIe controller will use the captured values for the bus and > + * device numbers. > + */ > + if (pcie->is_rc) { > + /* The device and function numbers are always 0. */ > + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) | > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); > + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | > + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; > + } else { > + /* > + * Use captured values for bus and device numbers but still > + * need to set the function number. > + */ > + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); > + } > + > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); > + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, > + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); > + > + /* > + * Set the CPU address > + */ > + if (pcie->ops->cpu_addr_fixup) > + cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); Oops, we can't add any more .cpu_addr_fixup() functions or uses. This must be done via the devicetree description. If we add a new .cpu_addr_fixup(), it may cover up defects in the devicetree. You can see Frank Li's nice work to fix this for some of the dwc drivers on the branch ending at 07ae413e169d ("PCI: intel-gw: Remove intel_pcie_cpu_addr()"): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?h=07ae413e169d This one is the biggest issue so far. Bjorn ^ permalink raw reply [flat|nested] 28+ messages in thread
* [PATCH 6/7] PCI: cadence: Add callback functions for Root Port and EP controller 2025-04-09 22:45 ` Bjorn Helgaas @ 2025-04-11 4:26 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-11 4:26 UTC (permalink / raw) To: Bjorn Helgaas Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Frank Li >[+cc Frank for .cpu_addr_fixup()] > >On Thu, Mar 27, 2025 at 11:42:27AM +0000, Manikandan Karunakaran Pillai >wrote: >> Add support for the second generation PCIe controller by adding >> the required callback functions. Update the common functions for >> endpoint and Root port modes. Invoke the relevant callback functions >> for platform probe of PCIe controller using the callback functions > >Pick "second generation" or "HPA" and use it consistently so we can >keep this all straight. > >s/endpoint/Endpoint/ >s/Root port/Root Port/ > >Add period again. Ok > >> @@ -877,7 +877,7 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep) >> set_bit(0, &ep->ob_region_map); >> >> if (ep->quirk_detect_quiet_flag) >> - cdns_pcie_detect_quiet_min_delay_set(&ep->pcie); >> + pcie->ops->pcie_detect_quiet_min_delay_set(&ep->pcie); > >Maybe the quirk check should go inside .pcie_detect_quiet_min_delay()? >Just an idea, maybe that wouldn't help. After going through the code, the check requires to be outside. > >> +void __iomem *cdns_pci_hpa_map_bus(struct pci_bus *bus, unsigned int >devfn, >> + int where) >> +{ >> + struct pci_host_bridge *bridge = pci_find_host_bridge(bus); >> + struct cdns_pcie_rc *rc = pci_host_bridge_priv(bridge); >> + struct cdns_pcie *pcie = &rc->pcie; >> + unsigned int busn = bus->number; >> + u32 addr0, desc0, desc1, ctrl0; >> + u32 regval; >> + >> + if (pci_is_root_bus(bus)) { >> + /* >> + * Only the root port (devfn == 0) is connected to this bus. 
>> + * All other PCI devices are behind some bridge hence on >another >> + * bus. >> + */ >> + if (devfn) >> + return NULL; >> + >> + return pcie->reg_base + (where & 0xfff); >> + } >> + >> + /* >> + * Clear AXI link-down status >> + */ >> + regval = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, >CDNS_PCIE_HPA_AT_LINKDOWN); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >CDNS_PCIE_HPA_AT_LINKDOWN, >> + (regval & GENMASK(0, 0))); >> + >> + desc1 = 0; >> + ctrl0 = 0; >> + /* > >Blank line before comment. You could make this a single-line comment, >e.g., > Ok > /* Update Output registers for AXI region 0. */ > >> + * Update Output registers for AXI region 0. >> + */ >> + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(12) | >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) | >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_BUS(busn); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(0), >addr0); >> + >> + desc1 = cdns_pcie_hpa_readl(pcie, REG_BANK_AXI_SLAVE, >> + >CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0)); >> + desc1 &= ~CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN_MASK; >> + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); >> + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | >> + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; >> + /* > >Again. > Ok >> + * The bus number was already set once for all in desc1 by >> + * cdns_pcie_host_init_address_translation(). > >This comment sounds like you only support the root bus and a single >other bus. But you're not actually setting the *bus number* here; >you're setting up either a Type 0 access (for the Root Port's >secondary bus) or a Type 1 access (for anything else, e.g. things >below a switch). > Removed the comment in here and in existing code for the next patch series >> + */ >> + if (busn == bridge->busnr + 1) > >The Root Port's secondary bus number need not be the Root Port's bus >number + 1. 
It *might* be, and since you said the current design only >has a single Root Port, it probably *will* be, but that secondary bus >number is writable and can be changed either by the PCI core or by the >user via setpci. So you shouldn't assume this. If/when a design >supports more than one Root Port, that assumption will certainly be >broken. > >> + desc0 |= >CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE0; >> + else >> + desc0 |= >CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_CONF_TYPE1; >> + >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(0), desc0); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(0), desc1); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_CTRL0(0), ctrl0); >> + >> + return rc->cfg_base + (where & 0xfff); >> +} > >> +++ b/drivers/pci/controller/cadence/pcie-cadence.c >> @@ -5,9 +5,49 @@ >> >> #include <linux/kernel.h> >> #include <linux/of.h> >> - > >Spurious change, keep this blank line. > Ok >> #include "pcie-cadence.h" >> >> +bool cdns_pcie_linkup(struct cdns_pcie *pcie) > >Static unless needed elsewhere. I can't tell whether it is because I >can't download or apply the whole series. Used in other file. > >> +{ >> + u32 pl_reg_val; >> + >> + pl_reg_val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_BASE); >> + if (pl_reg_val & GENMASK(0, 0)) >> + return true; >> + else >> + return false; > >Drop the else: > > if (pl_reg_val & GENMASK(0, 0)) > return true; > > return false; > >> +} >> + >> +bool cdns_pcie_hpa_linkup(struct cdns_pcie *pcie) >> +{ >> + u32 pl_reg_val; >> + >> + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, >CDNS_PCIE_HPA_PHY_DBG_STS_REG0); >> + if (pl_reg_val & GENMASK(0, 0)) >> + return true; >> + else >> + return false; > >Ditto. 
OK > >> +} >> + >> +int cdns_pcie_hpa_startlink(struct cdns_pcie *pcie) > >s/cdns_pcie_hpa_startlink/cdns_pcie_hpa_start_link/ > >> +{ >> + u32 pl_reg_val; >> + >> + pl_reg_val = cdns_pcie_hpa_readl(pcie, REG_BANK_IP_REG, >CDNS_PCIE_HPA_PHY_LAYER_CFG0); >> + pl_reg_val |= CDNS_PCIE_HPA_LINK_TRNG_EN_MASK; >> + cdns_pcie_hpa_writel(pcie, REG_BANK_IP_REG, >CDNS_PCIE_HPA_PHY_LAYER_CFG0, pl_reg_val); >> + return 1; > >This should return 0 for success. > >> +} > Ok >> +void cdns_pcie_hpa_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, >u8 fn, >> + u32 r, bool is_io, >> + u64 cpu_addr, u64 pci_addr, size_t size) >> +{ >> + /* >> + * roundup_pow_of_two() returns an unsigned long, which is not >suited >> + * for 64bit values. >> + */ >> + u64 sz = 1ULL << fls64(size - 1); >> + int nbits = ilog2(sz); >> + u32 addr0, addr1, desc0, desc1, ctrl0; >> + >> + if (nbits < 8) >> + nbits = 8; >> + >> + /* >> + * Set the PCI address >> + */ > >Could be a single line comment: > > /* Set the PCI address */ > >like many others in this series. > >> + addr0 = CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0_NBITS(nbits) | >> + (lower_32_bits(pci_addr) & GENMASK(31, 8)); >> + addr1 = upper_32_bits(pci_addr); >> + >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR0(r), >addr0); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_PCI_ADDR1(r), >addr1); >> + >> + /* >> + * Set the PCIe header descriptor >> + */ >> + if (is_io) >> + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_IO; >> + else >> + desc0 = CDNS_PCIE_HPA_AT_OB_REGION_DESC0_TYPE_MEM; >> + desc1 = 0; >> + >> + /* >> + * Whatever Bit [26] is set or not inside DESC0 register of the outbound >> + * PCIe descriptor, the PCI function number must be set into >> + * Bits [31:24] of DESC1 anyway. 
> >s/Whatever/Whether/ (I think) > Ok >> + * In Root Complex mode, the function number is always 0 but in >Endpoint >> + * mode, the PCIe controller may support more than one function. This >> + * function number needs to be set properly into the outbound PCIe >> + * descriptor. >> + * >> + * Besides, setting Bit [26] is mandatory when in Root Complex mode: >> + * then the driver must provide the bus, resp. device, number in >> + * Bits [31:24] of DESC1, resp. Bits[23:16] of DESC0. Like the function >> + * number, the device number is always 0 in Root Complex mode. >> + * >> + * However when in Endpoint mode, we can clear Bit [26] of DESC0, >hence >> + * the PCIe controller will use the captured values for the bus and >> + * device numbers. >> + */ >> + if (pcie->is_rc) { >> + /* The device and function numbers are always 0. */ >> + desc1 = CDNS_PCIE_HPA_AT_OB_REGION_DESC1_BUS(busnr) >| >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(0); >> + ctrl0 = CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_BUS | >> + > CDNS_PCIE_HPA_AT_OB_REGION_CTRL0_SUPPLY_DEV_FN; >> + } else { >> + /* >> + * Use captured values for bus and device numbers but still >> + * need to set the function number. >> + */ >> + desc1 |= CDNS_PCIE_HPA_AT_OB_REGION_DESC1_DEVFN(fn); >> + } >> + >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC0(r), desc0); >> + cdns_pcie_hpa_writel(pcie, REG_BANK_AXI_SLAVE, >> + CDNS_PCIE_HPA_AT_OB_REGION_DESC1(r), desc1); >> + >> + /* >> + * Set the CPU address >> + */ >> + if (pcie->ops->cpu_addr_fixup) >> + cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); > >Oops, we can't add any more .cpu_addr_fixup() functions or uses. This >must be done via the devicetree description. If we add a new >.cpu_addr_fixup(), it may cover up defects in the devicetree. > cpu_addr_fixup is removed. 
>You can see Frank Li's nice work to fix this for some of the dwc >drivers on the branch ending at 07ae413e169d ("PCI: intel-gw: Remove >intel_pcie_cpu_addr()"): > >https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?h=07ae413e169d > >This one is the biggest issue so far. > >Bjorn ^ permalink raw reply [flat|nested] 28+ messages in thread
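[Editor's note] The `if/else` simplification suggested in the review amounts to returning the masked bit directly; a user-space sketch of the pattern (hypothetical helper, with `0x1` standing in for `GENMASK(0, 0)`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Return the link-up bit (bit 0) of a PHY debug/status register value
 * directly, instead of an if/else that returns true or false.
 */
static bool link_up_from_status(uint32_t status)
{
	return status & 0x1;
}
```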
[parent not found: <20250327111256.2948250-1-mpillai@cadence.com>]
* [PATCH 7/7] PCI: cadence: Update support for TI J721e boards [not found] ` <20250327111256.2948250-1-mpillai@cadence.com> @ 2025-03-27 11:43 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-03-27 11:43 UTC (permalink / raw) To: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org Cc: manivannan.sadhasivam@linaro.org, robh@kernel.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org Update the support for TI J721e boards to use the updated Cadence PCIe controller code. Signed-off-by: Manikandan K Pillai <mpillai@cadence.com> --- drivers/pci/controller/cadence/pci-j721e.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c index 0341d51d6aed..118c87fb9381 100644 --- a/drivers/pci/controller/cadence/pci-j721e.c +++ b/drivers/pci/controller/cadence/pci-j721e.c @@ -164,6 +164,14 @@ static const struct cdns_pcie_ops j721e_pcie_ops = { .start_link = j721e_pcie_start_link, .stop_link = j721e_pcie_stop_link, .link_up = j721e_pcie_link_up, + .pcie_host_init_root_port = cdns_pcie_host_init_root_port, + .pcie_host_bar_ib_config = cdns_pcie_host_bar_ib_config, + .pcie_host_init_address_translation = cdns_pcie_host_init_address_translation, + .pcie_detect_quiet_min_delay_set = cdns_pcie_detect_quiet_min_delay_set, + .pcie_set_outbound_region = cdns_pcie_set_outbound_region, + .pcie_set_outbound_region_for_normal_msg = + cdns_pcie_set_outbound_region_for_normal_msg, + .pcie_reset_outbound_region = cdns_pcie_reset_outbound_region, }; static int j721e_pcie_set_mode(struct j721e_pcie *pcie, struct regmap *syscon, -- 2.27.0 ^ permalink raw reply related [flat|nested] 28+ messages in thread
* Re: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 10:59 ` [PATCH 0/7] Enhance the PCIe controller driver Manikandan Karunakaran Pillai ` (4 preceding siblings ...) [not found] ` <20250327111256.2948250-1-mpillai@cadence.com> @ 2025-03-27 12:03 ` Hans Zhang 2025-03-27 14:16 ` Krzysztof Kozlowski ` (2 subsequent siblings) 8 siblings, 0 replies; 28+ messages in thread From: Hans Zhang @ 2025-03-27 12:03 UTC (permalink / raw) To: Manikandan Karunakaran Pillai, bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org Hi Manikandan, You should update your patch to V2, not no version at all. Best regards, Hans On 2025/3/27 18:59, Manikandan Karunakaran Pillai wrote: > EXTERNAL EMAIL > > Enhances the exiting Cadence PCIe controller drivers to support second > generation PCIe controller also referred as HPA(High Performance > Architecture) controllers. > > The patch set enhances the Cadence PCIe driver for the new high > performance architecture changes. The "compatible" property in DTS > is added with more strings to support the new platform architecture > and the register maps that change with it. The driver read register > and write register functions take the updated offset stored from the > platform driver to access the registers. The driver now supports > the legacy and HPA architecture, with the legacy code being changed > minimal. The TI SoC continues to be supported with the changes > incorporated. The changes are also in tune with how multiple platforms > are supported in related drivers. 
> > Patch 1/7 - DTS related changes for property "compatible" > Patch 2/7 - Updates the header file with relevant register offsets and > bit definitions > Patch 3/7 - Platform related code changes > Patch 4/7 - PCIe EP related code changes > Patch 5/7 - Header file is updated with register offsets and updated > read and write register functions > Patch 6/7 - Support for multiple arch by using registered callbacks > Patch 7/7 - TIJ72X board is updated to use the new approach > > Comments from the earlier patch submission on the same enhancements are > taken into consideration. The previous submitted patch links is > https://lore.kernel.org/lkml/CH2PPF4D26F8E1C205166209F012D4F3A81A2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ > > The scripts/checkpatch.pl has been run on the patches with and without > --strict. With the --strict option, 4 checks are generated on 1 patch > (patch 0002 of the series), which can be ignored. There are no code > fixes required for these checks. The rest of the 'scripts/checkpatch.pl' > is clean. > > The changes are tested on TI platforms. The legacy controller changes are > tested on an TI J7200 EVM and HPA changes are planned for on an FPGA > platform available within Cadence. 
> > Manikandan K Pillai (7): > dt-bindings: pci: cadence: Extend compatible for new platform > configurations > PCI: cadence: Add header support for PCIe next generation controllers > PCI: cadence: Add platform related architecture and register > information > PCI: cadence: Add support for PCIe Endpoint HPA controllers > PCI: cadence: Update the PCIe controller register address offsets > PCI: cadence: Add callback functions for Root Port and EP controller > PCI: cadence: Update support for TI J721e boards > > .../bindings/pci/cdns,cdns-pcie-ep.yaml | 12 +- > .../bindings/pci/cdns,cdns-pcie-host.yaml | 119 +++++- > drivers/pci/controller/cadence/pci-j721e.c | 8 + > .../pci/controller/cadence/pcie-cadence-ep.c | 184 +++++++-- > .../controller/cadence/pcie-cadence-host.c | 264 ++++++++++-- > .../controller/cadence/pcie-cadence-plat.c | 145 +++++++ > drivers/pci/controller/cadence/pcie-cadence.c | 217 +++++++++- > drivers/pci/controller/cadence/pcie-cadence.h | 380 +++++++++++++++++- > 8 files changed, 1259 insertions(+), 70 deletions(-) > > -- > 2.27.0 > > ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 10:59 ` [PATCH 0/7] Enhance the PCIe controller driver Manikandan Karunakaran Pillai ` (5 preceding siblings ...) 2025-03-27 12:03 ` [PATCH 0/7] Enhance the PCIe controller driver Hans Zhang @ 2025-03-27 14:16 ` Krzysztof Kozlowski 2025-03-27 14:43 ` Manikandan Karunakaran Pillai 2025-04-09 17:08 ` manivannan.sadhasivam 2025-04-09 20:11 ` Bjorn Helgaas 8 siblings, 1 reply; 28+ messages in thread From: Krzysztof Kozlowski @ 2025-03-27 14:16 UTC (permalink / raw) To: Manikandan Karunakaran Pillai, bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org On 27/03/2025 11:59, Manikandan Karunakaran Pillai wrote: > Enhances the exiting Cadence PCIe controller drivers to support second > generation PCIe controller also referred as HPA(High Performance > Architecture) controllers. > > The patch set enhances the Cadence PCIe driver for the new high > performance architecture changes. The "compatible" property in DTS > is added with more strings to support the new platform architecture > and the register maps that change with it. The driver read register > and write register functions take the updated offset stored from the > platform driver to access the registers. The driver now supports > the legacy and HPA architecture, with the legacy code being changed > minimal. The TI SoC continues to be supported with the changes > incorporated. The changes are also in tune with how multiple platforms > are supported in related drivers. 
> > Patch 1/7 - DTS related changes for property "compatible" > Patch 2/7 - Updates the header file with relevant register offsets and > bit definitions > Patch 3/7 - Platform related code changes > Patch 4/7 - PCIe EP related code changes > Patch 5/7 - Header file is updated with register offsets and updated > read and write register functions > Patch 6/7 - Support for multiple arch by using registered callbacks > Patch 7/7 - TIJ72X board is updated to use the new approach > > Comments from the earlier patch submission on the same enhancements are > taken into consideration. The previous submitted patch links is > https://lore.kernel.org/lkml/CH2PPF4D26F8E1C205166209F012D4F3A81A2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ > > The scripts/checkpatch.pl has been run on the patches with and without > --strict. With the --strict option, 4 checks are generated on 1 patch > (patch 0002 of the series), which can be ignored. There are no code > fixes required for these checks. The rest of the 'scripts/checkpatch.pl' > is clean. > > The changes are tested on TI platforms. The legacy controller changes are > tested on an TI J7200 EVM and HPA changes are planned for on an FPGA > platform available within Cadence. > > Manikandan K Pillai (7): > dt-bindings: pci: cadence: Extend compatible for new platform > configurations > PCI: cadence: Add header support for PCIe next generation controllers > PCI: cadence: Add platform related architecture and register > information > PCI: cadence: Add support for PCIe Endpoint HPA controllers > PCI: cadence: Update the PCIe controller register address offsets > PCI: cadence: Add callback functions for Root Port and EP controller > PCI: cadence: Update support for TI J721e boards > Why your patches are not properly threaded? I see only one patch. 
Same story was for this: https://lore.kernel.org/all/CH2PPF4D26F8E1CE4E18E9CC5B8DAF724DCA2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ BTW, that's a continuation, so version your patches correctly and provide a detailed changelog. Best regards, Krzysztof ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 14:16 ` Krzysztof Kozlowski @ 2025-03-27 14:43 ` Manikandan Karunakaran Pillai 2025-03-27 14:46 ` Krzysztof Kozlowski 0 siblings, 1 reply; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-03-27 14:43 UTC (permalink / raw) To: Krzysztof Kozlowski, bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org >-----Original Message----- >From: Krzysztof Kozlowski <krzk@kernel.org> >Sent: Thursday, March 27, 2025 7:47 PM >To: Manikandan Karunakaran Pillai <mpillai@cadence.com>; >bhelgaas@google.com; lpieralisi@kernel.org; kw@linux.com; >manivannan.sadhasivam@linaro.org; robh@kernel.org; krzk+dt@kernel.org; >conor+dt@kernel.org; Milind Parab <mparab@cadence.com> >Cc: linux-pci@vger.kernel.org; devicetree@vger.kernel.org; linux- >kernel@vger.kernel.org >Subject: Re: [PATCH 0/7] Enhance the PCIe controller driver > >EXTERNAL MAIL > > >On 27/03/2025 11:59, Manikandan Karunakaran Pillai wrote: >> Enhances the existing Cadence PCIe controller drivers to support the second >> generation PCIe controllers, also referred to as HPA (High Performance >> Architecture) controllers. >> >> The scripts/checkpatch.pl has been run on the patches with and without >> --strict. With the --strict option, 4 checks are generated on 1 patch >> (patch 0002 of the series), which can be ignored. There are no code >> fixes required for these checks. The rest of the 'scripts/checkpatch.pl' >> output is clean. >> > >Why are your patches not properly threaded? I see only one patch. > I don’t have git send-email enabled from the organization and hence need to send it from Microsoft Outlook. I use git send-email to send it to my Outlook account and then forward it from there to the Linux mailing list. 
(While sending from Linux with git send-email, I am using the Message-ID of the cover letter.) Any suggestions on how to fix this? >Same story was for this: >https://lore.kernel.org/all/CH2PPF4D26F8E1CE4E18E9CC5B8DAF724DCA2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ > >BTW, that's a continuation, so version your patches correctly and provide >a detailed changelog. > >Best regards, >Krzysztof ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 14:43 ` Manikandan Karunakaran Pillai @ 2025-03-27 14:46 ` Krzysztof Kozlowski 0 siblings, 0 replies; 28+ messages in thread From: Krzysztof Kozlowski @ 2025-03-27 14:46 UTC (permalink / raw) To: Manikandan Karunakaran Pillai, bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab Cc: linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org On 27/03/2025 15:43, Manikandan Karunakaran Pillai wrote: > > >> -----Original Message----- >> From: Krzysztof Kozlowski <krzk@kernel.org> >> Sent: Thursday, March 27, 2025 7:47 PM >> To: Manikandan Karunakaran Pillai <mpillai@cadence.com>; >> bhelgaas@google.com; lpieralisi@kernel.org; kw@linux.com; >> manivannan.sadhasivam@linaro.org; robh@kernel.org; krzk+dt@kernel.org; >> conor+dt@kernel.org; Milind Parab <mparab@cadence.com> >> Cc: linux-pci@vger.kernel.org; devicetree@vger.kernel.org; linux- >> kernel@vger.kernel.org >> Subject: Re: [PATCH 0/7] Enhance the PCIe controller driver >> >> EXTERNAL MAIL >> >> >> On 27/03/2025 11:59, Manikandan Karunakaran Pillai wrote: >>> Enhances the existing Cadence PCIe controller drivers to support the second >>> generation PCIe controllers, also referred to as HPA (High Performance >>> Architecture) controllers. >>> >>> The scripts/checkpatch.pl has been run on the patches with and without >>> --strict. With the --strict option, 4 checks are generated on 1 patch >>> (patch 0002 of the series), which can be ignored. There are no code >>> fixes required for these checks. The rest of the 'scripts/checkpatch.pl' >>> output is clean. >>> >> >> Why are your patches not properly threaded? I see only one patch. >> > I don’t have git send-email enabled from the organization and hence need to send it from Microsoft Outlook. That's your problem to fix, not ours to deal with. It is really not okay to make this my problem. 
Especially since it is solvable with b4 relay. > I use git send-email to send it to my Outlook account and then forward it from there to the > Linux mailing list. (While sending from Linux with git send-email, I am using the Message-ID of the cover letter.) > Any suggestions on how to fix this? b4 relay or just use normal (non-Microsoft) systems. Where is the changelog and versioning? Best regards, Krzysztof ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 10:59 ` [PATCH 0/7] Enhance the PCIe controller driver Manikandan Karunakaran Pillai ` (6 preceding siblings ...) 2025-03-27 14:16 ` Krzysztof Kozlowski @ 2025-04-09 17:08 ` manivannan.sadhasivam 2025-04-11 4:08 ` Manikandan Karunakaran Pillai 2025-04-09 20:11 ` Bjorn Helgaas 8 siblings, 1 reply; 28+ messages in thread From: manivannan.sadhasivam @ 2025-04-09 17:08 UTC (permalink / raw) To: Manikandan Karunakaran Pillai Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org On Thu, Mar 27, 2025 at 10:59:08AM +0000, Manikandan Karunakaran Pillai wrote: > Enhances the existing Cadence PCIe controller drivers to support the second > generation PCIe controllers, also referred to as HPA (High Performance > Architecture) controllers. > > The patch set enhances the Cadence PCIe driver for the new high > performance architecture changes. The "compatible" property in DTS > is added with more strings to support the new platform architecture > and the register maps that change with it. The driver read register > and write register functions take the updated offset stored by the > platform driver to access the registers. The driver now supports > both the legacy and HPA architectures, with the legacy code changed > minimally. The TI SoC continues to be supported with the changes > incorporated. The changes are also in tune with how multiple platforms > are supported in related drivers. 
> > Patch 1/7 - DTS related changes for property "compatible" > Patch 2/7 - Updates the header file with relevant register offsets and > bit definitions > Patch 3/7 - Platform related code changes > Patch 4/7 - PCIe EP related code changes > Patch 5/7 - Header file is updated with register offsets and updated > read and write register functions > Patch 6/7 - Support for multiple arch by using registered callbacks > Patch 7/7 - TIJ72X board is updated to use the new approach This one-line patch summary is not useful. We can look into individual patches. > This series is v2. Please use a version in the subject prefix and also include the changelog section. > Comments from the earlier patch submission on the same enhancements have been > taken into consideration. The previously submitted patch link is > https://lore.kernel.org/lkml/CH2PPF4D26F8E1C205166209F012D4F3A81A2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ > This is not how you would add a changelog in the cover letter. Please read: Documentation/process/submitting-patches.rst - Mani -- மணிவண்ணன் சதாசிவம் ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 0/7] Enhance the PCIe controller driver 2025-04-09 17:08 ` manivannan.sadhasivam @ 2025-04-11 4:08 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-11 4:08 UTC (permalink / raw) To: manivannan.sadhasivam@linaro.org Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org > >EXTERNAL MAIL > > >On Thu, Mar 27, 2025 at 10:59:08AM +0000, Manikandan Karunakaran Pillai >wrote: >> Enhances the existing Cadence PCIe controller drivers to support the second >> generation PCIe controllers, also referred to as HPA (High Performance >> Architecture) controllers. >> >> The patch set enhances the Cadence PCIe driver for the new high >> performance architecture changes. The "compatible" property in DTS >> is added with more strings to support the new platform architecture >> and the register maps that change with it. The driver read register >> and write register functions take the updated offset stored by the >> platform driver to access the registers. The driver now supports >> both the legacy and HPA architectures, with the legacy code changed >> minimally. The TI SoC continues to be supported with the changes >> incorporated. The changes are also in tune with how multiple platforms >> are supported in related drivers. >> >> Patch 1/7 - DTS related changes for property "compatible" >> Patch 2/7 - Updates the header file with relevant register offsets and >> bit definitions >> Patch 3/7 - Platform related code changes >> Patch 4/7 - PCIe EP related code changes >> Patch 5/7 - Header file is updated with register offsets and updated >> read and write register functions >> Patch 6/7 - Support for multiple arch by using registered callbacks >> Patch 7/7 - TIJ72X board is updated to use the new approach > >This one-line patch summary is not useful. 
We can look into individual patches. > Will remove this one in the next submission. >> > >This series is v2. Please use a version in the subject prefix and also include the >changelog section. > Plan to send out the next patch as v3. >> Comments from the earlier patch submission on the same enhancements >have been >> taken into consideration. The previously submitted patch link is >> >https://lore.kernel.org/lkml/CH2PPF4D26F8E1C205166209F012D4F3A81A2A42@CH2PPF4D26F8E1C.namprd07.prod.outlook.com/ >> > >This is not how you would add a changelog in the cover letter. Please read: >Documentation/process/submitting-patches.rst OK > >- Mani > >-- >மணிவண்ணன் சதாசிவம் ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 0/7] Enhance the PCIe controller driver 2025-03-27 10:59 ` [PATCH 0/7] Enhance the PCIe controller driver Manikandan Karunakaran Pillai ` (7 preceding siblings ...) 2025-04-09 17:08 ` manivannan.sadhasivam @ 2025-04-09 20:11 ` Bjorn Helgaas 2025-04-11 4:10 ` Manikandan Karunakaran Pillai 8 siblings, 1 reply; 28+ messages in thread From: Bjorn Helgaas @ 2025-04-09 20:11 UTC (permalink / raw) To: Manikandan Karunakaran Pillai Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org On Thu, Mar 27, 2025 at 10:59:08AM +0000, Manikandan Karunakaran Pillai wrote: > Enhances the existing Cadence PCIe controller drivers to support second > generation PCIe controllers also referred to as HPA(High Performance > Architecture) controllers. Usual convention is to include a space before "(" in English text. Apply throughout this series in commit logs and comments. Others have mentioned the fact that we can't easily extract the patches from this posting because of the Outlook series. That also makes it harder to review because we can't apply the series and see the changes in context. Bjorn ^ permalink raw reply [flat|nested] 28+ messages in thread
* RE: [PATCH 0/7] Enhance the PCIe controller driver 2025-04-09 20:11 ` Bjorn Helgaas @ 2025-04-11 4:10 ` Manikandan Karunakaran Pillai 0 siblings, 0 replies; 28+ messages in thread From: Manikandan Karunakaran Pillai @ 2025-04-11 4:10 UTC (permalink / raw) To: Bjorn Helgaas Cc: bhelgaas@google.com, lpieralisi@kernel.org, kw@linux.com, manivannan.sadhasivam@linaro.org, robh@kernel.org, krzk+dt@kernel.org, conor+dt@kernel.org, Milind Parab, linux-pci@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org >EXTERNAL MAIL > > >On Thu, Mar 27, 2025 at 10:59:08AM +0000, Manikandan Karunakaran Pillai >wrote: >> Enhances the existing Cadence PCIe controller drivers to support second >> generation PCIe controllers also referred to as HPA(High Performance >> Architecture) controllers. > >Usual convention is to include a space before "(" in English text. >Apply throughout this series in commit logs and comments. > Ok >Others have mentioned the fact that we can't easily extract the >patches from this posting because of the Outlook series. That also >makes it harder to review because we can't apply the series and see >the changes in context. > Planning to send the patches through another Linux developer who has the necessary environment setup. >Bjorn ^ permalink raw reply [flat|nested] 28+ messages in thread
end of thread, other threads:[~2025-04-11 4:27 UTC | newest]
Thread overview: 28+ messages
[not found] <20250327105429.2947013-1-mpillai@cadence.com>
2025-03-27 10:59 ` [PATCH 0/7] Enhance the PCIe controller driver Manikandan Karunakaran Pillai
[not found] ` <20250327111106.2947888-1-mpillai@cadence.com>
2025-03-27 11:19 ` [PATCH 1/7] dt-bindings: pci: cadence: Extend compatible for new platform configurations Manikandan Karunakaran Pillai
2025-03-27 14:15 ` Krzysztof Kozlowski
2025-03-28 5:07 ` Manikandan Karunakaran Pillai
2025-03-28 7:20 ` Krzysztof Kozlowski
2025-03-28 8:22 ` Krzysztof Kozlowski
2025-03-28 8:48 ` Hans Zhang
2025-03-28 9:17 ` Krzysztof Kozlowski
2025-03-30 14:59 ` Hans Zhang
[not found] ` <20250327111127.2947944-1-mpillai@cadence.com>
2025-03-27 11:26 ` [PATCH 2/7] PCI: cadence: Add header support for PCIe next generation controllers Manikandan Karunakaran Pillai
2025-03-27 12:01 ` Hans Zhang
2025-04-09 20:39 ` Bjorn Helgaas
2025-04-11 4:16 ` Manikandan Karunakaran Pillai
[not found] ` <20250327111200.2948071-1-mpillai@cadence.com>
2025-03-27 11:40 ` [PATCH 4/7] PCI: cadence: Add support for PCIe Endpoint HPA controllers Manikandan Karunakaran Pillai
2025-04-09 22:15 ` Bjorn Helgaas
2025-04-11 4:23 ` Manikandan Karunakaran Pillai
[not found] ` <20250327111241.2948184-1-mpillai@cadence.com>
2025-03-27 11:42 ` [PATCH 6/7] PCI: cadence: Add callback functions for Root Port and EP controller Manikandan Karunakaran Pillai
2025-04-09 22:45 ` Bjorn Helgaas
2025-04-11 4:26 ` Manikandan Karunakaran Pillai
[not found] ` <20250327111256.2948250-1-mpillai@cadence.com>
2025-03-27 11:43 ` [PATCH 7/7] PCI: cadence: Update support for TI J721e boards Manikandan Karunakaran Pillai
2025-03-27 12:03 ` [PATCH 0/7] Enhance the PCIe controller driver Hans Zhang
2025-03-27 14:16 ` Krzysztof Kozlowski
2025-03-27 14:43 ` Manikandan Karunakaran Pillai
2025-03-27 14:46 ` Krzysztof Kozlowski
2025-04-09 17:08 ` manivannan.sadhasivam
2025-04-11 4:08 ` Manikandan Karunakaran Pillai
2025-04-09 20:11 ` Bjorn Helgaas
2025-04-11 4:10 ` Manikandan Karunakaran Pillai