public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH 0/2] PCI: cadence: Add 100 ms delay after link up for speeds > 5 GT/s
@ 2026-05-01 15:35 Hans Zhang
  2026-05-01 15:35 ` [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up Hans Zhang
  2026-05-01 15:35 ` [PATCH 2/2] PCI: j721e: Set max_link_speed to enable 100 ms delay " Hans Zhang
  0 siblings, 2 replies; 8+ messages in thread
From: Hans Zhang @ 2026-05-01 15:35 UTC (permalink / raw)
  To: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr
  Cc: robh, s-vadapalli, linux-omap, linux-arm-kernel, linux-pci,
	linux-kernel, Hans Zhang

As per PCIe r6.0, sec 6.6.1, with a Downstream Port that supports Link
speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms
after Link training completes before sending a Configuration Request.

The same requirement has already been addressed for the Synopsys
DesignWare PCIe controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
dw_pcie_wait_for_link() waits 100 ms after link up").

This series implements the required delay for the Cadence PCIe controller.

Patch 1 introduces a 'max_link_speed' field in struct cdns_pcie and adds
the delay logic in cdns_pcie_host_wait_for_link(). Since max_link_speed
defaults to 0, the delay is not yet triggered. This patch prepares the
infrastructure and references the DWC implementation.

Patch 2 sets the max_link_speed value in the TI J721E glue driver based
on the maximum supported link speed (obtained from the device tree
"max-link-speed" property), thereby activating the delay when the
controller supports speeds greater than 5 GT/s.

Other Cadence-based glue drivers can be updated similarly in follow-up
work.

---
Our company's product is based on the Cadence HPA IP. We encountered
enumeration failures when connecting to an NVIDIA RTX 5070 GPU and to
an NVMe SSD with a PCIe 5.0 interface. Our change is based on commit
80dc18a0cba8d ("PCI: dwc: Ensure that dw_pcie_wait_for_link() waits
100 ms after link up").
---

Hans Zhang (2):
  PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms
    after link up
  PCI: j721e: Set max_link_speed to enable 100 ms delay after link up

 drivers/pci/controller/cadence/pci-j721e.c               | 1 +
 .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
 drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
 3 files changed, 12 insertions(+)


base-commit: e75a43c7cec459a07d91ed17de4de13ede2b7758
-- 
2.34.1



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-01 15:35 [PATCH 0/2] PCI: cadence: Add 100 ms delay after link up for speeds > 5 GT/s Hans Zhang
@ 2026-05-01 15:35 ` Hans Zhang
  2026-05-02  5:18   ` Siddharth Vadapalli
  2026-05-01 15:35 ` [PATCH 2/2] PCI: j721e: Set max_link_speed to enable 100 ms delay " Hans Zhang
  1 sibling, 1 reply; 8+ messages in thread
From: Hans Zhang @ 2026-05-01 15:35 UTC (permalink / raw)
  To: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr
  Cc: robh, s-vadapalli, linux-omap, linux-arm-kernel, linux-pci,
	linux-kernel, Hans Zhang

As per PCIe r6.0, sec 6.6.1, with a Downstream Port that supports Link
speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms
after Link training completes before sending a Configuration Request.

Add a new 'max_link_speed' field in struct cdns_pcie to record the
maximum supported (or currently configured) link speed of the controller.

In cdns_pcie_host_wait_for_link(), after the link is reported as up,
insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
implements the required delay at the common Cadence host layer.

Currently max_link_speed is zero-initialized, so the delay is not yet
active. Glue drivers must set max_link_speed appropriately to enable
the delay. This matches the approach taken for the Synopsys DWC
controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
dw_pcie_wait_for_link() waits 100 ms after link up").

Signed-off-by: Hans Zhang <18255117159@163.com>
---
 .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
 drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
 2 files changed, 11 insertions(+)

diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
index 2b0211870f02..d4ae762f423f 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
@@ -14,6 +14,7 @@
 
 #include "pcie-cadence.h"
 #include "pcie-cadence-host-common.h"
+#include "../../pci.h"
 
 #define LINK_RETRAIN_TIMEOUT HZ
 
@@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
 	/* Check if the link is up or not */
 	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
 		if (pcie_link_up(pcie)) {
+			/*
+			 * As per PCIe r6.0, sec 6.6.1, with a Downstream Port
+			 * that supports Link speeds greater than 5.0 GT/s,
+			 * software must wait a minimum of 100 ms after Link
+			 * training completes before sending a Configuration Request.
+			 */
+			if (pcie->max_link_speed > 2)
+				msleep(PCIE_RESET_CONFIG_WAIT_MS);
 			dev_info(dev, "Link up\n");
 			return 0;
 		}
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 574e9cf4d003..e222b095d2b6 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -86,6 +86,7 @@ struct cdns_plat_pcie_of_data {
  * @ops: Platform-specific ops to control various inputs from Cadence PCIe
  *       wrapper
  * @cdns_pcie_reg_offsets: Register bank offsets for different SoC
+ * @max_link_speed: maximum supported link speed
  */
 struct cdns_pcie {
 	void __iomem		             *reg_base;
@@ -98,6 +99,7 @@ struct cdns_pcie {
 	struct device_link	             **link;
 	const  struct cdns_pcie_ops          *ops;
 	const  struct cdns_plat_pcie_of_data *cdns_pcie_reg_offsets;
+	int				     max_link_speed;
 };
 
 /**
-- 
2.34.1




* [PATCH 2/2] PCI: j721e: Set max_link_speed to enable 100 ms delay after link up
  2026-05-01 15:35 [PATCH 0/2] PCI: cadence: Add 100 ms delay after link up for speeds > 5 GT/s Hans Zhang
  2026-05-01 15:35 ` [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up Hans Zhang
@ 2026-05-01 15:35 ` Hans Zhang
  1 sibling, 0 replies; 8+ messages in thread
From: Hans Zhang @ 2026-05-01 15:35 UTC (permalink / raw)
  To: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr
  Cc: robh, s-vadapalli, linux-omap, linux-arm-kernel, linux-pci,
	linux-kernel, Hans Zhang

Set cdns_pcie.max_link_speed to the maximum supported link speed
(obtained from the device tree property "max-link-speed") in
j721e_pcie_set_link_speed(). This activates the post-link delay logic
added in cdns_pcie_host_wait_for_link() when the controller supports
speeds greater than 5 GT/s.

As required by PCIe r6.0 sec 6.6.1, and following the same approach as
commit 80dc18a0cba8d ("PCI: dwc: Ensure that dw_pcie_wait_for_link()
waits 100 ms after link up"), this ensures a 100 ms delay after link
training completes before any Configuration Request is sent.

Signed-off-by: Hans Zhang <18255117159@163.com>
---
 drivers/pci/controller/cadence/pci-j721e.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index bfdfe98d5aba..ee85b8e04f5b 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -206,6 +206,7 @@ static int j721e_pcie_set_link_speed(struct j721e_pcie *pcie,
 	    (pcie_get_link_speed(link_speed) == PCI_SPEED_UNKNOWN))
 		link_speed = 2;
 
+	pcie->cdns_pcie.max_link_speed = link_speed;
 	val = link_speed - 1;
 	ret = regmap_update_bits(syscon, offset, GENERATION_SEL_MASK, val);
 	if (ret)
-- 
2.34.1




* Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-01 15:35 ` [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up Hans Zhang
@ 2026-05-02  5:18   ` Siddharth Vadapalli
  2026-05-03 15:46     ` Hans Zhang
  0 siblings, 1 reply; 8+ messages in thread
From: Siddharth Vadapalli @ 2026-05-02  5:18 UTC (permalink / raw)
  To: Hans Zhang
  Cc: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr, robh,
	linux-omap, linux-arm-kernel, linux-pci, linux-kernel,
	s-vadapalli

On 01/05/26 21:05, Hans Zhang wrote:
> As per PCIe r6.0, sec 6.6.1, a Downstream Port that supports Link speeds
> greater than 5.0 GT/s, software must wait a minimum of 100 ms after Link
> training completes before sending a Configuration Request.
> 
> Add a new 'max_link_speed' field in struct cdns_pcie to record the
> maximum supported (or currently configured) link speed of the controller.
> 
> In cdns_pcie_host_wait_for_link(), after the link is reported as up,
> insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
> implements the required delay at the common Cadence host layer.
> 
> Currently max_link_speed is zero-initialized, so the delay is not yet
> active. Glue drivers must set max_link_speed appropriately to enable
> the delay. This matches the approach taken for the Synopsys DWC
> controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
> dw_pcie_wait_for_link() waits 100 ms after link up").
> 
> Signed-off-by: Hans Zhang <18255117159@163.com>
> ---
>   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
>   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
>   2 files changed, 11 insertions(+)
> 
> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> index 2b0211870f02..d4ae762f423f 100644
> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> @@ -14,6 +14,7 @@
>   
>   #include "pcie-cadence.h"
>   #include "pcie-cadence-host-common.h"
> +#include "../../pci.h"
>   
>   #define LINK_RETRAIN_TIMEOUT HZ
>   
> @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
>   	/* Check if the link is up or not */
>   	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
>   		if (pcie_link_up(pcie)) {
> +			/*
> +			 * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
> +			 * supports Link speeds greater than 5.0 GT/s, software
> +			 * must wait a minimum of 100 ms after Link training
> +			 * completes before sending a Configuration Request.
> +			 */
> +			if (pcie->max_link_speed > 2)
> +				msleep(PCIE_RESET_CONFIG_WAIT_MS);

I think the above could be moved to cdns_pcie_host_start_link() as follows:

diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c 
b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
index 2b0211870f02..0f885dcbdb12 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
@@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
  	if (!ret && rc->quirk_retrain_flag)
  		ret = cdns_pcie_retrain(pcie, pcie_link_up);

+	/*
+	 * As per PCIe r6.0, sec 6.6.1, with a Downstream Port
+	 * that supports Link speeds greater than 5.0 GT/s,
+	 * software must wait a minimum of 100 ms after Link
+	 * training completes before sending a Configuration Request.
+	 */
+	if (!ret && pcie->max_link_speed > 2)
+		msleep(PCIE_RESET_CONFIG_WAIT_MS);
+
  	return ret;
  }
  EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);

This will avoid an additional and unnecessary delay when 
'cdns_pcie_retrain()' retrains the link.

Instead of checking for the link being up using "pcie_link_up(pcie)", 
checking for 'ret' being zero should also work (ret being zero indicates 
that the link is up).

Since configuration space accesses will not be performed until 
cdns_pcie_host_start_link() completes executing, it should be safe to 
switch to the above implementation.


>   			dev_info(dev, "Link up\n");
>   			return 0;
>   		}

[TRIMMED]

Regards,
Siddharth.



* Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-02  5:18   ` Siddharth Vadapalli
@ 2026-05-03 15:46     ` Hans Zhang
  2026-05-04  5:08       ` Siddharth Vadapalli
  0 siblings, 1 reply; 8+ messages in thread
From: Hans Zhang @ 2026-05-03 15:46 UTC (permalink / raw)
  To: Siddharth Vadapalli
  Cc: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr, robh,
	linux-omap, linux-arm-kernel, linux-pci, linux-kernel



On 5/2/26 13:18, Siddharth Vadapalli wrote:
> On 01/05/26 21:05, Hans Zhang wrote:
>> As per PCIe r6.0, sec 6.6.1, a Downstream Port that supports Link speeds
>> greater than 5.0 GT/s, software must wait a minimum of 100 ms after Link
>> training completes before sending a Configuration Request.
>>
>> Add a new 'max_link_speed' field in struct cdns_pcie to record the
>> maximum supported (or currently configured) link speed of the controller.
>>
>> In cdns_pcie_host_wait_for_link(), after the link is reported as up,
>> insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
>> implements the required delay at the common Cadence host layer.
>>
>> Currently max_link_speed is zero-initialized, so the delay is not yet
>> active. Glue drivers must set max_link_speed appropriately to enable
>> the delay. This matches the approach taken for the Synopsys DWC
>> controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
>> dw_pcie_wait_for_link() waits 100 ms after link up").
>>
>> Signed-off-by: Hans Zhang <18255117159@163.com>
>> ---
>>   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
>>   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
>>   2 files changed, 11 insertions(+)
>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c 
>> b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> index 2b0211870f02..d4ae762f423f 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> @@ -14,6 +14,7 @@
>>   #include "pcie-cadence.h"
>>   #include "pcie-cadence-host-common.h"
>> +#include "../../pci.h"
>>   #define LINK_RETRAIN_TIMEOUT HZ
>> @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie 
>> *pcie,
>>       /* Check if the link is up or not */
>>       for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
>>           if (pcie_link_up(pcie)) {
>> +            /*
>> +             * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>> +             * supports Link speeds greater than 5.0 GT/s, software
>> +             * must wait a minimum of 100 ms after Link training
>> +             * completes before sending a Configuration Request.
>> +             */
>> +            if (pcie->max_link_speed > 2)
>> +                msleep(PCIE_RESET_CONFIG_WAIT_MS);
> 
> I think the above could be moved to cdns_pcie_host_start_link() as follows:
> 
> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c 
> b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> index 2b0211870f02..0f885dcbdb12 100644
> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> @@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
>       if (!ret && rc->quirk_retrain_flag)
>           ret = cdns_pcie_retrain(pcie, pcie_link_up);
> 
> +    /*
> +     * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
> +     * supports Link speeds greater than 5.0 GT/s, software
> +     * must wait a minimum of 100 ms after Link training
> +     * completes before sending a Configuration Request.
> +     */
> +    if (!ret && pcie->max_link_speed > 2)
> +        msleep(PCIE_RESET_CONFIG_WAIT_MS);
> +
>       return ret;
>   }
>   EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
> 
> This will avoid an additional and unnecessary delay when 
> 'cdns_pcie_retrain()' retrains the link.
> 
> Instead of checking for the link being up using "pcie_link_up(pcie)", 
> checking for 'ret' being zero should also work (ret being zero indicates 
> that the link is up).
> 
> Since configuration space accesses will not be performed until 
> cdns_pcie_host_start_link() completes executing, it should be safe to 
> switch to the above implementation.

Hi Siddharth,

I think the method you mentioned is applicable to the LGA IP. For the
HPA IP, however, additional repetitive code would need to be added, as
in the following diff.

Regarding the "quirk_retrain_flag" flag: I reviewed the commit that
introduced it, and it appears to be a workaround. Can it be considered
non-universal? Or is the same handling also needed in the HPA?

diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c 
b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
index 0f540bed58e8..65159f52067d 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
@@ -305,6 +305,15 @@ int cdns_pcie_hpa_host_link_setup(struct 
cdns_pcie_rc *rc)
         if (ret)
                 dev_dbg(dev, "PCIe link never came up\n");

+       /*
+        * As per PCIe r6.0, sec 6.6.1, with a Downstream Port
+        * that supports Link speeds greater than 5.0 GT/s,
+        * software must wait a minimum of 100 ms after Link
+        * training completes before sending a Configuration Request.
+        */
+       if (pcie->max_link_speed > 2)
+               msleep(PCIE_RESET_CONFIG_WAIT_MS);
+
         return ret;
  }
  EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_link_setup);

Best regards,
Hans
> 
> 
>>               dev_info(dev, "Link up\n");
>>               return 0;
>>           }
> 
> [TRIMMED]
> 
> Regards,
> Siddharth.




* Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-03 15:46     ` Hans Zhang
@ 2026-05-04  5:08       ` Siddharth Vadapalli
  2026-05-04  6:23         ` Hans Zhang
  0 siblings, 1 reply; 8+ messages in thread
From: Siddharth Vadapalli @ 2026-05-04  5:08 UTC (permalink / raw)
  To: Hans Zhang
  Cc: s-vadapalli, bhelgaas, lpieralisi, kwilczynski, mani, vigneshr,
	robh, linux-omap, linux-arm-kernel, linux-pci, linux-kernel

On 03/05/26 21:16, Hans Zhang wrote:
> 
> 
> On 5/2/26 13:18, Siddharth Vadapalli wrote:
>> On 01/05/26 21:05, Hans Zhang wrote:
>>> As per PCIe r6.0, sec 6.6.1, a Downstream Port that supports Link speeds
>>> greater than 5.0 GT/s, software must wait a minimum of 100 ms after Link
>>> training completes before sending a Configuration Request.
>>>
>>> Add a new 'max_link_speed' field in struct cdns_pcie to record the
>>> maximum supported (or currently configured) link speed of the controller.
>>>
>>> In cdns_pcie_host_wait_for_link(), after the link is reported as up,
>>> insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
>>> implements the required delay at the common Cadence host layer.
>>>
>>> Currently max_link_speed is zero-initialized, so the delay is not yet
>>> active. Glue drivers must set max_link_speed appropriately to enable
>>> the delay. This matches the approach taken for the Synopsys DWC
>>> controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
>>> dw_pcie_wait_for_link() waits 100 ms after link up").
>>>
>>> Signed-off-by: Hans Zhang <18255117159@163.com>
>>> ---
>>>   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
>>>   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
>>>   2 files changed, 11 insertions(+)
>>>
>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c 
>>> b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> index 2b0211870f02..d4ae762f423f 100644
>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> @@ -14,6 +14,7 @@
>>>   #include "pcie-cadence.h"
>>>   #include "pcie-cadence-host-common.h"
>>> +#include "../../pci.h"
>>>   #define LINK_RETRAIN_TIMEOUT HZ
>>> @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
>>>       /* Check if the link is up or not */
>>>       for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
>>>           if (pcie_link_up(pcie)) {
>>> +            /*
>>> +             * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>>> +             * supports Link speeds greater than 5.0 GT/s, software
>>> +             * must wait a minimum of 100 ms after Link training
>>> +             * completes before sending a Configuration Request.
>>> +             */
>>> +            if (pcie->max_link_speed > 2)
>>> +                msleep(PCIE_RESET_CONFIG_WAIT_MS);
>>
>> I think the above could be moved to cdns_pcie_host_start_link() as follows:
>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/ 
>> drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> index 2b0211870f02..0f885dcbdb12 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>> @@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
>>       if (!ret && rc->quirk_retrain_flag)
>>           ret = cdns_pcie_retrain(pcie, pcie_link_up);
>>
>> +    /*
>> +     * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>> +     * supports Link speeds greater than 5.0 GT/s, software
>> +     * must wait a minimum of 100 ms after Link training
>> +     * completes before sending a Configuration Request.
>> +     */
>> +    if (!ret && pcie->max_link_speed > 2)
>> +        msleep(PCIE_RESET_CONFIG_WAIT_MS);
>> +
>>       return ret;
>>   }
>>   EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
>>
>> This will avoid an additional and unnecessary delay when 
>> 'cdns_pcie_retrain()' retrains the link.
>>
>> Instead of checking for the link being up using "pcie_link_up(pcie)", 
>> checking for 'ret' being zero should also work (ret being zero indicates 
>> that the link is up).
>>
>> Since configuration space accesses will not be performed until 
>> cdns_pcie_host_start_link() completes executing, it should be safe to 
>> switch to the above implementation.
> 
> Hi Siddharth,
> 
> I think this is applicable to LGA IP as per the method you mentioned. 
> However, for HPA IP, additional repetitive code needs to be added in the 
> following code.

Yes, additional code is required as you rightly pointed out, but the 
problem I was trying to address with your patch is the following:
	cdns_pcie_host_start_link()
	  calls cdns_pcie_host_wait_for_link()
		Link is Up and we wait for 100 ms here
	  calls cdns_pcie_retrain()
		  calls cdns_pcie_host_wait_for_link() a second time
			Link is Up again after retraining and we wait
			an additional 100 ms here.

Instead, it will be sufficient if we could wait just once after 
cdns_pcie_retrain() returns.

> 
> Regarding the "quirk_retrain_flag" tag, I reviewed this submission record 
> and it appears to be a workaround method. Can it be considered that it is 
> not a universal method? Or is the same processing logic also added in the HPA?

I am not sure, but to the best of my knowledge, the quirk is not applicable 
to HPA.

> 
> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c b/ 
> drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
> index 0f540bed58e8..65159f52067d 100644
> --- a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
> @@ -305,6 +305,15 @@ int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc 
> *rc)
>          if (ret)
>                  dev_dbg(dev, "PCIe link never came up\n");
> 
> +       /*
> +        * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
> +        * supports Link speeds greater than 5.0 GT/s, software
> +        * must wait a minimum of 100 ms after Link training
> +        * completes before sending a Configuration Request.
> +        */
> +       if (pcie->max_link_speed > 2)
> +               msleep(PCIE_RESET_CONFIG_WAIT_MS);
> +
>          return ret;
>   }
>   EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_link_setup);
> 
> Best regards,
> Hans

[TRIMMED]

Regards,
Siddharth.



* Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-04  5:08       ` Siddharth Vadapalli
@ 2026-05-04  6:23         ` Hans Zhang
  2026-05-04 16:22           ` Bjorn Helgaas
  0 siblings, 1 reply; 8+ messages in thread
From: Hans Zhang @ 2026-05-04  6:23 UTC (permalink / raw)
  To: Siddharth Vadapalli
  Cc: bhelgaas, lpieralisi, kwilczynski, mani, vigneshr, robh,
	linux-omap, linux-arm-kernel, linux-pci, linux-kernel



On 5/4/26 13:08, Siddharth Vadapalli wrote:
> On 03/05/26 21:16, Hans Zhang wrote:
>>
>>
>> On 5/2/26 13:18, Siddharth Vadapalli wrote:
>>> On 01/05/26 21:05, Hans Zhang wrote:
>>>> As per PCIe r6.0, sec 6.6.1, a Downstream Port that supports Link 
>>>> speeds
>>>> greater than 5.0 GT/s, software must wait a minimum of 100 ms after 
>>>> Link
>>>> training completes before sending a Configuration Request.
>>>>
>>>> Add a new 'max_link_speed' field in struct cdns_pcie to record the
>>>> maximum supported (or currently configured) link speed of the 
>>>> controller.
>>>>
>>>> In cdns_pcie_host_wait_for_link(), after the link is reported as up,
>>>> insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
>>>> implements the required delay at the common Cadence host layer.
>>>>
>>>> Currently max_link_speed is zero-initialized, so the delay is not yet
>>>> active. Glue drivers must set max_link_speed appropriately to enable
>>>> the delay. This matches the approach taken for the Synopsys DWC
>>>> controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
>>>> dw_pcie_wait_for_link() waits 100 ms after link up").
>>>>
>>>> Signed-off-by: Hans Zhang <18255117159@163.com>
>>>> ---
>>>>   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++ 
>>>> ++++
>>>>   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
>>>>   2 files changed, 11 insertions(+)
>>>>
>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host- 
>>>> common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> index 2b0211870f02..d4ae762f423f 100644
>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> @@ -14,6 +14,7 @@
>>>>   #include "pcie-cadence.h"
>>>>   #include "pcie-cadence-host-common.h"
>>>> +#include "../../pci.h"
>>>>   #define LINK_RETRAIN_TIMEOUT HZ
>>>> @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie 
>>>> *pcie,
>>>>       /* Check if the link is up or not */
>>>>       for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
>>>>           if (pcie_link_up(pcie)) {
>>>> +            /*
>>>> +             * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>>>> +             * supports Link speeds greater than 5.0 GT/s, software
>>>> +             * must wait a minimum of 100 ms after Link training
>>>> +             * completes before sending a Configuration Request.
>>>> +             */
>>>> +            if (pcie->max_link_speed > 2)
>>>> +                msleep(PCIE_RESET_CONFIG_WAIT_MS);
>>>
>>> I think the above could be moved to cdns_pcie_host_start_link() as 
>>> follows:
>>>
>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host- 
>>> common.c b/ drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> index 2b0211870f02..0f885dcbdb12 100644
>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> @@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct 
>>> cdns_pcie_rc *rc,
>>>       if (!ret && rc->quirk_retrain_flag)
>>>           ret = cdns_pcie_retrain(pcie, pcie_link_up);
>>>
>>> +    /*
>>> +     * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>>> +     * supports Link speeds greater than 5.0 GT/s, software
>>> +     * must wait a minimum of 100 ms after Link training
>>> +     * completes before sending a Configuration Request.
>>> +     */
>>> +    if (!ret && pcie->max_link_speed > 2)
>>> +        msleep(PCIE_RESET_CONFIG_WAIT_MS);
>>> +
>>>       return ret;
>>>   }
>>>   EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
>>>
>>> This will avoid an additional and unnecessary delay when 
>>> 'cdns_pcie_retrain()' retrains the link.
>>>
>>> Instead of checking for the link being up using "pcie_link_up(pcie)", 
>>> checking for 'ret' being zero should also work (ret being zero 
>>> indicates that the link is up).
>>>
>>> Since configuration space accesses will not be performed until 
>>> cdns_pcie_host_start_link() completes executing, it should be safe to 
>>> switch to the above implementation.
>>
>> Hi Siddharth,
>>
>> I think this is applicable to LGA IP as per the method you mentioned. 
>> However, for HPA IP, additional repetitive code needs to be added in 
>> the following code.
> 
> Yes, additional code is required as you rightly pointed out, but the 
> problem I was trying to address with your patch is the following:
>      cdns_pcie_host_start_link()
>        calls cdns_pcie_host_wait_for_link()
>          Link is Up and we wait for 100 ms here
>        calls cdns_pcie_retrain()
>            calls cdns_pcie_host_wait_for_link() a second time
>              Link is Up again after retraining and we wait
>              an additional 100 ms here.
> 
> Instead, it will be sufficient if we could wait just once after 
> cdns_pcie_retrain() returns.

Hi Siddharth,

Yes, I looked at the code and indeed it works this way.

Because of the abundance of redundant comments, I'm wondering whether
we could encapsulate a helper function in
drivers/pci/controller/pci-host-common.c so that controller drivers
like dwc and cadence could call this API. Or do you know where it
would be appropriate to place it?

Hello Bjorn and Mani, what are your opinions on this?

> 
>>
>> Regarding the "quirk_retrain_flag" tag, I reviewed this submission 
>> record and it appears to be a workaround method. Can it be considered 
>> that it is not a universal method? Or is the same processing logic 
>> also added in the HPA?
> 
> I am not sure, but to the best of my knowledge, the quirk is not 
> applicable to HPA.
> 

Our company's SoC uses the HPA IP, and it doesn't require this flag.

Best regards,
Hans

>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c b/ 
>> drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> index 0f540bed58e8..65159f52067d 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> @@ -305,6 +305,15 @@ int cdns_pcie_hpa_host_link_setup(struct 
>> cdns_pcie_rc *rc)
>>          if (ret)
>>                  dev_dbg(dev, "PCIe link never came up\n");
>>
>> +       /*
>> +        * As per PCIe r6.0, sec 6.6.1, a Downstream Port that
>> +        * supports Link speeds greater than 5.0 GT/s, software
>> +        * must wait a minimum of 100 ms after Link training
>> +        * completes before sending a Configuration Request.
>> +        */
>> +       if (pcie->max_link_speed > 2)
>> +               msleep(PCIE_RESET_CONFIG_WAIT_MS);
>> +
>>          return ret;
>>   }
>>   EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_link_setup);
>>
>> Best regards,
>> Hans
> 
> [TRIMMED]
> 
> Regards,
> Siddharth.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
  2026-05-04  6:23         ` Hans Zhang
@ 2026-05-04 16:22           ` Bjorn Helgaas
  0 siblings, 0 replies; 8+ messages in thread
From: Bjorn Helgaas @ 2026-05-04 16:22 UTC (permalink / raw)
  To: Hans Zhang
  Cc: Siddharth Vadapalli, bhelgaas, lpieralisi, kwilczynski, mani,
	vigneshr, robh, linux-omap, linux-arm-kernel, linux-pci,
	linux-kernel

On Mon, May 04, 2026 at 02:23:34PM +0800, Hans Zhang wrote:
> On 5/4/26 13:08, Siddharth Vadapalli wrote:
> > On 03/05/26 21:16, Hans Zhang wrote:
> > > On 5/2/26 13:18, Siddharth Vadapalli wrote:
> > > > On 01/05/26 21:05, Hans Zhang wrote:
> > > > > As per PCIe r6.0, sec 6.6.1, for a Downstream Port that supports
> > > > > Link speeds greater than 5.0 GT/s, software must wait a minimum
> > > > > of 100 ms after Link training completes before sending a
> > > > > Configuration Request.
> > > > > 
> > > > > Add a new 'max_link_speed' field in struct cdns_pcie to record
> > > > > the maximum supported (or currently configured) link speed of
> > > > > the controller.
> > > > > 
> > > > > In cdns_pcie_host_wait_for_link(), after the link is reported as up,
> > > > > insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
> > > > > implements the required delay at the common Cadence host layer.
> > > > > 
> > > > > Currently max_link_speed is zero-initialized, so the delay is not yet
> > > > > active. Glue drivers must set max_link_speed appropriately to enable
> > > > > the delay. This matches the approach taken for the Synopsys DWC
> > > > > controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
> > > > > dw_pcie_wait_for_link() waits 100 ms after link up").
> > > > > 
> > > > > Signed-off-by: Hans Zhang <18255117159@163.com>
> > > > > ---
> > > > >   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
> > > > >   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
> > > > >   2 files changed, 11 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > > index 2b0211870f02..d4ae762f423f 100644
> > > > > --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > > +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > > @@ -14,6 +14,7 @@
> > > > >   #include "pcie-cadence.h"
> > > > >   #include "pcie-cadence-host-common.h"
> > > > > +#include "../../pci.h"
> > > > >   #define LINK_RETRAIN_TIMEOUT HZ
> > > > > @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
> > > > >       /* Check if the link is up or not */
> > > > >       for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
> > > > >           if (pcie_link_up(pcie)) {
> > > > > +            /*
> > > > > +             * As per PCIe r6.0, sec 6.6.1, for a Downstream Port
> > > > > +             * that supports Link speeds greater than 5.0 GT/s,
> > > > > +             * software must wait a minimum of 100 ms after Link
> > > > > +             * training completes before sending a Configuration Request.
> > > > > +             */
> > > > > +            if (pcie->max_link_speed > 2)
> > > > > +                msleep(PCIE_RESET_CONFIG_WAIT_MS);
> > > > 
> > > > I think the above could be moved to cdns_pcie_host_start_link()
> > > > as follows:
> > > > 
> > > > diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > index 2b0211870f02..0f885dcbdb12 100644
> > > > --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
> > > > @@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
> > > >       if (!ret && rc->quirk_retrain_flag)
> > > >           ret = cdns_pcie_retrain(pcie, pcie_link_up);
> > > > 
> > > > +    /*
> > > > +     * As per PCIe r6.0, sec 6.6.1, for a Downstream Port that
> > > > +     * supports Link speeds greater than 5.0 GT/s, software
> > > > +     * must wait a minimum of 100 ms after Link training
> > > > +     * completes before sending a Configuration Request.
> > > > +     */
> > > > +    if (!ret && pcie->max_link_speed > 2)
> > > > +        msleep(PCIE_RESET_CONFIG_WAIT_MS);
> > > > +
> > > >       return ret;
> > > >   }
> > > >   EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
> > > > 
> > > > This will avoid an additional and unnecessary delay when
> > > > 'cdns_pcie_retrain()' retrains the link.
> > > > 
> > > > Instead of checking for the link being up using
> > > > "pcie_link_up(pcie)", checking for 'ret' being zero should also
> > > > work (ret being zero indicates that the link is up).
> > > > 
> > > > Since configuration space accesses will not be performed until
> > > > cdns_pcie_host_start_link() completes executing, it should be
> > > > safe to switch to the above implementation.
> > > 
> > > Hi Siddharth,
> > > 
> > > I think the method you mentioned is applicable to the LGA IP.
> > > However, for the HPA IP, additional repetitive code would need to
> > > be added.
> > 
> > Yes, additional code is required as you rightly pointed out, but the
> > problem I was trying to address with your patch is the following:
> >      cdns_pcie_host_start_link()
> >        calls cdns_pcie_host_wait_for_link()
> >          Link is Up and we wait for 100 ms here
> >        calls cdns_pcie_retrain()
> >            calls cdns_pcie_host_wait_for_link() a second time
> >              Link is Up again after retraining and we wait an
> >              additional 100 ms here.
> > 
> > Instead, it would be sufficient to wait just once after
> > cdns_pcie_retrain() returns.
> 
> Hi Siddharth,
> 
> Yes, I looked at the code and indeed it works this way.
> 
> Given the abundance of redundant comments, I'm wondering whether it
> would be possible to encapsulate a helper function in
> drivers/pci/controller/pci-host-common.c, so that controller drivers
> like dwc and cadence could call this API. Or do you know where it would
> be appropriate to place it?
> 
> Hello, Bjorn and Mani, I wonder what your opinions are.

Make a proposal.  Sounds fine to remove redundant comments if they
cause confusion.  Adding a helper to make things more consistent
across drivers also sounds fine, but it would be better to have a
straw-man proposal to respond to.

Bjorn


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-05-04 16:22 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-05-01 15:35 [PATCH 0/2] PCI: cadence: Add 100 ms delay after link up for speeds > 5 GT/s Hans Zhang
2026-05-01 15:35 ` [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up Hans Zhang
2026-05-02  5:18   ` Siddharth Vadapalli
2026-05-03 15:46     ` Hans Zhang
2026-05-04  5:08       ` Siddharth Vadapalli
2026-05-04  6:23         ` Hans Zhang
2026-05-04 16:22           ` Bjorn Helgaas
2026-05-01 15:35 ` [PATCH 2/2] PCI: j721e: Set max_link_speed to enable 100 ms delay " Hans Zhang
