public inbox for linux-arm-kernel@lists.infradead.org
From: Hans Zhang <18255117159@163.com>
To: Siddharth Vadapalli <s-vadapalli@ti.com>
Cc: bhelgaas@google.com, lpieralisi@kernel.org,
	kwilczynski@kernel.org, mani@kernel.org, vigneshr@ti.com,
	robh@kernel.org, linux-omap@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up
Date: Mon, 4 May 2026 14:23:34 +0800	[thread overview]
Message-ID: <ce5574b7-1c0d-4b46-b92e-e8f7ddbd8ef0@163.com> (raw)
In-Reply-To: <a3093010-d120-4c5d-b272-d8cb3884f0b3@ti.com>



On 5/4/26 13:08, Siddharth Vadapalli wrote:
> On 03/05/26 21:16, Hans Zhang wrote:
>>
>>
>> On 5/2/26 13:18, Siddharth Vadapalli wrote:
>>> On 01/05/26 21:05, Hans Zhang wrote:
>>>> As per PCIe r6.0, sec 6.6.1, for a Downstream Port that supports Link
>>>> speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms
>>>> after Link training completes before sending a Configuration Request.
>>>>
>>>> Add a new 'max_link_speed' field in struct cdns_pcie to record the
>>>> maximum supported (or currently configured) link speed of the controller.
>>>>
>>>> In cdns_pcie_host_wait_for_link(), after the link is reported as up,
>>>> insert a 100 ms delay if max_link_speed > 2 (i.e., > 5 GT/s). This
>>>> implements the required delay at the common Cadence host layer.
>>>>
>>>> Currently max_link_speed is zero-initialized, so the delay is not yet
>>>> active. Glue drivers must set max_link_speed appropriately to enable
>>>> the delay. This matches the approach taken for the Synopsys DWC
>>>> controller in commit 80dc18a0cba8d ("PCI: dwc: Ensure that
>>>> dw_pcie_wait_for_link() waits 100 ms after link up").
>>>>
>>>> Signed-off-by: Hans Zhang <18255117159@163.com>
>>>> ---
>>>>   .../pci/controller/cadence/pcie-cadence-host-common.c    | 9 +++++++++
>>>>   drivers/pci/controller/cadence/pcie-cadence.h            | 2 ++
>>>>   2 files changed, 11 insertions(+)
>>>>
>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> index 2b0211870f02..d4ae762f423f 100644
>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>>> @@ -14,6 +14,7 @@
>>>>   #include "pcie-cadence.h"
>>>>   #include "pcie-cadence-host-common.h"
>>>> +#include "../../pci.h"
>>>>   #define LINK_RETRAIN_TIMEOUT HZ
>>>> @@ -55,6 +56,14 @@ int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie,
>>>>       /* Check if the link is up or not */
>>>>       for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
>>>>           if (pcie_link_up(pcie)) {
>>>> +            /*
>>>> +             * As per PCIe r6.0, sec 6.6.1, for a Downstream Port
>>>> +             * that supports Link speeds greater than 5.0 GT/s, software
>>>> +             * must wait a minimum of 100 ms after Link training
>>>> +             * completes before sending a Configuration Request.
>>>> +             */
>>>> +            if (pcie->max_link_speed > 2)
>>>> +                msleep(PCIE_RESET_CONFIG_WAIT_MS);
>>>
>>> I think the above could be moved to cdns_pcie_host_start_link() as 
>>> follows:
>>>
>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-common.c b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> index 2b0211870f02..0f885dcbdb12 100644
>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-common.c
>>> @@ -115,6 +115,15 @@ int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc,
>>>       if (!ret && rc->quirk_retrain_flag)
>>>           ret = cdns_pcie_retrain(pcie, pcie_link_up);
>>>
>>> +    /*
>>> +     * As per PCIe r6.0, sec 6.6.1, for a Downstream Port
>>> +     * that supports Link speeds greater than 5.0 GT/s, software
>>> +     * must wait a minimum of 100 ms after Link training
>>> +     * completes before sending a Configuration Request.
>>> +     */
>>> +    if (!ret && pcie->max_link_speed > 2)
>>> +        msleep(PCIE_RESET_CONFIG_WAIT_MS);
>>> +
>>>       return ret;
>>>   }
>>>   EXPORT_SYMBOL_GPL(cdns_pcie_host_start_link);
>>>
>>> This will avoid an additional and unnecessary delay when 
>>> 'cdns_pcie_retrain()' retrains the link.
>>>
>>> Instead of checking for the link being up using "pcie_link_up(pcie)", 
>>> checking for 'ret' being zero should also work (ret being zero 
>>> indicates that the link is up).
>>>
>>> Since configuration space accesses will not be performed until 
>>> cdns_pcie_host_start_link() completes executing, it should be safe to 
>>> switch to the above implementation.
>>
>> Hi Siddharth,
>>
>> I think the method you mentioned is applicable to the LGA IP. However,
>> for the HPA IP, additional repetitive code would need to be added, as
>> shown in the diff below.
> 
> Yes, additional code is required as you rightly pointed out, but the 
> problem I was trying to address with your patch is the following:
>      cdns_pcie_host_start_link()
>        calls cdns_pcie_host_wait_for_link()
>          Link is Up and we wait for 100 ms here
>        calls cdns_pcie_retrain()
>            calls cdns_pcie_host_wait_for_link() a second time
>              Link is Up again after retraining and we wait an
>              additional 100 ms here.
> 
> Instead, it will be sufficient if we could wait just once after 
> cdns_pcie_retrain() returns.

Hi Siddharth,

Yes, I looked at the code and indeed it works this way.

Given the abundance of redundant comments, I'm wondering whether it's 
possible to encapsulate a helper function in the file 
drivers/pci/controller/pci-host-common.c, so that controller drivers 
like dwc and cadence can call a common API. Or do you know where it 
would be appropriate to place it?

Hello, Bjorn and Mani, I wonder what your opinions are.

> 
>>
>> Regarding the "quirk_retrain_flag", I reviewed the original submission 
>> and it appears to be a workaround. Can it be considered non-universal? 
>> Or does the same processing logic also need to be added for the HPA?
> 
> I am not sure, but to the best of my knowledge, the quirk is not 
> applicable to HPA.
> 

Our company's SoC uses the HPA IP, and it doesn't require this flag.

Best regards,
Hans

>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> index 0f540bed58e8..65159f52067d 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host-hpa.c
>> @@ -305,6 +305,15 @@ int cdns_pcie_hpa_host_link_setup(struct cdns_pcie_rc *rc)
>>          if (ret)
>>                  dev_dbg(dev, "PCIe link never came up\n");
>>
>> +       /*
>> +        * As per PCIe r6.0, sec 6.6.1, for a Downstream Port
>> +        * that supports Link speeds greater than 5.0 GT/s, software
>> +        * must wait a minimum of 100 ms after Link training
>> +        * completes before sending a Configuration Request.
>> +        */
>> +       if (pcie->max_link_speed > 2)
>> +               msleep(PCIE_RESET_CONFIG_WAIT_MS);
>> +
>>          return ret;
>>   }
>>   EXPORT_SYMBOL_GPL(cdns_pcie_hpa_host_link_setup);
>>
>> Best regards,
>> Hans
> 
> [TRIMMED]
> 
> Regards,
> Siddharth.




Thread overview: 9+ messages
2026-05-01 15:35 [PATCH 0/2] PCI: cadence: Add 100 ms delay after link up for speeds > 5 GT/s Hans Zhang
2026-05-01 15:35 ` [PATCH 1/2] PCI: cadence: Ensure that cdns_pcie_host_wait_for_link() waits 100 ms after link up Hans Zhang
2026-05-02  5:18   ` Siddharth Vadapalli
2026-05-03 15:46     ` Hans Zhang
2026-05-04  5:08       ` Siddharth Vadapalli
2026-05-04  6:23         ` Hans Zhang [this message]
2026-05-04 16:22           ` Bjorn Helgaas
2026-05-05  6:28             ` Hans Zhang
2026-05-01 15:35 ` [PATCH 2/2] PCI: j721e: Set max_link_speed to enable 100 ms delay " Hans Zhang
