From: "Bowman, Terry" <terry.bowman@amd.com>
To: Shuai Xue <xueshuai@linux.alibaba.com>,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, bhelgaas@google.com,
	kbusch@kernel.org, Lukas Wunner <lukas@wunner.de>
Cc: mahesh@linux.ibm.com, oohall@gmail.com,
	sathyanarayanan.kuppuswamy@linux.intel.com
Subject: Re: [PATCH v2 2/2] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
Date: Fri, 15 Nov 2024 14:20:09 -0600
Message-ID: <a76394c4-8746-46c0-9cb5-bf0e2e0aa9b5@amd.com>
In-Reply-To: <20241112135419.59491-3-xueshuai@linux.alibaba.com>

Hi Shuai,


On 11/12/2024 7:54 AM, Shuai Xue wrote:
> The AER driver has historically avoided reading the configuration space of
> an endpoint or RCiEP that reported a fatal error, considering the link to
> that device unreliable. Consequently, when a fatal error occurs, the AER
> and DPC drivers do not report specific error types, resulting in logs like:
> 
>   pcieport 0000:30:03.0: EDR: EDR event received
>   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>   pcieport 0000:30:03.0: AER: broadcast error_detected message
>   nvme nvme0: frozen state error detected, reset controller
>   nvme 0000:34:00.0: ready 0ms after DPC
>   pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> AER status registers are sticky and Write-1-to-clear. If the link has
> recovered after a hot reset, we can still safely access the AER status of
> the error device. In that case, report the fatal errors, which helps to
> figure out the root cause of the error.
> 
> After this patch, the logs look like:
> 
>   pcieport 0000:30:03.0: EDR: EDR event received
>   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>   pcieport 0000:30:03.0: AER: broadcast error_detected message
>   nvme nvme0: frozen state error detected, reset controller
>   pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>   nvme 0000:34:00.0: ready 0ms after DPC
>   nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>   nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>   nvme 0000:34:00.0:    [ 4] DLP                    (First)
>   pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---
>  drivers/pci/pci.h      |  3 ++-
>  drivers/pci/pcie/aer.c | 11 +++++++----
>  drivers/pci/pcie/dpc.c |  2 +-
>  drivers/pci/pcie/err.c |  9 +++++++++
>  4 files changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 0866f79aec54..6f827c313639 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -504,7 +504,8 @@ struct aer_err_info {
>  	struct pcie_tlp_log tlp;	/* TLP Header */
>  };
>  
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy);
>  void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>  #endif	/* CONFIG_PCIEAER */
>  
> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
> index 13b8586924ea..97ec1c17b6f4 100644
> --- a/drivers/pci/pcie/aer.c
> +++ b/drivers/pci/pcie/aer.c
> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>   * aer_get_device_error_info - read error status from dev and store it to info
>   * @dev: pointer to the device expected to have a error record
>   * @info: pointer to structure to store the error record
> + * @link_healthy: link is healthy or not
>   *
>   * Return 1 on success, 0 on error.
>   *
>   * Note that @info is reused among all error devices. Clear fields properly.
>   */
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy)
>  {
>  	int type = pci_pcie_type(dev);
>  	int aer = dev->aer_cap;
> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>  	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>  		   type == PCI_EXP_TYPE_RC_EC ||
>  		   type == PCI_EXP_TYPE_DOWNSTREAM ||
> -		   info->severity == AER_NONFATAL) {
> +		   info->severity == AER_NONFATAL ||
> +		   (info->severity == AER_FATAL && link_healthy)) {
>  
>  		/* Link is still healthy for IO reads */
>  		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>  
>  	/* Report all before handle them, not to lost records by reset etc. */
>  	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
> -		if (aer_get_device_error_info(e_info->dev[i], e_info))
> +		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>  			aer_print_error(e_info->dev[i], e_info);
>  	}

Would it be reasonable to detect whether the link is intact and set the
aer_get_device_error_info() function's 'link_healthy' parameter accordingly?
I was thinking the Link Status register in the upstream port's PCIe
capability could be used to indicate link viability.
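
For example, something along these lines (an untested sketch; the helper
name is only illustrative, and the DLLLA bit is only meaningful when the
port supports Data Link Layer Link Active Reporting):

  #include <linux/pci.h>

  /*
   * Untested sketch: check the upstream port's Link Status register to
   * decide whether the child device's config space is likely reachable.
   * The helper name is illustrative only.
   */
  static bool aer_link_seems_up(struct pci_dev *bridge)
  {
  	u16 lnksta;

  	/* Read Link Status from the port's PCIe capability */
  	if (pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta))
  		return false;

  	/* Data Link Layer Link Active */
  	return !!(lnksta & PCI_EXP_LNKSTA_DLLLA);
  }

The recovery path could then pass something like aer_link_seems_up(<upstream
port>) as 'link_healthy' instead of hard-coding it. If the port does not
advertise Data Link Layer Link Active Reporting (PCI_EXP_LNKCAP_DLLLARC),
the caller would need some other check before trusting the link.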

Regards,
Terry


Thread overview: 17+ messages
2024-11-12 13:54 [PATCH v2 0/2] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2024-11-12 13:54 ` [PATCH v2 1/2] PCI/DPC: Run recovery on device that detected the error Shuai Xue
2025-01-23  4:53   ` Sathyanarayanan Kuppuswamy
2025-01-23  7:03     ` Shuai Xue
2024-11-12 13:54 ` [PATCH v2 2/2] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2024-11-15  9:06   ` Lukas Wunner
2024-11-15  9:22     ` Shuai Xue
2024-11-15 20:20   ` Bowman, Terry [this message]
2024-11-16 12:44     ` Shuai Xue
2024-11-17 13:36       ` Shuai Xue
2024-11-25  5:43         ` Shuai Xue
2024-11-25 19:47           ` Bowman, Terry
2025-01-23 20:10   ` Sathyanarayanan Kuppuswamy
2025-01-24  1:45     ` Shuai Xue
2025-01-24  7:03       ` Sathyanarayanan Kuppuswamy
2024-12-24 11:03 ` [PATCH v2 0/2] " Shuai Xue
2025-01-22 10:59   ` Shuai Xue
