Subject: [RFC PATCH v1 0/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
Date: 2024-11-06  9:03 UTC
From: Shuai Xue
To: linux-pci, linux-kernel, linuxppc-dev
Cc: bhelgaas, mahesh, oohall, sathyanarayanan.kuppuswamy, xueshuai

The AER driver has historically avoided reading the configuration space of an
endpoint or RCiEP that reported a fatal error, because the link to that device
is considered unreliable. Consequently, when a fatal error occurs, the AER and
DPC drivers do not report the specific error type, resulting in logs like:

[  245.281980] pcieport 0000:30:03.0: EDR: EDR event received
[  245.287466] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  245.295372] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  245.300849] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  245.307540] nvme nvme0: frozen state error detected, reset controller
[  245.722582] nvme 0000:34:00.0: ready 0ms after DPC
[  245.727365] pcieport 0000:30:03.0: AER: broadcast slot_reset message

However, if the link comes back up after the hot reset, the AER status of the
error device can be accessed safely. In that case, report the fatal error
details, which helps to identify the root cause.

- Patch 1/2 identifies the error device via the DPC Error Source ID register
- Patch 2/2 reports the AER status of that device if the link recovered
  (a rough sketch of both steps follows below).
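
As a rough illustration only, not the actual patches: the helper names
dpc_find_error_source() and report_fatal_error_after_recovery() below are made
up, and the sketch assumes the caller already knows the Downstream Port that
triggered DPC and its DPC capability offset. It just stitches together existing
kernel helpers to show the two steps:

#include <linux/aer.h>
#include <linux/pci.h>

#include "../pci.h"	/* struct aer_err_info, aer_get_device_error_info() */

/* Step 1 (patch 1/2): the DPC Error Source ID register identifies the
 * device (e.g. the NVMe endpoint) that actually signaled ERR_FATAL.
 * The returned device holds a reference; the caller must pci_dev_put() it. */
static struct pci_dev *dpc_find_error_source(struct pci_dev *pdev, u16 dpc_cap)
{
	u16 source_id;

	pci_read_config_word(pdev, dpc_cap + PCI_EXP_DPC_SOURCE_ID, &source_id);

	return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
					   PCI_BUS_NUM(source_id),
					   source_id & 0xff);
}

/* Step 2 (patch 2/2): once the hot reset has brought the link back up,
 * the error device's config space is safe to read again, so collect and
 * print its AER status instead of discarding it. */
static void report_fatal_error_after_recovery(struct pci_dev *err_dev)
{
	struct aer_err_info info = { .severity = AER_FATAL };

	if (aer_get_device_error_info(err_dev, &info))
		aer_print_error(err_dev, &info);
}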

With this patch set applied, the logs look like:

[  414.356755] pcieport 0000:30:03.0: EDR: EDR event received
[  414.362240] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  414.370148] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  414.375642] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  414.382335] nvme nvme0: frozen state error detected, reset controller
[  414.645413] pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
[  414.788016] nvme 0000:34:00.0: ready 0ms after DPC
[  414.796975] nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
[  414.807312] nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
[  414.815305] nvme 0000:34:00.0:    [ 4] DLP                    (First)
[  414.821768] pcieport 0000:30:03.0: AER: broadcast slot_reset message

Shuai Xue (2):
  PCI/AER: run recovery on device that detected the error
  PCI/AER: report fatal errors of RCiEP and EP if link recovered

 drivers/pci/pci.h      |  3 ++-
 drivers/pci/pcie/aer.c | 50 ++++++++++++++++++++++++++++++++++++++++++
 drivers/pci/pcie/dpc.c | 30 ++++++++++++++++++++-----
 drivers/pci/pcie/edr.c | 35 +++++++++++++++--------------
 drivers/pci/pcie/err.c |  6 +++++
 5 files changed, 100 insertions(+), 24 deletions(-)

-- 
2.39.3



