linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
@ 2025-02-17  2:42 Shuai Xue
  2025-02-17  2:42 ` [PATCH v4 1/3] PCI/DPC: Clarify naming for error port in DPC Handling Shuai Xue
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Shuai Xue @ 2025-02-17  2:42 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy
  Cc: mahesh, oohall, xueshuai, Jonathan.Cameron, terry.bowman,
	tianruidong

changes since v3:
- squash patch 1 and 2 into one patch per Sathyanarayanan
- add comments note for dpc_process_error per Sathyanarayanan
- pick up Reviewed-by tag from Sathyanarayanan

changes since v2:
- move the "err_port" rename to a separate patch per Sathyanarayanan
- rewrite comments of dpc_process_error per Sathyanarayanan
- remove NULL initialization for err_dev per Sathyanarayanan

changes since v1:
- rewrite commit log per Bjorn
- refactor aer_get_device_error_info to reduce duplication per Keith
- fix to avoid reporting fatal errors twice for root and downstream ports per Keith

The AER driver has historically avoided reading the configuration space of an
endpoint or RCiEP that reported a fatal error, considering the link to that
device unreliable. Consequently, when a fatal error occurs, the AER and DPC
drivers do not report specific error types, resulting in logs like:

   pcieport 0000:30:03.0: EDR: EDR event received
   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
   pcieport 0000:30:03.0: AER: broadcast error_detected message
   nvme nvme0: frozen state error detected, reset controller
   nvme 0000:34:00.0: ready 0ms after DPC
   pcieport 0000:30:03.0: AER: broadcast slot_reset message

AER status registers are sticky and Write-1-to-clear. If the link recovered
after a hot reset, we can still safely access the AER status of the error
device. In that case, report the fatal errors, which helps to figure out the
root cause of the error.
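
For illustration only (not part of this series; PCI_ERR_UNCOR_STATUS and the
config accessors below are existing kernel interfaces), the sticky/W1C
behaviour means the status latched before the reset can still be read back
and then cleared once the link is up again, roughly (for a struct pci_dev
*dev whose link is back up):

   int aer = dev->aer_cap;
   u32 status;

   /* Sticky: still holds the bits latched before the hot reset */
   pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
   /* ... decode and report 'status' ... */
   /* Write-1-to-clear: clearing exactly the bits that were read */
   pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);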

After this patch set, the logs look like:

   pcieport 0000:30:03.0: EDR: EDR event received
   pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
   pcieport 0000:30:03.0: DPC: ERR_FATAL detected
   pcieport 0000:30:03.0: AER: broadcast error_detected message
   nvme nvme0: frozen state error detected, reset controller
   pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
   nvme 0000:34:00.0: ready 0ms after DPC
   nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
   nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
   nvme 0000:34:00.0:    [ 4] DLP                    (First)
   pcieport 0000:30:03.0: AER: broadcast slot_reset message 

Shuai Xue (3):
  PCI/DPC: Clarify naming for error port in DPC Handling
  PCI/DPC: Run recovery on device that detected the error
  PCI/AER: Report fatal errors of RCiEP and EP if link recoverd

 drivers/pci/pci.h      |  5 +++--
 drivers/pci/pcie/aer.c | 11 +++++++----
 drivers/pci/pcie/dpc.c | 34 +++++++++++++++++++++++++++-------
 drivers/pci/pcie/edr.c | 35 ++++++++++++++++++-----------------
 drivers/pci/pcie/err.c |  9 +++++++++
 5 files changed, 64 insertions(+), 30 deletions(-)

-- 
2.39.3




* [PATCH v4 1/3] PCI/DPC: Clarify naming for error port in DPC Handling
  2025-02-17  2:42 [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
@ 2025-02-17  2:42 ` Shuai Xue
  2025-02-17  2:42 ` [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error Shuai Xue
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Shuai Xue @ 2025-02-17  2:42 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy
  Cc: mahesh, oohall, xueshuai, Jonathan.Cameron, terry.bowman,
	tianruidong

dpc_handler() is registered for the error port which receives the DPC
interrupt, and acpi_dpc_port_get() locates the port that experienced the
containment event.

Rename edev and pdev to err_port for clarity, so that a later patch can
avoid misusing err_port in pcie_do_recovery().

No functional changes intended.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---
 drivers/pci/pcie/dpc.c | 10 +++++-----
 drivers/pci/pcie/edr.c | 34 +++++++++++++++++-----------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index 242cabd5eeeb..1a54a0b657ae 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -346,21 +346,21 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
 
 static irqreturn_t dpc_handler(int irq, void *context)
 {
-	struct pci_dev *pdev = context;
+	struct pci_dev *err_port = context;
 
 	/*
 	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
 	 * of async removal and should be ignored by software.
 	 */
-	if (dpc_is_surprise_removal(pdev)) {
-		dpc_handle_surprise_removal(pdev);
+	if (dpc_is_surprise_removal(err_port)) {
+		dpc_handle_surprise_removal(err_port);
 		return IRQ_HANDLED;
 	}
 
-	dpc_process_error(pdev);
+	dpc_process_error(err_port);
 
 	/* We configure DPC so it only triggers on ERR_FATAL */
-	pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link);
+	pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
 
 	return IRQ_HANDLED;
 }
diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
index e86298dbbcff..521fca2f40cb 100644
--- a/drivers/pci/pcie/edr.c
+++ b/drivers/pci/pcie/edr.c
@@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
 
 static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 {
-	struct pci_dev *pdev = data, *edev;
+	struct pci_dev *pdev = data, *err_port;
 	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
 	u16 status;
 
@@ -169,36 +169,36 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	 * may be that port or a parent of it (PCI Firmware r3.3, sec
 	 * 4.6.13).
 	 */
-	edev = acpi_dpc_port_get(pdev);
-	if (!edev) {
+	err_port = acpi_dpc_port_get(pdev);
+	if (!err_port) {
 		pci_err(pdev, "Firmware failed to locate DPC port\n");
 		return;
 	}
 
-	pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(edev));
+	pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(err_port));
 
 	/* If port does not support DPC, just send the OST */
-	if (!edev->dpc_cap) {
-		pci_err(edev, FW_BUG "This device doesn't support DPC\n");
+	if (!err_port->dpc_cap) {
+		pci_err(err_port, FW_BUG "This device doesn't support DPC\n");
 		goto send_ost;
 	}
 
 	/* Check if there is a valid DPC trigger */
-	pci_read_config_word(edev, edev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
+	pci_read_config_word(err_port, err_port->dpc_cap + PCI_EXP_DPC_STATUS, &status);
 	if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) {
-		pci_err(edev, "Invalid DPC trigger %#010x\n", status);
+		pci_err(err_port, "Invalid DPC trigger %#010x\n", status);
 		goto send_ost;
 	}
 
-	dpc_process_error(edev);
-	pci_aer_raw_clear_status(edev);
+	dpc_process_error(err_port);
+	pci_aer_raw_clear_status(err_port);
 
 	/*
 	 * Irrespective of whether the DPC event is triggered by ERR_FATAL
 	 * or ERR_NONFATAL, since the link is already down, use the FATAL
 	 * error recovery path for both cases.
 	 */
-	estate = pcie_do_recovery(edev, pci_channel_io_frozen, dpc_reset_link);
+	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
 
 send_ost:
 
@@ -207,15 +207,15 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	 * to firmware. If not successful, send _OST(0xF, BDF << 16 | 0x81).
 	 */
 	if (estate == PCI_ERS_RESULT_RECOVERED) {
-		pci_dbg(edev, "DPC port successfully recovered\n");
-		pcie_clear_device_status(edev);
-		acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS);
+		pci_dbg(err_port, "DPC port successfully recovered\n");
+		pcie_clear_device_status(err_port);
+		acpi_send_edr_status(pdev, err_port, EDR_OST_SUCCESS);
 	} else {
-		pci_dbg(edev, "DPC port recovery failed\n");
-		acpi_send_edr_status(pdev, edev, EDR_OST_FAILED);
+		pci_dbg(err_port, "DPC port recovery failed\n");
+		acpi_send_edr_status(pdev, err_port, EDR_OST_FAILED);
 	}
 
-	pci_dev_put(edev);
+	pci_dev_put(err_port);
 }
 
 void pci_acpi_add_edr_notifier(struct pci_dev *pdev)
-- 
2.39.3




* [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error
  2025-02-17  2:42 [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
  2025-02-17  2:42 ` [PATCH v4 1/3] PCI/DPC: Clarify naming for error port in DPC Handling Shuai Xue
@ 2025-02-17  2:42 ` Shuai Xue
  2025-03-03  3:36   ` Sathyanarayanan Kuppuswamy
  2025-06-12 10:31   ` Manivannan Sadhasivam
  2025-02-17  2:42 ` [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
  2025-03-03  2:03 ` [PATCH v4 0/3] " Shuai Xue
  3 siblings, 2 replies; 14+ messages in thread
From: Shuai Xue @ 2025-02-17  2:42 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy
  Cc: mahesh, oohall, xueshuai, Jonathan.Cameron, terry.bowman,
	tianruidong

The current implementation of pcie_do_recovery() assumes that the
recovery process is executed on the device that detected the error.
However, the DPC driver currently passes the error port that experienced
the DPC event to pcie_do_recovery().

Use the DPC Source ID register to correctly identify the device that
detected the error. When the error device is passed, pcie_do_recovery()
will find its upstream bridge and walk the bridges potentially affected
by AER. Subsequent patches will then be able to accurately access the
AER status of the error device.
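
For reference, below is a condensed sketch of the lookup this patch adds to
dpc_process_error(); pci_get_domain_bus_and_slot() and PCI_BUS_NUM() are
existing helpers, and for ERR_NONFATAL/ERR_FATAL triggers the DPC Source ID
register holds the Requester ID of the device that sent the error message:

   pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
   ...
   /*
    * For ERR_NONFATAL/ERR_FATAL triggers, resolve the Requester ID
    * (bus/devfn) from the Source ID; otherwise the port itself
    * detected the error.
    */
   if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
       reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
           err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
                                                 PCI_BUS_NUM(source),
                                                 source & 0xff);
   else
           err_dev = pci_dev_get(pdev);    /* also takes a reference */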

No functional changes intended.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/pci/pci.h      |  2 +-
 drivers/pci/pcie/dpc.c | 28 ++++++++++++++++++++++++----
 drivers/pci/pcie/edr.c |  7 ++++---
 3 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 01e51db8d285..870d2fbd6ff2 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -572,7 +572,7 @@ struct rcec_ea {
 void pci_save_dpc_state(struct pci_dev *dev);
 void pci_restore_dpc_state(struct pci_dev *dev);
 void pci_dpc_init(struct pci_dev *pdev);
-void dpc_process_error(struct pci_dev *pdev);
+struct pci_dev *dpc_process_error(struct pci_dev *pdev);
 pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 bool pci_dpc_recovered(struct pci_dev *pdev);
 unsigned int dpc_tlp_log_len(struct pci_dev *dev);
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index 1a54a0b657ae..ea3ea989afa7 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -253,10 +253,20 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
 	return 1;
 }
 
-void dpc_process_error(struct pci_dev *pdev)
+/**
+ * dpc_process_error - handle the DPC error status
+ * @pdev: the port that experienced the containment event
+ *
+ * Return the device that detected the error.
+ *
+ * NOTE: The device reference count is increased, the caller must decrement
+ * the reference count by calling pci_dev_put().
+ */
+struct pci_dev *dpc_process_error(struct pci_dev *pdev)
 {
 	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
 	struct aer_err_info info;
+	struct pci_dev *err_dev;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
@@ -279,6 +289,13 @@ void dpc_process_error(struct pci_dev *pdev)
 		 "software trigger" :
 		 "reserved error");
 
+	if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
+	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
+		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
+					    PCI_BUS_NUM(source), source & 0xff);
+	else
+		err_dev = pci_dev_get(pdev);
+
 	/* show RP PIO error detail information */
 	if (pdev->dpc_rp_extensions &&
 	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
@@ -291,6 +308,8 @@ void dpc_process_error(struct pci_dev *pdev)
 		pci_aer_clear_nonfatal_status(pdev);
 		pci_aer_clear_fatal_status(pdev);
 	}
+
+	return err_dev;
 }
 
 static void pci_clear_surpdn_errors(struct pci_dev *pdev)
@@ -346,7 +365,7 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
 
 static irqreturn_t dpc_handler(int irq, void *context)
 {
-	struct pci_dev *err_port = context;
+	struct pci_dev *err_port = context, *err_dev;
 
 	/*
 	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
@@ -357,10 +376,11 @@ static irqreturn_t dpc_handler(int irq, void *context)
 		return IRQ_HANDLED;
 	}
 
-	dpc_process_error(err_port);
+	err_dev = dpc_process_error(err_port);
 
 	/* We configure DPC so it only triggers on ERR_FATAL */
-	pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
+	pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
+	pci_dev_put(err_dev);
 
 	return IRQ_HANDLED;
 }
diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
index 521fca2f40cb..088f3e188f54 100644
--- a/drivers/pci/pcie/edr.c
+++ b/drivers/pci/pcie/edr.c
@@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
 
 static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 {
-	struct pci_dev *pdev = data, *err_port;
+	struct pci_dev *pdev = data, *err_port, *err_dev;
 	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
 	u16 status;
 
@@ -190,7 +190,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 		goto send_ost;
 	}
 
-	dpc_process_error(err_port);
+	err_dev = dpc_process_error(err_port);
 	pci_aer_raw_clear_status(err_port);
 
 	/*
@@ -198,7 +198,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	 * or ERR_NONFATAL, since the link is already down, use the FATAL
 	 * error recovery path for both cases.
 	 */
-	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
+	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
 
 send_ost:
 
@@ -216,6 +216,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	}
 
 	pci_dev_put(err_port);
+	pci_dev_put(err_dev);
 }
 
 void pci_acpi_add_edr_notifier(struct pci_dev *pdev)
-- 
2.39.3




* [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-02-17  2:42 [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
  2025-02-17  2:42 ` [PATCH v4 1/3] PCI/DPC: Clarify naming for error port in DPC Handling Shuai Xue
  2025-02-17  2:42 ` [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error Shuai Xue
@ 2025-02-17  2:42 ` Shuai Xue
  2025-03-03  3:43   ` Sathyanarayanan Kuppuswamy
  2025-03-03  2:03 ` [PATCH v4 0/3] " Shuai Xue
  3 siblings, 1 reply; 14+ messages in thread
From: Shuai Xue @ 2025-02-17  2:42 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy
  Cc: mahesh, oohall, xueshuai, Jonathan.Cameron, terry.bowman,
	tianruidong

The AER driver has historically avoided reading the configuration space of
an endpoint or RCiEP that reported a fatal error, considering the link to
that device unreliable. Consequently, when a fatal error occurs, the AER
and DPC drivers do not report specific error types, resulting in logs like:

  pcieport 0000:30:03.0: EDR: EDR event received
  pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
  pcieport 0000:30:03.0: DPC: ERR_FATAL detected
  pcieport 0000:30:03.0: AER: broadcast error_detected message
  nvme nvme0: frozen state error detected, reset controller
  nvme 0000:34:00.0: ready 0ms after DPC
  pcieport 0000:30:03.0: AER: broadcast slot_reset message

AER status registers are sticky and Write-1-to-clear. If the link recovered
after a hot reset, we can still safely access the AER status of the error
device. In that case, report the fatal errors, which helps to figure out the
root cause of the error.
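
As a condensed sketch of the pcie_do_recovery() hunk below: once the
subordinate link reset (dpc_reset_link() in the DPC/EDR paths) has brought
the link back up, the fatal AER status of an Endpoint or RCiEP can be read
and reported:

   info.severity = AER_FATAL;
   /* Link recovered, report fatal errors of RCiEP or EP */
   if ((type == PCI_EXP_TYPE_ENDPOINT || type == PCI_EXP_TYPE_RC_END) &&
       aer_get_device_error_info(dev, &info, true))     /* link_healthy */
           aer_print_error(dev, &info);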

After this patch, the logs look like:

  pcieport 0000:30:03.0: EDR: EDR event received
  pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
  pcieport 0000:30:03.0: DPC: ERR_FATAL detected
  pcieport 0000:30:03.0: AER: broadcast error_detected message
  nvme nvme0: frozen state error detected, reset controller
  pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
  nvme 0000:34:00.0: ready 0ms after DPC
  nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
  nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
  nvme 0000:34:00.0:    [ 4] DLP                    (First)
  pcieport 0000:30:03.0: AER: broadcast slot_reset message

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/pci/pci.h      |  3 ++-
 drivers/pci/pcie/aer.c | 11 +++++++----
 drivers/pci/pcie/dpc.c |  2 +-
 drivers/pci/pcie/err.c |  9 +++++++++
 4 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 870d2fbd6ff2..e852fa58b250 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -549,7 +549,8 @@ struct aer_err_info {
 	struct pcie_tlp_log tlp;	/* TLP Header */
 };
 
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+			      bool link_healthy);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
 
 int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 508474e17183..bfb67db074f0 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -1197,12 +1197,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
  * aer_get_device_error_info - read error status from dev and store it to info
  * @dev: pointer to the device expected to have a error record
  * @info: pointer to structure to store the error record
+ * @link_healthy: link is healthy or not
  *
  * Return 1 on success, 0 on error.
  *
  * Note that @info is reused among all error devices. Clear fields properly.
  */
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
+int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
+			      bool link_healthy)
 {
 	int type = pci_pcie_type(dev);
 	int aer = dev->aer_cap;
@@ -1226,7 +1228,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
 	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
 		   type == PCI_EXP_TYPE_RC_EC ||
 		   type == PCI_EXP_TYPE_DOWNSTREAM ||
-		   info->severity == AER_NONFATAL) {
+		   info->severity == AER_NONFATAL ||
+		   (info->severity == AER_FATAL && link_healthy)) {
 
 		/* Link is still healthy for IO reads */
 		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
@@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
 
 	/* Report all before handle them, not to lost records by reset etc. */
 	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
-		if (aer_get_device_error_info(e_info->dev[i], e_info))
+		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
 			aer_print_error(e_info->dev[i], e_info);
 	}
 	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
-		if (aer_get_device_error_info(e_info->dev[i], e_info))
+		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
 			handle_error_source(e_info->dev[i], e_info);
 	}
 }
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index ea3ea989afa7..2d3dd831b755 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -303,7 +303,7 @@ struct pci_dev *dpc_process_error(struct pci_dev *pdev)
 		dpc_process_rp_pio_error(pdev);
 	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
 		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
-		 aer_get_device_error_info(pdev, &info)) {
+		 aer_get_device_error_info(pdev, &info, false)) {
 		aer_print_error(pdev, &info);
 		pci_aer_clear_nonfatal_status(pdev);
 		pci_aer_clear_fatal_status(pdev);
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 31090770fffc..462577b8d75a 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	struct pci_dev *bridge;
 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+	struct aer_err_info info;
 
 	/*
 	 * If the error was detected by a Root Port, Downstream Port, RCEC,
@@ -223,6 +224,13 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
+
+		info.severity = AER_FATAL;
+		/* Link recovered, report fatal errors of RCiEP or EP */
+		if ((type == PCI_EXP_TYPE_ENDPOINT ||
+		     type == PCI_EXP_TYPE_RC_END) &&
+		    aer_get_device_error_info(dev, &info, true))
+			aer_print_error(dev, &info);
 	} else {
 		pci_walk_bridge(bridge, report_normal_detected, &status);
 	}
@@ -259,6 +267,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	if (host->native_aer || pcie_ports_native) {
 		pcie_clear_device_status(dev);
 		pci_aer_clear_nonfatal_status(dev);
+		pci_aer_clear_fatal_status(dev);
 	}
 
 	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
-- 
2.39.3




* Re: [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-02-17  2:42 [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
                   ` (2 preceding siblings ...)
  2025-02-17  2:42 ` [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
@ 2025-03-03  2:03 ` Shuai Xue
  3 siblings, 0 replies; 14+ messages in thread
From: Shuai Xue @ 2025-03-03  2:03 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong



On 2025/2/17 10:42, Shuai Xue wrote:
> changes since v3:
> - squash patch 1 and 2 into one patch per Sathyanarayanan
> - add comments note for dpc_process_error per Sathyanarayanan
> - pick up Reviewed-by tag from Sathyanarayanan
> 
> changes since v2:
> - moving the "err_port" rename to a separate patch per Sathyanarayanan
> - rewrite comments of dpc_process_error per Sathyanarayanan
> - remove NULL initialization for err_dev per Sathyanarayanan
> 
> changes since v1:
> - rewrite commit log per Bjorn
> - refactor aer_get_device_error_info to reduce duplication per Keith
> - fix to avoid reporting fatal errors twice for root and downstream ports per Keith
> 
> The AER driver has historically avoided reading the configuration space of an
> endpoint or RCiEP that reported a fatal error, considering the link to that
> device unreliable. Consequently, when a fatal error occurs, the AER and DPC
> drivers do not report specific error types, resulting in logs like:
> 
>     pcieport 0000:30:03.0: EDR: EDR event received
>     pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>     pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>     pcieport 0000:30:03.0: AER: broadcast error_detected message
>     nvme nvme0: frozen state error detected, reset controller
>     nvme 0000:34:00.0: ready 0ms after DPC
>     pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> AER status registers are sticky and Write-1-to-clear. If the link recovered
> after hot reset, we can still safely access AER status of the error device.
> In such case, report fatal errors which helps to figure out the error root
> case.
> 
> After this patch set, the logs like:
> 
>     pcieport 0000:30:03.0: EDR: EDR event received
>     pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>     pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>     pcieport 0000:30:03.0: AER: broadcast error_detected message
>     nvme nvme0: frozen state error detected, reset controller
>     pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>     nvme 0000:34:00.0: ready 0ms after DPC
>     nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>     nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>     nvme 0000:34:00.0:    [ 4] DLP                    (First)
>     pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> Shuai Xue (3):
>    PCI/DPC: Clarify naming for error port in DPC Handling
>    PCI/DPC: Run recovery on device that detected the error
>    PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
> 
>   drivers/pci/pci.h      |  5 +++--
>   drivers/pci/pcie/aer.c | 11 +++++++----
>   drivers/pci/pcie/dpc.c | 34 +++++++++++++++++++++++++++-------
>   drivers/pci/pcie/edr.c | 35 ++++++++++++++++++-----------------
>   drivers/pci/pcie/err.c |  9 +++++++++
>   5 files changed, 64 insertions(+), 30 deletions(-)
> 


Hi, All,

Gentle ping.

Thanks.
Shuai



* Re: [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error
  2025-02-17  2:42 ` [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error Shuai Xue
@ 2025-03-03  3:36   ` Sathyanarayanan Kuppuswamy
  2025-03-03  3:48     ` Shuai Xue
  2025-06-12 10:31   ` Manivannan Sadhasivam
  1 sibling, 1 reply; 14+ messages in thread
From: Sathyanarayanan Kuppuswamy @ 2025-03-03  3:36 UTC (permalink / raw)
  To: Shuai Xue, linux-pci, linux-kernel, linuxppc-dev, bhelgaas,
	kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong


On 2/16/25 6:42 PM, Shuai Xue wrote:
> The current implementation of pcie_do_recovery() assumes that the
> recovery process is executed on the device that detected the error.
> However, the DPC driver currently passes the error port that experienced
> the DPC event to pcie_do_recovery().
>
> Use the SOURCE ID register to correctly identify the device that
> detected the error. When passing the error device, the
> pcie_do_recovery() will find the upstream bridge and walk bridges
> potentially AER affected. And subsequent patches will be able to
> accurately access AER status of the error device.
>
> Should not observe any functional changes.
>
> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---

Looks good to me

Reviewed-by: Kuppuswamy Sathyanarayanan 
<sathyanarayanan.kuppuswamy@linux.intel.com>

>   drivers/pci/pci.h      |  2 +-
>   drivers/pci/pcie/dpc.c | 28 ++++++++++++++++++++++++----
>   drivers/pci/pcie/edr.c |  7 ++++---
>   3 files changed, 29 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 01e51db8d285..870d2fbd6ff2 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -572,7 +572,7 @@ struct rcec_ea {
>   void pci_save_dpc_state(struct pci_dev *dev);
>   void pci_restore_dpc_state(struct pci_dev *dev);
>   void pci_dpc_init(struct pci_dev *pdev);
> -void dpc_process_error(struct pci_dev *pdev);
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev);
>   pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
>   bool pci_dpc_recovered(struct pci_dev *pdev);
>   unsigned int dpc_tlp_log_len(struct pci_dev *dev);
> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
> index 1a54a0b657ae..ea3ea989afa7 100644
> --- a/drivers/pci/pcie/dpc.c
> +++ b/drivers/pci/pcie/dpc.c
> @@ -253,10 +253,20 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
>   	return 1;
>   }
>   
> -void dpc_process_error(struct pci_dev *pdev)
> +/**
> + * dpc_process_error - handle the DPC error status
> + * @pdev: the port that experienced the containment event
> + *
> + * Return the device that detected the error.
> + *
> + * NOTE: The device reference count is increased, the caller must decrement
> + * the reference count by calling pci_dev_put().
> + */
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>   {
>   	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
>   	struct aer_err_info info;
> +	struct pci_dev *err_dev;
>   
>   	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
>   	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
> @@ -279,6 +289,13 @@ void dpc_process_error(struct pci_dev *pdev)
>   		 "software trigger" :
>   		 "reserved error");
>   
> +	if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
> +	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
> +		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
> +					    PCI_BUS_NUM(source), source & 0xff);
> +	else
> +		err_dev = pci_dev_get(pdev);
> +
>   	/* show RP PIO error detail information */
>   	if (pdev->dpc_rp_extensions &&
>   	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
> @@ -291,6 +308,8 @@ void dpc_process_error(struct pci_dev *pdev)
>   		pci_aer_clear_nonfatal_status(pdev);
>   		pci_aer_clear_fatal_status(pdev);
>   	}
> +
> +	return err_dev;
>   }
>   
>   static void pci_clear_surpdn_errors(struct pci_dev *pdev)
> @@ -346,7 +365,7 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
>   
>   static irqreturn_t dpc_handler(int irq, void *context)
>   {
> -	struct pci_dev *err_port = context;
> +	struct pci_dev *err_port = context, *err_dev;
>   
>   	/*
>   	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
> @@ -357,10 +376,11 @@ static irqreturn_t dpc_handler(int irq, void *context)
>   		return IRQ_HANDLED;
>   	}
>   
> -	dpc_process_error(err_port);
> +	err_dev = dpc_process_error(err_port);
>   
>   	/* We configure DPC so it only triggers on ERR_FATAL */
> -	pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
> +	pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
> +	pci_dev_put(err_dev);
>   
>   	return IRQ_HANDLED;
>   }
> diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
> index 521fca2f40cb..088f3e188f54 100644
> --- a/drivers/pci/pcie/edr.c
> +++ b/drivers/pci/pcie/edr.c
> @@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
>   
>   static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>   {
> -	struct pci_dev *pdev = data, *err_port;
> +	struct pci_dev *pdev = data, *err_port, *err_dev;
>   	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
>   	u16 status;
>   
> @@ -190,7 +190,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>   		goto send_ost;
>   	}
>   
> -	dpc_process_error(err_port);
> +	err_dev = dpc_process_error(err_port);
>   	pci_aer_raw_clear_status(err_port);
>   
>   	/*
> @@ -198,7 +198,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>   	 * or ERR_NONFATAL, since the link is already down, use the FATAL
>   	 * error recovery path for both cases.
>   	 */
> -	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
> +	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
>   
>   send_ost:
>   
> @@ -216,6 +216,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>   	}
>   
>   	pci_dev_put(err_port);
> +	pci_dev_put(err_dev);
>   }
>   
>   void pci_acpi_add_edr_notifier(struct pci_dev *pdev)

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer




* Re: [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-02-17  2:42 ` [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
@ 2025-03-03  3:43   ` Sathyanarayanan Kuppuswamy
  2025-03-03  4:33     ` Shuai Xue
  2025-06-12 10:46     ` Manivannan Sadhasivam
  0 siblings, 2 replies; 14+ messages in thread
From: Sathyanarayanan Kuppuswamy @ 2025-03-03  3:43 UTC (permalink / raw)
  To: Shuai Xue, linux-pci, linux-kernel, linuxppc-dev, bhelgaas,
	kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong


On 2/16/25 6:42 PM, Shuai Xue wrote:
> The AER driver has historically avoided reading the configuration space of
> an endpoint or RCiEP that reported a fatal error, considering the link to
> that device unreliable. Consequently, when a fatal error occurs, the AER
> and DPC drivers do not report specific error types, resulting in logs like:
>
>    pcieport 0000:30:03.0: EDR: EDR event received
>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>    nvme nvme0: frozen state error detected, reset controller
>    nvme 0000:34:00.0: ready 0ms after DPC
>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>
> AER status registers are sticky and Write-1-to-clear. If the link recovered
> after hot reset, we can still safely access AER status of the error device.
> In such case, report fatal errors which helps to figure out the error root
> case.
>
> After this patch, the logs like:
>
>    pcieport 0000:30:03.0: EDR: EDR event received
>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>    nvme nvme0: frozen state error detected, reset controller
>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>    nvme 0000:34:00.0: ready 0ms after DPC
>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>    pcieport 0000:30:03.0: AER: broadcast slot_reset message

IMO, the above info about device error details is more of a debug info,
since the main use of this info is to understand more details about the
recovered DPC error. So I think it is better to print it with a debug tag.
Let's see what others think.

Code wise, looks fine to me.



> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---
>   drivers/pci/pci.h      |  3 ++-
>   drivers/pci/pcie/aer.c | 11 +++++++----
>   drivers/pci/pcie/dpc.c |  2 +-
>   drivers/pci/pcie/err.c |  9 +++++++++
>   4 files changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 870d2fbd6ff2..e852fa58b250 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -549,7 +549,8 @@ struct aer_err_info {
>   	struct pcie_tlp_log tlp;	/* TLP Header */
>   };
>   
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy);
>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>   
>   int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
> index 508474e17183..bfb67db074f0 100644
> --- a/drivers/pci/pcie/aer.c
> +++ b/drivers/pci/pcie/aer.c
> @@ -1197,12 +1197,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>    * aer_get_device_error_info - read error status from dev and store it to info
>    * @dev: pointer to the device expected to have a error record
>    * @info: pointer to structure to store the error record
> + * @link_healthy: link is healthy or not
>    *
>    * Return 1 on success, 0 on error.
>    *
>    * Note that @info is reused among all error devices. Clear fields properly.
>    */
> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
> +			      bool link_healthy)
>   {
>   	int type = pci_pcie_type(dev);
>   	int aer = dev->aer_cap;
> @@ -1226,7 +1228,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>   	} else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>   		   type == PCI_EXP_TYPE_RC_EC ||
>   		   type == PCI_EXP_TYPE_DOWNSTREAM ||
> -		   info->severity == AER_NONFATAL) {
> +		   info->severity == AER_NONFATAL ||
> +		   (info->severity == AER_FATAL && link_healthy)) {
>   
>   		/* Link is still healthy for IO reads */
>   		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>   
>   	/* Report all before handle them, not to lost records by reset etc. */
>   	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
> -		if (aer_get_device_error_info(e_info->dev[i], e_info))
> +		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>   			aer_print_error(e_info->dev[i], e_info);
>   	}
>   	for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
> -		if (aer_get_device_error_info(e_info->dev[i], e_info))
> +		if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>   			handle_error_source(e_info->dev[i], e_info);
>   	}
>   }
> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
> index ea3ea989afa7..2d3dd831b755 100644
> --- a/drivers/pci/pcie/dpc.c
> +++ b/drivers/pci/pcie/dpc.c
> @@ -303,7 +303,7 @@ struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>   		dpc_process_rp_pio_error(pdev);
>   	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
>   		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
> -		 aer_get_device_error_info(pdev, &info)) {
> +		 aer_get_device_error_info(pdev, &info, false)) {
>   		aer_print_error(pdev, &info);
>   		pci_aer_clear_nonfatal_status(pdev);
>   		pci_aer_clear_fatal_status(pdev);
> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
> index 31090770fffc..462577b8d75a 100644
> --- a/drivers/pci/pcie/err.c
> +++ b/drivers/pci/pcie/err.c
> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>   	struct pci_dev *bridge;
>   	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>   	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
> +	struct aer_err_info info;
>   
>   	/*
>   	 * If the error was detected by a Root Port, Downstream Port, RCEC,
> @@ -223,6 +224,13 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>   			pci_warn(bridge, "subordinate device reset failed\n");
>   			goto failed;
>   		}
> +
> +		info.severity = AER_FATAL;
> +		/* Link recovered, report fatal errors of RCiEP or EP */
> +		if ((type == PCI_EXP_TYPE_ENDPOINT ||
> +		     type == PCI_EXP_TYPE_RC_END) &&
> +		    aer_get_device_error_info(dev, &info, true))
> +			aer_print_error(dev, &info);
>   	} else {
>   		pci_walk_bridge(bridge, report_normal_detected, &status);
>   	}
> @@ -259,6 +267,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>   	if (host->native_aer || pcie_ports_native) {
>   		pcie_clear_device_status(dev);
>   		pci_aer_clear_nonfatal_status(dev);
> +		pci_aer_clear_fatal_status(dev);

Add some info about the above change in the commit log.

>   	}
>   
>   	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer




* Re: [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error
  2025-03-03  3:36   ` Sathyanarayanan Kuppuswamy
@ 2025-03-03  3:48     ` Shuai Xue
  0 siblings, 0 replies; 14+ messages in thread
From: Shuai Xue @ 2025-03-03  3:48 UTC (permalink / raw)
  To: Sathyanarayanan Kuppuswamy, linux-pci, linux-kernel, linuxppc-dev,
	bhelgaas, kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong



On 2025/3/3 11:36, Sathyanarayanan Kuppuswamy wrote:
> 
> On 2/16/25 6:42 PM, Shuai Xue wrote:
>> The current implementation of pcie_do_recovery() assumes that the
>> recovery process is executed on the device that detected the error.
>> However, the DPC driver currently passes the error port that experienced
>> the DPC event to pcie_do_recovery().
>>
>> Use the SOURCE ID register to correctly identify the device that
>> detected the error. When passing the error device, the
>> pcie_do_recovery() will find the upstream bridge and walk bridges
>> potentially AER affected. And subsequent patches will be able to
>> accurately access AER status of the error device.
>>
>> Should not observe any functional changes.
>>
>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>> ---
> 
> Looks good to me
> 
> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> 

Thanks.
Shuai



* Re: [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-03-03  3:43   ` Sathyanarayanan Kuppuswamy
@ 2025-03-03  4:33     ` Shuai Xue
  2025-03-17  6:02       ` Shuai Xue
  2025-06-12 10:46     ` Manivannan Sadhasivam
  1 sibling, 1 reply; 14+ messages in thread
From: Shuai Xue @ 2025-03-03  4:33 UTC (permalink / raw)
  To: Sathyanarayanan Kuppuswamy, linux-pci, linux-kernel, linuxppc-dev,
	bhelgaas, kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong



On 2025/3/3 11:43, Sathyanarayanan Kuppuswamy wrote:
> 
> On 2/16/25 6:42 PM, Shuai Xue wrote:
>> The AER driver has historically avoided reading the configuration space of
>> an endpoint or RCiEP that reported a fatal error, considering the link to
>> that device unreliable. Consequently, when a fatal error occurs, the AER
>> and DPC drivers do not report specific error types, resulting in logs like:
>>
>>    pcieport 0000:30:03.0: EDR: EDR event received
>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>    nvme nvme0: frozen state error detected, reset controller
>>    nvme 0000:34:00.0: ready 0ms after DPC
>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>
>> AER status registers are sticky and Write-1-to-clear. If the link recovered
>> after hot reset, we can still safely access AER status of the error device.
>> In such case, report fatal errors which helps to figure out the error root
>> case.
>>
>> After this patch, the logs like:
>>
>>    pcieport 0000:30:03.0: EDR: EDR event received
>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>    nvme nvme0: frozen state error detected, reset controller
>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>    nvme 0000:34:00.0: ready 0ms after DPC
>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> IMO, above info about device error details is more of a debug info. Since the
> main use of this info use to understand more details about the recovered
> DPC error. So I think is better to print with debug tag. Lets see what others
> think.
> 
> Code wise, looks fine to me.

thanks, looking forward to more feedback.
> 
> 
> 
>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>> ---
>>   drivers/pci/pci.h      |  3 ++-
>>   drivers/pci/pcie/aer.c | 11 +++++++----
>>   drivers/pci/pcie/dpc.c |  2 +-
>>   drivers/pci/pcie/err.c |  9 +++++++++
>>   4 files changed, 19 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>> index 870d2fbd6ff2..e852fa58b250 100644
>> --- a/drivers/pci/pci.h
>> +++ b/drivers/pci/pci.h
>> @@ -549,7 +549,8 @@ struct aer_err_info {
>>       struct pcie_tlp_log tlp;    /* TLP Header */
>>   };
>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>> +                  bool link_healthy);
>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>   int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>> index 508474e17183..bfb67db074f0 100644
>> --- a/drivers/pci/pcie/aer.c
>> +++ b/drivers/pci/pcie/aer.c
>> @@ -1197,12 +1197,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>>    * aer_get_device_error_info - read error status from dev and store it to info
>>    * @dev: pointer to the device expected to have a error record
>>    * @info: pointer to structure to store the error record
>> + * @link_healthy: link is healthy or not
>>    *
>>    * Return 1 on success, 0 on error.
>>    *
>>    * Note that @info is reused among all error devices. Clear fields properly.
>>    */
>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>> +                  bool link_healthy)
>>   {
>>       int type = pci_pcie_type(dev);
>>       int aer = dev->aer_cap;
>> @@ -1226,7 +1228,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>       } else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>>              type == PCI_EXP_TYPE_RC_EC ||
>>              type == PCI_EXP_TYPE_DOWNSTREAM ||
>> -           info->severity == AER_NONFATAL) {
>> +           info->severity == AER_NONFATAL ||
>> +           (info->severity == AER_FATAL && link_healthy)) {
>>           /* Link is still healthy for IO reads */
>>           pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>       /* Report all before handle them, not to lost records by reset etc. */
>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>               aer_print_error(e_info->dev[i], e_info);
>>       }
>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>               handle_error_source(e_info->dev[i], e_info);
>>       }
>>   }
>> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
>> index ea3ea989afa7..2d3dd831b755 100644
>> --- a/drivers/pci/pcie/dpc.c
>> +++ b/drivers/pci/pcie/dpc.c
>> @@ -303,7 +303,7 @@ struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>>           dpc_process_rp_pio_error(pdev);
>>       else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
>>            dpc_get_aer_uncorrect_severity(pdev, &info) &&
>> -         aer_get_device_error_info(pdev, &info)) {
>> +         aer_get_device_error_info(pdev, &info, false)) {
>>           aer_print_error(pdev, &info);
>>           pci_aer_clear_nonfatal_status(pdev);
>>           pci_aer_clear_fatal_status(pdev);
>> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
>> index 31090770fffc..462577b8d75a 100644
>> --- a/drivers/pci/pcie/err.c
>> +++ b/drivers/pci/pcie/err.c
>> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>       struct pci_dev *bridge;
>>       pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>>       struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
>> +    struct aer_err_info info;
>>       /*
>>        * If the error was detected by a Root Port, Downstream Port, RCEC,
>> @@ -223,6 +224,13 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>               pci_warn(bridge, "subordinate device reset failed\n");
>>               goto failed;
>>           }
>> +
>> +        info.severity = AER_FATAL;
>> +        /* Link recovered, report fatal errors of RCiEP or EP */
>> +        if ((type == PCI_EXP_TYPE_ENDPOINT ||
>> +             type == PCI_EXP_TYPE_RC_END) &&
>> +            aer_get_device_error_info(dev, &info, true))
>> +            aer_print_error(dev, &info);
>>       } else {
>>           pci_walk_bridge(bridge, report_normal_detected, &status);
>>       }
>> @@ -259,6 +267,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>       if (host->native_aer || pcie_ports_native) {
>>           pcie_clear_device_status(dev);
>>           pci_aer_clear_nonfatal_status(dev);
>> +        pci_aer_clear_fatal_status(dev);
> 
> Add some info about above change in the commit log.

Will do.

Thanks.
Shuai



* Re: [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-03-03  4:33     ` Shuai Xue
@ 2025-03-17  6:02       ` Shuai Xue
  2025-04-24 11:48         ` Shuai Xue
  0 siblings, 1 reply; 14+ messages in thread
From: Shuai Xue @ 2025-03-17  6:02 UTC (permalink / raw)
  To: Sathyanarayanan Kuppuswamy, linux-pci, linux-kernel, linuxppc-dev,
	bhelgaas, kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong



On 2025/3/3 12:33, Shuai Xue wrote:
> 
> 
> On 2025/3/3 11:43, Sathyanarayanan Kuppuswamy wrote:
>>
>> On 2/16/25 6:42 PM, Shuai Xue wrote:
>>> The AER driver has historically avoided reading the configuration space of
>>> an endpoint or RCiEP that reported a fatal error, considering the link to
>>> that device unreliable. Consequently, when a fatal error occurs, the AER
>>> and DPC drivers do not report specific error types, resulting in logs like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> AER status registers are sticky and Write-1-to-clear. If the link recovered
>>> after hot reset, we can still safely access AER status of the error device.
>>> In such case, report fatal errors which helps to figure out the error root
>>> case.
>>>
>>> After this patch, the logs like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>
>> IMO, above info about device error details is more of a debug info. Since the
>> main use of this info use to understand more details about the recovered
>> DPC error. So I think is better to print with debug tag. Lets see what others
>> think.
>>
>> Code wise, looks fine to me.
> 
> thanks, looking forward to more feedback.
>>
>>

Hi, all,

Gentle ping.

Thanks.
Shuai




* Re: [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-03-17  6:02       ` Shuai Xue
@ 2025-04-24 11:48         ` Shuai Xue
  0 siblings, 0 replies; 14+ messages in thread
From: Shuai Xue @ 2025-04-24 11:48 UTC (permalink / raw)
  To: Sathyanarayanan Kuppuswamy, linux-pci, linux-kernel, linuxppc-dev,
	bhelgaas, kbusch
  Cc: mahesh, oohall, Jonathan.Cameron, terry.bowman, tianruidong



On 2025/3/17 14:02, Shuai Xue wrote:
> 
> 
> On 2025/3/3 12:33, Shuai Xue wrote:
>>
>>
>> On 2025/3/3 11:43, Sathyanarayanan Kuppuswamy wrote:
>>>
>>> On 2/16/25 6:42 PM, Shuai Xue wrote:
>>>> The AER driver has historically avoided reading the configuration space of
>>>> an endpoint or RCiEP that reported a fatal error, considering the link to
>>>> that device unreliable. Consequently, when a fatal error occurs, the AER
>>>> and DPC drivers do not report specific error types, resulting in logs like:
>>>>
>>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>>    nvme nvme0: frozen state error detected, reset controller
>>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>>
>>>> AER status registers are sticky and Write-1-to-clear. If the link recovered
>>>> after hot reset, we can still safely access AER status of the error device.
>>>> In such case, report fatal errors which helps to figure out the error root
>>>> case.
>>>>
>>>> After this patch, the logs like:
>>>>
>>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>>    nvme nvme0: frozen state error detected, reset controller
>>>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> IMO, above info about device error details is more of a debug info. Since the
>>> main use of this info use to understand more details about the recovered
>>> DPC error. So I think is better to print with debug tag. Lets see what others
>>> think.
>>>
>>> Code wise, looks fine to me.
>>
>> thanks, looking forward to more feedback.
>>>
>>>
> 
> Hi, all,
> 
> Gentle ping.
> 
> Thanks.
> Shuai
> 


Hi, all,

Gentle ping.

Thanks.
Shuai



* Re: [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error
  2025-02-17  2:42 ` [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error Shuai Xue
  2025-03-03  3:36   ` Sathyanarayanan Kuppuswamy
@ 2025-06-12 10:31   ` Manivannan Sadhasivam
  2025-06-19  6:28     ` Shuai Xue
  1 sibling, 1 reply; 14+ messages in thread
From: Manivannan Sadhasivam @ 2025-06-12 10:31 UTC (permalink / raw)
  To: Shuai Xue
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy, mahesh, oohall, Jonathan.Cameron,
	terry.bowman, tianruidong

On Mon, Feb 17, 2025 at 10:42:17AM +0800, Shuai Xue wrote:
> The current implementation of pcie_do_recovery() assumes that the
> recovery process is executed on the device that detected the error.

s/on/for

> However, the DPC driver currently passes the error port that experienced
> the DPC event to pcie_do_recovery().
> 
> Use the SOURCE ID register to correctly identify the device that
> detected the error. When passing the error device, the
> pcie_do_recovery() will find the upstream bridge and walk bridges
> potentially AER affected. And subsequent patches will be able to

s/patches/commits

> accurately access AER status of the error device.
> 
> Should not observe any functional changes.
> 
> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---
>  drivers/pci/pci.h      |  2 +-
>  drivers/pci/pcie/dpc.c | 28 ++++++++++++++++++++++++----
>  drivers/pci/pcie/edr.c |  7 ++++---
>  3 files changed, 29 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 01e51db8d285..870d2fbd6ff2 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -572,7 +572,7 @@ struct rcec_ea {
>  void pci_save_dpc_state(struct pci_dev *dev);
>  void pci_restore_dpc_state(struct pci_dev *dev);
>  void pci_dpc_init(struct pci_dev *pdev);
> -void dpc_process_error(struct pci_dev *pdev);
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev);
>  pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
>  bool pci_dpc_recovered(struct pci_dev *pdev);
>  unsigned int dpc_tlp_log_len(struct pci_dev *dev);
> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
> index 1a54a0b657ae..ea3ea989afa7 100644
> --- a/drivers/pci/pcie/dpc.c
> +++ b/drivers/pci/pcie/dpc.c
> @@ -253,10 +253,20 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
>  	return 1;
>  }
>  
> -void dpc_process_error(struct pci_dev *pdev)
> +/**
> + * dpc_process_error - handle the DPC error status
> + * @pdev: the port that experienced the containment event
> + *
> + * Return the device that detected the error.

s/Return/Return:

> + *
> + * NOTE: The device reference count is increased, the caller must decrement
> + * the reference count by calling pci_dev_put().
> + */
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>  {
>  	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
>  	struct aer_err_info info;
> +	struct pci_dev *err_dev;
>  
>  	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
>  	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
> @@ -279,6 +289,13 @@ void dpc_process_error(struct pci_dev *pdev)
>  		 "software trigger" :
>  		 "reserved error");
>  
> +	if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
> +	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
> +		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
> +					    PCI_BUS_NUM(source), source & 0xff);
> +	else
> +		err_dev = pci_dev_get(pdev);
> +
>  	/* show RP PIO error detail information */
>  	if (pdev->dpc_rp_extensions &&
>  	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
> @@ -291,6 +308,8 @@ void dpc_process_error(struct pci_dev *pdev)
>  		pci_aer_clear_nonfatal_status(pdev);
>  		pci_aer_clear_fatal_status(pdev);
>  	}
> +
> +	return err_dev;
>  }
>  
>  static void pci_clear_surpdn_errors(struct pci_dev *pdev)
> @@ -346,7 +365,7 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
>  
>  static irqreturn_t dpc_handler(int irq, void *context)
>  {
> -	struct pci_dev *err_port = context;
> +	struct pci_dev *err_port = context, *err_dev;
>  
>  	/*
>  	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
> @@ -357,10 +376,11 @@ static irqreturn_t dpc_handler(int irq, void *context)
>  		return IRQ_HANDLED;
>  	}
>  
> -	dpc_process_error(err_port);
> +	err_dev = dpc_process_error(err_port);
>  
>  	/* We configure DPC so it only triggers on ERR_FATAL */
> -	pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
> +	pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
> +	pci_dev_put(err_dev);
>  
>  	return IRQ_HANDLED;
>  }
> diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
> index 521fca2f40cb..088f3e188f54 100644
> --- a/drivers/pci/pcie/edr.c
> +++ b/drivers/pci/pcie/edr.c
> @@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
>  
>  static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  {
> -	struct pci_dev *pdev = data, *err_port;
> +	struct pci_dev *pdev = data, *err_port, *err_dev;
>  	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
>  	u16 status;
>  
> @@ -190,7 +190,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  		goto send_ost;
>  	}
>  
> -	dpc_process_error(err_port);
> +	err_dev = dpc_process_error(err_port);
>  	pci_aer_raw_clear_status(err_port);
>  
>  	/*
> @@ -198,7 +198,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  	 * or ERR_NONFATAL, since the link is already down, use the FATAL
>  	 * error recovery path for both cases.
>  	 */
> -	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
> +	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
>  
>  send_ost:
>  
> @@ -216,6 +216,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  	}
>  
>  	pci_dev_put(err_port);
> +	pci_dev_put(err_dev);

err_dev is not a valid pointer before dpc_process_error() is called. So either
initialize it to NULL or only call pci_dev_put(err_dev) on the paths reached
after dpc_process_error().

And btw, pci_dev_put(err_dev) should come before pci_dev_put(err_port).

- Mani

-- 
மணிவண்ணன் சதாசிவம்


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd
  2025-03-03  3:43   ` Sathyanarayanan Kuppuswamy
  2025-03-03  4:33     ` Shuai Xue
@ 2025-06-12 10:46     ` Manivannan Sadhasivam
  1 sibling, 0 replies; 14+ messages in thread
From: Manivannan Sadhasivam @ 2025-06-12 10:46 UTC (permalink / raw)
  To: Sathyanarayanan Kuppuswamy, Shuai Xue
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch, mahesh,
	oohall, Jonathan.Cameron, terry.bowman, tianruidong

On Sun, Mar 02, 2025 at 07:43:41PM -0800, Sathyanarayanan Kuppuswamy wrote:
> 
> On 2/16/25 6:42 PM, Shuai Xue wrote:
> > The AER driver has historically avoided reading the configuration space of
> > an endpoint or RCiEP that reported a fatal error, considering the link to
> > that device unreliable. Consequently, when a fatal error occurs, the AER
> > and DPC drivers do not report specific error types, resulting in logs like:
> > 
> >    pcieport 0000:30:03.0: EDR: EDR event received
> >    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
> >    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
> >    pcieport 0000:30:03.0: AER: broadcast error_detected message
> >    nvme nvme0: frozen state error detected, reset controller
> >    nvme 0000:34:00.0: ready 0ms after DPC
> >    pcieport 0000:30:03.0: AER: broadcast slot_reset message
> > 
> > AER status registers are sticky and Write-1-to-clear. If the link recovered
> > after hot reset, we can still safely access AER status of the error device.
> > In such case, report fatal errors which helps to figure out the error root
> > case.
> > 
> > After this patch, the logs like:
> > 
> >    pcieport 0000:30:03.0: EDR: EDR event received
> >    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
> >    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
> >    pcieport 0000:30:03.0: AER: broadcast error_detected message
> >    nvme nvme0: frozen state error detected, reset controller
> >    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
> >    nvme 0000:34:00.0: ready 0ms after DPC
> >    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
> >    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
> >    nvme 0000:34:00.0:    [ 4] DLP                    (First)
> >    pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> IMO, the above device error details are more of a debug info, since the
> main use of this info is to understand more details about the recovered
> DPC error. So I think it is better to print it with a debug tag. Let's see
> what others think.
> 

My two cents: AER logs are mostly error messages, so I don't see why this
one should be a debug message. But having said that, this new error log may
confuse users into thinking a new AER error was received post recovery. So
adding something that makes it clear that it belongs to the previous AER
error would be good IMO.
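
To make the two directions concrete, a rough sketch (purely illustrative; the
helper name, the wording and the status/mask parameters are placeholders, not
part of this series):

/*
 * Illustration only (placeholder wording, not a concrete patch): the two
 * options discussed above for logging the AER details of an error that
 * DPC has already contained and recovered.
 */
static void report_recovered_aer(struct pci_dev *err_dev, u32 status, u32 mask,
				 bool debug_only)
{
	if (debug_only)
		/* Option A: demote the post-recovery details to debug level */
		pci_dbg(err_dev, "error status/mask=%08x/%08x (recovered by DPC)\n",
			status, mask);
	else
		/*
		 * Option B: keep error severity, but say explicitly that this
		 * belongs to the error already recovered by DPC, not a new one.
		 */
		pci_err(err_dev, "AER status of error recovered by DPC: status/mask=%08x/%08x\n",
			status, mask);
}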

- Mani

-- 
மணிவண்ணன் சதாசிவம்


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error
  2025-06-12 10:31   ` Manivannan Sadhasivam
@ 2025-06-19  6:28     ` Shuai Xue
  0 siblings, 0 replies; 14+ messages in thread
From: Shuai Xue @ 2025-06-19  6:28 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, kbusch,
	sathyanarayanan.kuppuswamy, mahesh, oohall, Jonathan.Cameron,
	terry.bowman, tianruidong



On 2025/6/12 18:31, Manivannan Sadhasivam wrote:
> On Mon, Feb 17, 2025 at 10:42:17AM +0800, Shuai Xue wrote:
>> The current implementation of pcie_do_recovery() assumes that the
>> recovery process is executed on the device that detected the error.
> 
> s/on/for
> 
>> However, the DPC driver currently passes the error port that experienced
>> the DPC event to pcie_do_recovery().
>>
>> Use the SOURCE ID register to correctly identify the device that
>> detected the error. When passing the error device, the
>> pcie_do_recovery() will find the upstream bridge and walk bridges
>> potentially AER affected. And subsequent patches will be able to
> 
> s/patches/commits

Will fix the typos.
> 
>> accurately access AER status of the error device.
>>
>> No functional changes are intended.
>>
>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>> ---
>>   drivers/pci/pci.h      |  2 +-
>>   drivers/pci/pcie/dpc.c | 28 ++++++++++++++++++++++++----
>>   drivers/pci/pcie/edr.c |  7 ++++---
>>   3 files changed, 29 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>> index 01e51db8d285..870d2fbd6ff2 100644
>> --- a/drivers/pci/pci.h
>> +++ b/drivers/pci/pci.h
>> @@ -572,7 +572,7 @@ struct rcec_ea {
>>   void pci_save_dpc_state(struct pci_dev *dev);
>>   void pci_restore_dpc_state(struct pci_dev *dev);
>>   void pci_dpc_init(struct pci_dev *pdev);
>> -void dpc_process_error(struct pci_dev *pdev);
>> +struct pci_dev *dpc_process_error(struct pci_dev *pdev);
>>   pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
>>   bool pci_dpc_recovered(struct pci_dev *pdev);
>>   unsigned int dpc_tlp_log_len(struct pci_dev *dev);
>> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
>> index 1a54a0b657ae..ea3ea989afa7 100644
>> --- a/drivers/pci/pcie/dpc.c
>> +++ b/drivers/pci/pcie/dpc.c
>> @@ -253,10 +253,20 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
>>   	return 1;
>>   }
>>   
>> -void dpc_process_error(struct pci_dev *pdev)
>> +/**
>> + * dpc_process_error - handle the DPC error status
>> + * @pdev: the port that experienced the containment event
>> + *
>> + * Return the device that detected the error.
> 
> s/Return/Return:
> 
>> + *
>> + * NOTE: The device reference count is increased, the caller must decrement
>> + * the reference count by calling pci_dev_put().
>> + */
>> +struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>>   {
>>   	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
>>   	struct aer_err_info info;
>> +	struct pci_dev *err_dev;
>>   
>>   	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
>>   	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
>> @@ -279,6 +289,13 @@ void dpc_process_error(struct pci_dev *pdev)
>>   		 "software trigger" :
>>   		 "reserved error");
>>   
>> +	if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
>> +	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
>> +		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
>> +					    PCI_BUS_NUM(source), source & 0xff);
>> +	else
>> +		err_dev = pci_dev_get(pdev);
>> +
>>   	/* show RP PIO error detail information */
>>   	if (pdev->dpc_rp_extensions &&
>>   	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
>> @@ -291,6 +308,8 @@ void dpc_process_error(struct pci_dev *pdev)
>>   		pci_aer_clear_nonfatal_status(pdev);
>>   		pci_aer_clear_fatal_status(pdev);
>>   	}
>> +
>> +	return err_dev;
>>   }
>>   
>>   static void pci_clear_surpdn_errors(struct pci_dev *pdev)
>> @@ -346,7 +365,7 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
>>   
>>   static irqreturn_t dpc_handler(int irq, void *context)
>>   {
>> -	struct pci_dev *err_port = context;
>> +	struct pci_dev *err_port = context, *err_dev;
>>   
>>   	/*
>>   	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
>> @@ -357,10 +376,11 @@ static irqreturn_t dpc_handler(int irq, void *context)
>>   		return IRQ_HANDLED;
>>   	}
>>   
>> -	dpc_process_error(err_port);
>> +	err_dev = dpc_process_error(err_port);
>>   
>>   	/* We configure DPC so it only triggers on ERR_FATAL */
>> -	pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
>> +	pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
>> +	pci_dev_put(err_dev);
>>   
>>   	return IRQ_HANDLED;
>>   }
>> diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
>> index 521fca2f40cb..088f3e188f54 100644
>> --- a/drivers/pci/pcie/edr.c
>> +++ b/drivers/pci/pcie/edr.c
>> @@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
>>   
>>   static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>>   {
>> -	struct pci_dev *pdev = data, *err_port;
>> +	struct pci_dev *pdev = data, *err_port, *err_dev;
>>   	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
>>   	u16 status;
>>   
>> @@ -190,7 +190,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>>   		goto send_ost;
>>   	}
>>   
>> -	dpc_process_error(err_port);
>> +	err_dev = dpc_process_error(err_port);
>>   	pci_aer_raw_clear_status(err_port);
>>   
>>   	/*
>> @@ -198,7 +198,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>>   	 * or ERR_NONFATAL, since the link is already down, use the FATAL
>>   	 * error recovery path for both cases.
>>   	 */
>> -	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
>> +	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
>>   
>>   send_ost:
>>   
>> @@ -216,6 +216,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>>   	}
>>   
>>   	pci_dev_put(err_port);
>> +	pci_dev_put(err_dev);
> 
> err_dev is not a valid pointer before dpc_process_error() is called. So either
> initialize it to NULL or only call pci_dev_put(err_dev) on the paths reached
> after dpc_process_error().
> 
> And btw, pci_dev_put(err_dev) should come before pci_dev_put(err_port).
> 
> - Mani
> 

You are right.

Will send a new patch to fix it.
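
Something along these lines (untested sketch, only to show the direction;
everything elided stays as in v4):

/* Untested sketch of the fix suggested above; elided parts unchanged. */
static void edr_handle_event(acpi_handle handle, u32 event, void *data)
{
	/*
	 * Initialize err_dev to NULL so the cleanup at the bottom is safe
	 * even on the early "goto send_ost" paths, where dpc_process_error()
	 * has not run yet; pci_dev_put(NULL) is a no-op.
	 */
	struct pci_dev *pdev = data, *err_port, *err_dev = NULL;

	/*
	 * ... rest of the function unchanged: look up err_port, check the
	 * DPC trigger status, err_dev = dpc_process_error(err_port), run
	 * recovery, send the _OST result ...
	 */

	/* Drop the dpc_process_error() reference first, then err_port. */
	pci_dev_put(err_dev);
	pci_dev_put(err_port);
}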

Thanks.
Shuai


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2025-06-19  6:29 UTC | newest]

Thread overview: 14+ messages
2025-02-17  2:42 [PATCH v4 0/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2025-02-17  2:42 ` [PATCH v4 1/3] PCI/DPC: Clarify naming for error port in DPC Handling Shuai Xue
2025-02-17  2:42 ` [PATCH v4 2/3] PCI/DPC: Run recovery on device that detected the error Shuai Xue
2025-03-03  3:36   ` Sathyanarayanan Kuppuswamy
2025-03-03  3:48     ` Shuai Xue
2025-06-12 10:31   ` Manivannan Sadhasivam
2025-06-19  6:28     ` Shuai Xue
2025-02-17  2:42 ` [PATCH v4 3/3] PCI/AER: Report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2025-03-03  3:43   ` Sathyanarayanan Kuppuswamy
2025-03-03  4:33     ` Shuai Xue
2025-03-17  6:02       ` Shuai Xue
2025-04-24 11:48         ` Shuai Xue
2025-06-12 10:46     ` Manivannan Sadhasivam
2025-03-03  2:03 ` [PATCH v4 0/3] " Shuai Xue
