linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH v1 0/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
@ 2024-11-06  9:03 Shuai Xue
  2024-11-06  9:03 ` [RFC PATCH v1 1/2] PCI/AER: run recovery on device that detected the error Shuai Xue
  2024-11-06  9:03 ` [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
  0 siblings, 2 replies; 7+ messages in thread
From: Shuai Xue @ 2024-11-06  9:03 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev
  Cc: bhelgaas, mahesh, oohall, sathyanarayanan.kuppuswamy, xueshuai

The AER driver has historically avoided reading the configuration space of an
endpoint or RCiEP that reported a fatal error, considering the link to that
device unreliable. Consequently, when a fatal error occurs, the AER and DPC
drivers do not report specific error types, resulting in logs like:

[  245.281980] pcieport 0000:30:03.0: EDR: EDR event received
[  245.287466] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  245.295372] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  245.300849] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  245.307540] nvme nvme0: frozen state error detected, reset controller
[  245.722582] nvme 0000:34:00.0: ready 0ms after DPC
[  245.727365] pcieport 0000:30:03.0: AER: broadcast slot_reset message

But if the link recovered after the hot reset, we can safely access the AER
status of the error device. In that case, report the fatal error, which helps
to figure out the error root cause.

- Patch 1/2 identifies the error device by the DPC Source ID register
- Patch 2/2 reports the AER status if the link recovered.

After this patch set, the logs look like:

[  414.356755] pcieport 0000:30:03.0: EDR: EDR event received
[  414.362240] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  414.370148] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  414.375642] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  414.382335] nvme nvme0: frozen state error detected, reset controller
[  414.645413] pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
[  414.788016] nvme 0000:34:00.0: ready 0ms after DPC
[  414.796975] nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
[  414.807312] nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
[  414.815305] nvme 0000:34:00.0:    [ 4] DLP                    (First)
[  414.821768] pcieport 0000:30:03.0: AER: broadcast slot_reset message

Shuai Xue (2):
  PCI/AER: run recovery on device that detected the error
  PCI/AER: report fatal errors of RCiEP and EP if link recovered

 drivers/pci/pci.h      |  3 ++-
 drivers/pci/pcie/aer.c | 50 ++++++++++++++++++++++++++++++++++++++++++
 drivers/pci/pcie/dpc.c | 30 ++++++++++++++++++++-----
 drivers/pci/pcie/edr.c | 35 +++++++++++++++--------------
 drivers/pci/pcie/err.c |  6 +++++
 5 files changed, 100 insertions(+), 24 deletions(-)

-- 
2.39.3




* [RFC PATCH v1 1/2] PCI/AER: run recovery on device that detected the error
  2024-11-06  9:03 [RFC PATCH v1 0/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
@ 2024-11-06  9:03 ` Shuai Xue
  2024-11-06  9:03 ` [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
  1 sibling, 0 replies; 7+ messages in thread
From: Shuai Xue @ 2024-11-06  9:03 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev
  Cc: bhelgaas, mahesh, oohall, sathyanarayanan.kuppuswamy, xueshuai

The current implementation of pcie_do_recovery() assumes that the
recovery process is executed on the device that detected the error.
However, the DPC driver currently passes the error port that experienced
the DPC event to pcie_do_recovery().

Use the SOURCE ID register to correctly identify the device that detected the
error. By passing this error-detecting device to pcie_do_recovery(), subsequent
patches will be able to accurately access the AER error status.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/pci/pci.h      |  2 +-
 drivers/pci/pcie/dpc.c | 30 ++++++++++++++++++++++++------
 drivers/pci/pcie/edr.c | 35 ++++++++++++++++++-----------------
 3 files changed, 43 insertions(+), 24 deletions(-)

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 14d00ce45bfa..0866f79aec54 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -521,7 +521,7 @@ struct rcec_ea {
 void pci_save_dpc_state(struct pci_dev *dev);
 void pci_restore_dpc_state(struct pci_dev *dev);
 void pci_dpc_init(struct pci_dev *pdev);
-void dpc_process_error(struct pci_dev *pdev);
+struct pci_dev *dpc_process_error(struct pci_dev *pdev);
 pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 bool pci_dpc_recovered(struct pci_dev *pdev);
 #else
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index 2b6ef7efa3c1..62a68cde4364 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -257,10 +257,17 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
 	return 1;
 }
 
-void dpc_process_error(struct pci_dev *pdev)
+/**
+ * dpc_process_error - handle the DPC error status
+ * @pdev: the port that experienced the containment event
+ *
+ * Return the device that experienced the error.
+ */
+struct pci_dev *dpc_process_error(struct pci_dev *pdev)
 {
 	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
 	struct aer_err_info info;
+	struct pci_dev *err_dev = NULL;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
@@ -283,6 +290,13 @@ void dpc_process_error(struct pci_dev *pdev)
 		 "software trigger" :
 		 "reserved error");
 
+	if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE ||
+	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE)
+		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
+					    PCI_BUS_NUM(source), source & 0xff);
+	else
+		err_dev = pci_dev_get(pdev);
+
 	/* show RP PIO error detail information */
 	if (pdev->dpc_rp_extensions &&
 	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
@@ -295,6 +309,8 @@ void dpc_process_error(struct pci_dev *pdev)
 		pci_aer_clear_nonfatal_status(pdev);
 		pci_aer_clear_fatal_status(pdev);
 	}
+
+	return err_dev;
 }
 
 static void pci_clear_surpdn_errors(struct pci_dev *pdev)
@@ -350,21 +366,23 @@ static bool dpc_is_surprise_removal(struct pci_dev *pdev)
 
 static irqreturn_t dpc_handler(int irq, void *context)
 {
-	struct pci_dev *pdev = context;
+	struct pci_dev *err_port = context, *err_dev = NULL;
 
 	/*
 	 * According to PCIe r6.0 sec 6.7.6, errors are an expected side effect
 	 * of async removal and should be ignored by software.
 	 */
-	if (dpc_is_surprise_removal(pdev)) {
-		dpc_handle_surprise_removal(pdev);
+	if (dpc_is_surprise_removal(err_port)) {
+		dpc_handle_surprise_removal(err_port);
 		return IRQ_HANDLED;
 	}
 
-	dpc_process_error(pdev);
+	err_dev = dpc_process_error(err_port);
 
 	/* We configure DPC so it only triggers on ERR_FATAL */
-	pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link);
+	pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
+
+	pci_dev_put(err_dev);
 
 	return IRQ_HANDLED;
 }
diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
index e86298dbbcff..6ac95e5e001b 100644
--- a/drivers/pci/pcie/edr.c
+++ b/drivers/pci/pcie/edr.c
@@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
 
 static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 {
-	struct pci_dev *pdev = data, *edev;
+	struct pci_dev *pdev = data, *err_port, *err_dev = NULL;
 	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
 	u16 status;
 
@@ -169,36 +169,36 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	 * may be that port or a parent of it (PCI Firmware r3.3, sec
 	 * 4.6.13).
 	 */
-	edev = acpi_dpc_port_get(pdev);
-	if (!edev) {
+	err_port = acpi_dpc_port_get(pdev);
+	if (!err_port) {
 		pci_err(pdev, "Firmware failed to locate DPC port\n");
 		return;
 	}
 
-	pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(edev));
+	pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(err_port));
 
 	/* If port does not support DPC, just send the OST */
-	if (!edev->dpc_cap) {
-		pci_err(edev, FW_BUG "This device doesn't support DPC\n");
+	if (!err_port->dpc_cap) {
+		pci_err(err_port, FW_BUG "This device doesn't support DPC\n");
 		goto send_ost;
 	}
 
 	/* Check if there is a valid DPC trigger */
-	pci_read_config_word(edev, edev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
+	pci_read_config_word(err_port, err_port->dpc_cap + PCI_EXP_DPC_STATUS, &status);
 	if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) {
-		pci_err(edev, "Invalid DPC trigger %#010x\n", status);
+		pci_err(err_port, "Invalid DPC trigger %#010x\n", status);
 		goto send_ost;
 	}
 
-	dpc_process_error(edev);
-	pci_aer_raw_clear_status(edev);
+	err_dev = dpc_process_error(err_port);
+	pci_aer_raw_clear_status(err_port);
 
 	/*
 	 * Irrespective of whether the DPC event is triggered by ERR_FATAL
 	 * or ERR_NONFATAL, since the link is already down, use the FATAL
 	 * error recovery path for both cases.
 	 */
-	estate = pcie_do_recovery(edev, pci_channel_io_frozen, dpc_reset_link);
+	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
 
 send_ost:
 
@@ -207,15 +207,16 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
 	 * to firmware. If not successful, send _OST(0xF, BDF << 16 | 0x81).
 	 */
 	if (estate == PCI_ERS_RESULT_RECOVERED) {
-		pci_dbg(edev, "DPC port successfully recovered\n");
-		pcie_clear_device_status(edev);
-		acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS);
+		pci_dbg(err_port, "DPC port successfully recovered\n");
+		pcie_clear_device_status(err_port);
+		acpi_send_edr_status(pdev, err_port, EDR_OST_SUCCESS);
 	} else {
-		pci_dbg(edev, "DPC port recovery failed\n");
-		acpi_send_edr_status(pdev, edev, EDR_OST_FAILED);
+		pci_dbg(err_port, "DPC port recovery failed\n");
+		acpi_send_edr_status(pdev, err_port, EDR_OST_FAILED);
 	}
 
-	pci_dev_put(edev);
+	pci_dev_put(err_port);
+	pci_dev_put(err_dev);
 }
 
 void pci_acpi_add_edr_notifier(struct pci_dev *pdev)
-- 
2.39.3




* [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
  2024-11-06  9:03 [RFC PATCH v1 0/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
  2024-11-06  9:03 ` [RFC PATCH v1 1/2] PCI/AER: run recovery on device that detected the error Shuai Xue
@ 2024-11-06  9:03 ` Shuai Xue
  2024-11-06 16:02   ` Bjorn Helgaas
  2024-11-06 16:39   ` Keith Busch
  1 sibling, 2 replies; 7+ messages in thread
From: Shuai Xue @ 2024-11-06  9:03 UTC (permalink / raw)
  To: linux-pci, linux-kernel, linuxppc-dev
  Cc: bhelgaas, mahesh, oohall, sathyanarayanan.kuppuswamy, xueshuai

The AER driver has historically avoided reading the configuration space of an
endpoint or RCiEP that reported a fatal error, considering the link to that
device unreliable. Consequently, when a fatal error occurs, the AER and DPC
drivers do not report specific error types, resulting in logs like:

[  245.281980] pcieport 0000:30:03.0: EDR: EDR event received
[  245.287466] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  245.295372] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  245.300849] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  245.307540] nvme nvme0: frozen state error detected, reset controller
[  245.722582] nvme 0000:34:00.0: ready 0ms after DPC
[  245.727365] pcieport 0000:30:03.0: AER: broadcast slot_reset message

But if the link recovered after the hot reset, we can safely access the AER
status of the error device. In that case, report the fatal error, which helps
to figure out the error root cause.

After this patch, the logs look like:

[  414.356755] pcieport 0000:30:03.0: EDR: EDR event received
[  414.362240] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
[  414.370148] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
[  414.375642] pcieport 0000:30:03.0: AER: broadcast error_detected message
[  414.382335] nvme nvme0: frozen state error detected, reset controller
[  414.645413] pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
[  414.788016] nvme 0000:34:00.0: ready 0ms after DPC
[  414.796975] nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
[  414.807312] nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
[  414.815305] nvme 0000:34:00.0:    [ 4] DLP                    (First)
[  414.821768] pcieport 0000:30:03.0: AER: broadcast slot_reset message

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
---
 drivers/pci/pci.h      |  1 +
 drivers/pci/pcie/aer.c | 50 ++++++++++++++++++++++++++++++++++++++++++
 drivers/pci/pcie/err.c |  6 +++++
 3 files changed, 57 insertions(+)

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 0866f79aec54..143f960a813d 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -505,6 +505,7 @@ struct aer_err_info {
 };
 
 int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
+int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
 #endif	/* CONFIG_PCIEAER */
 
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 13b8586924ea..0c1e382ce117 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -1252,6 +1252,56 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
 	return 1;
 }
 
+/**
+ * aer_get_device_fatal_error_info - read fatal error status from EP or RCiEP
+ * and store it to info
+ * @dev: pointer to the device expected to have a error record
+ * @info: pointer to structure to store the error record
+ *
+ * Return 1 on success, 0 on error.
+ *
+ * Note that @info is reused among all error devices. Clear fields properly.
+ */
+int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info)
+{
+	int type = pci_pcie_type(dev);
+	int aer = dev->aer_cap;
+	u32 aercc;
+
+	pci_info(dev, "type :%d\n", type);
+
+	/* Must reset in this function */
+	info->status = 0;
+	info->tlp_header_valid = 0;
+	info->severity = AER_FATAL;
+
+	/* The device might not support AER */
+	if (!aer)
+		return 0;
+
+
+	if (type == PCI_EXP_TYPE_ENDPOINT || type == PCI_EXP_TYPE_RC_END) {
+		/* Link is healthy for IO reads now */
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
+			&info->status);
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK,
+			&info->mask);
+		if (!(info->status & ~info->mask))
+			return 0;
+
+		/* Get First Error Pointer */
+		pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc);
+		info->first_error = PCI_ERR_CAP_FEP(aercc);
+
+		if (info->status & AER_LOG_TLP_MASKS) {
+			info->tlp_header_valid = 1;
+			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
+		}
+	}
+
+	return 1;
+}
+
 static inline void aer_process_err_devices(struct aer_err_info *e_info)
 {
 	int i;
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 31090770fffc..a74ae6a55064 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	struct pci_dev *bridge;
 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+	struct aer_err_info info;
 
 	/*
 	 * If the error was detected by a Root Port, Downstream Port, RCEC,
@@ -223,6 +224,10 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 			pci_warn(bridge, "subordinate device reset failed\n");
 			goto failed;
 		}
+
+		/* Link recovered, report fatal errors on RCiEP or EP */
+		if (aer_get_device_fatal_error_info(dev, &info))
+			aer_print_error(dev, &info);
 	} else {
 		pci_walk_bridge(bridge, report_normal_detected, &status);
 	}
@@ -259,6 +264,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
 	if (host->native_aer || pcie_ports_native) {
 		pcie_clear_device_status(dev);
 		pci_aer_clear_nonfatal_status(dev);
+		pci_aer_clear_fatal_status(dev);
 	}
 
 	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
-- 
2.39.3




* Re: [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
  2024-11-06  9:03 ` [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
@ 2024-11-06 16:02   ` Bjorn Helgaas
  2024-11-07  1:24     ` Shuai Xue
  2024-11-06 16:39   ` Keith Busch
  1 sibling, 1 reply; 7+ messages in thread
From: Bjorn Helgaas @ 2024-11-06 16:02 UTC (permalink / raw)
  To: Shuai Xue
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, mahesh, oohall,
	sathyanarayanan.kuppuswamy

On Wed, Nov 06, 2024 at 05:03:39PM +0800, Shuai Xue wrote:
> The AER driver has historically avoided reading the configuration space of an
> endpoint or RCiEP that reported a fatal error, considering the link to that
> device unreliable. Consequently, when a fatal error occurs, the AER and DPC
> drivers do not report specific error types, resulting in logs like:
> 
> [  245.281980] pcieport 0000:30:03.0: EDR: EDR event received
> [  245.287466] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
> [  245.295372] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
> [  245.300849] pcieport 0000:30:03.0: AER: broadcast error_detected message
> [  245.307540] nvme nvme0: frozen state error detected, reset controller
> [  245.722582] nvme 0000:34:00.0: ready 0ms after DPC
> [  245.727365] pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> But if the link recovered after the hot reset, we can safely access the AER
> status of the error device. In that case, report the fatal error, which helps
> to figure out the error root cause.

Explain why we can access these registers after reset.  I think it's
important that these registers are sticky ("RW1CS" per spec).

> After this patch, the logs look like:
> 
> [  414.356755] pcieport 0000:30:03.0: EDR: EDR event received
> [  414.362240] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
> [  414.370148] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
> [  414.375642] pcieport 0000:30:03.0: AER: broadcast error_detected message
> [  414.382335] nvme nvme0: frozen state error detected, reset controller
> [  414.645413] pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
> [  414.788016] nvme 0000:34:00.0: ready 0ms after DPC
> [  414.796975] nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
> [  414.807312] nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
> [  414.815305] nvme 0000:34:00.0:    [ 4] DLP                    (First)
> [  414.821768] pcieport 0000:30:03.0: AER: broadcast slot_reset message

Capitalize subject lines to match history (use "git log --oneline
drivers/pci/pcie/aer.c" to see it).

Remove timestamps since they don't help understand the problem.

Indent the quoted material two spaces.

Wrap commit log to fit in 75 columns (except the quoted material;
don't insert line breaks there).

> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
> ---
>  drivers/pci/pci.h      |  1 +
>  drivers/pci/pcie/aer.c | 50 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/pci/pcie/err.c |  6 +++++
>  3 files changed, 57 insertions(+)
> 
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 0866f79aec54..143f960a813d 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -505,6 +505,7 @@ struct aer_err_info {
>  };
>  
>  int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info);
>  void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>  #endif	/* CONFIG_PCIEAER */
>  
> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
> index 13b8586924ea..0c1e382ce117 100644
> --- a/drivers/pci/pcie/aer.c
> +++ b/drivers/pci/pcie/aer.c
> @@ -1252,6 +1252,56 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>  	return 1;
>  }
>  
> +/**
> + * aer_get_device_fatal_error_info - read fatal error status from EP or RCiEP
> + * and store it to info
> + * @dev: pointer to the device expected to have a error record
> + * @info: pointer to structure to store the error record
> + *
> + * Return 1 on success, 0 on error.

Backwards from the usual return value convention.

> + * Note that @info is reused among all error devices. Clear fields properly.
> + */
> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info)
> +{
> +	int type = pci_pcie_type(dev);
> +	int aer = dev->aer_cap;
> +	u32 aercc;
> +
> +	pci_info(dev, "type :%d\n", type);

I don't see this line in the sample output in the commit log.  Is this
debug that you intended to remove?

> +	/* Must reset in this function */
> +	info->status = 0;
> +	info->tlp_header_valid = 0;
> +	info->severity = AER_FATAL;
> +
> +	/* The device might not support AER */

Unnecessary comment.

> +	if (!aer)
> +		return 0;
> +
> +
> +	if (type == PCI_EXP_TYPE_ENDPOINT || type == PCI_EXP_TYPE_RC_END) {
> +		/* Link is healthy for IO reads now */
> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
> +			&info->status);
> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK,
> +			&info->mask);
> +		if (!(info->status & ~info->mask))
> +			return 0;
> +
> +		/* Get First Error Pointer */
> +		pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc);
> +		info->first_error = PCI_ERR_CAP_FEP(aercc);
> +
> +		if (info->status & AER_LOG_TLP_MASKS) {
> +			info->tlp_header_valid = 1;
> +			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
> +		}
> +	}
> +
> +	return 1;
> +}
> +
>  static inline void aer_process_err_devices(struct aer_err_info *e_info)
>  {
>  	int i;
> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
> index 31090770fffc..a74ae6a55064 100644
> --- a/drivers/pci/pcie/err.c
> +++ b/drivers/pci/pcie/err.c
> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>  	struct pci_dev *bridge;
>  	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>  	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
> +	struct aer_err_info info;
>  
>  	/*
>  	 * If the error was detected by a Root Port, Downstream Port, RCEC,
> @@ -223,6 +224,10 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>  			pci_warn(bridge, "subordinate device reset failed\n");
>  			goto failed;
>  		}
> +
> +		/* Link recovered, report fatal errors on RCiEP or EP */
> +		if (aer_get_device_fatal_error_info(dev, &info))
> +			aer_print_error(dev, &info);
>  	} else {
>  		pci_walk_bridge(bridge, report_normal_detected, &status);
>  	}
> @@ -259,6 +264,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>  	if (host->native_aer || pcie_ports_native) {
>  		pcie_clear_device_status(dev);
>  		pci_aer_clear_nonfatal_status(dev);
> +		pci_aer_clear_fatal_status(dev);
>  	}
>  
>  	pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
> -- 
> 2.39.3
> 



* Re: [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
  2024-11-06  9:03 ` [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered Shuai Xue
  2024-11-06 16:02   ` Bjorn Helgaas
@ 2024-11-06 16:39   ` Keith Busch
  2024-11-07  1:27     ` Shuai Xue
  1 sibling, 1 reply; 7+ messages in thread
From: Keith Busch @ 2024-11-06 16:39 UTC (permalink / raw)
  To: Shuai Xue
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, mahesh, oohall,
	sathyanarayanan.kuppuswamy

On Wed, Nov 06, 2024 at 05:03:39PM +0800, Shuai Xue wrote:
> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info)
> +{
> +	int type = pci_pcie_type(dev);
> +	int aer = dev->aer_cap;
> +	u32 aercc;
> +
> +	pci_info(dev, "type :%d\n", type);
> +
> +	/* Must reset in this function */
> +	info->status = 0;
> +	info->tlp_header_valid = 0;
> +	info->severity = AER_FATAL;
> +
> +	/* The device might not support AER */
> +	if (!aer)
> +		return 0;
> +
> +
> +	if (type == PCI_EXP_TYPE_ENDPOINT || type == PCI_EXP_TYPE_RC_END) {
> +		/* Link is healthy for IO reads now */
> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
> +			&info->status);
> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK,
> +			&info->mask);
> +		if (!(info->status & ~info->mask))
> +			return 0;
> +
> +		/* Get First Error Pointer */
> +		pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc);
> +		info->first_error = PCI_ERR_CAP_FEP(aercc);
> +
> +		if (info->status & AER_LOG_TLP_MASKS) {
> +			info->tlp_header_valid = 1;
> +			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
> +		}

This matches the uncorrectable handling in aer_get_device_error_info, so
perhaps a helper to reduce duplication.

> +	}
> +
> +	return 1;
> +}

Returning '1' even if type is root or downstream port?

>  static inline void aer_process_err_devices(struct aer_err_info *e_info)
>  {
>  	int i;
> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
> index 31090770fffc..a74ae6a55064 100644
> --- a/drivers/pci/pcie/err.c
> +++ b/drivers/pci/pcie/err.c
> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>  	struct pci_dev *bridge;
>  	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>  	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
> +	struct aer_err_info info;
>  
>  	/*
>  	 * If the error was detected by a Root Port, Downstream Port, RCEC,
> @@ -223,6 +224,10 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>  			pci_warn(bridge, "subordinate device reset failed\n");
>  			goto failed;
>  		}
> +
> +		/* Link recovered, report fatal errors on RCiEP or EP */
> +		if (aer_get_device_fatal_error_info(dev, &info))
> +			aer_print_error(dev, &info);

This will always print the error info even for root and downstream
ports, but you initialize "info" status and mask only if it's an EP or
RCiEP.



* Re: [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recovered
  2024-11-06 16:02   ` Bjorn Helgaas
@ 2024-11-07  1:24     ` Shuai Xue
  0 siblings, 0 replies; 7+ messages in thread
From: Shuai Xue @ 2024-11-07  1:24 UTC (permalink / raw)
  To: Bjorn Helgaas, kbusch
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, mahesh, oohall,
	sathyanarayanan.kuppuswamy



On 2024/11/7 00:02, Bjorn Helgaas wrote:
> On Wed, Nov 06, 2024 at 05:03:39PM +0800, Shuai Xue wrote:
>> The AER driver has historically avoided reading the configuration space of an
>> endpoint or RCiEP that reported a fatal error, considering the link to that
>> device unreliable. Consequently, when a fatal error occurs, the AER and DPC
>> drivers do not report specific error types, resulting in logs like:
>>
>> [  245.281980] pcieport 0000:30:03.0: EDR: EDR event received
>> [  245.287466] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>> [  245.295372] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>> [  245.300849] pcieport 0000:30:03.0: AER: broadcast error_detected message
>> [  245.307540] nvme nvme0: frozen state error detected, reset controller
>> [  245.722582] nvme 0000:34:00.0: ready 0ms after DPC
>> [  245.727365] pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>
>> But if the link recovered after the hot reset, we can safely access the AER
>> status of the error device. In that case, report the fatal error, which helps
>> to figure out the error root cause.
> 
> Explain why we can access these registers after reset.  I think it's
> important that these registers are sticky ("RW1CS" per spec).

Yes, AER error status registers are sticky and write-1-to-clear. If we
do not read them after reset_subordinates, the registers will be
cleared in the pci_error_handlers callbacks, e.g. nvme_err_handler:

   slot_reset() => nvme_slot_reset()
     pci_restore_state()
       pci_aer_clear_status()

Will add the reason to the commit log.

> 
>> After this patch, the logs look like:
>>
>> [  414.356755] pcieport 0000:30:03.0: EDR: EDR event received
>> [  414.362240] pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>> [  414.370148] pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>> [  414.375642] pcieport 0000:30:03.0: AER: broadcast error_detected message
>> [  414.382335] nvme nvme0: frozen state error detected, reset controller
>> [  414.645413] pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>> [  414.788016] nvme 0000:34:00.0: ready 0ms after DPC
>> [  414.796975] nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>> [  414.807312] nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>> [  414.815305] nvme 0000:34:00.0:    [ 4] DLP                    (First)
>> [  414.821768] pcieport 0000:30:03.0: AER: broadcast slot_reset message
> 
> Capitalize subject lines to match history (use "git log --oneline
> drivers/pci/pcie/aer.c" to see it).
> 
> Remove timestamps since they don't help understand the problem.
> 
> Indent the quoted material two spaces.
> 
> Wrap commit log to fit in 75 columns (except the quoted material;
> don't insert line breaks there).

Will do.

> 
>> Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
>> ---
>>   drivers/pci/pci.h      |  1 +
>>   drivers/pci/pcie/aer.c | 50 ++++++++++++++++++++++++++++++++++++++++++
>>   drivers/pci/pcie/err.c |  6 +++++
>>   3 files changed, 57 insertions(+)
>>
>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>> index 0866f79aec54..143f960a813d 100644
>> --- a/drivers/pci/pci.h
>> +++ b/drivers/pci/pci.h
>> @@ -505,6 +505,7 @@ struct aer_err_info {
>>   };
>>   
>>   int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info);
>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>   #endif	/* CONFIG_PCIEAER */
>>   
>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>> index 13b8586924ea..0c1e382ce117 100644
>> --- a/drivers/pci/pcie/aer.c
>> +++ b/drivers/pci/pcie/aer.c
>> @@ -1252,6 +1252,56 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>   	return 1;
>>   }
>>   
>> +/**
>> + * aer_get_device_fatal_error_info - read fatal error status from EP or RCiEP
>> + * and store it to info
>> + * @dev: pointer to the device expected to have a error record
>> + * @info: pointer to structure to store the error record
>> + *
>> + * Return 1 on success, 0 on error.
> 
> Backwards from the usual return value convention.

Yes. As @Keith pointed out, aer_get_device_fatal_error_info() is copied
from aer_get_device_error_info(); I will try to add a helper to reduce
duplication.

> 
>> + * Note that @info is reused among all error devices. Clear fields properly.
>> + */
>> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info)
>> +{
>> +	int type = pci_pcie_type(dev);
>> +	int aer = dev->aer_cap;
>> +	u32 aercc;
>> +
>> +	pci_info(dev, "type :%d\n", type);
> 
> I don't see this line in the sample output in the commit log.  Is this
> debug that you intended to remove?


Sorry, I missed this line, will remove it.

> 
>> +	/* Must reset in this function */
>> +	info->status = 0;
>> +	info->tlp_header_valid = 0;
>> +	info->severity = AER_FATAL;
>> +
>> +	/* The device might not support AER */
> 
> Unnecessary comment.

Will remove it.

Thank you for the valuable comments.

Best Regards,
Shuai




^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recoverd
  2024-11-06 16:39   ` Keith Busch
@ 2024-11-07  1:27     ` Shuai Xue
  0 siblings, 0 replies; 7+ messages in thread
From: Shuai Xue @ 2024-11-07  1:27 UTC (permalink / raw)
  To: Keith Busch
  Cc: linux-pci, linux-kernel, linuxppc-dev, bhelgaas, mahesh, oohall,
	sathyanarayanan.kuppuswamy



On 2024/11/7 00:39, Keith Busch wrote:
> On Wed, Nov 06, 2024 at 05:03:39PM +0800, Shuai Xue wrote:
>> +int aer_get_device_fatal_error_info(struct pci_dev *dev, struct aer_err_info *info)
>> +{
>> +	int type = pci_pcie_type(dev);
>> +	int aer = dev->aer_cap;
>> +	u32 aercc;
>> +
>> +	pci_info(dev, "type :%d\n", type);
>> +
>> +	/* Must reset in this function */
>> +	info->status = 0;
>> +	info->tlp_header_valid = 0;
>> +	info->severity = AER_FATAL;
>> +
>> +	/* The device might not support AER */
>> +	if (!aer)
>> +		return 0;
>> +
>> +
>> +	if (type == PCI_EXP_TYPE_ENDPOINT || type == PCI_EXP_TYPE_RC_END) {
>> +		/* Link is healthy for IO reads now */
>> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>> +			&info->status);
>> +		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK,
>> +			&info->mask);
>> +		if (!(info->status & ~info->mask))
>> +			return 0;
>> +
>> +		/* Get First Error Pointer */
>> +		pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc);
>> +		info->first_error = PCI_ERR_CAP_FEP(aercc);
>> +
>> +		if (info->status & AER_LOG_TLP_MASKS) {
>> +			info->tlp_header_valid = 1;
>> +			pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
>> +		}
> 
> This matches the uncorrectable handling in aer_get_device_error_info, so
> perhaps a helper to reduce duplication.

Yes, will do.
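
The shared portion might look something like the sketch below — a
user-space simulation of the register parsing common to both functions,
not the actual kernel patch: the struct, the helper name, and the
register constants here are illustrative stand-ins, and the caller would
still perform the config-space reads and the TLP header log read itself.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel structures/constants (hypothetical). */
struct aer_info {
	uint32_t status;
	uint32_t mask;
	int first_error;
	int tlp_header_valid;
};

#define TLP_LOG_MASKS	0x00000fc0u	/* illustrative value only */
#define ERR_CAP_FEP(x)	((x) & 0x1f)	/* First Error Pointer field */

/*
 * Shared helper: given raw UNCOR_STATUS / UNCOR_MASK / ERR_CAP values,
 * fill in @info.  Returns 1 if an unmasked error is logged, 0 otherwise.
 * Both aer_get_device_error_info() and the new fatal-path function could
 * call something like this instead of duplicating the parsing.
 */
static int aer_fill_uncor_info(struct aer_info *info, uint32_t status,
			       uint32_t mask, uint32_t aercc)
{
	info->status = status;
	info->mask = mask;
	info->tlp_header_valid = 0;

	if (!(status & ~mask))
		return 0;	/* every logged error is masked */

	info->first_error = ERR_CAP_FEP(aercc);
	if (status & TLP_LOG_MASKS)
		info->tlp_header_valid = 1;	/* caller reads the header log */
	return 1;
}
```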

> 
>> +	}
>> +
>> +	return 1;
>> +}
> 
> Returning '1' even if type is root or downstream port?
> 
>>   static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>   {
>>   	int i;
>> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
>> index 31090770fffc..a74ae6a55064 100644
>> --- a/drivers/pci/pcie/err.c
>> +++ b/drivers/pci/pcie/err.c
>> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>   	struct pci_dev *bridge;
>>   	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>>   	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
>> +	struct aer_err_info info;
>>   
>>   	/*
>>   	 * If the error was detected by a Root Port, Downstream Port, RCEC,
>> @@ -223,6 +224,10 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>   			pci_warn(bridge, "subordinate device reset failed\n");
>>   			goto failed;
>>   		}
>> +
>> +		/* Link recovered, report fatal errors on RCiEP or EP */
>> +		if (aer_get_device_fatal_error_info(dev, &info))
>> +			aer_print_error(dev, &info);
> 
> This will always print the error info even for root and downstream
> ports, but you initialize "info" status and mask only if it's an EP or
> RCiEP.

Got it. Will fix it.
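
One shape the fix could take (a hypothetical user-space sketch, not the
real patch — the enum values and function name are placeholders): bail
out early for anything that is not an Endpoint or RCiEP, so the function
returns 1 only when @info was actually populated and the caller never
prints an uninitialized record.

```c
#include <assert.h>

/* Stand-ins for pci_pcie_type() results (illustrative, not the real encoding). */
enum pcie_type { TYPE_ENDPOINT, TYPE_RC_END, TYPE_ROOT_PORT, TYPE_DOWNSTREAM };

/*
 * Returns 1 only when the device type is one we report on AND an unmasked
 * error is present; 0 otherwise, so pcie_do_recovery() never calls
 * aer_print_error() on an uninitialized aer_err_info.
 */
static int fatal_info_valid(enum pcie_type type, unsigned int status,
			    unsigned int mask)
{
	if (type != TYPE_ENDPOINT && type != TYPE_RC_END)
		return 0;	/* root/downstream ports: nothing filled in */
	if (!(status & ~mask))
		return 0;	/* all logged errors are masked */
	return 1;		/* info is populated; caller may print it */
}
```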

Thank you for the valuable comments.

Best Regards,
Shuai




end of thread, other threads:[~2024-11-07  1:27 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-06  9:03 [RFC PATCH v1 0/2] PCI/AER: report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2024-11-06  9:03 ` [RFC PATCH v1 1/2] PCI/AER: run recovery on device that detected the error Shuai Xue
2024-11-06  9:03 ` [RFC PATCH v1 2/2] PCI/AER: report fatal errors of RCiEP and EP if link recoverd Shuai Xue
2024-11-06 16:02   ` Bjorn Helgaas
2024-11-07  1:24     ` Shuai Xue
2024-11-06 16:39   ` Keith Busch
2024-11-07  1:27     ` Shuai Xue
