From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 27 Jan 2026 10:24:02 +0000
From: Jonathan Cameron
To: Shuai Xue
Subject: Re: [PATCH v7 2/5] PCI/DPC: Run recovery on device that detected the error
Message-ID: <20260127102402.00004da2@huawei.com>
In-Reply-To: <20260124074557.73961-3-xueshuai@linux.alibaba.com>
References: <20260124074557.73961-1-xueshuai@linux.alibaba.com>
	<20260124074557.73961-3-xueshuai@linux.alibaba.com>
X-Mailer: Claws Mail 4.3.0 (GTK 3.24.42; x86_64-w64-mingw32)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit

On Sat, 24 Jan 2026 15:45:54 +0800
Shuai Xue wrote:

> The current implementation of pcie_do_recovery() assumes that the
> recovery process is executed for the device that detected the error.
> However, the DPC driver currently passes the error port that experienced
> the DPC event to pcie_do_recovery().
>
> Use the SOURCE ID register to correctly identify the device that
> detected the error. When passing the error device, the
> pcie_do_recovery() will find the upstream bridge and walk bridges
> potentially AER affected. And subsequent commits will be able to
> accurately access AER status of the error device.
>
> Should not observe any functional changes.
>
> Reviewed-by: Kuppuswamy Sathyanarayanan
> Signed-off-by: Shuai Xue
> ---
>  drivers/pci/pci.h      |  2 +-
>  drivers/pci/pcie/dpc.c | 25 +++++++++++++++++++++----
>  drivers/pci/pcie/edr.c |  7 ++++---
>  3 files changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 0e67014aa001..58640e656897 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -771,7 +771,7 @@ struct rcec_ea {
>  void pci_save_dpc_state(struct pci_dev *dev);
>  void pci_restore_dpc_state(struct pci_dev *dev);
>  void pci_dpc_init(struct pci_dev *pdev);
> -void dpc_process_error(struct pci_dev *pdev);
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev);
>  pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
>  bool pci_dpc_recovered(struct pci_dev *pdev);
>  unsigned int dpc_tlp_log_len(struct pci_dev *dev);
> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
> index bff29726c6a5..f6069f621683 100644
> --- a/drivers/pci/pcie/dpc.c
> +++ b/drivers/pci/pcie/dpc.c
> @@ -260,10 +260,20 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
>  	return 1;
>  }
>
> -void dpc_process_error(struct pci_dev *pdev)
> +/**
> + * dpc_process_error - handle the DPC error status
> + * @pdev: the port that experienced the containment event
> + *
> + * Return: the device that detected the error.
> + *
> + * NOTE: The device reference count is increased, the caller must decrement
> + * the reference count by calling pci_dev_put().
> + */
> +struct pci_dev *dpc_process_error(struct pci_dev *pdev)

Maybe it makes sense to carry the err_port naming for the pci_dev in here
as well?  Seems stronger than just relying on people reading the
documentation you've added.
>  {
>  	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
>  	struct aer_err_info info = {};
> +	struct pci_dev *err_dev;
>
>  	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
>
> @@ -279,6 +289,7 @@ void dpc_process_error(struct pci_dev *pdev)
>  			pci_aer_clear_nonfatal_status(pdev);
>  			pci_aer_clear_fatal_status(pdev);
>  		}
> +		err_dev = pci_dev_get(pdev);
>  		break;
>  	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE:
>  	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE:
> @@ -290,6 +301,8 @@
>  			 "ERR_FATAL" : "ERR_NONFATAL",
>  			 pci_domain_nr(pdev->bus), PCI_BUS_NUM(source),
>  			 PCI_SLOT(source), PCI_FUNC(source));
> +		err_dev = pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
> +						      PCI_BUS_NUM(source), source & 0xff);

Bunch of replication in here with the pci_warn().  Maybe some local
variables?  Maybe even rebuild the final parameter from
PCI_DEVFN(slot, func) just to make the association with the print really
obvious?

Is there any chance that this might return NULL?  Feels like maybe that's
only a possibility on a broken setup, but I'm not sure of all the
wonderful races around hotplug and DPC occurring before the OS has caught
up.
>  		break;
>  	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT:
>  		ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
> @@ -304,8 +317,11 @@
>  		if (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO &&
>  		    pdev->dpc_rp_extensions)
>  			dpc_process_rp_pio_error(pdev);
> +		err_dev = pci_dev_get(pdev);
>  		break;
>  	}
> +
> +	return err_dev;
>  }
> diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
> index 521fca2f40cb..b6e9d652297e 100644
> --- a/drivers/pci/pcie/edr.c
> +++ b/drivers/pci/pcie/edr.c
> @@ -150,7 +150,7 @@ static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
>
>  static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  {
> -	struct pci_dev *pdev = data, *err_port;
> +	struct pci_dev *pdev = data, *err_port, *err_dev;
>  	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
>  	u16 status;
>
> @@ -190,7 +190,7 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  		goto send_ost;
>  	}
>
> -	dpc_process_error(err_port);
> +	err_dev = dpc_process_error(err_port);
>  	pci_aer_raw_clear_status(err_port);
>
>  	/*
> @@ -198,7 +198,8 @@ static void edr_handle_event(acpi_handle handle, u32 event, void *data)
>  	 * or ERR_NONFATAL, since the link is already down, use the FATAL
>  	 * error recovery path for both cases.
>  	 */
> -	estate = pcie_do_recovery(err_port, pci_channel_io_frozen, dpc_reset_link);
> +	estate = pcie_do_recovery(err_dev, pci_channel_io_frozen, dpc_reset_link);
> +	pci_dev_put(err_dev);
>
>  send_ost:
>