public inbox for linux-rdma@vger.kernel.org
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Leon Romanovsky <leon@kernel.org>, Bjorn Helgaas <helgaas@kernel.org>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Gerd Bayer <gbayer@linux.ibm.com>,
	Alexander Schmidt <alexs@linux.ibm.com>,
	netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] net/mlx5: stop waiting for PCI link if reset is required
Date: Tue, 04 Apr 2023 17:27:35 +0200	[thread overview]
Message-ID: <a25455eac6a02eeb9710d9204dfe0b91938f61a1.camel@linux.ibm.com> (raw)
In-Reply-To: <20230403182105.GC4514@unreal>

On Mon, 2023-04-03 at 21:21 +0300, Leon Romanovsky wrote:
> On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> > After an error on the PCI link, the driver does not need to wait
> > for the link to become functional again, as a reset is required. Stop
> > the wait loop in this case to accelerate the recovery flow.
> > 
> > Co-developed-by: Alexander Schmidt <alexs@linux.ibm.com>
> > Signed-off-by: Alexander Schmidt <alexs@linux.ibm.com>
> > Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/health.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> > index f9438d4e43ca..81ca44e0705a 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> > @@ -325,6 +325,8 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
> >  	while (sensor_pci_not_working(dev)) {
> 
> According to the comment in sensor_pci_not_working(), this loop is
> supposed to wait until PCI is ready again. Otherwise, already in the
> first iteration, we will bail out with an error from the
> pci_channel_offline() check.
> 
> Thanks

Well yes. The problem is that this works for intermittent errors,
including when the card resets itself, which seems to be the use case
in mlx5_fw_reset_complete_reload() and mlx5_devlink_reload_fw_activate().
If there is a PCI error that requires a link reset, however, we do see
problems, even though recovery eventually works after running into the
timeout.

As I understand it, and as implemented at least on s390,
pci_channel_io_frozen is only set for fatal errors that require a
reset, while non-fatal errors get pci_channel_io_normal (see also
Documentation/PCI/pcieaer-howto.rst). Thus I think pci_channel_offline()
should only be true if a reset is required or there is a permanent
error. Furthermore, in the pci_channel_io_frozen state the PCI function
may be isolated and reads will not reach the endpoint; this is the case
at least on s390. So for errors that require a reset, the loop without
the pci_channel_offline() check will run until the reset is performed
or the timeout is reached. In the mlx5_health_try_recover() case during
error recovery we will then indeed always loop until timeout, because
the loop blocks mlx5_pci_err_detected() from returning, which in turn
blocks the reset (see Documentation/PCI/pci-error-recovery.rst). Adding
Bjorn, maybe he can confirm or correct my assumptions here.
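
To illustrate the state mapping I am describing: pci_channel_offline()
treats everything other than pci_channel_io_normal as offline, i.e.
both the frozen (reset required) and permanent-failure states. Below is
a minimal user-space sketch of that logic; the enum values mirror the
kernel's pci_channel_state_t, but struct fake_pci_dev is a stand-in for
struct pci_dev, not the real thing:

```c
#include <assert.h>

/* Channel states, mirroring the kernel's pci_channel_state_t values. */
enum pci_channel_state {
	pci_channel_io_normal = 1,	 /* non-fatal error, I/O still works */
	pci_channel_io_frozen = 2,	 /* fatal error, link reset required */
	pci_channel_io_perm_failure = 3, /* device is permanently gone */
};

/* Stand-in for struct pci_dev, carrying only the error state. */
struct fake_pci_dev {
	enum pci_channel_state error_state;
};

/* Mirrors the kernel's pci_channel_offline(): offline means anything
 * other than normal, i.e. frozen (reset needed) or permanent failure. */
static int pci_channel_offline(struct fake_pci_dev *pdev)
{
	return pdev->error_state != pci_channel_io_normal;
}
```

With this mapping, the check added by the patch only aborts the wait
loop for errors that need a reset (or are permanent), while the
intermittent-error case (pci_channel_io_normal) keeps polling as
before.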

Thanks,
Niklas

> 
> >  		if (time_after(jiffies, end))
> >  			return -ETIMEDOUT;
> > +		if (pci_channel_offline(dev->pdev))
> > +			return -EIO;
> >  		msleep(100);
> >  	}
> >  	return 0;
> > @@ -332,10 +334,16 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
> >  
> >  static int mlx5_health_try_recover(struct mlx5_core_dev *dev)
> >  {
> > +	int rc;
> > +
> >  	mlx5_core_warn(dev, "handling bad device here\n");
> >  	mlx5_handle_bad_state(dev);
> > -	if (mlx5_health_wait_pci_up(dev)) {
> > -		mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
> > +	rc = mlx5_health_wait_pci_up(dev);
> > +	if (rc) {
> > +		if (rc == -ETIMEDOUT)
> > +			mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
> > +		else
> > +			mlx5_core_err(dev, "health recovery flow aborted, PCI channel offline\n");
> >  		return -EIO;
> >  	}
> >  	mlx5_core_err(dev, "starting health recovery flow\n");
> > 
> > base-commit: 7e364e56293bb98cae1b55fd835f5991c4e96e7d
> > -- 
> > 2.37.2
> > 



Thread overview: 7+ messages
2023-04-03  7:56 [PATCH] net/mlx5: stop waiting for PCI link if reset is required Niklas Schnelle
2023-04-03 18:21 ` Leon Romanovsky
2023-04-04 15:27   ` Niklas Schnelle [this message]
2023-04-05 21:06     ` Bjorn Helgaas
2023-04-09  8:54       ` Leon Romanovsky
2023-04-09  8:55 ` Leon Romanovsky
2023-04-11 10:13   ` Niklas Schnelle
