From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Gal Pressman <gal@nvidia.com>,
	Aya Levin <ayal@nvidia.com>, Saeed Mahameed <saeedm@nvidia.com>
Subject: [PATCH net-next 14/15] net/mlx5e: Add recovery flow in case of error CQE
Date: Thu,  6 Jan 2022 16:29:55 -0800
Message-ID: <20220107002956.74849-15-saeed@kernel.org>
In-Reply-To: <20220107002956.74849-1-saeed@kernel.org>

From: Gal Pressman <gal@nvidia.com>

The rep legacy RQ completion handling was missing the appropriate
handling of error CQEs (dump the CQE and queue recovery work); fix it
by calling trigger_report() when needed.

Since all CQE handling flows do the exact same error CQE handling,
extract it to a common helper function.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
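For context, the recovery flow that trigger_report() kicks off on an error
CQE ("dump the CQE and queue recovery work" above) looks roughly like the
sketch below. The sketch is abbreviated and is not part of this patch;
cqe_syndrome_needs_recover(), mlx5e_dump_error_cqe(), the
MLX5E_RQ_STATE_RECOVERING bit and the rq->recover_work item are assumed to
match the existing driver code and may differ in detail from upstream.

static void trigger_report(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
{
	struct mlx5_err_cqe *err_cqe = (struct mlx5_err_cqe *)cqe;
	struct mlx5e_priv *priv = rq->priv;

	/* Recover only on syndromes that warrant it, and only once per
	 * recovery cycle: dump the offending CQE and schedule the RQ
	 * recover work, which hands the error to the RX health reporter.
	 */
	if (cqe_syndrome_needs_recover(err_cqe->syndrome) &&
	    !test_and_set_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state)) {
		mlx5e_dump_error_cqe(&rq->cq, rq->rqn, err_cqe);
		queue_work(priv->wq, &rq->recover_work);
	}
}
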
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 20 ++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index f09b57c31ed7..96e260fd7987 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1603,6 +1603,12 @@ static void trigger_report(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	}
 }
 
+static void mlx5e_handle_rx_err_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+{
+	trigger_report(rq, cqe);
+	rq->stats->wqe_err++;
+}
+
 static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 {
 	struct mlx5_wq_cyc *wq = &rq->wqe.wq;
@@ -1616,8 +1622,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
-		trigger_report(rq, cqe);
-		rq->stats->wqe_err++;
+		mlx5e_handle_rx_err_cqe(rq, cqe);
 		goto free_wqe;
 	}
 
@@ -1670,7 +1675,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
-		rq->stats->wqe_err++;
+		mlx5e_handle_rx_err_cqe(rq, cqe);
 		goto free_wqe;
 	}
 
@@ -1719,8 +1724,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
 	wi->consumed_strides += cstrides;
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
-		trigger_report(rq, cqe);
-		rq->stats->wqe_err++;
+		mlx5e_handle_rx_err_cqe(rq, cqe);
 		goto mpwrq_cqe_out;
 	}
 
@@ -1988,8 +1992,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 	wi->consumed_strides += cstrides;
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
-		trigger_report(rq, cqe);
-		stats->wqe_err++;
+		mlx5e_handle_rx_err_cqe(rq, cqe);
 		goto mpwrq_cqe_out;
 	}
 
@@ -2058,8 +2061,7 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
 	wi->consumed_strides += cstrides;
 
 	if (unlikely(MLX5E_RX_ERR_CQE(cqe))) {
-		trigger_report(rq, cqe);
-		rq->stats->wqe_err++;
+		mlx5e_handle_rx_err_cqe(rq, cqe);
 		goto mpwrq_cqe_out;
 	}
 
-- 
2.33.1

