From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Moshe Shemesh <moshe@nvidia.com>,
Leon Romanovsky <leonro@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [net-next 05/16] net/mlx5: Remove redundant error on reclaim pages
Date: Wed, 9 Mar 2022 13:37:44 -0800
Message-ID: <20220309213755.610202-6-saeed@kernel.org>
In-Reply-To: <20220309213755.610202-1-saeed@kernel.org>
From: Moshe Shemesh <moshe@nvidia.com>
If reclaim pages was triggered by an FW event and the FW failed the
command, the driver should ignore the error, as the FW is aware of the
failure and will handle it.
A downstream patch will add a debugfs counter on this flow for
debuggability.
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
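For reviewers, a minimal standalone sketch of the decision this patch
introduces (stand-in stubs only, not driver code; the real change is in
reclaim_pages() below, where mlx5_cmd_do() reports an FW-side command
failure as -EREMOTEIO):

#include <errno.h>
#include <stdbool.h>

/* Stand-in for the FW command; the driver calls mlx5_cmd_do(), which
 * returns -EREMOTEIO when the FW itself failed the command. */
static int fw_reclaim_cmd(void) { return -EREMOTEIO; }

/* Decision added by this patch: an FW-reported failure is ignored only
 * when the reclaim was triggered by an FW event, because the FW already
 * knows about it and will handle it. */
static int reclaim(bool fw_event)
{
        int err = fw_reclaim_cmd();

        if (fw_event && err == -EREMOTEIO)
                return 0;       /* FW-triggered: FW is aware, nothing to do */
        return err;             /* driver-triggered: propagate the error */
}

In the diff, pages_work_handler() passes event=true because its reclaim
requests originate from FW events, while mlx5_reclaim_root_pages()
passes event=false.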
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index cc4734da6171..d29a305858ad 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -465,7 +465,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
u32 i = 0;
if (!mlx5_cmd_is_down(dev))
- return mlx5_cmd_exec(dev, in, in_size, out, out_size);
+ return mlx5_cmd_do(dev, in, in_size, out, out_size);
/* No hard feelings, we want our pages back! */
npages = MLX5_GET(manage_pages_in, in, input_num_entries);
@@ -489,7 +489,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
}
static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
- int *nclaimed, bool ec_function)
+ int *nclaimed, bool event, bool ec_function)
{
u32 function = get_function(func_id, ec_function);
int outlen = MLX5_ST_SZ_BYTES(manage_pages_out);
@@ -516,7 +516,11 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
mlx5_core_dbg(dev, "func 0x%x, npages %d, outlen %d\n",
func_id, npages, outlen);
err = reclaim_pages_cmd(dev, in, sizeof(in), out, outlen);
+ /* if triggered by FW event and failed by FW then ignore */
+ if (event && err == -EREMOTEIO)
+ err = 0;
if (err) {
+ err = mlx5_cmd_check(dev, err, in, out);
mlx5_core_err(dev, "failed reclaiming pages: err %d\n", err);
goto out_free;
}
@@ -556,7 +560,7 @@ static void pages_work_handler(struct work_struct *work)
release_all_pages(dev, req->func_id, req->ec_function);
else if (req->npages < 0)
err = reclaim_pages(dev, req->func_id, -1 * req->npages, NULL,
- req->ec_function);
+ true, req->ec_function);
else if (req->npages > 0)
err = give_pages(dev, req->func_id, req->npages, 1, req->ec_function);
@@ -655,7 +659,7 @@ static int mlx5_reclaim_root_pages(struct mlx5_core_dev *dev,
int err;
err = reclaim_pages(dev, func_id, optimal_reclaimed_pages(),
- &nclaimed, mlx5_core_is_ecpf(dev));
+ &nclaimed, false, mlx5_core_is_ecpf(dev));
if (err) {
mlx5_core_warn(dev, "failed reclaiming pages (%d) for func id 0x%x\n",
err, func_id);
--
2.35.1