From: Saeed Mahameed <saeedm@nvidia.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>, <netdev@vger.kernel.org>,
"Eran Ben Elisha" <eranbe@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [net 1/4] net/mlx5: Fix wrong address reclaim when command interface is down
Date: Wed, 2 Dec 2020 20:39:43 -0800 [thread overview]
Message-ID: <20201203043946.235385-2-saeedm@nvidia.com> (raw)
In-Reply-To: <20201203043946.235385-1-saeedm@nvidia.com>
From: Eran Ben Elisha <eranbe@nvidia.com>
When the command interface is down, the driver should reclaim all 4K page
chunks that were held by the Firmware. Fix a bug on 64K page size systems,
where the driver repeatedly released only the first 4K chunk of each page.

Define a helper function that fills the 4K chunks of a given Firmware page
into the reclaim output. Iterate over all unreleased Firmware pages and
call the helper for each of them.
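
For illustration only, a minimal userspace sketch of the per-chunk address
calculation this patch introduces (not the driver code itself; the names
fill_chunk_addrs and CHUNKS_PER_PAGE and the sample addresses are
assumptions for a 64K-page host with 4K Firmware pages):

#include <stdint.h>
#include <stdio.h>

/* Illustration: on a 64K-page host, each driver page holds 16 chunks of
 * 4K. A clear bit in the page's bitmask marks a 4K chunk still held by
 * Firmware, so every such chunk address must be written to the reclaim
 * output, not just the first one. */
#define CHUNK_SIZE      4096ULL
#define CHUNKS_PER_PAGE 16U     /* assumption: 64K / 4K */

static unsigned int fill_chunk_addrs(uint64_t page_addr, uint16_t bitmask,
                                     uint64_t *out, unsigned int npages)
{
        unsigned int n, filled = 0;

        for (n = 0; n < CHUNKS_PER_PAGE && filled < npages; n++) {
                if (bitmask & (1U << n))  /* set bit: chunk already free */
                        continue;
                out[filled++] = page_addr + n * CHUNK_SIZE;
        }
        return filled;
}

int main(void)
{
        uint64_t out[CHUNKS_PER_PAGE];
        /* bits 0-1 set: first two 4K chunks already released */
        unsigned int n = fill_chunk_addrs(0x200000ULL, 0x0003, out,
                                          CHUNKS_PER_PAGE);

        while (n--)
                printf("reclaim 0x%llx\n", (unsigned long long)out[n]);
        return 0;
}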
Fixes: 5adff6a08862 ("net/mlx5: Fix incorrect page count when in internal error")
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../ethernet/mellanox/mlx5/core/pagealloc.c | 21 +++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 150638814517..4d7f8a357df7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -422,6 +422,24 @@ static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id,
npages, ec_function, func_id);
}
+static u32 fwp_fill_manage_pages_out(struct fw_page *fwp, u32 *out, u32 index,
+ u32 npages)
+{
+ u32 pages_set = 0;
+ unsigned int n;
+
+ for_each_clear_bit(n, &fwp->bitmask, MLX5_NUM_4K_IN_PAGE) {
+ MLX5_ARRAY_SET64(manage_pages_out, out, pas, index + pages_set,
+ fwp->addr + (n * MLX5_ADAPTER_PAGE_SIZE));
+ pages_set++;
+
+ if (!--npages)
+ break;
+ }
+
+ return pages_set;
+}
+
static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
u32 *in, int in_size, u32 *out, int out_size)
{
@@ -448,8 +466,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
fwp = rb_entry(p, struct fw_page, rb_node);
p = rb_next(p);
- MLX5_ARRAY_SET64(manage_pages_out, out, pas, i, fwp->addr);
- i++;
+ i += fwp_fill_manage_pages_out(fwp, out, i, npages - i);
}
MLX5_SET(manage_pages_out, out, output_num_entries, i);
--
2.26.2