From: Leon Romanovsky <leon@kernel.org>
To: Doug Ledford <dledford@redhat.com>, Jason Gunthorpe <jgg@mellanox.com>
Cc: Leon Romanovsky <leonro@mellanox.com>,
RDMA mailing list <linux-rdma@vger.kernel.org>,
Artemy Kovalyov <artemyko@mellanox.com>,
Guy Levi <guyle@mellanox.com>, Haggai Eran <haggaie@mellanox.com>,
Jerome Glisse <jglisse@redhat.com>,
Moni Shoua <monis@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
linux-netdev <netdev@vger.kernel.org>
Subject: [PATCH rdma-next 2/4] IB/mlx5: WQE dump jumps over first 16 bytes
Date: Tue, 19 Mar 2019 11:24:37 +0200
Message-ID: <20190319092439.10701-3-leon@kernel.org> (raw)
In-Reply-To: <20190319092439.10701-1-leon@kernel.org>
From: Artemy Kovalyov <artemyko@mellanox.com>
Move the index increment to after it is used; otherwise the dump of the
WQE starts from the second WQE BB.
Fixes: 34f4c9554d8b ("IB/mlx5: Use fragmented QP's buffer for in-kernel users")
Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/hw/mlx5/qp.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 6b1f0e76900b..2014fd0fddc7 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4724,16 +4724,15 @@ static void set_linv_wr(struct mlx5_ib_qp *qp, void **seg, int *size,
static void dump_wqe(struct mlx5_ib_qp *qp, u32 idx, int size_16)
{
__be32 *p = NULL;
- u32 tidx = idx;
int i, j;
pr_debug("dump WQE index %u:\n", idx);
for (i = 0, j = 0; i < size_16 * 4; i += 4, j += 4) {
if ((i & 0xf) == 0) {
- tidx = (tidx + 1) & (qp->sq.wqe_cnt - 1);
- p = mlx5_frag_buf_get_wqe(&qp->sq.fbc, tidx);
+ p = mlx5_frag_buf_get_wqe(&qp->sq.fbc, idx);
pr_debug("WQBB at %p:\n", (void *)p);
j = 0;
+ idx = (idx + 1) & (qp->sq.wqe_cnt - 1);
}
pr_debug("%08x %08x %08x %08x\n", be32_to_cpu(p[j]),
be32_to_cpu(p[j + 1]), be32_to_cpu(p[j + 2]),
--
2.20.1
Thread overview: 11+ messages
2019-03-19 9:24 [PATCH rdma-next 0/4] Small set of mlx5 related fixes Leon Romanovsky
2019-03-19 9:24 ` [PATCH rdma-next 1/4] IB/mlx5: Reset access mask when looping inside page fault handler Leon Romanovsky
2019-03-19 9:24 ` Leon Romanovsky [this message]
2019-03-19 9:24 ` [PATCH mlx5-next 3/4] net/mlx5: Decrease default mr cache size Leon Romanovsky
2019-03-27 10:07 ` Or Gerlitz
2019-03-27 11:41 ` Leon Romanovsky
2019-03-27 11:58 ` Or Gerlitz
2019-03-27 13:36 ` Leon Romanovsky
2019-03-27 14:24 ` Or Gerlitz
2019-03-19 9:24 ` [PATCH rdma-next 4/4] IB/mlx5: Compare only index part of a memory window rkey Leon Romanovsky
2019-03-27 18:29 ` [PATCH rdma-next 0/4] Small set of mlx5 related fixes Jason Gunthorpe