linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey()
@ 2025-07-20  9:25 Leon Romanovsky
  2025-07-20  9:25 ` [PATCH rdma-next 2/2] RDMA/mlx5: Fix incorrect MKEY masking Leon Romanovsky
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Leon Romanovsky @ 2025-07-20  9:25 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Colin King (gmail), Edward Srouji, linux-rdma,
	Michael Guralnik

From: Leon Romanovsky <leonro@nvidia.com>

As Colin reported:
 "The variable zapped_blocks is a size_t type and is being assigned a int
  return value from the call to _mlx5r_umr_zap_mkey. Since zapped_blocks is an
  unsigned type, the error check for zapped_blocks < 0 will never be true."

So separate the error return from the nblocks assignment.

Fixes: e73242aa14d2 ("RDMA/mlx5: Optimize DMABUF mkey page size")
Reported-by: "Colin King (gmail)" <colin.i.king@gmail.com>
Closes: https://lore.kernel.org/all/79166fb1-3b73-4d37-af02-a17b22eb8e64@gmail.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/umr.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index fa5c4ea685b9d..054f6dae24151 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -992,6 +992,7 @@ _mlx5r_dmabuf_umr_update_pas(struct mlx5_ib_mr *mr, unsigned int flags,
 static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
 			       unsigned int flags,
 			       unsigned int page_shift,
+			       size_t *nblocks,
 			       bool dd)
 {
 	unsigned int old_page_shift = mr->page_shift;
@@ -1000,7 +1001,6 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
 	size_t page_shift_nblocks;
 	unsigned int max_log_size;
 	int access_mode;
-	size_t nblocks;
 	int err;
 
 	access_mode = dd ? MLX5_MKC_ACCESS_MODE_KSM : MLX5_MKC_ACCESS_MODE_MTT;
@@ -1014,26 +1014,26 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
 	 * Block size must be aligned to MLX5_UMR_FLEX_ALIGNMENT since it may
 	 * be used as offset into the XLT later on.
 	 */
-	nblocks = ib_umem_num_dma_blocks(mr->umem, 1UL << max_page_shift);
+	*nblocks = ib_umem_num_dma_blocks(mr->umem, 1UL << max_page_shift);
 	if (dd)
-		nblocks = ALIGN(nblocks, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
+		*nblocks = ALIGN(*nblocks, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
 	else
-		nblocks = ALIGN(nblocks, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT);
+		*nblocks = ALIGN(*nblocks, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT);
 	page_shift_nblocks = ib_umem_num_dma_blocks(mr->umem,
 						    1UL << page_shift);
 	/* If the number of blocks at max possible page shift is greater than
 	 * the number of blocks at the new page size, we should just go over the
 	 * whole mkey entries.
 	 */
-	if (nblocks >= page_shift_nblocks)
-		nblocks = 0;
+	if (*nblocks >= page_shift_nblocks)
+		*nblocks = 0;
 
 	/* Make the first nblocks entries non-present without changing
 	 * page size yet.
 	 */
-	if (nblocks)
+	if (*nblocks)
 		mr->page_shift = max_page_shift;
-	err = _mlx5r_dmabuf_umr_update_pas(mr, flags, 0, nblocks, dd);
+	err = _mlx5r_dmabuf_umr_update_pas(mr, flags, 0, *nblocks, dd);
 	if (err) {
 		mr->page_shift = old_page_shift;
 		return err;
@@ -1042,7 +1042,7 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
 	/* Change page size to the max page size now that the MR is completely
 	 * non-present.
 	 */
-	if (nblocks) {
+	if (*nblocks) {
 		err = mlx5r_umr_update_mr_page_shift(mr, max_page_shift, dd);
 		if (err) {
 			mr->page_shift = old_page_shift;
@@ -1050,7 +1050,7 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
 		}
 	}
 
-	return nblocks;
+	return 0;
 }
 
 /**
@@ -1085,10 +1085,10 @@ int mlx5r_umr_dmabuf_update_pgsz(struct mlx5_ib_mr *mr, u32 xlt_flags,
 	size_t total_blocks;
 	int err;
 
-	zapped_blocks = _mlx5r_umr_zap_mkey(mr, xlt_flags, page_shift,
-					    mr->data_direct);
-	if (zapped_blocks < 0)
-		return zapped_blocks;
+	err = _mlx5r_umr_zap_mkey(mr, xlt_flags, page_shift, &zapped_blocks,
+				  mr->data_direct);
+	if (err)
+		return err;
 
 	/* _mlx5r_umr_zap_mkey already enables the mkey */
 	xlt_flags &= ~MLX5_IB_UPD_XLT_ENABLE;
-- 
2.50.1



* [PATCH rdma-next 2/2] RDMA/mlx5: Fix incorrect MKEY masking
  2025-07-20  9:25 [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey() Leon Romanovsky
@ 2025-07-20  9:25 ` Leon Romanovsky
  2025-07-20 23:00 ` [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey() Zhu Yanjun
  2025-07-21  6:28 ` Leon Romanovsky
  2 siblings, 0 replies; 4+ messages in thread
From: Leon Romanovsky @ 2025-07-20  9:25 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Leon Romanovsky, Edward Srouji, linux-rdma, Michael Guralnik

From: Leon Romanovsky <leonro@nvidia.com>

mkey_mask is of __be64 type, while MLX5_MKEY_MASK_PAGE_SIZE is declared
as unsigned long long. This causes the following static checker errors:

drivers/infiniband/hw/mlx5/umr.c:663:49: warning: invalid assignment: &=
drivers/infiniband/hw/mlx5/umr.c:663:49:    left side has type restricted __be64
drivers/infiniband/hw/mlx5/umr.c:663:49:    right side has type int

Fixes: e73242aa14d2 ("RDMA/mlx5: Optimize DMABUF mkey page size")
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/umr.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
index 054f6dae24151..7ef35cddce81c 100644
--- a/drivers/infiniband/hw/mlx5/umr.c
+++ b/drivers/infiniband/hw/mlx5/umr.c
@@ -660,7 +660,8 @@ static void mlx5r_umr_final_update_xlt(struct mlx5_ib_dev *dev,
 		if (!mr->ibmr.length)
 			MLX5_SET(mkc, &wqe->mkey_seg, length64, 1);
 		if (flags & MLX5_IB_UPD_XLT_KEEP_PGSZ)
-			wqe->ctrl_seg.mkey_mask &= ~MLX5_MKEY_MASK_PAGE_SIZE;
+			wqe->ctrl_seg.mkey_mask &=
+				cpu_to_be64(~MLX5_MKEY_MASK_PAGE_SIZE);
 	}
 
 	wqe->ctrl_seg.xlt_octowords =
-- 
2.50.1



* Re: [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey()
  2025-07-20  9:25 [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey() Leon Romanovsky
  2025-07-20  9:25 ` [PATCH rdma-next 2/2] RDMA/mlx5: Fix incorrect MKEY masking Leon Romanovsky
@ 2025-07-20 23:00 ` Zhu Yanjun
  2025-07-21  6:28 ` Leon Romanovsky
  2 siblings, 0 replies; 4+ messages in thread
From: Zhu Yanjun @ 2025-07-20 23:00 UTC (permalink / raw)
  To: Leon Romanovsky, Jason Gunthorpe
  Cc: Leon Romanovsky, Colin King (gmail), Edward Srouji, linux-rdma,
	Michael Guralnik

On 2025/7/20 2:25, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> As Colin reported:
>   "The variable zapped_blocks is a size_t type and is being assigned a int
>    return value from the call to _mlx5r_umr_zap_mkey. Since zapped_blocks is an
>    unsigned type, the error check for zapped_blocks < 0 will never be true."
> 
> So separate the error return from the nblocks assignment.

size_t is an unsigned type used to represent the size of objects, while 
int is a signed 32-bit integer (on most platforms).

Assigning an int value to a size_t causes an implicit conversion from 
signed to unsigned.

It may seem harmless, but it can cause subtle and serious bugs depending 
on the scenario.

Many CVEs in the kernel result from misuse of signed/unsigned types.

Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>

Zhu Yanjun
> Fixes: e73242aa14d2 ("RDMA/mlx5: Optimize DMABUF mkey page size")
> Reported-by: "Colin King (gmail)" <colin.i.king@gmail.com>
> Closes: https://lore.kernel.org/all/79166fb1-3b73-4d37-af02-a17b22eb8e64@gmail.com
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>   drivers/infiniband/hw/mlx5/umr.c | 28 ++++++++++++++--------------
>   1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
> index fa5c4ea685b9d..054f6dae24151 100644
> --- a/drivers/infiniband/hw/mlx5/umr.c
> +++ b/drivers/infiniband/hw/mlx5/umr.c
> @@ -992,6 +992,7 @@ _mlx5r_dmabuf_umr_update_pas(struct mlx5_ib_mr *mr, unsigned int flags,
>   static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
>   			       unsigned int flags,
>   			       unsigned int page_shift,
> +			       size_t *nblocks,
>   			       bool dd)
>   {
>   	unsigned int old_page_shift = mr->page_shift;
> @@ -1000,7 +1001,6 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
>   	size_t page_shift_nblocks;
>   	unsigned int max_log_size;
>   	int access_mode;
> -	size_t nblocks;
>   	int err;
>   
>   	access_mode = dd ? MLX5_MKC_ACCESS_MODE_KSM : MLX5_MKC_ACCESS_MODE_MTT;
> @@ -1014,26 +1014,26 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
>   	 * Block size must be aligned to MLX5_UMR_FLEX_ALIGNMENT since it may
>   	 * be used as offset into the XLT later on.
>   	 */
> -	nblocks = ib_umem_num_dma_blocks(mr->umem, 1UL << max_page_shift);
> +	*nblocks = ib_umem_num_dma_blocks(mr->umem, 1UL << max_page_shift);
>   	if (dd)
> -		nblocks = ALIGN(nblocks, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
> +		*nblocks = ALIGN(*nblocks, MLX5_UMR_KSM_NUM_ENTRIES_ALIGNMENT);
>   	else
> -		nblocks = ALIGN(nblocks, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT);
> +		*nblocks = ALIGN(*nblocks, MLX5_UMR_MTT_NUM_ENTRIES_ALIGNMENT);
>   	page_shift_nblocks = ib_umem_num_dma_blocks(mr->umem,
>   						    1UL << page_shift);
>   	/* If the number of blocks at max possible page shift is greater than
>   	 * the number of blocks at the new page size, we should just go over the
>   	 * whole mkey entries.
>   	 */
> -	if (nblocks >= page_shift_nblocks)
> -		nblocks = 0;
> +	if (*nblocks >= page_shift_nblocks)
> +		*nblocks = 0;
>   
>   	/* Make the first nblocks entries non-present without changing
>   	 * page size yet.
>   	 */
> -	if (nblocks)
> +	if (*nblocks)
>   		mr->page_shift = max_page_shift;
> -	err = _mlx5r_dmabuf_umr_update_pas(mr, flags, 0, nblocks, dd);
> +	err = _mlx5r_dmabuf_umr_update_pas(mr, flags, 0, *nblocks, dd);
>   	if (err) {
>   		mr->page_shift = old_page_shift;
>   		return err;
> @@ -1042,7 +1042,7 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
>   	/* Change page size to the max page size now that the MR is completely
>   	 * non-present.
>   	 */
> -	if (nblocks) {
> +	if (*nblocks) {
>   		err = mlx5r_umr_update_mr_page_shift(mr, max_page_shift, dd);
>   		if (err) {
>   			mr->page_shift = old_page_shift;
> @@ -1050,7 +1050,7 @@ static int _mlx5r_umr_zap_mkey(struct mlx5_ib_mr *mr,
>   		}
>   	}
>   
> -	return nblocks;
> +	return 0;
>   }
>   
>   /**
> @@ -1085,10 +1085,10 @@ int mlx5r_umr_dmabuf_update_pgsz(struct mlx5_ib_mr *mr, u32 xlt_flags,
>   	size_t total_blocks;
>   	int err;
>   
> -	zapped_blocks = _mlx5r_umr_zap_mkey(mr, xlt_flags, page_shift,
> -					    mr->data_direct);
> -	if (zapped_blocks < 0)
> -		return zapped_blocks;
> +	err = _mlx5r_umr_zap_mkey(mr, xlt_flags, page_shift, &zapped_blocks,
> +				  mr->data_direct);
> +	if (err)
> +		return err;
>   
>   	/* _mlx5r_umr_zap_mkey already enables the mkey */
>   	xlt_flags &= ~MLX5_IB_UPD_XLT_ENABLE;



* Re: [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey()
  2025-07-20  9:25 [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey() Leon Romanovsky
  2025-07-20  9:25 ` [PATCH rdma-next 2/2] RDMA/mlx5: Fix incorrect MKEY masking Leon Romanovsky
  2025-07-20 23:00 ` [PATCH rdma-next 1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey() Zhu Yanjun
@ 2025-07-21  6:28 ` Leon Romanovsky
  2 siblings, 0 replies; 4+ messages in thread
From: Leon Romanovsky @ 2025-07-21  6:28 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: Colin King (gmail), Edward Srouji, linux-rdma, Michael Guralnik,
	Leon Romanovsky


On Sun, 20 Jul 2025 12:25:34 +0300, Leon Romanovsky wrote:
> As Colin reported:
>  "The variable zapped_blocks is a size_t type and is being assigned a int
>   return value from the call to _mlx5r_umr_zap_mkey. Since zapped_blocks is an
>   unsigned type, the error check for zapped_blocks < 0 will never be true."
> 
> So separate the error return from the nblocks assignment.
> 
> [...]

Applied, thanks!

[1/2] RDMA/mlx5: Fix returned type from _mlx5r_umr_zap_mkey()
      https://git.kernel.org/rdma/rdma/c/d59ebb4549ff9b
[2/2] RDMA/mlx5: Fix incorrect MKEY masking
      https://git.kernel.org/rdma/rdma/c/b83440736864ad

Best regards,
-- 
Leon Romanovsky <leon@kernel.org>


