linux-rdma.vger.kernel.org archive mirror
* [PATCH] IB/mlx5: Expose correct max_sge_rd limit
@ 2016-03-31 16:03 Sagi Grimberg
       [not found] ` <1459440205-25208-1-git-send-email-sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Sagi Grimberg @ 2016-03-31 16:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA; +Cc: Matan Barak, Leon Romanovsky

mlx5 devices (Connect-IB, ConnectX-4, ConnectX-4-LX) have a limitation
where rdma read work queue entries cannot exceed 512 bytes.
An rdma read wqe needs to fit in 512 bytes:
- wqe control segment (16 bytes)
- rdma segment (16 bytes)
- scatter elements (16 bytes each)

So max_sge_rd should be: (512 - 16 - 16) / 16 = 30.
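
As a sanity check, 30 scatter elements fill such a wqe exactly
(16 + 16 + 30 * 16 = 512), which could be asserted at build time,
e.g.:

	BUILD_BUG_ON(16 + 16 + 30 * 16 != 512); /* ctrl + rdma + 30 sges */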

Reported-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Tested-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Signed-off-by: Sagi Grimberg <sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
---
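Note (not part of the commit message): a minimal sketch of how a ULP
would consume this cap, assuming the device attributes cached on
struct ib_device as in current kernels; cap_read_sges() is a made-up
helper:

	#include <rdma/ib_verbs.h>

	/* Clamp a requested rdma read scatter count to the device limit. */
	static u32 cap_read_sges(struct ib_device *dev, u32 wanted)
	{
		return min_t(u32, wanted, dev->attrs.max_sge_rd);
	}
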
 drivers/infiniband/hw/mlx5/main.c |  2 +-
 include/linux/mlx5/device.h       | 11 +++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 03c418ccbc98..ed9cefa1f6f1 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -517,7 +517,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 		     sizeof(struct mlx5_wqe_ctrl_seg)) /
 		     sizeof(struct mlx5_wqe_data_seg);
 	props->max_sge = min(max_rq_sg, max_sq_sg);
-	props->max_sge_rd = props->max_sge;
+	props->max_sge_rd	   = MLX5_MAX_SGE_RD;
 	props->max_cq		   = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
 	props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1;
 	props->max_mr		   = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index 987764afa65c..f8b83792939b 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -363,6 +363,17 @@ enum {
 	MLX5_CAP_OFF_CMDIF_CSUM		= 46,
 };
 
+enum {
+	/*
+	 * Max wqe size for rdma read is 512 bytes, so this
+	 * limits our max_sge_rd as the wqe needs to fit:
+	 * - ctrl segment (16 bytes)
+	 * - rdma segment (16 bytes)
+	 * - scatter elements (16 bytes each)
+	 */
+	MLX5_MAX_SGE_RD	= (512 - 16 - 16) / 16
+};
+
 struct mlx5_inbox_hdr {
 	__be16		opcode;
 	u8		rsvd[4];
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH] IB/mlx5: Expose correct max_sge_rd limit
       [not found] ` <1459440205-25208-1-git-send-email-sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
@ 2016-04-01  0:04   ` Leon Romanovsky
       [not found]     ` <20160401000426.GF2670-2ukJVAZIZ/Y@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Leon Romanovsky @ 2016-04-01  0:04 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Matan Barak, Leon Romanovsky

On Thu, Mar 31, 2016 at 07:03:25PM +0300, Sagi Grimberg wrote:
> mlx5 devices (Connect-IB, ConnectX-4, ConnectX-4-LX) have a limitation
> where rdma read work queue entries cannot exceed 512 bytes.
> An rdma read wqe needs to fit in 512 bytes:
> - wqe control segment (16 bytes)
> - rdma segment (16 bytes)
> - scatter elements (16 bytes each)
> 
> So max_sge_rd should be: (512 - 16 - 16) / 16 = 30.
> 
> Reported-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
> Tested-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
> Signed-off-by: Sagi Grimberg <sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>

Thanks Sagi,

Excellent catch, the same as in commit a5e4ba334e2
("mlx4: Expose correct max_sge_rd limit").

Signed-off-by: Leon Romanovsky <leonro-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

> ---
>  drivers/infiniband/hw/mlx5/main.c |  2 +-
>  include/linux/mlx5/device.h       | 11 +++++++++++
>  2 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 03c418ccbc98..ed9cefa1f6f1 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -517,7 +517,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
>  		     sizeof(struct mlx5_wqe_ctrl_seg)) /
>  		     sizeof(struct mlx5_wqe_data_seg);
>  	props->max_sge = min(max_rq_sg, max_sq_sg);
> -	props->max_sge_rd = props->max_sge;
> +	props->max_sge_rd	   = MLX5_MAX_SGE_RD;
>  	props->max_cq		   = 1 << MLX5_CAP_GEN(mdev, log_max_cq);
>  	props->max_cqe = (1 << MLX5_CAP_GEN(mdev, log_max_cq_sz)) - 1;
>  	props->max_mr		   = 1 << MLX5_CAP_GEN(mdev, log_max_mkey);
> diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
> index 987764afa65c..f8b83792939b 100644
> --- a/include/linux/mlx5/device.h
> +++ b/include/linux/mlx5/device.h
> @@ -363,6 +363,17 @@ enum {
>  	MLX5_CAP_OFF_CMDIF_CSUM		= 46,
>  };
>  
> +enum {
> +	/*
> +	 * Max wqe size for rdma read is 512 bytes, so this
> +	 * limits our max_sge_rd as the wqe needs to fit:
> +	 * - ctrl segment (16 bytes)
> +	 * - rdma segment (16 bytes)
> +	 * - scatter elements (16 bytes each)
> +	 */
> +	MLX5_MAX_SGE_RD	= (512 - 16 - 16) / 16
> +};
> +
>  struct mlx5_inbox_hdr {
>  	__be16		opcode;
>  	u8		rsvd[4];
> -- 
> 1.9.1
> 

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] IB/mlx5: Expose correct max_sge_rd limit
       [not found]     ` <20160401000426.GF2670-2ukJVAZIZ/Y@public.gmane.org>
@ 2016-04-07  7:05       ` Sagi Grimberg
       [not found]         ` <570606B4.5040402-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Sagi Grimberg @ 2016-04-07  7:05 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Matan Barak, Leon Romanovsky



On 01/04/16 03:04, Leon Romanovsky wrote:
> On Thu, Mar 31, 2016 at 07:03:25PM +0300, Sagi Grimberg wrote:
>> mlx5 devices (Connect-IB, ConnectX-4, ConnectX-4-LX) have a limitation
>> where rdma read work queue entries cannot exceed 512 bytes.
>> An rdma read wqe needs to fit in 512 bytes:
>> - wqe control segment (16 bytes)
>> - rdma segment (16 bytes)
>> - scatter elements (16 bytes each)
>>
>> So max_sge_rd should be: (512 - 16 - 16) / 16 = 30.
>>
>> Reported-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
>> Tested-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
>> Signed-off-by: Sagi Grimberg <sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
>
> Thanks Sagi,
>
> Excellent catch, the same as in commit a5e4ba334e2
> ("mlx4: Expose correct max_sge_rd limit").
>
> Signed-off-by: Leon Romanovsky <leonro-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Doug, this is stable material; can we get it into 4.6-rc
and place a stable tag on it?
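That is, the usual Cc line in the sign-off area (the version range in
this example is only illustrative):

	Cc: <stable@vger.kernel.org> # 4.x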

Thanks,
Sagi.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] IB/mlx5: Expose correct max_sge_rd limit
       [not found]         ` <570606B4.5040402-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
@ 2016-04-28 14:53           ` Doug Ledford
  0 siblings, 0 replies; 4+ messages in thread
From: Doug Ledford @ 2016-04-28 14:53 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Matan Barak, Leon Romanovsky


On 04/07/2016 03:05 AM, Sagi Grimberg wrote:
> 
> 
> On 01/04/16 03:04, Leon Romanovsky wrote:
>> On Thu, Mar 31, 2016 at 07:03:25PM +0300, Sagi Grimberg wrote:
>>> mlx5 devices (Connect-IB, ConnectX-4, ConnectX-4-LX) have a limitation
>>> where rdma read work queue entries cannot exceed 512 bytes.
>>> An rdma read wqe needs to fit in 512 bytes:
>>> - wqe control segment (16 bytes)
>>> - rdma segment (16 bytes)
>>> - scatter elements (16 bytes each)
>>>
>>> So max_sge_rd should be: (512 - 16 - 16) / 16 = 30.
>>>
>>> Reported-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
>>> Tested-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
>>> Signed-off-by: Sagi Grimberg <sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
>>
>> Thanks Sagi,
>>
>> Excellent catch, the same as in commit a5e4ba334e2
>> ("mlx4: Expose correct max_sge_rd limit").
>>
>> Signed-off-by: Leon Romanovsky <leonro-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> 
> Doug, this is stable material; can we get it into 4.6-rc
> and place a stable tag on it?
> 
> Thanks,
> Sagi.

Done, thanks.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
              GPG KeyID: 0E572FDD




^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2016-04-28 14:53 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-31 16:03 [PATCH] IB/mlx5: Expose correct max_sge_rd limit Sagi Grimberg
     [not found] ` <1459440205-25208-1-git-send-email-sagig-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-04-01  0:04   ` Leon Romanovsky
     [not found]     ` <20160401000426.GF2670-2ukJVAZIZ/Y@public.gmane.org>
2016-04-07  7:05       ` Sagi Grimberg
     [not found]         ` <570606B4.5040402-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org>
2016-04-28 14:53           ` Doug Ledford
