public inbox for linux-rdma@vger.kernel.org
* [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support
@ 2025-11-20 15:15 Leon Romanovsky
  2025-11-20 15:15 ` [PATCH rdma-next 1/2] RDMA/core: " Leon Romanovsky
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Leon Romanovsky @ 2025-11-20 15:15 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: linux-rdma, linux-kernel, Maher Sanalla, Michael Guralnik

Nothing super fancy, just the addition of the new 1600_8x lane speed.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
Maher Sanalla (2):
      RDMA/core: Add new IB rate for XDR (8x) support
      RDMA/mlx5: Add support for 1600_8x lane speed

 drivers/infiniband/core/verbs.c   | 3 +++
 drivers/infiniband/hw/mlx5/main.c | 4 ++++
 drivers/infiniband/hw/mlx5/qp.c   | 5 +++--
 include/rdma/ib_verbs.h           | 1 +
 4 files changed, 11 insertions(+), 2 deletions(-)
---
base-commit: d056bc45b62b5981ebcd18c4303a915490b8ebe9
change-id: 20251120-speed-8-bc95d6f7d170

Best regards,
-- 
Leon Romanovsky <leonro@nvidia.com>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH rdma-next 1/2] RDMA/core: Add new IB rate for XDR (8x) support
  2025-11-20 15:15 [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
@ 2025-11-20 15:15 ` Leon Romanovsky
  2025-11-21  3:46   ` Kalesh Anakkur Purayil
  2025-11-20 15:15 ` [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed Leon Romanovsky
  2025-11-23  9:53 ` [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
  2 siblings, 1 reply; 8+ messages in thread
From: Leon Romanovsky @ 2025-11-20 15:15 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: linux-rdma, linux-kernel, Maher Sanalla, Michael Guralnik

From: Maher Sanalla <msanalla@nvidia.com>

Add the new rates as defined in the InfiniBand spec for XDR and 8x
link width support.

Furthermore, modify the utility conversion methods accordingly.

Reference: IB Spec Release 1.8

Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/core/verbs.c | 3 +++
 include/rdma/ib_verbs.h         | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 3a5f81402d2f..11b1a194de44 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -148,6 +148,7 @@ __attribute_const__ int ib_rate_to_mult(enum ib_rate rate)
 	case IB_RATE_400_GBPS: return 160;
 	case IB_RATE_600_GBPS: return 240;
 	case IB_RATE_800_GBPS: return 320;
+	case IB_RATE_1600_GBPS: return 640;
 	default:	       return  -1;
 	}
 }
@@ -178,6 +179,7 @@ __attribute_const__ enum ib_rate mult_to_ib_rate(int mult)
 	case 160: return IB_RATE_400_GBPS;
 	case 240: return IB_RATE_600_GBPS;
 	case 320: return IB_RATE_800_GBPS;
+	case 640: return IB_RATE_1600_GBPS;
 	default:  return IB_RATE_PORT_CURRENT;
 	}
 }
@@ -208,6 +210,7 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate)
 	case IB_RATE_400_GBPS: return 425000;
 	case IB_RATE_600_GBPS: return 637500;
 	case IB_RATE_800_GBPS: return 850000;
+	case IB_RATE_1600_GBPS: return 1700000;
 	default:	       return -1;
 	}
 }
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 0a85af610b6b..6aad66bc5dd7 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -859,6 +859,7 @@ enum ib_rate {
 	IB_RATE_400_GBPS = 21,
 	IB_RATE_600_GBPS = 22,
 	IB_RATE_800_GBPS = 23,
+	IB_RATE_1600_GBPS = 25,
 };
 
 /**

-- 
2.51.1



* [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed
  2025-11-20 15:15 [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
  2025-11-20 15:15 ` [PATCH rdma-next 1/2] RDMA/core: " Leon Romanovsky
@ 2025-11-20 15:15 ` Leon Romanovsky
  2025-11-20 22:44   ` yanjun.zhu
  2025-11-21  3:47   ` Kalesh Anakkur Purayil
  2025-11-23  9:53 ` [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
  2 siblings, 2 replies; 8+ messages in thread
From: Leon Romanovsky @ 2025-11-20 15:15 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: linux-rdma, linux-kernel, Maher Sanalla, Michael Guralnik

From: Maher Sanalla <msanalla@nvidia.com>

Add a check for 1600G_8X link speed when querying PTYS and report it
back correctly when needed.

While at it, adjust the mlx5 function that maps the speed rate from IB
spec values to internal driver values so it can handle speeds up to
1600 Gbps.

Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/mlx5/main.c | 4 ++++
 drivers/infiniband/hw/mlx5/qp.c   | 5 +++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 90daa58126f4..40284bbb45d6 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -511,6 +511,10 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
 		*active_width = IB_WIDTH_4X;
 		*active_speed = IB_SPEED_XDR;
 		break;
+	case MLX5E_PROT_MASK(MLX5E_1600TAUI_8_1600TBASE_CR8_KR8):
+		*active_width = IB_WIDTH_8X;
+		*active_speed = IB_SPEED_XDR;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 88724d15705d..69af20790481 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3451,10 +3451,11 @@ int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
 {
 	u32 stat_rate_support;
 
-	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
+	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS ||
+	    rate == IB_RATE_1600_GBPS)
 		return 0;
 
-	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
+	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_1600_GBPS)
 		return -EINVAL;
 
 	stat_rate_support = MLX5_CAP_GEN(dev->mdev, stat_rate_support);

-- 
2.51.1



* Re: [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed
  2025-11-20 15:15 ` [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed Leon Romanovsky
@ 2025-11-20 22:44   ` yanjun.zhu
  2025-11-23  9:49     ` Leon Romanovsky
  2025-11-21  3:47   ` Kalesh Anakkur Purayil
  1 sibling, 1 reply; 8+ messages in thread
From: yanjun.zhu @ 2025-11-20 22:44 UTC (permalink / raw)
  To: Leon Romanovsky, Jason Gunthorpe
  Cc: linux-rdma, linux-kernel, Maher Sanalla, Michael Guralnik

On 11/20/25 7:15 AM, Leon Romanovsky wrote:
> From: Maher Sanalla <msanalla@nvidia.com>
> 
> Add a check for 1600G_8X link speed when querying PTYS and report it
> back correctly when needed.

Amazing — 1600G is supported. I’m not sure whether this rate is 
supported only for InfiniBand or if it’s also available for RoCEv2. In 
any case, having such a high data rate is truly impressive.

Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>

Zhu Yanjun

> 
> While at it, adjust the mlx5 function that maps the speed rate from IB
> spec values to internal driver values so it can handle speeds up to
> 1600 Gbps.
> 
> Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
> Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>   drivers/infiniband/hw/mlx5/main.c | 4 ++++
>   drivers/infiniband/hw/mlx5/qp.c   | 5 +++--
>   2 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 90daa58126f4..40284bbb45d6 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -511,6 +511,10 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
>   		*active_width = IB_WIDTH_4X;
>   		*active_speed = IB_SPEED_XDR;
>   		break;
> +	case MLX5E_PROT_MASK(MLX5E_1600TAUI_8_1600TBASE_CR8_KR8):
> +		*active_width = IB_WIDTH_8X;
> +		*active_speed = IB_SPEED_XDR;
> +		break;
>   	default:
>   		return -EINVAL;
>   	}
> diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
> index 88724d15705d..69af20790481 100644
> --- a/drivers/infiniband/hw/mlx5/qp.c
> +++ b/drivers/infiniband/hw/mlx5/qp.c
> @@ -3451,10 +3451,11 @@ int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
>   {
>   	u32 stat_rate_support;
>   
> -	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
> +	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS ||
> +	    rate == IB_RATE_1600_GBPS)
>   		return 0;
>   
> -	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
> +	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_1600_GBPS)
>   		return -EINVAL;
>   
>   	stat_rate_support = MLX5_CAP_GEN(dev->mdev, stat_rate_support);
> 



* Re: [PATCH rdma-next 1/2] RDMA/core: Add new IB rate for XDR (8x) support
  2025-11-20 15:15 ` [PATCH rdma-next 1/2] RDMA/core: " Leon Romanovsky
@ 2025-11-21  3:46   ` Kalesh Anakkur Purayil
  0 siblings, 0 replies; 8+ messages in thread
From: Kalesh Anakkur Purayil @ 2025-11-21  3:46 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jason Gunthorpe, linux-rdma, linux-kernel, Maher Sanalla,
	Michael Guralnik


On Thu, Nov 20, 2025 at 9:01 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> From: Maher Sanalla <msanalla@nvidia.com>
>
> Add the new rates as defined in the InfiniBand spec for XDR and 8x
> link width support.
>
> Furthermore, modify the utility conversion methods accordingly.
>
> Reference: IB Spec Release 1.8
>
> Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
> Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>


-- 
Regards,
Kalesh AP



* Re: [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed
  2025-11-20 15:15 ` [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed Leon Romanovsky
  2025-11-20 22:44   ` yanjun.zhu
@ 2025-11-21  3:47   ` Kalesh Anakkur Purayil
  1 sibling, 0 replies; 8+ messages in thread
From: Kalesh Anakkur Purayil @ 2025-11-21  3:47 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Jason Gunthorpe, linux-rdma, linux-kernel, Maher Sanalla,
	Michael Guralnik


On Thu, Nov 20, 2025 at 9:02 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> From: Maher Sanalla <msanalla@nvidia.com>
>
> Add a check for 1600G_8X link speed when querying PTYS and report it
> back correctly when needed.
>
> While at it, adjust the mlx5 function that maps the speed rate from IB
> spec values to internal driver values so it can handle speeds up to
> 1600 Gbps.
>
> Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
> Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>



-- 
Regards,
Kalesh AP



* Re: [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed
  2025-11-20 22:44   ` yanjun.zhu
@ 2025-11-23  9:49     ` Leon Romanovsky
  0 siblings, 0 replies; 8+ messages in thread
From: Leon Romanovsky @ 2025-11-23  9:49 UTC (permalink / raw)
  To: yanjun.zhu
  Cc: Jason Gunthorpe, linux-rdma, linux-kernel, Maher Sanalla,
	Michael Guralnik

On Thu, Nov 20, 2025 at 02:44:20PM -0800, yanjun.zhu wrote:
> On 11/20/25 7:15 AM, Leon Romanovsky wrote:
> > From: Maher Sanalla <msanalla@nvidia.com>
> > 
> > Add a check for 1600G_8X link speed when querying PTYS and report it
> > back correctly when needed.
> 
> Amazing — 1600G is supported. I’m not sure whether this rate is supported
> only for InfiniBand or if it’s also available for RoCEv2. In any case,
> having such a high data rate is truly impressive.

It is both for InfiniBand and RoCE.

>
> Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>

Thanks

> 
> Zhu Yanjun
> 
> > 
> > While at it, adjust the mlx5 function that maps the speed rate from IB
> > spec values to internal driver values so it can handle speeds up to
> > 1600 Gbps.
> > 
> > Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
> > Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> >   drivers/infiniband/hw/mlx5/main.c | 4 ++++
> >   drivers/infiniband/hw/mlx5/qp.c   | 5 +++--
> >   2 files changed, 7 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> > index 90daa58126f4..40284bbb45d6 100644
> > --- a/drivers/infiniband/hw/mlx5/main.c
> > +++ b/drivers/infiniband/hw/mlx5/main.c
> > @@ -511,6 +511,10 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
> >   		*active_width = IB_WIDTH_4X;
> >   		*active_speed = IB_SPEED_XDR;
> >   		break;
> > +	case MLX5E_PROT_MASK(MLX5E_1600TAUI_8_1600TBASE_CR8_KR8):
> > +		*active_width = IB_WIDTH_8X;
> > +		*active_speed = IB_SPEED_XDR;
> > +		break;
> >   	default:
> >   		return -EINVAL;
> >   	}
> > diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
> > index 88724d15705d..69af20790481 100644
> > --- a/drivers/infiniband/hw/mlx5/qp.c
> > +++ b/drivers/infiniband/hw/mlx5/qp.c
> > @@ -3451,10 +3451,11 @@ int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
> >   {
> >   	u32 stat_rate_support;
> > -	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
> > +	if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS ||
> > +	    rate == IB_RATE_1600_GBPS)
> >   		return 0;
> > -	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
> > +	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_1600_GBPS)
> >   		return -EINVAL;
> >   	stat_rate_support = MLX5_CAP_GEN(dev->mdev, stat_rate_support);
> > 
> 
> 


* Re: [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support
  2025-11-20 15:15 [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
  2025-11-20 15:15 ` [PATCH rdma-next 1/2] RDMA/core: " Leon Romanovsky
  2025-11-20 15:15 ` [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed Leon Romanovsky
@ 2025-11-23  9:53 ` Leon Romanovsky
  2 siblings, 0 replies; 8+ messages in thread
From: Leon Romanovsky @ 2025-11-23  9:53 UTC (permalink / raw)
  To: Jason Gunthorpe, Leon Romanovsky
  Cc: linux-rdma, linux-kernel, Maher Sanalla, Michael Guralnik


On Thu, 20 Nov 2025 17:15:14 +0200, Leon Romanovsky wrote:
> Nothing super fancy, just the addition of the new 1600_8x lane speed.
> 
> 

Applied, thanks!

[1/2] RDMA/core: Add new IB rate for XDR (8x) support
      https://git.kernel.org/rdma/rdma/c/9e119870a99e50
[2/2] RDMA/mlx5: Add support for 1600_8x lane speed
      https://git.kernel.org/rdma/rdma/c/45085ad3b2a358

Best regards,
-- 
Leon Romanovsky <leon@kernel.org>



end of thread (newest message: 2025-11-23  9:53 UTC)

Thread overview: 8+ messages
2025-11-20 15:15 [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
2025-11-20 15:15 ` [PATCH rdma-next 1/2] RDMA/core: " Leon Romanovsky
2025-11-21  3:46   ` Kalesh Anakkur Purayil
2025-11-20 15:15 ` [PATCH rdma-next 2/2] RDMA/mlx5: Add support for 1600_8x lane speed Leon Romanovsky
2025-11-20 22:44   ` yanjun.zhu
2025-11-23  9:49     ` Leon Romanovsky
2025-11-21  3:47   ` Kalesh Anakkur Purayil
2025-11-23  9:53 ` [PATCH rdma-next 0/2] Add new IB rate for XDR (8x) support Leon Romanovsky
