* [PATCH rdma-next 1/6] IB/core: Add support for XDR link speed
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 2/6] IB/mlx5: Expose XDR speed through MAD Leon Romanovsky
` (6 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Or Har-Toov, David S. Miller, Eric Dumazet, Jakub Kicinski,
linux-rdma, Mark Zhang, netdev, Paolo Abeni, Saeed Mahameed
From: Or Har-Toov <ohartoov@nvidia.com>
Add XDR, the new IBTA speed introduced in the InfiniBand specification,
with a signaling rate of 200Gb/s.
In order to report that value to rdma-core, add a new u32 field to the
query_port response.
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/core/sysfs.c | 4 ++++
drivers/infiniband/core/uverbs_std_types_device.c | 3 ++-
drivers/infiniband/core/verbs.c | 3 +++
include/rdma/ib_verbs.h | 2 ++
include/uapi/rdma/ib_user_ioctl_verbs.h | 3 ++-
5 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
index ec5efdc16660..9f97bef02149 100644
--- a/drivers/infiniband/core/sysfs.c
+++ b/drivers/infiniband/core/sysfs.c
@@ -342,6 +342,10 @@ static ssize_t rate_show(struct ib_device *ibdev, u32 port_num,
speed = " NDR";
rate = 1000;
break;
+ case IB_SPEED_XDR:
+ speed = " XDR";
+ rate = 2000;
+ break;
case IB_SPEED_SDR:
default: /* default to SDR for invalid rates */
speed = " SDR";
diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
index 049684880ae0..fb0555647336 100644
--- a/drivers/infiniband/core/uverbs_std_types_device.c
+++ b/drivers/infiniband/core/uverbs_std_types_device.c
@@ -203,6 +203,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_PORT)(
copy_port_attr_to_resp(&attr, &resp.legacy_resp, ib_dev, port_num);
resp.port_cap_flags2 = attr.port_cap_flags2;
+ resp.active_speed_ex = attr.active_speed;
return uverbs_copy_to_struct_or_zero(attrs, UVERBS_ATTR_QUERY_PORT_RESP,
&resp, sizeof(resp));
@@ -461,7 +462,7 @@ DECLARE_UVERBS_NAMED_METHOD(
UVERBS_ATTR_PTR_OUT(
UVERBS_ATTR_QUERY_PORT_RESP,
UVERBS_ATTR_STRUCT(struct ib_uverbs_query_port_resp_ex,
- reserved),
+ active_speed_ex),
UA_MANDATORY));
DECLARE_UVERBS_NAMED_METHOD(
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index cc2c37096bba..8a6da87f464b 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -147,6 +147,7 @@ __attribute_const__ int ib_rate_to_mult(enum ib_rate rate)
case IB_RATE_50_GBPS: return 20;
case IB_RATE_400_GBPS: return 160;
case IB_RATE_600_GBPS: return 240;
+ case IB_RATE_800_GBPS: return 320;
default: return -1;
}
}
@@ -176,6 +177,7 @@ __attribute_const__ enum ib_rate mult_to_ib_rate(int mult)
case 20: return IB_RATE_50_GBPS;
case 160: return IB_RATE_400_GBPS;
case 240: return IB_RATE_600_GBPS;
+ case 320: return IB_RATE_800_GBPS;
default: return IB_RATE_PORT_CURRENT;
}
}
@@ -205,6 +207,7 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate)
case IB_RATE_50_GBPS: return 53125;
case IB_RATE_400_GBPS: return 425000;
case IB_RATE_600_GBPS: return 637500;
+ case IB_RATE_800_GBPS: return 850000;
default: return -1;
}
}
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index e36c0d9aad27..fd61c3a56fa7 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -561,6 +561,7 @@ enum ib_port_speed {
IB_SPEED_EDR = 32,
IB_SPEED_HDR = 64,
IB_SPEED_NDR = 128,
+ IB_SPEED_XDR = 256,
};
enum ib_stat_flag {
@@ -840,6 +841,7 @@ enum ib_rate {
IB_RATE_50_GBPS = 20,
IB_RATE_400_GBPS = 21,
IB_RATE_600_GBPS = 22,
+ IB_RATE_800_GBPS = 23,
};
/**
diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index d7c5aaa32744..fe15bc7e9f70 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -220,7 +220,8 @@ enum ib_uverbs_advise_mr_flag {
struct ib_uverbs_query_port_resp_ex {
struct ib_uverbs_query_port_resp legacy_resp;
__u16 port_cap_flags2;
- __u8 reserved[6];
+ __u8 reserved[2];
+ __u32 active_speed_ex;
};
struct ib_uverbs_qp_cap {
--
2.41.0
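For reference, the rate-mapping convention extended in verbs.c above can be sketched standalone: the multiplier is the rate relative to the 2.5Gb/s SDR base (800 / 2.5 = 320), and the Mb/s value follows the existing extended-speed pattern (600Gb reports 637500), giving 850000 for 800Gb. This is a minimal sketch, not the kernel code itself; the enum values are copied from ib_verbs.h:

```c
#include <assert.h>

/* IB rate enum values from include/rdma/ib_verbs.h, including the new
 * IB_RATE_800_GBPS = 23 added by this patch. */
enum ib_rate {
	IB_RATE_2_5_GBPS = 2,
	IB_RATE_400_GBPS = 21,
	IB_RATE_600_GBPS = 22,
	IB_RATE_800_GBPS = 23,
};

/* Multiplier relative to the 2.5Gb/s SDR base rate: 800 / 2.5 = 320. */
static int rate_to_mult(enum ib_rate rate)
{
	switch (rate) {
	case IB_RATE_2_5_GBPS: return 1;
	case IB_RATE_400_GBPS: return 160;
	case IB_RATE_600_GBPS: return 240;
	case IB_RATE_800_GBPS: return 320;
	default: return -1;
	}
}

/* Effective data rate in Mb/s, following the existing pattern where
 * 600Gb reports 637500: 800Gb reports 850000 (a 1.0625 factor). */
static int rate_to_mbps(enum ib_rate rate)
{
	switch (rate) {
	case IB_RATE_400_GBPS: return 425000;
	case IB_RATE_600_GBPS: return 637500;
	case IB_RATE_800_GBPS: return 850000;
	default: return -1;
	}
}
```

Unknown rates fall through to -1, matching the kernel helpers' error convention.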
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH rdma-next 2/6] IB/mlx5: Expose XDR speed through MAD
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 1/6] IB/core: Add support for XDR link speed Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 3/6] IB/mlx5: Add support for 800G_8X lane speed Leon Romanovsky
` (5 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Or Har-Toov, David S. Miller, Eric Dumazet, Jakub Kicinski,
linux-rdma, Mark Zhang, netdev, Paolo Abeni, Saeed Mahameed
From: Or Har-Toov <ohartoov@nvidia.com>
Under MAD query port, report XDR speed when XDR is supported in the port
capability mask.
Signed-off-by: Or Har-Toov <ohartoov@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/mad.c | 13 +++++++++++++
include/rdma/ib_mad.h | 2 ++
2 files changed, 15 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
index 8102ef113b7e..0c3c4e64812c 100644
--- a/drivers/infiniband/hw/mlx5/mad.c
+++ b/drivers/infiniband/hw/mlx5/mad.c
@@ -619,6 +619,19 @@ int mlx5_query_mad_ifc_port(struct ib_device *ibdev, u32 port,
}
}
+ /* Check if extended speeds 2 (XDR/...) are supported */
+ if (props->port_cap_flags & IB_PORT_CAP_MASK2_SUP &&
+ props->port_cap_flags2 & IB_PORT_EXTENDED_SPEEDS2_SUP) {
+ ext_active_speed = (out_mad->data[56] >> 4) & 0x6;
+
+ switch (ext_active_speed) {
+ case 2:
+ if (props->port_cap_flags2 & IB_PORT_LINK_SPEED_XDR_SUP)
+ props->active_speed = IB_SPEED_XDR;
+ break;
+ }
+ }
+
/* If reported active speed is QDR, check if is FDR-10 */
if (props->active_speed == 4) {
if (dev->port_caps[port - 1].ext_port_cap &
diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
index 2e3843b761e8..3f1b58d8b4bf 100644
--- a/include/rdma/ib_mad.h
+++ b/include/rdma/ib_mad.h
@@ -277,6 +277,8 @@ enum ib_port_capability_mask2_bits {
IB_PORT_LINK_WIDTH_2X_SUP = 1 << 4,
IB_PORT_LINK_SPEED_HDR_SUP = 1 << 5,
IB_PORT_LINK_SPEED_NDR_SUP = 1 << 10,
+ IB_PORT_EXTENDED_SPEEDS2_SUP = 1 << 11,
+ IB_PORT_LINK_SPEED_XDR_SUP = 1 << 12,
};
#define OPA_CLASS_PORT_INFO_PR_SUPPORT BIT(26)
--
2.41.0
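The decode added in mad.c above can be sketched in isolation: PortInfo byte 56 carries the extended-speeds-2 value in its upper nibble, and a value of 2 means XDR, which is reported only when the relevant capability bits are set. The function name and boolean parameters below are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* IB speed values from include/rdma/ib_verbs.h. */
#define IB_SPEED_NDR 128
#define IB_SPEED_XDR 256

/* Illustrative sketch of the new hunk in mlx5_query_mad_ifc_port():
 * (data56 >> 4) & 0x6 extracts the extended-speeds-2 field, and a
 * value of 2 selects XDR when the XDR capability bit is also set. */
static int decode_ext_speed2(uint8_t data56, bool ext_speeds2_sup,
			     bool xdr_sup, int cur_speed)
{
	if (!ext_speeds2_sup)
		return cur_speed;

	switch ((data56 >> 4) & 0x6) {
	case 2:
		if (xdr_sup)
			return IB_SPEED_XDR;
		break;
	}
	/* Unknown or unsupported value: keep the previously decoded speed. */
	return cur_speed;
}
```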
* [PATCH rdma-next 3/6] IB/mlx5: Add support for 800G_8X lane speed
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 1/6] IB/core: Add support for XDR link speed Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 2/6] IB/mlx5: Expose XDR speed through MAD Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 4/6] IB/mlx5: Rename 400G_8X speed to comply to naming convention Leon Romanovsky
` (4 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Patrisious Haddad, Eric Dumazet, Jakub Kicinski, linux-rdma,
Mark Zhang, netdev, Or Har-Toov, Paolo Abeni, Saeed Mahameed
From: Patrisious Haddad <phaddad@nvidia.com>
Add a check for the 800G_8X lane speed when querying PTYS and report it
correctly as an 8X-width link at NDR per-lane speed.
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/main.c | 4 ++++
drivers/net/ethernet/mellanox/mlx5/core/port.c | 1 +
include/linux/mlx5/port.h | 1 +
3 files changed, 6 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index f9286820ad3b..830dac95c163 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -482,6 +482,10 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
*active_width = IB_WIDTH_4X;
*active_speed = IB_SPEED_NDR;
break;
+ case MLX5E_PROT_MASK(MLX5E_800GAUI_8_800GBASE_CR8_KR8):
+ *active_width = IB_WIDTH_8X;
+ *active_speed = IB_SPEED_NDR;
+ break;
default:
return -EINVAL;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
index be70d1f23a5d..43423543f34c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
@@ -1102,6 +1102,7 @@ static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
[MLX5E_100GAUI_1_100GBASE_CR_KR] = 100000,
[MLX5E_200GAUI_2_200GBASE_CR2_KR2] = 200000,
[MLX5E_400GAUI_4_400GBASE_CR4_KR4] = 400000,
+ [MLX5E_800GAUI_8_800GBASE_CR8_KR8] = 800000,
};
int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext,
diff --git a/include/linux/mlx5/port.h b/include/linux/mlx5/port.h
index 98b2e1e149f9..794001ebd003 100644
--- a/include/linux/mlx5/port.h
+++ b/include/linux/mlx5/port.h
@@ -117,6 +117,7 @@ enum mlx5e_ext_link_mode {
MLX5E_200GAUI_2_200GBASE_CR2_KR2 = 13,
MLX5E_400GAUI_8 = 15,
MLX5E_400GAUI_4_400GBASE_CR4_KR4 = 16,
+ MLX5E_800GAUI_8_800GBASE_CR8_KR8 = 19,
MLX5E_EXT_LINK_MODES_NUMBER,
};
--
2.41.0
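The PTYS translation extended in main.c above can be sketched standalone: the protocol-oper mask is a one-hot bitmask over the ext link modes, and an 800G_8X link decomposes into 8 lanes (8X width) at NDR per-lane speed. A minimal sketch with the relevant enum values copied from the headers (the function name is simplified):

```c
#include <assert.h>
#include <stdint.h>

/* Protocol mask and ext link mode values from include/linux/mlx5/port.h. */
#define MLX5E_PROT_MASK(link_mode) (1u << (link_mode))
enum {
	MLX5E_400GAUI_4_400GBASE_CR4_KR4 = 16,
	MLX5E_800GAUI_8_800GBASE_CR8_KR8 = 19,
};

/* IB width/speed values from include/rdma/ib_verbs.h. */
enum { IB_WIDTH_4X = 2, IB_WIDTH_8X = 4 };
#define IB_SPEED_NDR 128

/* Sketch of the extended translate_eth_ext_proto_oper() logic: an
 * 800G_8X link is reported as 8 lanes (8X width) at NDR lane speed. */
static int translate_ext_proto(uint32_t proto_oper, uint16_t *speed,
			       uint8_t *width)
{
	switch (proto_oper) {
	case MLX5E_PROT_MASK(MLX5E_400GAUI_4_400GBASE_CR4_KR4):
		*width = IB_WIDTH_4X;
		*speed = IB_SPEED_NDR;
		return 0;
	case MLX5E_PROT_MASK(MLX5E_800GAUI_8_800GBASE_CR8_KR8):
		*width = IB_WIDTH_8X;
		*speed = IB_SPEED_NDR;
		return 0;
	default:
		return -1; /* -EINVAL in the kernel */
	}
}
```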
* [PATCH rdma-next 4/6] IB/mlx5: Rename 400G_8X speed to comply to naming convention
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
` (2 preceding siblings ...)
2023-09-20 10:07 ` [PATCH rdma-next 3/6] IB/mlx5: Add support for 800G_8X lane speed Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 5/6] IB/mlx5: Adjust mlx5 rate mapping to support 800Gb Leon Romanovsky
` (3 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Patrisious Haddad, Eric Dumazet, Jakub Kicinski, linux-rdma,
Mark Zhang, netdev, Or Har-Toov, Paolo Abeni, Saeed Mahameed
From: Patrisious Haddad <phaddad@nvidia.com>
Rename the 400G_8X speed to comply with the naming convention used by
the other link modes.
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/main.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/port.c | 2 +-
include/linux/mlx5/port.h | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 830dac95c163..026f6c04f81e 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -474,7 +474,7 @@ static int translate_eth_ext_proto_oper(u32 eth_proto_oper, u16 *active_speed,
*active_width = IB_WIDTH_2X;
*active_speed = IB_SPEED_NDR;
break;
- case MLX5E_PROT_MASK(MLX5E_400GAUI_8):
+ case MLX5E_PROT_MASK(MLX5E_400GAUI_8_400GBASE_CR8):
*active_width = IB_WIDTH_8X;
*active_speed = IB_SPEED_HDR;
break;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
index 43423543f34c..7d8c732818f2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
@@ -1098,7 +1098,7 @@ static const u32 mlx5e_ext_link_speed[MLX5E_EXT_LINK_MODES_NUMBER] = {
[MLX5E_CAUI_4_100GBASE_CR4_KR4] = 100000,
[MLX5E_100GAUI_2_100GBASE_CR2_KR2] = 100000,
[MLX5E_200GAUI_4_200GBASE_CR4_KR4] = 200000,
- [MLX5E_400GAUI_8] = 400000,
+ [MLX5E_400GAUI_8_400GBASE_CR8] = 400000,
[MLX5E_100GAUI_1_100GBASE_CR_KR] = 100000,
[MLX5E_200GAUI_2_200GBASE_CR2_KR2] = 200000,
[MLX5E_400GAUI_4_400GBASE_CR4_KR4] = 400000,
diff --git a/include/linux/mlx5/port.h b/include/linux/mlx5/port.h
index 794001ebd003..26092c78a985 100644
--- a/include/linux/mlx5/port.h
+++ b/include/linux/mlx5/port.h
@@ -115,7 +115,7 @@ enum mlx5e_ext_link_mode {
MLX5E_100GAUI_1_100GBASE_CR_KR = 11,
MLX5E_200GAUI_4_200GBASE_CR4_KR4 = 12,
MLX5E_200GAUI_2_200GBASE_CR2_KR2 = 13,
- MLX5E_400GAUI_8 = 15,
+ MLX5E_400GAUI_8_400GBASE_CR8 = 15,
MLX5E_400GAUI_4_400GBASE_CR4_KR4 = 16,
MLX5E_800GAUI_8_800GBASE_CR8_KR8 = 19,
MLX5E_EXT_LINK_MODES_NUMBER,
--
2.41.0
* [PATCH rdma-next 5/6] IB/mlx5: Adjust mlx5 rate mapping to support 800Gb
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
` (3 preceding siblings ...)
2023-09-20 10:07 ` [PATCH rdma-next 4/6] IB/mlx5: Rename 400G_8X speed to comply to naming convention Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-20 10:07 ` [PATCH rdma-next 6/6] RDMA/ipoib: Add support for XDR speed in ethtool Leon Romanovsky
` (2 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Patrisious Haddad, David S. Miller, Eric Dumazet, Jakub Kicinski,
linux-rdma, Mark Zhang, netdev, Or Har-Toov, Paolo Abeni,
Saeed Mahameed
From: Patrisious Haddad <phaddad@nvidia.com>
Adjust the mlx5 function that maps speed rates from IB spec values to
internal driver values so that it can handle speeds up to 800Gb.
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/hw/mlx5/qp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9261df5328a4..c047c5d66737 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3438,7 +3438,7 @@ static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
if (rate == IB_RATE_PORT_CURRENT)
return 0;
- if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_600_GBPS)
+ if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
return -EINVAL;
stat_rate_support = MLX5_CAP_GEN(dev->mdev, stat_rate_support);
--
2.41.0
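The one-line change above widens the accepted range; the effect can be sketched as a standalone bounds check (function name illustrative, enum values from ib_verbs.h):

```c
#include <assert.h>

/* IB rate enum bounds from include/rdma/ib_verbs.h. */
enum ib_rate {
	IB_RATE_PORT_CURRENT = 0,
	IB_RATE_2_5_GBPS = 2,
	IB_RATE_600_GBPS = 22,
	IB_RATE_800_GBPS = 23,
};

/* Sketch of the adjusted check in ib_rate_to_mlx5(): raising the upper
 * bound from IB_RATE_600_GBPS to IB_RATE_800_GBPS stops the new enum
 * value (23) from being rejected with -EINVAL. */
static int rate_is_valid(int rate)
{
	if (rate == IB_RATE_PORT_CURRENT)
		return 1;
	return rate >= IB_RATE_2_5_GBPS && rate <= IB_RATE_800_GBPS;
}
```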
* [PATCH rdma-next 6/6] RDMA/ipoib: Add support for XDR speed in ethtool
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
` (4 preceding siblings ...)
2023-09-20 10:07 ` [PATCH rdma-next 5/6] IB/mlx5: Adjust mlx5 rate mapping to support 800Gb Leon Romanovsky
@ 2023-09-20 10:07 ` Leon Romanovsky
2023-09-21 20:42 ` [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Jacob Keller
2023-09-26 9:39 ` Leon Romanovsky
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-20 10:07 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Patrisious Haddad, David S. Miller, Eric Dumazet, Jakub Kicinski,
linux-rdma, Mark Zhang, netdev, Or Har-Toov, Paolo Abeni,
Saeed Mahameed
From: Patrisious Haddad <phaddad@nvidia.com>
The IBTA 1.7 specification defines a new speed, XDR, with a signaling
rate of 200Gb/s.
The ethtool support of the IPoIB driver translates the IB speed to a
signaling rate. Add a translation of the XDR IB type to a 200Gb
Ethernet speed.
Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Mark Zhang <markzhang@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
drivers/infiniband/ulp/ipoib/ipoib_ethtool.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
index 8af99b18d361..7da94fb8d7fa 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
@@ -174,6 +174,8 @@ static inline int ib_speed_enum_to_int(int speed)
return SPEED_50000;
case IB_SPEED_NDR:
return SPEED_100000;
+ case IB_SPEED_XDR:
+ return SPEED_200000;
}
return SPEED_UNKNOWN;
--
2.41.0
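The mapping added above can be shown in a self-contained sketch: ethtool reports the per-lane signaling rate, so XDR (200Gb/s per lane) maps to SPEED_200000, mirroring the existing NDR to SPEED_100000 case. The SPEED_* values below match ethtool's constants; the rest is simplified:

```c
#include <assert.h>

/* IB speed enum values from include/rdma/ib_verbs.h. */
enum { IB_SPEED_NDR = 128, IB_SPEED_XDR = 256 };

/* ethtool link speed constants (Mb/s). */
#define SPEED_100000	100000
#define SPEED_200000	200000
#define SPEED_UNKNOWN	(-1)

/* Sketch of the extended ib_speed_enum_to_int(): XDR's 200Gb/s per-lane
 * signaling rate is reported as SPEED_200000. */
static int ib_speed_enum_to_int(int speed)
{
	switch (speed) {
	case IB_SPEED_NDR:
		return SPEED_100000;
	case IB_SPEED_XDR:
		return SPEED_200000;
	}
	return SPEED_UNKNOWN;
}
```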
* Re: [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
` (5 preceding siblings ...)
2023-09-20 10:07 ` [PATCH rdma-next 6/6] RDMA/ipoib: Add support for XDR speed in ethtool Leon Romanovsky
@ 2023-09-21 20:42 ` Jacob Keller
2023-09-26 9:39 ` Leon Romanovsky
7 siblings, 0 replies; 9+ messages in thread
From: Jacob Keller @ 2023-09-21 20:42 UTC (permalink / raw)
To: Leon Romanovsky, Jason Gunthorpe
Cc: Leon Romanovsky, Eric Dumazet, Jakub Kicinski, linux-kernel,
linux-rdma, Mark Zhang, netdev, Or Har-Toov, Paolo Abeni,
Patrisious Haddad, Saeed Mahameed
On 9/20/2023 3:07 AM, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Hi,
>
> This series extends RDMA subsystem and mlx5_ib driver to support 800Gb
> (XDR) speed which was added to IBTA v1.7 specification.
>
> Thanks
>
> Or Har-Toov (2):
> IB/core: Add support for XDR link speed
> IB/mlx5: Expose XDR speed through MAD
>
> Patrisious Haddad (4):
> IB/mlx5: Add support for 800G_8X lane speed
> IB/mlx5: Rename 400G_8X speed to comply to naming convention
> IB/mlx5: Adjust mlx5 rate mapping to support 800Gb
> RDMA/ipoib: Add support for XDR speed in ethtool
>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
> drivers/infiniband/core/sysfs.c | 4 ++++
> drivers/infiniband/core/uverbs_std_types_device.c | 3 ++-
> drivers/infiniband/core/verbs.c | 3 +++
> drivers/infiniband/hw/mlx5/mad.c | 13 +++++++++++++
> drivers/infiniband/hw/mlx5/main.c | 6 +++++-
> drivers/infiniband/hw/mlx5/qp.c | 2 +-
> drivers/infiniband/ulp/ipoib/ipoib_ethtool.c | 2 ++
> drivers/net/ethernet/mellanox/mlx5/core/port.c | 3 ++-
> include/linux/mlx5/port.h | 3 ++-
> include/rdma/ib_mad.h | 2 ++
> include/rdma/ib_verbs.h | 2 ++
> include/uapi/rdma/ib_user_ioctl_verbs.h | 3 ++-
> 12 files changed, 40 insertions(+), 6 deletions(-)
>
* Re: [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support
2023-09-20 10:07 [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Leon Romanovsky
` (6 preceding siblings ...)
2023-09-21 20:42 ` [PATCH rdma-next 0/6] Add 800Gb (XDR) speed support Jacob Keller
@ 2023-09-26 9:39 ` Leon Romanovsky
7 siblings, 0 replies; 9+ messages in thread
From: Leon Romanovsky @ 2023-09-26 9:39 UTC (permalink / raw)
To: Jason Gunthorpe, Leon Romanovsky
Cc: Eric Dumazet, Jakub Kicinski, linux-kernel, linux-rdma,
Mark Zhang, netdev, Or Har-Toov, Paolo Abeni, Patrisious Haddad,
Saeed Mahameed, Leon Romanovsky
On Wed, 20 Sep 2023 13:07:39 +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Hi,
>
> This series extends RDMA subsystem and mlx5_ib driver to support 800Gb
> (XDR) speed which was added to IBTA v1.7 specification.
>
> [...]
Applied, thanks!
[1/6] IB/core: Add support for XDR link speed
https://git.kernel.org/rdma/rdma/c/703289ce43f740
[2/6] IB/mlx5: Expose XDR speed through MAD
https://git.kernel.org/rdma/rdma/c/561b4a3ac65597
[3/6] IB/mlx5: Add support for 800G_8X lane speed
https://git.kernel.org/rdma/rdma/c/948f0bf5ad6ac1
[4/6] IB/mlx5: Rename 400G_8X speed to comply to naming convention
https://git.kernel.org/rdma/rdma/c/b28ad32442bec2
[5/6] IB/mlx5: Adjust mlx5 rate mapping to support 800Gb
https://git.kernel.org/rdma/rdma/c/4f4db190893fb8
[6/6] RDMA/ipoib: Add support for XDR speed in ethtool
https://git.kernel.org/rdma/rdma/c/8dc0fd2f5693ab
Best regards,
--
Leon Romanovsky <leon@kernel.org>