* [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support
@ 2026-02-02 4:51 Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting Kalesh AP
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg; +Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP
Hi,
This patchset adds QP rate limit support to the bnxt_re driver.
Broadcom P7 devices support setting the rate limit while changing
the RC QP state from INIT to RTR, RTR to RTS, or RTS to RTS, or once
the QP has transitioned to the RTR or RTS state.
Patch#1 adds support for QP rate limiting in the bnxt_re driver.
Patch#2 adds support to report packet pacing capabilities in the
query_device.
Patch#3 adds support to report QP rate limit in debugfs QP info.
Patch#4 adds a check in the mlx5 driver to restrict QP rate limit
support to Raw Ethernet QPs.
Patch#5 adds stack support for rate limit for RC QPs.
The pull request for the rdma-core changes is at:
https://github.com/linux-rdma/rdma-core/pull/1692
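For context, the driver-side gate this series adds can be illustrated with a small standalone C sketch. The constant values below are placeholders for the real definitions in include/rdma/ib_verbs.h; only the masking logic mirrors the patch.

```c
#include <stdint.h>

/* Placeholder values -- the real masks live in include/rdma/ib_verbs.h. */
#define IB_QP_ATTR_STANDARD_BITS 0x001fffffu
#define IB_QP_RATE_LIMIT         (1u << 25)

/* Mirrors the updated check in bnxt_re_modify_qp(): any attribute bit
 * outside the standard set plus IB_QP_RATE_LIMIT is rejected. */
static int modify_qp_mask_ok(uint32_t qp_attr_mask)
{
	return !(qp_attr_mask & ~(IB_QP_ATTR_STANDARD_BITS | IB_QP_RATE_LIMIT));
}
```

With this, a modify_qp call carrying only standard bits or IB_QP_RATE_LIMIT passes, while any other extended bit still fails with -EOPNOTSUPP in the driver.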
V2->V3:
1. Re-ordered the patches in the series so that the kernel core change
is added as the last patch.
2. Removed a defensive check from Patch#1.
V1->V2:
1. Added a new Patch#5 to limit rate limit support to Raw Packet QPs
on mlx5 hardware.
2. Modified Patch#2 to use ibdev_err instead of dev_err. Also
modified it to return an error for rate_limit on non-RC QPs.
Regards,
Kalesh
Kalesh AP (5):
RDMA/bnxt_re: Add support for QP rate limiting
RDMA/bnxt_re: Report packet pacing capabilities when querying device
RDMA/bnxt_re: Report QP rate limit in debugfs
RDMA/mlx5: Support rate limit only for Raw Packet QP
IB/core: Extend rate limit support for RC QPs
drivers/infiniband/core/verbs.c | 9 ++++---
drivers/infiniband/hw/bnxt_re/debugfs.c | 14 ++++++++--
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 33 +++++++++++++++++++++--
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 12 ++++++++-
drivers/infiniband/hw/bnxt_re/qplib_fp.h | 3 +++
drivers/infiniband/hw/bnxt_re/qplib_res.h | 6 +++++
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 5 ++++
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 2 ++
drivers/infiniband/hw/bnxt_re/roce_hsi.h | 13 ++++++---
drivers/infiniband/hw/mlx5/qp.c | 5 ++++
include/uapi/rdma/bnxt_re-abi.h | 16 +++++++++++
11 files changed, 106 insertions(+), 12 deletions(-)
--
2.43.5
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
@ 2026-02-02 4:51 ` Kalesh AP
2026-02-02 12:17 ` Leon Romanovsky
2026-02-02 4:51 ` [PATCH rdma-rext V3 2/5] RDMA/bnxt_re: Report packet pacing capabilities when querying device Kalesh AP
` (3 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg
Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP,
Damodharam Ammepalli, Hongguang Gao
Broadcom P7 chips support applying a rate limit to RC QPs.
The shaper rate can be adjusted during the INIT -> RTR,
RTR -> RTS, and RTS -> RTS state changes, or after the QP
transitions to RTR or RTS.
Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Reviewed-by: Hongguang Gao <hongguang.gao@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
---
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 11 ++++++++++-
drivers/infiniband/hw/bnxt_re/qplib_fp.c | 12 +++++++++++-
drivers/infiniband/hw/bnxt_re/qplib_fp.h | 3 +++
drivers/infiniband/hw/bnxt_re/qplib_res.h | 6 ++++++
drivers/infiniband/hw/bnxt_re/qplib_sp.c | 5 +++++
drivers/infiniband/hw/bnxt_re/qplib_sp.h | 2 ++
drivers/infiniband/hw/bnxt_re/roce_hsi.h | 13 +++++++++----
7 files changed, 46 insertions(+), 6 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index f19b55c13d58..2930461be20d 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -2089,7 +2089,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
unsigned int flags;
u8 nw_type;
- if (qp_attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ if (qp_attr_mask & ~(IB_QP_ATTR_STANDARD_BITS | IB_QP_RATE_LIMIT))
return -EOPNOTSUPP;
qp->qplib_qp.modify_flags = 0;
@@ -2129,6 +2129,15 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
bnxt_re_unlock_cqs(qp, flags);
}
}
+
+ if (qp_attr_mask & IB_QP_RATE_LIMIT) {
+ if (qp->qplib_qp.type != IB_QPT_RC ||
+ !_is_modify_qp_rate_limit_supported(dev_attr->dev_cap_flags2))
+ return -EOPNOTSUPP;
+ qp->qplib_qp.ext_modify_flags |=
+ CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID;
+ qp->qplib_qp.rate_limit = qp_attr->rate_limit;
+ }
if (qp_attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) {
qp->qplib_qp.modify_flags |=
CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index c88f049136fc..3e44311bf939 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -1313,8 +1313,8 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
struct bnxt_qplib_cmdqmsg msg = {};
struct cmdq_modify_qp req = {};
u16 vlan_pcp_vlan_dei_vlan_id;
+ u32 bmask, bmask_ext;
u32 temp32[4];
- u32 bmask;
int rc;
bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
@@ -1329,9 +1329,16 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
is_optimized_state_transition(qp))
bnxt_set_mandatory_attributes(res, qp, &req);
}
+
bmask = qp->modify_flags;
req.modify_mask = cpu_to_le32(qp->modify_flags);
+ bmask_ext = qp->ext_modify_flags;
+ req.ext_modify_mask = cpu_to_le32(qp->ext_modify_flags);
req.qp_cid = cpu_to_le32(qp->id);
+
+ if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
+ req.rate_limit = cpu_to_le32(qp->rate_limit);
+
if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_STATE) {
req.network_type_en_sqd_async_notify_new_state =
(qp->state & CMDQ_MODIFY_QP_NEW_STATE_MASK) |
@@ -1429,6 +1436,9 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
if (rc)
return rc;
+
+ if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
+ qp->shaper_allocation_status = resp.shaper_allocation_status;
qp->cur_qp_state = qp->state;
return 0;
}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
index 1b414a73b46d..30c3f99be07b 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
@@ -280,6 +280,7 @@ struct bnxt_qplib_qp {
u8 state;
u8 cur_qp_state;
u64 modify_flags;
+ u32 ext_modify_flags;
u32 max_inline_data;
u32 mtu;
u8 path_mtu;
@@ -346,6 +347,8 @@ struct bnxt_qplib_qp {
bool is_host_msn_tbl;
u8 tos_dscp;
u32 ugid_index;
+ u32 rate_limit;
+ u8 shaper_allocation_status;
};
#define BNXT_RE_MAX_MSG_SIZE 0x80000000
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
index 2ea3b7f232a3..9a5dcf97b6f4 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
@@ -623,4 +623,10 @@ static inline bool _is_max_srq_ext_supported(u16 dev_cap_ext_flags_2)
return !!(dev_cap_ext_flags_2 & CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED);
}
+static inline bool _is_modify_qp_rate_limit_supported(u16 dev_cap_ext_flags2)
+{
+ return dev_cap_ext_flags2 &
+ CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED;
+}
+
#endif /* __BNXT_QPLIB_RES_H__ */
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
index 408a34df2667..ec9eb52a8ebf 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
@@ -193,6 +193,11 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw)
attr->max_dpi = le32_to_cpu(sb->max_dpi);
attr->is_atomic = bnxt_qplib_is_atomic_cap(rcfw);
+
+ if (_is_modify_qp_rate_limit_supported(attr->dev_cap_flags2)) {
+ attr->rate_limit_min = le16_to_cpu(sb->rate_limit_min);
+ attr->rate_limit_max = le32_to_cpu(sb->rate_limit_max);
+ }
bail:
dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
sbuf.sb, sbuf.dma_addr);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
index 5a45c55c6464..9fadd637cb5b 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
@@ -76,6 +76,8 @@ struct bnxt_qplib_dev_attr {
u16 dev_cap_flags;
u16 dev_cap_flags2;
u32 max_dpi;
+ u16 rate_limit_min;
+ u32 rate_limit_max;
};
struct bnxt_qplib_pd {
diff --git a/drivers/infiniband/hw/bnxt_re/roce_hsi.h b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
index 99ecd72e72e2..aac338f2afd8 100644
--- a/drivers/infiniband/hw/bnxt_re/roce_hsi.h
+++ b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
@@ -690,10 +690,11 @@ struct cmdq_modify_qp {
__le32 ext_modify_mask;
#define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_EXT_STATS_CTX 0x1UL
#define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_SCHQ_ID_VALID 0x2UL
+ #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID 0x8UL
__le32 ext_stats_ctx_id;
__le16 schq_id;
__le16 unused_0;
- __le32 reserved32;
+ __le32 rate_limit;
};
/* creq_modify_qp_resp (size:128b/16B) */
@@ -716,7 +717,8 @@ struct creq_modify_qp_resp {
#define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_MASK 0xeUL
#define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_SFT 1
#define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_STATE 0x10UL
- u8 reserved8;
+ u8 shaper_allocation_status;
+ #define CREQ_MODIFY_QP_RESP_SHAPER_ALLOCATED 0x1UL
__le32 lag_src_mac;
};
@@ -2179,7 +2181,7 @@ struct creq_query_func_resp {
u8 reserved48[6];
};
-/* creq_query_func_resp_sb (size:1088b/136B) */
+/* creq_query_func_resp_sb (size:1280b/160B) */
struct creq_query_func_resp_sb {
u8 opcode;
#define CREQ_QUERY_FUNC_RESP_SB_OPCODE_QUERY_FUNC 0x83UL
@@ -2256,12 +2258,15 @@ struct creq_query_func_resp_sb {
#define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \
CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE
#define CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED 0x40UL
+ #define CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED 0x400UL
#define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL
__le16 max_xp_qp_size;
__le16 create_qp_batch_size;
__le16 destroy_qp_batch_size;
__le16 max_srq_ext;
- __le64 reserved64;
+ __le16 reserved16;
+ __le16 rate_limit_min;
+ __le32 rate_limit_max;
};
/* cmdq_set_func_resources (size:448b/56B) */
--
2.43.5
* [PATCH rdma-rext V3 2/5] RDMA/bnxt_re: Report packet pacing capabilities when querying device
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting Kalesh AP
@ 2026-02-02 4:51 ` Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 3/5] RDMA/bnxt_re: Report QP rate limit in debugfs Kalesh AP
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg
Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP,
Damodharam Ammepalli, Anantha Prabhu
Add support for reporting packet pacing capabilities from the
kernel to user space. Packet pacing allows limiting the rate to
any value between the minimum and maximum.
The capabilities are exposed to user space through query_device.
The following capabilities are reported:
1. The maximum and minimum rate limit in kbps.
2. Bitmap showing which QP types support rate limit.
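The supported_qpts field is a plain bitmap indexed by QP type. A minimal sketch of how a consumer might test it; the IB_QPT_RC value of 2 matches the kernel's enum ib_qp_type, but treat these definitions as illustrative:

```c
#include <stdint.h>

/* Illustrative subset of the kernel's enum ib_qp_type. */
enum { IB_QPT_RC = 2, IB_QPT_UD = 4 };

/* Returns non-zero if the bitmap reported in bnxt_re_packet_pacing_caps
 * marks the given QP type as supporting a rate limit. */
static int qpt_supports_rate_limit(uint32_t supported_qpts, int qp_type)
{
	return (supported_qpts >> qp_type) & 1u;
}
```

For this series the driver sets only the RC bit (supported_qpts = 1 << IB_QPT_RC), so the check succeeds only for RC QPs.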
Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Anantha Prabhu <anantha.prabhu@broadcom.com>
---
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 22 +++++++++++++++++++++-
include/uapi/rdma/bnxt_re-abi.h | 16 ++++++++++++++++
2 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 2930461be20d..5a33afb30af5 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -186,6 +186,9 @@ int bnxt_re_query_device(struct ib_device *ibdev,
{
struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ struct bnxt_re_query_device_ex_resp resp = {};
+ size_t outlen = (udata) ? udata->outlen : 0;
+ int rc = 0;
memset(ib_attr, 0, sizeof(*ib_attr));
memcpy(&ib_attr->fw_ver, dev_attr->fw_ver,
@@ -250,7 +253,21 @@ int bnxt_re_query_device(struct ib_device *ibdev,
ib_attr->max_pkeys = 1;
ib_attr->local_ca_ack_delay = BNXT_RE_DEFAULT_ACK_DELAY;
- return 0;
+
+ if ((offsetofend(typeof(resp), packet_pacing_caps) <= outlen) &&
+ _is_modify_qp_rate_limit_supported(dev_attr->dev_cap_flags2)) {
+ resp.packet_pacing_caps.qp_rate_limit_min =
+ dev_attr->rate_limit_min;
+ resp.packet_pacing_caps.qp_rate_limit_max =
+ dev_attr->rate_limit_max;
+ resp.packet_pacing_caps.supported_qpts =
+ 1 << IB_QPT_RC;
+ }
+ if (outlen)
+ rc = ib_copy_to_udata(udata, &resp,
+ min(sizeof(resp), outlen));
+
+ return rc;
}
int bnxt_re_modify_device(struct ib_device *ibdev,
@@ -4400,6 +4417,9 @@ int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
if (_is_host_msn_table(rdev->qplib_res.dattr->dev_cap_flags2))
resp.comp_mask |= BNXT_RE_UCNTX_CMASK_MSN_TABLE_ENABLED;
+ if (_is_modify_qp_rate_limit_supported(dev_attr->dev_cap_flags2))
+ resp.comp_mask |= BNXT_RE_UCNTX_CMASK_QP_RATE_LIMIT_ENABLED;
+
if (udata->inlen >= sizeof(ureq)) {
rc = ib_copy_from_udata(&ureq, udata, min(udata->inlen, sizeof(ureq)));
if (rc)
diff --git a/include/uapi/rdma/bnxt_re-abi.h b/include/uapi/rdma/bnxt_re-abi.h
index faa9d62b3b30..f24edf1c75eb 100644
--- a/include/uapi/rdma/bnxt_re-abi.h
+++ b/include/uapi/rdma/bnxt_re-abi.h
@@ -56,6 +56,7 @@ enum {
BNXT_RE_UCNTX_CMASK_DBR_PACING_ENABLED = 0x08ULL,
BNXT_RE_UCNTX_CMASK_POW2_DISABLED = 0x10ULL,
BNXT_RE_UCNTX_CMASK_MSN_TABLE_ENABLED = 0x40,
+ BNXT_RE_UCNTX_CMASK_QP_RATE_LIMIT_ENABLED = 0x80ULL,
};
enum bnxt_re_wqe_mode {
@@ -215,4 +216,19 @@ enum bnxt_re_toggle_mem_methods {
BNXT_RE_METHOD_GET_TOGGLE_MEM = (1U << UVERBS_ID_NS_SHIFT),
BNXT_RE_METHOD_RELEASE_TOGGLE_MEM,
};
+
+struct bnxt_re_packet_pacing_caps {
+ __u32 qp_rate_limit_min;
+ __u32 qp_rate_limit_max; /* In kbps */
+ /* Corresponding bit will be set if qp type from
+ * 'enum ib_qp_type' is supported, e.g.
+ * supported_qpts |= 1 << IB_QPT_RC
+ */
+ __u32 supported_qpts;
+ __u32 reserved;
+};
+
+struct bnxt_re_query_device_ex_resp {
+ struct bnxt_re_packet_pacing_caps packet_pacing_caps;
+};
#endif /* __BNXT_RE_UVERBS_ABI_H__*/
--
2.43.5
* [PATCH rdma-rext V3 3/5] RDMA/bnxt_re: Report QP rate limit in debugfs
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 2/5] RDMA/bnxt_re: Report packet pacing capabilities when querying device Kalesh AP
@ 2026-02-02 4:51 ` Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 4/5] RDMA/mlx5: Support rate limit only for Raw Packet QP Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 5/5] IB/core: Extend rate limit support for RC QPs Kalesh AP
4 siblings, 0 replies; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg
Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP,
Damodharam Ammepalli
Update the QP info debugfs hook to report the rate limit applied
to the QP. A value of 0 means unlimited.
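The reported value follows a simple rule: the configured limit is shown only once firmware has confirmed shaper allocation; otherwise 0 (unlimited) is reported. A standalone C sketch of that rule:

```c
#include <stdint.h>

/* Mirrors the selection done in qp_info_read(): report the configured
 * rate limit only when firmware has confirmed shaper allocation;
 * 0 means unlimited. */
static uint32_t reported_rate_limit(uint8_t shaper_allocation_status,
				    uint32_t rate_limit_kbps)
{
	return shaper_allocation_status ? rate_limit_kbps : 0;
}
```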
Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
---
drivers/infiniband/hw/bnxt_re/debugfs.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/debugfs.c b/drivers/infiniband/hw/bnxt_re/debugfs.c
index 88817c86ae24..e025217861c2 100644
--- a/drivers/infiniband/hw/bnxt_re/debugfs.c
+++ b/drivers/infiniband/hw/bnxt_re/debugfs.c
@@ -87,25 +87,35 @@ static ssize_t qp_info_read(struct file *filep,
size_t count, loff_t *ppos)
{
struct bnxt_re_qp *qp = filep->private_data;
+ struct bnxt_qplib_qp *qplib_qp;
+ u32 rate_limit = 0;
char *buf;
int len;
if (*ppos)
return 0;
+ qplib_qp = &qp->qplib_qp;
+ if (qplib_qp->shaper_allocation_status)
+ rate_limit = qplib_qp->rate_limit;
+
buf = kasprintf(GFP_KERNEL,
"QPN\t\t: %d\n"
"transport\t: %s\n"
"state\t\t: %s\n"
"mtu\t\t: %d\n"
"timeout\t\t: %d\n"
- "remote QPN\t: %d\n",
+ "remote QPN\t: %d\n"
+ "shaper allocated : %d\n"
+ "rate limit\t: %d kbps\n",
qp->qplib_qp.id,
bnxt_re_qp_type_str(qp->qplib_qp.type),
bnxt_re_qp_state_str(qp->qplib_qp.state),
qp->qplib_qp.mtu,
qp->qplib_qp.timeout,
- qp->qplib_qp.dest_qpn);
+ qp->qplib_qp.dest_qpn,
+ qplib_qp->shaper_allocation_status,
+ rate_limit);
if (!buf)
return -ENOMEM;
if (count < strlen(buf)) {
--
2.43.5
* [PATCH rdma-rext V3 4/5] RDMA/mlx5: Support rate limit only for Raw Packet QP
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
` (2 preceding siblings ...)
2026-02-02 4:51 ` [PATCH rdma-rext V3 3/5] RDMA/bnxt_re: Report QP rate limit in debugfs Kalesh AP
@ 2026-02-02 4:51 ` Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 5/5] IB/core: Extend rate limit support for RC QPs Kalesh AP
4 siblings, 0 replies; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg
Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP,
Leon Romanovsky
mlx5 based hardware supports rate limiting only on Raw Ethernet QPs.
Add an explicit check to fail the operation for any other QP type,
since rate limit support in the core stack has been extended to RC
QPs. Compile tested only.
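The check itself is small; a standalone sketch of the logic, with the enum values chosen to match the kernel's enum ib_qp_type but included here only for illustration:

```c
#include <errno.h>
#include <stdint.h>

/* Placeholder for the kernel's IB_QP_RATE_LIMIT attribute-mask bit. */
#define IB_QP_RATE_LIMIT (1u << 25)

/* Illustrative subset of enum ib_qp_type. */
enum { IB_QPT_RC = 2, IB_QPT_RAW_PACKET = 8 };

/* Mirrors the new check in __mlx5_ib_modify_qp(): reject IB_QP_RATE_LIMIT
 * for anything that is not a Raw Packet QP. */
static int mlx5_rate_limit_check(uint32_t attr_mask, int qp_type)
{
	if ((attr_mask & IB_QP_RATE_LIMIT) && qp_type != IB_QPT_RAW_PACKET)
		return -EOPNOTSUPP;
	return 0;
}
```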
CC: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
drivers/infiniband/hw/mlx5/qp.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 69af20790481..0324909e3151 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4362,6 +4362,11 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp,
optpar |= ib_mask_to_mlx5_opt(attr_mask);
optpar &= opt_mask[mlx5_cur][mlx5_new][mlx5_st];
+ if (attr_mask & IB_QP_RATE_LIMIT && qp->type != IB_QPT_RAW_PACKET) {
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
if (qp->type == IB_QPT_RAW_PACKET ||
qp->flags & IB_QP_CREATE_SOURCE_QPN) {
struct mlx5_modify_raw_qp_param raw_qp_param = {};
--
2.43.5
* [PATCH rdma-rext V3 5/5] IB/core: Extend rate limit support for RC QPs
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
` (3 preceding siblings ...)
2026-02-02 4:51 ` [PATCH rdma-rext V3 4/5] RDMA/mlx5: Support rate limit only for Raw Packet QP Kalesh AP
@ 2026-02-02 4:51 ` Kalesh AP
4 siblings, 0 replies; 8+ messages in thread
From: Kalesh AP @ 2026-02-02 4:51 UTC (permalink / raw)
To: leon, jgg
Cc: linux-rdma, andrew.gospodarek, selvin.xavier, Kalesh AP,
Damodharam Ammepalli
Broadcom devices support setting the rate limit while changing the
RC QP state from INIT to RTR, RTR to RTS, or RTS to RTS.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
drivers/infiniband/core/verbs.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 8b56b6b62352..02ebc3e52196 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1537,7 +1537,8 @@ static const struct {
IB_QP_PKEY_INDEX),
[IB_QPT_RC] = (IB_QP_ALT_PATH |
IB_QP_ACCESS_FLAGS |
- IB_QP_PKEY_INDEX),
+ IB_QP_PKEY_INDEX |
+ IB_QP_RATE_LIMIT),
[IB_QPT_XRC_INI] = (IB_QP_ALT_PATH |
IB_QP_ACCESS_FLAGS |
IB_QP_PKEY_INDEX),
@@ -1585,7 +1586,8 @@ static const struct {
IB_QP_ALT_PATH |
IB_QP_ACCESS_FLAGS |
IB_QP_MIN_RNR_TIMER |
- IB_QP_PATH_MIG_STATE),
+ IB_QP_PATH_MIG_STATE |
+ IB_QP_RATE_LIMIT),
[IB_QPT_XRC_INI] = (IB_QP_CUR_STATE |
IB_QP_ALT_PATH |
IB_QP_ACCESS_FLAGS |
@@ -1619,7 +1621,8 @@ static const struct {
IB_QP_ACCESS_FLAGS |
IB_QP_ALT_PATH |
IB_QP_PATH_MIG_STATE |
- IB_QP_MIN_RNR_TIMER),
+ IB_QP_MIN_RNR_TIMER |
+ IB_QP_RATE_LIMIT),
[IB_QPT_XRC_INI] = (IB_QP_CUR_STATE |
IB_QP_ACCESS_FLAGS |
IB_QP_ALT_PATH |
--
2.43.5
* Re: [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting
2026-02-02 4:51 ` [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting Kalesh AP
@ 2026-02-02 12:17 ` Leon Romanovsky
2026-02-02 12:51 ` Kalesh Anakkur Purayil
0 siblings, 1 reply; 8+ messages in thread
From: Leon Romanovsky @ 2026-02-02 12:17 UTC (permalink / raw)
To: Kalesh AP
Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
Damodharam Ammepalli, Hongguang Gao
On Mon, Feb 02, 2026 at 10:21:16AM +0530, Kalesh AP wrote:
> Broadcom P7 chips supports applying rate limit to RC QPs.
> It allows adjust shaper rate values during the INIT -> RTR,
> RTR -> RTS, RTS -> RTS state changes or after QP transitions
> to RTR or RTS.
>
> Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
> Reviewed-by: Hongguang Gao <hongguang.gao@broadcom.com>
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> ---
> drivers/infiniband/hw/bnxt_re/ib_verbs.c | 11 ++++++++++-
> drivers/infiniband/hw/bnxt_re/qplib_fp.c | 12 +++++++++++-
> drivers/infiniband/hw/bnxt_re/qplib_fp.h | 3 +++
> drivers/infiniband/hw/bnxt_re/qplib_res.h | 6 ++++++
> drivers/infiniband/hw/bnxt_re/qplib_sp.c | 5 +++++
> drivers/infiniband/hw/bnxt_re/qplib_sp.h | 2 ++
> drivers/infiniband/hw/bnxt_re/roce_hsi.h | 13 +++++++++----
> 7 files changed, 46 insertions(+), 6 deletions(-)
AI generates the following:
Scenario:
1. User creates an RC QP (ext_modify_flags = 0, zero-initialized)
2. User calls `ib_modify_qp()` with `IB_QP_RATE_LIMIT` → `ext_modify_flags` becomes `0x8`
3. User calls `ib_modify_qp()` **without** `IB_QP_RATE_LIMIT` (e.g., just state change)
4. `modify_flags` is reset to 0, but `ext_modify_flags` still has `0x8`
5. In `bnxt_qplib_modify_qp()`:
- `req.ext_modify_mask = cpu_to_le32(qp->ext_modify_flags);` → sends 0x8 to firmware
- `if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)` → TRUE
- `req.rate_limit = cpu_to_le32(qp->rate_limit);` → sends stale rate_limit value
6. Firmware receives unintended rate limit modification
**Severity**: This is a functional bug that can cause:
- Unintended rate limiting on subsequent QP modifications
- The stale `rate_limit` value being sent to firmware on every modify_qp call
- Unexpected QP behavior
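The scenario above can be reproduced outside the kernel with a few lines of C that model just the two mask fields; this is a hypothetical model of the patch's state handling, not driver code:

```c
#include <stdint.h>

/* Stand-in for CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID. */
#define RATE_LIMIT_VALID 0x8u

struct fake_qp {
	uint64_t modify_flags;
	uint32_t ext_modify_flags; /* never cleared by the patch as posted */
	uint32_t rate_limit;
};

/* Models bnxt_re_modify_qp(): modify_flags is reset on every call,
 * while ext_modify_flags only ever accumulates bits. */
static void fake_modify_qp(struct fake_qp *qp, int set_rate_limit,
			   uint32_t rate)
{
	qp->modify_flags = 0;
	if (set_rate_limit) {
		qp->ext_modify_flags |= RATE_LIMIT_VALID;
		qp->rate_limit = rate;
	}
}
```

After one rate-limited call followed by a plain state change, the valid bit and the old rate value are still present, which is exactly what gets sent to firmware.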
--------------------------------------------------------------------------------
Is it expected behavior?
Thanks
>
> diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> index f19b55c13d58..2930461be20d 100644
> --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> @@ -2089,7 +2089,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> unsigned int flags;
> u8 nw_type;
>
> - if (qp_attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
> + if (qp_attr_mask & ~(IB_QP_ATTR_STANDARD_BITS | IB_QP_RATE_LIMIT))
> return -EOPNOTSUPP;
>
> qp->qplib_qp.modify_flags = 0;
> @@ -2129,6 +2129,15 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> bnxt_re_unlock_cqs(qp, flags);
> }
> }
> +
> + if (qp_attr_mask & IB_QP_RATE_LIMIT) {
> + if (qp->qplib_qp.type != IB_QPT_RC ||
> + !_is_modify_qp_rate_limit_supported(dev_attr->dev_cap_flags2))
> + return -EOPNOTSUPP;
> + qp->qplib_qp.ext_modify_flags |=
> + CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID;
> + qp->qplib_qp.rate_limit = qp_attr->rate_limit;
> + }
> if (qp_attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) {
> qp->qplib_qp.modify_flags |=
> CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY;
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> index c88f049136fc..3e44311bf939 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> @@ -1313,8 +1313,8 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> struct bnxt_qplib_cmdqmsg msg = {};
> struct cmdq_modify_qp req = {};
> u16 vlan_pcp_vlan_dei_vlan_id;
> + u32 bmask, bmask_ext;
> u32 temp32[4];
> - u32 bmask;
> int rc;
>
> bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
> @@ -1329,9 +1329,16 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> is_optimized_state_transition(qp))
> bnxt_set_mandatory_attributes(res, qp, &req);
> }
> +
> bmask = qp->modify_flags;
> req.modify_mask = cpu_to_le32(qp->modify_flags);
> + bmask_ext = qp->ext_modify_flags;
> + req.ext_modify_mask = cpu_to_le32(qp->ext_modify_flags);
> req.qp_cid = cpu_to_le32(qp->id);
> +
> + if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
> + req.rate_limit = cpu_to_le32(qp->rate_limit);
> +
> if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_STATE) {
> req.network_type_en_sqd_async_notify_new_state =
> (qp->state & CMDQ_MODIFY_QP_NEW_STATE_MASK) |
> @@ -1429,6 +1436,9 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
> if (rc)
> return rc;
> +
> + if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
> + qp->shaper_allocation_status = resp.shaper_allocation_status;
> qp->cur_qp_state = qp->state;
> return 0;
> }
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> index 1b414a73b46d..30c3f99be07b 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> @@ -280,6 +280,7 @@ struct bnxt_qplib_qp {
> u8 state;
> u8 cur_qp_state;
> u64 modify_flags;
> + u32 ext_modify_flags;
> u32 max_inline_data;
> u32 mtu;
> u8 path_mtu;
> @@ -346,6 +347,8 @@ struct bnxt_qplib_qp {
> bool is_host_msn_tbl;
> u8 tos_dscp;
> u32 ugid_index;
> + u32 rate_limit;
> + u8 shaper_allocation_status;
> };
>
> #define BNXT_RE_MAX_MSG_SIZE 0x80000000
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
> index 2ea3b7f232a3..9a5dcf97b6f4 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
> @@ -623,4 +623,10 @@ static inline bool _is_max_srq_ext_supported(u16 dev_cap_ext_flags_2)
> return !!(dev_cap_ext_flags_2 & CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED);
> }
>
> +static inline bool _is_modify_qp_rate_limit_supported(u16 dev_cap_ext_flags2)
> +{
> + return dev_cap_ext_flags2 &
> + CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED;
> +}
> +
> #endif /* __BNXT_QPLIB_RES_H__ */
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> index 408a34df2667..ec9eb52a8ebf 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> @@ -193,6 +193,11 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw)
> attr->max_dpi = le32_to_cpu(sb->max_dpi);
>
> attr->is_atomic = bnxt_qplib_is_atomic_cap(rcfw);
> +
> + if (_is_modify_qp_rate_limit_supported(attr->dev_cap_flags2)) {
> + attr->rate_limit_min = le16_to_cpu(sb->rate_limit_min);
> + attr->rate_limit_max = le32_to_cpu(sb->rate_limit_max);
> + }
> bail:
> dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
> sbuf.sb, sbuf.dma_addr);
> diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> index 5a45c55c6464..9fadd637cb5b 100644
> --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> @@ -76,6 +76,8 @@ struct bnxt_qplib_dev_attr {
> u16 dev_cap_flags;
> u16 dev_cap_flags2;
> u32 max_dpi;
> + u16 rate_limit_min;
> + u32 rate_limit_max;
> };
>
> struct bnxt_qplib_pd {
> diff --git a/drivers/infiniband/hw/bnxt_re/roce_hsi.h b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> index 99ecd72e72e2..aac338f2afd8 100644
> --- a/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> +++ b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> @@ -690,10 +690,11 @@ struct cmdq_modify_qp {
> __le32 ext_modify_mask;
> #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_EXT_STATS_CTX 0x1UL
> #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_SCHQ_ID_VALID 0x2UL
> + #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID 0x8UL
> __le32 ext_stats_ctx_id;
> __le16 schq_id;
> __le16 unused_0;
> - __le32 reserved32;
> + __le32 rate_limit;
> };
>
> /* creq_modify_qp_resp (size:128b/16B) */
> @@ -716,7 +717,8 @@ struct creq_modify_qp_resp {
> #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_MASK 0xeUL
> #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_SFT 1
> #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_STATE 0x10UL
> - u8 reserved8;
> + u8 shaper_allocation_status;
> + #define CREQ_MODIFY_QP_RESP_SHAPER_ALLOCATED 0x1UL
> __le32 lag_src_mac;
> };
>
> @@ -2179,7 +2181,7 @@ struct creq_query_func_resp {
> u8 reserved48[6];
> };
>
> -/* creq_query_func_resp_sb (size:1088b/136B) */
> +/* creq_query_func_resp_sb (size:1280b/160B) */
> struct creq_query_func_resp_sb {
> u8 opcode;
> #define CREQ_QUERY_FUNC_RESP_SB_OPCODE_QUERY_FUNC 0x83UL
> @@ -2256,12 +2258,15 @@ struct creq_query_func_resp_sb {
> #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \
> CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE
> #define CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED 0x40UL
> + #define CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED 0x400UL
> #define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL
> __le16 max_xp_qp_size;
> __le16 create_qp_batch_size;
> __le16 destroy_qp_batch_size;
> __le16 max_srq_ext;
> - __le64 reserved64;
> + __le16 reserved16;
> + __le16 rate_limit_min;
> + __le32 rate_limit_max;
> };
>
> /* cmdq_set_func_resources (size:448b/56B) */
> --
> 2.43.5
>
* Re: [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting
2026-02-02 12:17 ` Leon Romanovsky
@ 2026-02-02 12:51 ` Kalesh Anakkur Purayil
0 siblings, 0 replies; 8+ messages in thread
From: Kalesh Anakkur Purayil @ 2026-02-02 12:51 UTC (permalink / raw)
To: Leon Romanovsky
Cc: jgg, linux-rdma, andrew.gospodarek, selvin.xavier,
Damodharam Ammepalli, Hongguang Gao
On Mon, Feb 2, 2026 at 5:47 PM Leon Romanovsky <leon@kernel.org> wrote:
>
> On Mon, Feb 02, 2026 at 10:21:16AM +0530, Kalesh AP wrote:
> > Broadcom P7 chips supports applying rate limit to RC QPs.
> > It allows adjust shaper rate values during the INIT -> RTR,
> > RTR -> RTS, RTS -> RTS state changes or after QP transitions
> > to RTR or RTS.
> >
> > Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
> > Reviewed-by: Hongguang Gao <hongguang.gao@broadcom.com>
> > Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> > ---
> > drivers/infiniband/hw/bnxt_re/ib_verbs.c | 11 ++++++++++-
> > drivers/infiniband/hw/bnxt_re/qplib_fp.c | 12 +++++++++++-
> > drivers/infiniband/hw/bnxt_re/qplib_fp.h | 3 +++
> > drivers/infiniband/hw/bnxt_re/qplib_res.h | 6 ++++++
> > drivers/infiniband/hw/bnxt_re/qplib_sp.c | 5 +++++
> > drivers/infiniband/hw/bnxt_re/qplib_sp.h | 2 ++
> > drivers/infiniband/hw/bnxt_re/roce_hsi.h | 13 +++++++++----
> > 7 files changed, 46 insertions(+), 6 deletions(-)
>
> AI generates the following:
>
> Scenario:
> 1. User creates an RC QP (ext_modify_flags = 0, zero-initialized)
> 2. User calls `ib_modify_qp()` with `IB_QP_RATE_LIMIT` → `ext_modify_flags` becomes `0x8`
> 3. User calls `ib_modify_qp()` **without** `IB_QP_RATE_LIMIT` (e.g., just state change)
> 4. `modify_flags` is reset to 0, but `ext_modify_flags` still has `0x8`
> 5. In `bnxt_qplib_modify_qp()`:
> - `req.ext_modify_mask = cpu_to_le32(qp->ext_modify_flags);` → sends 0x8 to firmware
> - `if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)` → TRUE
> - `req.rate_limit = cpu_to_le32(qp->rate_limit);` → sends stale rate_limit value
> 6. Firmware receives unintended rate limit modification
>
> **Severity**: This is a functional bug that can cause:
> - Unintended rate limiting on subsequent QP modifications
> - The stale `rate_limit` value being sent to firmware on every modify_qp call
> - Unexpected QP behavior
>
> --------------------------------------------------------------------------------
> Is it expected behavior?
Thanks, it was a miss. I will push a new version.
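
For illustration only (this sketch is mine, not part of the patch): the
stale-mask flow described above can be reproduced in a minimal userspace
C program. The struct and field names mirror the driver's, but the
program itself is hypothetical; only modify_flags is cleared on entry,
so the extended mask and old rate survive into the next call.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mirror of the per-QP state in qplib_fp.h:
 * both masks live on the QP across modify_qp calls. */
struct qp_state {
	uint64_t modify_flags;     /* reset on every modify_qp call */
	uint32_t ext_modify_flags; /* NOT reset in the patched code */
	uint32_t rate_limit;
};

#define EXT_MASK_RATE_LIMIT_VALID 0x8u

/* Sketch of the patched flow: returns the ext mask that would be
 * sent to firmware for this call. */
static uint32_t modify_qp(struct qp_state *qp, int set_rate_limit,
			  uint32_t rate)
{
	qp->modify_flags = 0;	/* mirrors qp->qplib_qp.modify_flags = 0; */
	/* bug: qp->ext_modify_flags is left as-is */
	if (set_rate_limit) {
		qp->ext_modify_flags |= EXT_MASK_RATE_LIMIT_VALID;
		qp->rate_limit = rate;
	}
	return qp->ext_modify_flags;
}
```

A first call with the rate-limit attribute set returns 0x8 as expected;
a second call that is a plain state change still returns 0x8 with the
old rate_limit value in place, which is exactly the unintended firmware
command in the scenario above.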
>
> Thanks
>
>
>
> >
> > diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> > index f19b55c13d58..2930461be20d 100644
> > --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> > +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> > @@ -2089,7 +2089,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> > unsigned int flags;
> > u8 nw_type;
> >
> > - if (qp_attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
> > + if (qp_attr_mask & ~(IB_QP_ATTR_STANDARD_BITS | IB_QP_RATE_LIMIT))
> > return -EOPNOTSUPP;
> >
> > qp->qplib_qp.modify_flags = 0;
> > @@ -2129,6 +2129,15 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
> > bnxt_re_unlock_cqs(qp, flags);
> > }
> > }
> > +
> > + if (qp_attr_mask & IB_QP_RATE_LIMIT) {
> > + if (qp->qplib_qp.type != IB_QPT_RC ||
> > + !_is_modify_qp_rate_limit_supported(dev_attr->dev_cap_flags2))
> > + return -EOPNOTSUPP;
> > + qp->qplib_qp.ext_modify_flags |=
> > + CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID;
> > + qp->qplib_qp.rate_limit = qp_attr->rate_limit;
> > + }
> > if (qp_attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) {
> > qp->qplib_qp.modify_flags |=
> > CMDQ_MODIFY_QP_MODIFY_MASK_EN_SQD_ASYNC_NOTIFY;
> > diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> > index c88f049136fc..3e44311bf939 100644
> > --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> > +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
> > @@ -1313,8 +1313,8 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> > struct bnxt_qplib_cmdqmsg msg = {};
> > struct cmdq_modify_qp req = {};
> > u16 vlan_pcp_vlan_dei_vlan_id;
> > + u32 bmask, bmask_ext;
> > u32 temp32[4];
> > - u32 bmask;
> > int rc;
> >
> > bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
> > @@ -1329,9 +1329,16 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> > is_optimized_state_transition(qp))
> > bnxt_set_mandatory_attributes(res, qp, &req);
> > }
> > +
> > bmask = qp->modify_flags;
> > req.modify_mask = cpu_to_le32(qp->modify_flags);
> > + bmask_ext = qp->ext_modify_flags;
> > + req.ext_modify_mask = cpu_to_le32(qp->ext_modify_flags);
> > req.qp_cid = cpu_to_le32(qp->id);
> > +
> > + if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
> > + req.rate_limit = cpu_to_le32(qp->rate_limit);
> > +
> > if (bmask & CMDQ_MODIFY_QP_MODIFY_MASK_STATE) {
> > req.network_type_en_sqd_async_notify_new_state =
> > (qp->state & CMDQ_MODIFY_QP_NEW_STATE_MASK) |
> > @@ -1429,6 +1436,9 @@ int bnxt_qplib_modify_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
> > rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
> > if (rc)
> > return rc;
> > +
> > + if (bmask_ext & CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID)
> > + qp->shaper_allocation_status = resp.shaper_allocation_status;
> > qp->cur_qp_state = qp->state;
> > return 0;
> > }
> > diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> > index 1b414a73b46d..30c3f99be07b 100644
> > --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> > +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
> > @@ -280,6 +280,7 @@ struct bnxt_qplib_qp {
> > u8 state;
> > u8 cur_qp_state;
> > u64 modify_flags;
> > + u32 ext_modify_flags;
> > u32 max_inline_data;
> > u32 mtu;
> > u8 path_mtu;
> > @@ -346,6 +347,8 @@ struct bnxt_qplib_qp {
> > bool is_host_msn_tbl;
> > u8 tos_dscp;
> > u32 ugid_index;
> > + u32 rate_limit;
> > + u8 shaper_allocation_status;
> > };
> >
> > #define BNXT_RE_MAX_MSG_SIZE 0x80000000
> > diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
> > index 2ea3b7f232a3..9a5dcf97b6f4 100644
> > --- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
> > +++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
> > @@ -623,4 +623,10 @@ static inline bool _is_max_srq_ext_supported(u16 dev_cap_ext_flags_2)
> > return !!(dev_cap_ext_flags_2 & CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED);
> > }
> >
> > +static inline bool _is_modify_qp_rate_limit_supported(u16 dev_cap_ext_flags2)
> > +{
> > + return dev_cap_ext_flags2 &
> > + CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED;
> > +}
> > +
> > #endif /* __BNXT_QPLIB_RES_H__ */
> > diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> > index 408a34df2667..ec9eb52a8ebf 100644
> > --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> > +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
> > @@ -193,6 +193,11 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw)
> > attr->max_dpi = le32_to_cpu(sb->max_dpi);
> >
> > attr->is_atomic = bnxt_qplib_is_atomic_cap(rcfw);
> > +
> > + if (_is_modify_qp_rate_limit_supported(attr->dev_cap_flags2)) {
> > + attr->rate_limit_min = le16_to_cpu(sb->rate_limit_min);
> > + attr->rate_limit_max = le32_to_cpu(sb->rate_limit_max);
> > + }
> > bail:
> > dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
> > sbuf.sb, sbuf.dma_addr);
> > diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> > index 5a45c55c6464..9fadd637cb5b 100644
> > --- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> > +++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
> > @@ -76,6 +76,8 @@ struct bnxt_qplib_dev_attr {
> > u16 dev_cap_flags;
> > u16 dev_cap_flags2;
> > u32 max_dpi;
> > + u16 rate_limit_min;
> > + u32 rate_limit_max;
> > };
> >
> > struct bnxt_qplib_pd {
> > diff --git a/drivers/infiniband/hw/bnxt_re/roce_hsi.h b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> > index 99ecd72e72e2..aac338f2afd8 100644
> > --- a/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> > +++ b/drivers/infiniband/hw/bnxt_re/roce_hsi.h
> > @@ -690,10 +690,11 @@ struct cmdq_modify_qp {
> > __le32 ext_modify_mask;
> > #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_EXT_STATS_CTX 0x1UL
> > #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_SCHQ_ID_VALID 0x2UL
> > + #define CMDQ_MODIFY_QP_EXT_MODIFY_MASK_RATE_LIMIT_VALID 0x8UL
> > __le32 ext_stats_ctx_id;
> > __le16 schq_id;
> > __le16 unused_0;
> > - __le32 reserved32;
> > + __le32 rate_limit;
> > };
> >
> > /* creq_modify_qp_resp (size:128b/16B) */
> > @@ -716,7 +717,8 @@ struct creq_modify_qp_resp {
> > #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_MASK 0xeUL
> > #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_INDEX_SFT 1
> > #define CREQ_MODIFY_QP_RESP_PINGPONG_PUSH_STATE 0x10UL
> > - u8 reserved8;
> > + u8 shaper_allocation_status;
> > + #define CREQ_MODIFY_QP_RESP_SHAPER_ALLOCATED 0x1UL
> > __le32 lag_src_mac;
> > };
> >
> > @@ -2179,7 +2181,7 @@ struct creq_query_func_resp {
> > u8 reserved48[6];
> > };
> >
> > -/* creq_query_func_resp_sb (size:1088b/136B) */
> > +/* creq_query_func_resp_sb (size:1280b/160B) */
> > struct creq_query_func_resp_sb {
> > u8 opcode;
> > #define CREQ_QUERY_FUNC_RESP_SB_OPCODE_QUERY_FUNC 0x83UL
> > @@ -2256,12 +2258,15 @@ struct creq_query_func_resp_sb {
> > #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \
> > CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE
> > #define CREQ_QUERY_FUNC_RESP_SB_MAX_SRQ_EXTENDED 0x40UL
> > + #define CREQ_QUERY_FUNC_RESP_SB_MODIFY_QP_RATE_LIMIT_SUPPORTED 0x400UL
> > #define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL
> > __le16 max_xp_qp_size;
> > __le16 create_qp_batch_size;
> > __le16 destroy_qp_batch_size;
> > __le16 max_srq_ext;
> > - __le64 reserved64;
> > + __le16 reserved16;
> > + __le16 rate_limit_min;
> > + __le32 rate_limit_max;
> > };
> >
> > /* cmdq_set_func_resources (size:448b/56B) */
> > --
> > 2.43.5
> >
--
Regards,
Kalesh AP
Thread overview: 8+ messages
2026-02-02 4:51 [PATCH rdma-rext V3 0/5] RDMA/bnxt_re: Add QP rate limit support Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 1/5] RDMA/bnxt_re: Add support for QP rate limiting Kalesh AP
2026-02-02 12:17 ` Leon Romanovsky
2026-02-02 12:51 ` Kalesh Anakkur Purayil
2026-02-02 4:51 ` [PATCH rdma-rext V3 2/5] RDMA/bnxt_re: Report packet pacing capabilities when querying device Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 3/5] RDMA/bnxt_re: Report QP rate limit in debugfs Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 4/5] RDMA/mlx5: Support rate limit only for Raw Packet QP Kalesh AP
2026-02-02 4:51 ` [PATCH rdma-rext V3 5/5] IB/core: Extend rate limit support for RC QPs Kalesh AP