* [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in()
@ 2026-03-25 21:26 Jason Gunthorpe
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Progress the uAPI work by shifting nearly all drivers to use
ib_copy_validate_udata_in() and its variations.
These helpers are easier to use and enforce a tighter uAPI protocol
for the udata.
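For reference, the protocol such a helper presumably enforces can be sketched in plain C. Everything below (demo_req, demo_copy_validate_in()) is illustrative, not the kernel implementation: reject requests shorter than the last required member, reject nonzero bytes beyond the kernel's view of the struct, then copy at most sizeof(dst) bytes.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Mirrors the kernel's offsetofend() from include/linux/stddef.h */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

/* Illustrative request struct; real drivers use their uAPI structs. */
struct demo_req {
	uint64_t buf_addr;
	uint32_t flags;
	uint32_t comp_mask;
};

/*
 * Sketch of the consolidated checks: the request must cover at least
 * the last required member (min_len), any trailing bytes a newer
 * userspace sent beyond the kernel's struct must be zero, and only
 * min(inlen, dst_size) bytes are copied into dst.
 */
static int demo_copy_validate_in(void *dst, size_t dst_size, size_t min_len,
				 const uint8_t *inbuf, size_t inlen)
{
	size_t i;

	if (inlen < min_len)
		return -EINVAL;
	for (i = dst_size; i < inlen; i++)
		if (inbuf[i])
			return -EOPNOTSUPP;
	memcpy(dst, inbuf, inlen < dst_size ? inlen : dst_size);
	return 0;
}
```

A driver-side call would then reduce to something like err = demo_copy_validate_in(&cmd, sizeof(cmd), offsetofend(struct demo_req, flags), in, inlen); the error codes chosen here are an assumption based on the open-coded patterns being replaced.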
v2:
- Drop EFA patch, rename the field instead
- Fix the mlx5 mw change, userspace doesn't use the udata struct at all
v1: https://patch.msgid.link/r/0-v1-2b86f54cda42+7d-rdma_udata_req_jgg@nvidia.com
Jason Gunthorpe (16):
RDMA: Consolidate patterns with offsetofend() to
ib_copy_validate_udata_in()
RDMA: Consolidate patterns with offsetof() to
ib_copy_validate_udata_in()
RDMA: Consolidate patterns with sizeof() to
ib_copy_validate_udata_in()
RDMA: Use ib_copy_validate_udata_in() for implicit full structs
RDMA/pvrdma: Use ib_copy_validate_udata_in() for srq
RDMA/mlx5: Use ib_copy_validate_udata_in() for SRQ
RDMA/mlx5: Use ib_copy_validate_udata_in() for MW
RDMA/mlx4: Use ib_copy_validate_udata_in()
RDMA/mlx4: Use ib_copy_validate_udata_in() for QP
RDMA/hns: Use ib_copy_validate_udata_in()
RDMA: Use ib_copy_validate_udata_in_cm() for zero comp_mask
RDMA/mlx5: Pull comp_mask validation into
ib_copy_validate_udata_in_cm()
RDMA/hns: Add missing comp_mask check in create_qp
RDMA/irdma: Add missing comp_mask check in alloc_ucontext
RDMA: Remove redundant = {} for udata req structs
RDMA/hns: Remove the duplicate calls to ib_copy_validate_udata_in()
drivers/infiniband/hw/efa/efa_verbs.c | 55 ++-----------
drivers/infiniband/hw/erdma/erdma_verbs.c | 6 +-
drivers/infiniband/hw/hns/hns_roce_cq.c | 16 +---
drivers/infiniband/hw/hns/hns_roce_main.c | 6 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 10 +--
drivers/infiniband/hw/hns/hns_roce_srq.c | 54 ++++--------
.../infiniband/hw/ionic/ionic_controlpath.c | 6 +-
drivers/infiniband/hw/irdma/verbs.c | 12 +--
drivers/infiniband/hw/mana/cq.c | 11 +--
drivers/infiniband/hw/mana/qp.c | 29 +++----
drivers/infiniband/hw/mana/wq.c | 12 +--
drivers/infiniband/hw/mlx4/cq.c | 10 +--
drivers/infiniband/hw/mlx4/main.c | 9 +-
drivers/infiniband/hw/mlx4/qp.c | 82 ++++---------------
drivers/infiniband/hw/mlx4/srq.c | 5 +-
drivers/infiniband/hw/mlx5/cq.c | 14 ++--
drivers/infiniband/hw/mlx5/main.c | 2 +-
drivers/infiniband/hw/mlx5/mr.c | 15 ++--
drivers/infiniband/hw/mlx5/qp.c | 66 ++++-----------
drivers/infiniband/hw/mlx5/srq.c | 17 +---
drivers/infiniband/hw/mthca/mthca_provider.c | 27 +++---
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 14 ++--
drivers/infiniband/hw/qedr/verbs.c | 42 ++++------
drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 2 +-
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c | 5 +-
drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 6 +-
drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c | 6 +-
drivers/infiniband/sw/rxe/rxe_verbs.c | 13 +--
drivers/infiniband/sw/siw/siw_verbs.c | 6 +-
29 files changed, 172 insertions(+), 386 deletions(-)
base-commit: eb15cffa15201bd53d1ac296645aa2bc5f726841
--
2.43.0
* [PATCH v2 01/16] RDMA: Consolidate patterns with offsetofend() to ib_copy_validate_udata_in()
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC
Go treewide and consolidate all existing patterns using:
* offsetofend() and variations
* ib_is_udata_cleared()
* ib_copy_from_udata()
into a direct call to the new ib_copy_validate_udata_in().
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 47 +++---------------------
drivers/infiniband/hw/irdma/verbs.c | 10 +++---
drivers/infiniband/hw/mlx4/qp.c | 38 ++++----------------
drivers/infiniband/hw/mlx5/qp.c | 51 ++++++---------------------
4 files changed, 26 insertions(+), 120 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index fc498663cd372f..8d9357e2d513bb 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -699,29 +699,9 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
if (err)
goto err_out;
- if (offsetofend(typeof(cmd), driver_qp_type) > udata->inlen) {
- ibdev_dbg(&dev->ibdev,
- "Incompatible ABI params, no input udata\n");
- err = -EINVAL;
+ err = ib_copy_validate_udata_in(udata, cmd, driver_qp_type);
+ if (err)
goto err_out;
- }
-
- if (udata->inlen > sizeof(cmd) &&
- !ib_is_udata_cleared(udata, sizeof(cmd),
- udata->inlen - sizeof(cmd))) {
- ibdev_dbg(&dev->ibdev,
- "Incompatible ABI params, unknown fields in udata\n");
- err = -EINVAL;
- goto err_out;
- }
-
- err = ib_copy_from_udata(&cmd, udata,
- min(sizeof(cmd), udata->inlen));
- if (err) {
- ibdev_dbg(&dev->ibdev,
- "Cannot copy udata for create_qp\n");
- goto err_out;
- }
if (cmd.comp_mask || !is_reserved_cleared(cmd.reserved_98)) {
ibdev_dbg(&dev->ibdev,
@@ -1160,28 +1140,9 @@ int efa_create_user_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
goto err_out;
}
- if (offsetofend(typeof(cmd), num_sub_cqs) > udata->inlen) {
- ibdev_dbg(ibdev,
- "Incompatible ABI params, no input udata\n");
- err = -EINVAL;
+ err = ib_copy_validate_udata_in(udata, cmd, num_sub_cqs);
+ if (err)
goto err_out;
- }
-
- if (udata->inlen > sizeof(cmd) &&
- !ib_is_udata_cleared(udata, sizeof(cmd),
- udata->inlen - sizeof(cmd))) {
- ibdev_dbg(ibdev,
- "Incompatible ABI params, unknown fields in udata\n");
- err = -EINVAL;
- goto err_out;
- }
-
- err = ib_copy_from_udata(&cmd, udata,
- min(sizeof(cmd), udata->inlen));
- if (err) {
- ibdev_dbg(ibdev, "Cannot copy udata for create_cq\n");
- goto err_out;
- }
if (cmd.comp_mask || !is_reserved_cleared(cmd.reserved_58)) {
ibdev_dbg(ibdev,
diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 7251cd7a21471e..b2978632241900 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -284,7 +284,6 @@ static void irdma_alloc_push_page(struct irdma_qp *iwqp)
static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
struct ib_udata *udata)
{
-#define IRDMA_ALLOC_UCTX_MIN_REQ_LEN offsetofend(struct irdma_alloc_ucontext_req, rsvd8)
#define IRDMA_ALLOC_UCTX_MIN_RESP_LEN offsetofend(struct irdma_alloc_ucontext_resp, rsvd)
struct ib_device *ibdev = uctx->device;
struct irdma_device *iwdev = to_iwdev(ibdev);
@@ -292,13 +291,14 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
struct irdma_alloc_ucontext_resp uresp = {};
struct irdma_ucontext *ucontext = to_ucontext(uctx);
struct irdma_uk_attrs *uk_attrs = &iwdev->rf->sc_dev.hw_attrs.uk_attrs;
+ int ret;
- if (udata->inlen < IRDMA_ALLOC_UCTX_MIN_REQ_LEN ||
- udata->outlen < IRDMA_ALLOC_UCTX_MIN_RESP_LEN)
+ if (udata->outlen < IRDMA_ALLOC_UCTX_MIN_RESP_LEN)
return -EINVAL;
- if (ib_copy_from_udata(&req, udata, min(sizeof(req), udata->inlen)))
- return -EINVAL;
+ ret = ib_copy_validate_udata_in(udata, req, rsvd8);
+ if (ret)
+ return ret;
if (req.userspace_ver < 4 || req.userspace_ver > IRDMA_ABI_VER)
goto ver_error;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 1cb890d3d93cea..b87a4b7949a3a0 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -710,7 +710,6 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
struct ib_udata *udata)
{
struct mlx4_ib_create_qp_rss ucmd = {};
- size_t required_cmd_sz;
int err;
if (!udata) {
@@ -721,16 +720,10 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
if (udata->outlen)
return -EOPNOTSUPP;
- required_cmd_sz = offsetof(typeof(ucmd), reserved1) +
- sizeof(ucmd.reserved1);
- if (udata->inlen < required_cmd_sz) {
- pr_debug("invalid inlen\n");
- return -EINVAL;
- }
-
- if (ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen))) {
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved1);
+ if (err) {
pr_debug("copy failed\n");
- return -EFAULT;
+ return err;
}
if (memchr_inv(ucmd.reserved, 0, sizeof(ucmd.reserved)))
@@ -739,13 +732,6 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
if (ucmd.comp_mask || ucmd.reserved1)
return -EOPNOTSUPP;
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd))) {
- pr_debug("inlen is not supported\n");
- return -EOPNOTSUPP;
- }
-
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
pr_debug("RSS QP with unsupported QP type %d\n",
init_attr->qp_type);
@@ -4269,22 +4255,12 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
{
struct mlx4_ib_qp *qp = to_mqp((struct ib_qp *)ibwq);
struct mlx4_ib_modify_wq ucmd = {};
- size_t required_cmd_sz;
enum ib_wq_state cur_state, new_state;
- int err = 0;
+ int err;
- required_cmd_sz = offsetof(typeof(ucmd), reserved) +
- sizeof(ucmd.reserved);
- if (udata->inlen < required_cmd_sz)
- return -EINVAL;
-
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd)))
- return -EOPNOTSUPP;
-
- if (ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (err)
+ return err;
if (ucmd.comp_mask || ucmd.reserved)
return -EOPNOTSUPP;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 59f9ddb35d4620..d4d5e0d457a0b5 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4707,17 +4707,9 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
return -ENOSYS;
if (udata && udata->inlen) {
- if (udata->inlen < offsetofend(typeof(ucmd), ece_options))
- return -EINVAL;
-
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd)))
- return -EOPNOTSUPP;
-
- if (ib_copy_from_udata(&ucmd, udata,
- min(udata->inlen, sizeof(ucmd))))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, ece_options);
+ if (err)
+ return err;
if (ucmd.comp_mask & ~MLX5_IB_MODIFY_QP_OOO_DP ||
memchr_inv(&ucmd.burst_info.reserved, 0,
@@ -5389,25 +5381,11 @@ static int prepare_user_rq(struct ib_pd *pd,
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_ib_create_wq ucmd = {};
int err;
- size_t required_cmd_sz;
-
- required_cmd_sz = offsetofend(struct mlx5_ib_create_wq,
- single_stride_log_num_of_bytes);
- if (udata->inlen < required_cmd_sz) {
- mlx5_ib_dbg(dev, "invalid inlen\n");
- return -EINVAL;
- }
-
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd))) {
- mlx5_ib_dbg(dev, "inlen is not supported\n");
- return -EOPNOTSUPP;
- }
-
- if (ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen))) {
+ err = ib_copy_validate_udata_in(udata, ucmd,
+ single_stride_log_num_of_bytes);
+ if (err) {
mlx5_ib_dbg(dev, "copy failed\n");
- return -EFAULT;
+ return err;
}
if (ucmd.comp_mask & (~MLX5_IB_CREATE_WQ_STRIDING_RQ)) {
@@ -5626,7 +5604,6 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
struct mlx5_ib_dev *dev = to_mdev(wq->device);
struct mlx5_ib_rwq *rwq = to_mrwq(wq);
struct mlx5_ib_modify_wq ucmd = {};
- size_t required_cmd_sz;
int curr_wq_state;
int wq_state;
int inlen;
@@ -5634,17 +5611,9 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
void *rqc;
void *in;
- required_cmd_sz = offsetofend(struct mlx5_ib_modify_wq, reserved);
- if (udata->inlen < required_cmd_sz)
- return -EINVAL;
-
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd)))
- return -EOPNOTSUPP;
-
- if (ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (err)
+ return err;
if (ucmd.comp_mask || ucmd.reserved)
return -EOPNOTSUPP;
--
2.43.0
* [PATCH v2 02/16] RDMA: Consolidate patterns with offsetof() to ib_copy_validate_udata_in()
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC
Similar to the prior patch, these patterns open code an
offsetofend(): using offsetof() on a member effectively targets the
field before it as the last required field in the struct.
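The equivalence can be seen with an illustrative struct (demo_create_cq below is not a real ABI struct; uAPI structs are laid out with explicit sizing so there is no implicit padding between members):

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors the kernel's offsetofend() from include/linux/stddef.h */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

/*
 * Illustrative only: with no padding between members,
 * offsetof() of a member equals offsetofend() of the member
 * before it, so "inlen < offsetof(cmd, flags)" requires
 * everything up through cqe_size.
 */
struct demo_create_cq {
	uint64_t buf_addr;	/* bytes  0..7  */
	uint64_t db_addr;	/* bytes  8..15 */
	uint32_t cqe_size;	/* bytes 16..19 */
	uint32_t flags;		/* bytes 20..23, added in a later ABI rev */
};
```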
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mana/cq.c | 9 ++-------
drivers/infiniband/hw/mlx5/cq.c | 10 +++-------
2 files changed, 5 insertions(+), 14 deletions(-)
diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
index b2749f971cd0af..3f932ef6e5fff6 100644
--- a/drivers/infiniband/hw/mana/cq.c
+++ b/drivers/infiniband/hw/mana/cq.c
@@ -27,14 +27,9 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
is_rnic_cq = mana_ib_is_rnic(mdev);
if (udata) {
- if (udata->inlen < offsetof(struct mana_ib_create_cq, flags))
- return -EINVAL;
-
- err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
- if (err) {
- ibdev_dbg(ibdev, "Failed to copy from udata for create cq, %d\n", err);
+ err = ib_copy_validate_udata_in(udata, ucmd, buf_addr);
+ if (err)
return err;
- }
if ((!is_rnic_cq && attr->cqe > mdev->adapter_caps.max_qp_wr) ||
attr->cqe > U32_MAX / COMP_ENTRY_SIZE) {
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 43a7b5ca49dcc9..643b3b7d387834 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -723,7 +723,6 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
struct mlx5_ib_create_cq ucmd = {};
unsigned long page_size;
unsigned int page_offset_quantized;
- size_t ucmdlen;
__be64 *pas;
int ncont;
void *cqc;
@@ -731,12 +730,9 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
struct mlx5_ib_ucontext *context = rdma_udata_to_drv_context(
udata, struct mlx5_ib_ucontext, ibucontext);
- ucmdlen = min(udata->inlen, sizeof(ucmd));
- if (ucmdlen < offsetof(struct mlx5_ib_create_cq, flags))
- return -EINVAL;
-
- if (ib_copy_from_udata(&ucmd, udata, ucmdlen))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, cqe_comp_res_format);
+ if (err)
+ return err;
if ((ucmd.flags & ~(MLX5_IB_CREATE_CQ_FLAGS_CQE_128B_PAD |
MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX |
--
2.43.0
* [PATCH v2 03/16] RDMA: Consolidate patterns with sizeof() to ib_copy_validate_udata_in()
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC
Similar to the prior patch, these patterns open code an
offsetofend() using sizeof(), which targets the last member of the
current struct.
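Sketched with an illustrative struct (demo_modify_wq below is not a real ABI struct), the assumption being relied on is that uAPI structs carry no trailing padding, so sizeof() lands exactly on the end of the last member:

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors the kernel's offsetofend() from include/linux/stddef.h */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

/*
 * Illustrative only: with no trailing padding,
 * sizeof(struct) == offsetofend(struct, last_member), so an
 * "inlen < sizeof(ucmd)" check expresses the same minimum-length
 * protocol as offsetofend() of the last member.
 */
struct demo_modify_wq {
	uint32_t comp_mask;
	uint32_t attr_mask;
	uint32_t curr_wq_state;
	uint32_t reserved;	/* last member */
};
```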
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mana/qp.c | 27 +++++++++------------------
drivers/infiniband/hw/mana/wq.c | 10 ++--------
drivers/infiniband/hw/mlx4/main.c | 6 ++----
drivers/infiniband/hw/mlx5/cq.c | 2 +-
drivers/infiniband/sw/rxe/rxe_verbs.c | 13 ++-----------
drivers/infiniband/sw/siw/siw_verbs.c | 6 +-----
6 files changed, 17 insertions(+), 47 deletions(-)
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index 82f84f7ad37a90..69c8d4f7a1f46b 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -111,16 +111,12 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
u32 port;
int ret;
- if (!udata || udata->inlen < sizeof(ucmd))
+ if (!udata)
return -EINVAL;
- ret = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
- if (ret) {
- ibdev_dbg(&mdev->ib_dev,
- "Failed copy from udata for create rss-qp, err %d\n",
- ret);
+ ret = ib_copy_validate_udata_in(udata, ucmd, port);
+ if (ret)
return ret;
- }
if (attr->cap.max_recv_wr > mdev->adapter_caps.max_qp_wr) {
ibdev_dbg(&mdev->ib_dev,
@@ -282,15 +278,12 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
u32 port;
int err;
- if (!mana_ucontext || udata->inlen < sizeof(ucmd))
+ if (!mana_ucontext)
return -EINVAL;
- err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
- if (err) {
- ibdev_dbg(&mdev->ib_dev,
- "Failed to copy from udata create qp-raw, %d\n", err);
+ err = ib_copy_validate_udata_in(udata, ucmd, port);
+ if (err)
return err;
- }
if (attr->cap.max_send_wr > mdev->adapter_caps.max_qp_wr) {
ibdev_dbg(&mdev->ib_dev,
@@ -535,17 +528,15 @@ static int mana_ib_create_rc_qp(struct ib_qp *ibqp, struct ib_pd *ibpd,
u64 flags = 0;
u32 doorbell;
- if (!udata || udata->inlen < sizeof(ucmd))
+ if (!udata)
return -EINVAL;
mana_ucontext = rdma_udata_to_drv_context(udata, struct mana_ib_ucontext, ibucontext);
doorbell = mana_ucontext->doorbell;
flags = MANA_RC_FLAG_NO_FMR;
- err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
- if (err) {
- ibdev_dbg(&mdev->ib_dev, "Failed to copy from udata, %d\n", err);
+ err = ib_copy_validate_udata_in(udata, ucmd, queue_size);
+ if (err)
return err;
- }
for (i = 0, j = 0; i < MANA_RC_QUEUE_TYPE_MAX; ++i) {
/* skip FMR for user-level RC QPs */
diff --git a/drivers/infiniband/hw/mana/wq.c b/drivers/infiniband/hw/mana/wq.c
index 6206244f762e42..aceeea7f17b339 100644
--- a/drivers/infiniband/hw/mana/wq.c
+++ b/drivers/infiniband/hw/mana/wq.c
@@ -15,15 +15,9 @@ struct ib_wq *mana_ib_create_wq(struct ib_pd *pd,
struct mana_ib_wq *wq;
int err;
- if (udata->inlen < sizeof(ucmd))
- return ERR_PTR(-EINVAL);
-
- err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
- if (err) {
- ibdev_dbg(&mdev->ib_dev,
- "Failed to copy from udata for create wq, %d\n", err);
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (err)
return ERR_PTR(err);
- }
wq = kzalloc_obj(*wq);
if (!wq)
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 73e17b4339eb60..16e4cffbd7a84d 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -50,6 +50,7 @@
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_addr.h>
#include <rdma/ib_cache.h>
+#include <rdma/uverbs_ioctl.h>
#include <net/bonding.h>
@@ -445,10 +446,7 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
struct mlx4_clock_params clock_params;
if (uhw->inlen) {
- if (uhw->inlen < sizeof(cmd))
- return -EINVAL;
-
- err = ib_copy_from_udata(&cmd, uhw, sizeof(cmd));
+ err = ib_copy_validate_udata_in(uhw, cmd, reserved);
if (err)
return err;
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 643b3b7d387834..f5e75e51c6763f 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -1229,7 +1229,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
struct ib_umem *umem;
int err;
- err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved1);
if (err)
return err;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index fe41362c51444c..c9fd40bfa09eb2 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -452,18 +452,9 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
int err;
if (udata) {
- if (udata->inlen < sizeof(cmd)) {
- err = -EINVAL;
- rxe_dbg_srq(srq, "malformed udata\n");
+ err = ib_copy_validate_udata_in(udata, cmd, mmap_info_addr);
+ if (err)
goto err_out;
- }
-
- err = ib_copy_from_udata(&cmd, udata, sizeof(cmd));
- if (err) {
- err = -EFAULT;
- rxe_dbg_srq(srq, "unable to read udata\n");
- goto err_out;
- }
}
err = rxe_srq_chk_attr(rxe, srq, attr, mask);
diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
index ef504db8f2b48b..1e1d262a4ae2db 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.c
+++ b/drivers/infiniband/sw/siw/siw_verbs.c
@@ -1373,11 +1373,7 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
struct siw_uresp_reg_mr uresp = {};
struct siw_mem *mem = mr->mem;
- if (udata->inlen < sizeof(ureq)) {
- rv = -EINVAL;
- goto err_out;
- }
- rv = ib_copy_from_udata(&ureq, udata, sizeof(ureq));
+ rv = ib_copy_validate_udata_in(udata, ureq, pad);
if (rv)
goto err_out;
--
2.43.0
* [PATCH v2 04/16] RDMA: Use ib_copy_validate_udata_in() for implicit full structs
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC
In all of these cases git blame shows that the entire current struct
was introduced at once, so the last member is the right choice for
the minimum required length.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/erdma/erdma_verbs.c | 6 ++--
.../infiniband/hw/ionic/ionic_controlpath.c | 6 ++--
drivers/infiniband/hw/mthca/mthca_provider.c | 27 +++++++++------
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 10 +++---
drivers/infiniband/hw/qedr/verbs.c | 34 ++++++-------------
drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 2 +-
drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 6 ++--
drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c | 6 ++--
8 files changed, 45 insertions(+), 52 deletions(-)
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index 04136a0281aa4c..5523b4e151e1ff 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -1039,8 +1039,7 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
qp->attrs.rq_size = roundup_pow_of_two(attrs->cap.max_recv_wr);
if (uctx) {
- ret = ib_copy_from_udata(&ureq, udata,
- min(sizeof(ureq), udata->inlen));
+ ret = ib_copy_validate_udata_in(udata, ureq, rsvd0);
if (ret)
goto err_out_xa;
@@ -1980,8 +1979,7 @@ int erdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct erdma_ureq_create_cq ureq;
struct erdma_uresp_create_cq uresp;
- ret = ib_copy_from_udata(&ureq, udata,
- min(udata->inlen, sizeof(ureq)));
+ ret = ib_copy_validate_udata_in(udata, ureq, rsvd0);
if (ret)
goto err_out_xa;
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
index 4842931f5316ee..cbdb0ea7782a49 100644
--- a/drivers/infiniband/hw/ionic/ionic_controlpath.c
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -373,7 +373,7 @@ int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata)
phys_addr_t db_phys = 0;
int rc;
- rc = ib_copy_from_udata(&req, udata, sizeof(req));
+ rc = ib_copy_validate_udata_in(udata, req, rsvd);
if (rc)
return rc;
@@ -1223,7 +1223,7 @@ int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
int udma_idx = 0, rc;
if (udata) {
- rc = ib_copy_from_udata(&req, udata, sizeof(req));
+ rc = ib_copy_validate_udata_in(udata, req, rsvd);
if (rc)
return rc;
}
@@ -2152,7 +2152,7 @@ int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
int rc;
if (udata) {
- rc = ib_copy_from_udata(&req, udata, sizeof(req));
+ rc = ib_copy_validate_udata_in(udata, req, rsvd);
if (rc)
return rc;
} else {
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 6a0795332616dc..7467e3dff7ebb8 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -402,8 +402,9 @@ static int mthca_create_srq(struct ib_srq *ibsrq,
return -EOPNOTSUPP;
if (udata) {
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, db_page);
+ if (err)
+ return err;
err = mthca_map_user_db(to_mdev(ibsrq->device), &context->uar,
context->db_tab, ucmd.db_index,
@@ -472,8 +473,9 @@ static int mthca_create_qp(struct ib_qp *ibqp,
case IB_QPT_UD:
{
if (udata) {
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, rq_db_index);
+ if (err)
+ return err;
err = mthca_map_user_db(dev, &context->uar,
context->db_tab,
@@ -594,8 +596,9 @@ static int mthca_create_cq(struct ib_cq *ibcq,
return -EINVAL;
if (udata) {
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, set_db_index);
+ if (err)
+ return err;
err = mthca_map_user_db(to_mdev(ibdev), &context->uar,
context->db_tab, ucmd.set_db_index,
@@ -720,10 +723,9 @@ static int mthca_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *uda
goto out;
lkey = cq->resize_buf->buf.mr.ibmr.lkey;
} else {
- if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) {
- ret = -EFAULT;
+ ret = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (ret)
goto out;
- }
lkey = ucmd.lkey;
}
@@ -851,8 +853,11 @@ static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
}
++context->reg_mr_warned;
ucmd.mr_attrs = 0;
- } else if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd))
- return ERR_PTR(-EFAULT);
+ } else {
+ err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (err)
+ return ERR_PTR(err);
+ }
mr = kmalloc_obj(*mr);
if (!mr)
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 7383b67e172312..8b285fcc638701 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -983,8 +983,9 @@ int ocrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
return -EOPNOTSUPP;
if (udata) {
- if (ib_copy_from_udata(&ureq, udata, sizeof(ureq)))
- return -EFAULT;
+ status = ib_copy_validate_udata_in(udata, ureq, rsvd);
+ if (status)
+ return status;
} else
ureq.dpp_cq = 0;
@@ -1312,8 +1313,9 @@ int ocrdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
memset(&ureq, 0, sizeof(ureq));
if (udata) {
- if (ib_copy_from_udata(&ureq, udata, sizeof(ureq)))
- return -EFAULT;
+ status = ib_copy_validate_udata_in(udata, ureq, rsvd1);
+ if (status)
+ return status;
}
ocrdma_set_qp_init_params(qp, pd, attrs);
if (udata == NULL)
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 2fa9e07710d31f..42d20b35ff3fe0 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -273,12 +273,9 @@ int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
return -EFAULT;
if (udata->inlen) {
- rc = ib_copy_from_udata(&ureq, udata,
- min(sizeof(ureq), udata->inlen));
- if (rc) {
- DP_ERR(dev, "Problem copying data from user space\n");
- return -EFAULT;
- }
+ rc = ib_copy_validate_udata_in(udata, ureq, reserved);
+ if (rc)
+ return rc;
ctx->edpm_mode = !!(ureq.context_flags &
QEDR_ALLOC_UCTX_EDPM_MODE);
ctx->db_rec = !!(ureq.context_flags & QEDR_ALLOC_UCTX_DB_REC);
@@ -949,12 +946,9 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
db_offset = DB_ADDR_SHIFT(DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT);
if (udata) {
- if (ib_copy_from_udata(&ureq, udata, min(sizeof(ureq),
- udata->inlen))) {
- DP_ERR(dev,
- "create cq: problem copying data from user space\n");
- goto err0;
- }
+ rc = ib_copy_validate_udata_in(udata, ureq, len);
+ if (rc)
+ return rc;
if (!ureq.len) {
DP_ERR(dev,
@@ -1575,12 +1569,9 @@ int qedr_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init_attr,
hw_srq->max_sges = init_attr->attr.max_sge;
if (udata) {
- if (ib_copy_from_udata(&ureq, udata, min(sizeof(ureq),
- udata->inlen))) {
- DP_ERR(dev,
- "create srq: problem copying data from user space\n");
- goto err0;
- }
+ rc = ib_copy_validate_udata_in(udata, ureq, srq_len);
+ if (rc)
+ return rc;
rc = qedr_init_srq_user_params(udata, srq, &ureq, 0);
if (rc)
@@ -1860,12 +1851,9 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
}
if (udata) {
- rc = ib_copy_from_udata(&ureq, udata, min(sizeof(ureq),
- udata->inlen));
- if (rc) {
- DP_ERR(dev, "Problem copying data from user space\n");
+ rc = ib_copy_validate_udata_in(udata, ureq, rq_len);
+ if (rc)
return rc;
- }
}
if (qedr_qp_has_sq(qp)) {
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index 16b269128f52d3..615de9c4209bf1 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -476,7 +476,7 @@ int usnic_ib_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
if (init_attr->create_flags)
return -EOPNOTSUPP;
- err = ib_copy_from_udata(&cmd, udata, sizeof(cmd));
+ err = ib_copy_validate_udata_in(udata, cmd, spec);
if (err) {
usnic_err("%s: cannot copy udata for create_qp\n",
dev_name(&us_ibdev->ib_dev.dev));
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
index 98b2a0090bf2a1..16aab967a20308 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
@@ -49,6 +49,7 @@
#include <rdma/ib_addr.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_user_verbs.h>
+#include <rdma/uverbs_ioctl.h>
#include "pvrdma.h"
@@ -252,10 +253,9 @@ int pvrdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
dev_dbg(&dev->pdev->dev,
"create queuepair from user space\n");
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
- ret = -EFAULT;
+ ret = ib_copy_validate_udata_in(udata, ucmd, qp_addr);
+ if (ret)
goto err_qp;
- }
/* Userspace supports qpn and qp handles? */
if (dev->dsr_version >= PVRDMA_QPHANDLE_VERSION &&
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
index bdc2703532c6cc..d31fb692fcaafb 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
@@ -49,6 +49,7 @@
#include <rdma/ib_addr.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_user_verbs.h>
+#include <rdma/uverbs_ioctl.h>
#include "pvrdma.h"
@@ -141,10 +142,9 @@ int pvrdma_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init_attr,
dev_dbg(&dev->pdev->dev,
"create shared receive queue from user space\n");
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
- ret = -EFAULT;
+ ret = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (ret)
goto err_srq;
- }
srq->umem = ib_umem_get(ibsrq->device, ucmd.buf_addr, ucmd.buf_size, 0);
if (IS_ERR(srq->umem)) {
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v2 05/16] RDMA/pvrdma: Use ib_copy_validate_udata_in() for srq
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (3 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 04/16] RDMA: Use ib_copy_validate_udata_in() for implicit full structs Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 06/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for SRQ Jason Gunthorpe
` (10 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
struct pvrdma_create_srq was introduced when the driver was first
merged, but was never used; at that point it had only buf_addr. The
struct was expanded later, when SRQ support was added. So, unlike the
other cases, which pick the first struct member based on git blame,
this conversion uses the entire struct.
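For readers following the series, the udata protocol these helpers enforce can be modeled outside the kernel. The sketch below is an illustrative stand-in, not the real rdma/uverbs_ioctl.h implementation; it assumes the helper requires the input to cover the struct at least through the named last member, zero-fills anything the user did not supply, and rejects non-zero trailing bytes beyond the kernel's struct:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the input side of struct ib_udata; the
 * real kernel layout differs. */
struct fake_udata {
	const unsigned char *inbuf;
	size_t inlen;
};

/*
 * Model of the protocol ib_copy_validate_udata_in(udata, ucmd, last)
 * is described as enforcing (illustrative only):
 *  - the input must cover the struct at least through min_len
 *    (offsetofend() of the named last member),
 *  - bytes the user did not supply are zero-filled,
 *  - trailing input beyond the kernel struct must be all zero.
 */
static int model_copy_validate(const struct fake_udata *udata, void *dst,
			       size_t dst_size, size_t min_len)
{
	size_t copy = udata->inlen < dst_size ? udata->inlen : dst_size;
	size_t i;

	if (udata->inlen < min_len)
		return -EINVAL;	/* too short for the required fields */
	memset(dst, 0, dst_size);
	memcpy(dst, udata->inbuf, copy);
	for (i = dst_size; i < udata->inlen; i++)
		if (udata->inbuf[i])
			return -EOPNOTSUPP; /* unknown trailing data */
	return 0;
}
```

Under this model, a call like ib_copy_validate_udata_in(udata, ucmd, reserved) corresponds to min_len = offsetofend(typeof(ucmd), reserved); passing the last struct member, as this patch does, requires the full struct.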
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
index b3df6eb9b8eff6..bc3adcc1ae67c2 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
@@ -134,10 +134,9 @@ int pvrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
cq->is_kernel = !udata;
if (!cq->is_kernel) {
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
- ret = -EFAULT;
+ ret = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ if (ret)
goto err_cq;
- }
cq->umem = ib_umem_get(ibdev, ucmd.buf_addr, ucmd.buf_size,
IB_ACCESS_LOCAL_WRITE);
--
2.43.0
* [PATCH v2 06/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for SRQ
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (4 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 05/16] RDMA/pvrdma: Use ib_copy_validate_udata_in() for srq Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 07/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for MW Jason Gunthorpe
` (9 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
flags is the last member of mlx5_ib_create_srq; uidx is a later
extension.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx5/srq.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 17e018554d81d5..6d89c0242cab61 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -48,25 +48,16 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
struct mlx5_ib_create_srq ucmd = {};
struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
udata, struct mlx5_ib_ucontext, ibucontext);
- size_t ucmdlen;
int err;
u32 uidx = MLX5_IB_DEFAULT_UIDX;
- ucmdlen = min(udata->inlen, sizeof(ucmd));
-
- if (ib_copy_from_udata(&ucmd, udata, ucmdlen)) {
- mlx5_ib_dbg(dev, "failed copy udata\n");
- return -EFAULT;
- }
+ err = ib_copy_validate_udata_in(udata, ucmd, flags);
+ if (err)
+ return err;
if (ucmd.reserved0 || ucmd.reserved1)
return -EINVAL;
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd)))
- return -EINVAL;
-
if (in->type != IB_SRQT_BASIC) {
err = get_srq_user_index(ucontext, &ucmd, udata->inlen, &uidx);
if (err)
--
2.43.0
* [PATCH v2 07/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for MW
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (5 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 06/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for SRQ Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 08/16] RDMA/mlx4: Use ib_copy_validate_udata_in() Jason Gunthorpe
` (8 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
The userspace side of MW made a mistake and never actually used the
udata driver structure that was defined, so it always passes a 0 length.
Keep the kernel structure, but this conversion has to permit a 0 length
as well.
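The zero-length-tolerant shape of the converted code can be sketched as follows. This is an illustrative model, not the mlx5 code: field names mirror the MW ucmd shape, and it assumes that because req is zero-initialized, skipping the copy for a 0-length udata leaves every reserved field zero so the subsequent checks pass:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical request struct; names follow the mlx5 MW ucmd shape
 * but this is illustrative only. */
struct fake_req {
	unsigned int comp_mask;
	unsigned int reserved1;
	unsigned int num_klms;
	unsigned int reserved2;
};

/* The pattern from the patch: only copy when the user supplied input;
 * a zero-initialized req makes the 0-length case fall through the
 * reserved-field checks safely. The comp_mask validation that the
 * real ib_copy_validate_udata_in_cm() performs is omitted here. */
static int model_alloc_mw(const unsigned char *inbuf, size_t inlen,
			  struct fake_req *req)
{
	memset(req, 0, sizeof(*req));
	if (inlen) {
		size_t copy = inlen < sizeof(*req) ? inlen : sizeof(*req);

		memcpy(req, inbuf, copy);
	}
	if (req->reserved1 || req->reserved2)
		return -EOPNOTSUPP;
	return 0;
}
```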
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx5/mr.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index cbe34251e340b9..3ef467ac9e3d15 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1774,16 +1774,13 @@ int mlx5_ib_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
__u32 response_length;
} resp = {};
- err = ib_copy_from_udata(&req, udata, min(udata->inlen, sizeof(req)));
- if (err)
- return err;
+ if (udata->inlen) {
+ err = ib_copy_validate_udata_in_cm(udata, req, reserved2, 0);
+ if (err)
+ return err;
+ }
- if (req.comp_mask || req.reserved1 || req.reserved2)
- return -EOPNOTSUPP;
-
- if (udata->inlen > sizeof(req) &&
- !ib_is_udata_cleared(udata, sizeof(req),
- udata->inlen - sizeof(req)))
+ if (req.reserved1 || req.reserved2)
return -EOPNOTSUPP;
ndescs = req.num_klms ? roundup(req.num_klms, 4) : roundup(1, 4);
--
2.43.0
* [PATCH v2 08/16] RDMA/mlx4: Use ib_copy_validate_udata_in()
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (6 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 07/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for MW Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 09/16] RDMA/mlx4: Use ib_copy_validate_udata_in() for QP Jason Gunthorpe
` (7 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Use the last member each struct had at the point
MLX4_IB_UVERBS_ABI_VERSION was set to 4.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx4/cq.c | 10 +++++-----
drivers/infiniband/hw/mlx4/qp.c | 8 ++------
drivers/infiniband/hw/mlx4/srq.c | 5 +++--
3 files changed, 10 insertions(+), 13 deletions(-)
diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 8535fd561691d7..ed4c2e740670be 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -168,10 +168,9 @@ int mlx4_ib_create_user_cq(struct ib_cq *ibcq,
INIT_LIST_HEAD(&cq->send_qp_list);
INIT_LIST_HEAD(&cq->recv_qp_list);
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
- err = -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, db_addr);
+ if (err)
goto err_cq;
- }
buf_addr = (void *)(unsigned long)ucmd.buf_addr;
@@ -332,8 +331,9 @@ static int mlx4_alloc_resize_umem(struct mlx4_ib_dev *dev, struct mlx4_ib_cq *cq
if (cq->resize_umem)
return -EBUSY;
- if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, buf_addr);
+ if (err)
+ return err;
cq->resize_buf = kmalloc_obj(*cq->resize_buf);
if (!cq->resize_buf)
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index b87a4b7949a3a0..deb1b0306aa7a1 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -1053,16 +1053,12 @@ static int create_qp_common(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
if (udata) {
struct mlx4_ib_create_qp ucmd;
- size_t copy_len;
int shift;
int n;
- copy_len = sizeof(struct mlx4_ib_create_qp);
-
- if (ib_copy_from_udata(&ucmd, udata, copy_len)) {
- err = -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, sq_no_prefetch);
+ if (err)
goto err;
- }
qp->inl_recv_sz = ucmd.inl_recv_sz;
diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c
index c4cf91235eee3a..5b23e5f8b84aca 100644
--- a/drivers/infiniband/hw/mlx4/srq.c
+++ b/drivers/infiniband/hw/mlx4/srq.c
@@ -111,8 +111,9 @@ int mlx4_ib_create_srq(struct ib_srq *ib_srq,
if (udata) {
struct mlx4_ib_create_srq ucmd;
- if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd)))
- return -EFAULT;
+ err = ib_copy_validate_udata_in(udata, ucmd, db_addr);
+ if (err)
+ return err;
srq->umem =
ib_umem_get(ib_srq->device, ucmd.buf_addr, buf_size, 0);
--
2.43.0
* [PATCH v2 09/16] RDMA/mlx4: Use ib_copy_validate_udata_in() for QP
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (7 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 08/16] RDMA/mlx4: Use ib_copy_validate_udata_in() Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 10/16] RDMA/hns: Use ib_copy_validate_udata_in() Jason Gunthorpe
` (6 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Move the validation of the udata to the same function that copies it.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx4/qp.c | 25 +++----------------------
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index deb1b0306aa7a1..40ddd723d7b549 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -854,7 +854,6 @@ static int create_rq(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
unsigned long flags;
int range_size;
struct mlx4_ib_create_wq wq;
- size_t copy_len;
int shift;
int n;
@@ -867,12 +866,9 @@ static int create_rq(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
qp->state = IB_QPS_RESET;
- copy_len = min(sizeof(struct mlx4_ib_create_wq), udata->inlen);
-
- if (ib_copy_from_udata(&wq, udata, copy_len)) {
- err = -EFAULT;
+ err = ib_copy_validate_udata_in(udata, wq, comp_mask);
+ if (err)
goto err;
- }
if (wq.comp_mask || wq.reserved[0] || wq.reserved[1] ||
wq.reserved[2]) {
@@ -4112,26 +4108,11 @@ struct ib_wq *mlx4_ib_create_wq(struct ib_pd *pd,
struct mlx4_dev *dev = to_mdev(pd->device)->dev;
struct ib_qp_init_attr ib_qp_init_attr = {};
struct mlx4_ib_qp *qp;
- struct mlx4_ib_create_wq ucmd;
- int err, required_cmd_sz;
+ int err;
if (!udata)
return ERR_PTR(-EINVAL);
- required_cmd_sz = offsetof(typeof(ucmd), comp_mask) +
- sizeof(ucmd.comp_mask);
- if (udata->inlen < required_cmd_sz) {
- pr_debug("invalid inlen\n");
- return ERR_PTR(-EINVAL);
- }
-
- if (udata->inlen > sizeof(ucmd) &&
- !ib_is_udata_cleared(udata, sizeof(ucmd),
- udata->inlen - sizeof(ucmd))) {
- pr_debug("inlen is not supported\n");
- return ERR_PTR(-EOPNOTSUPP);
- }
-
if (udata->outlen)
return ERR_PTR(-EOPNOTSUPP);
--
2.43.0
* [PATCH v2 10/16] RDMA/hns: Use ib_copy_validate_udata_in()
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (8 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 09/16] RDMA/mlx4: Use ib_copy_validate_udata_in() for QP Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 11/16] RDMA: Use ib_copy_validate_udata_in_cm() for zero comp_mask Jason Gunthorpe
` (5 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Use the last struct member from the commit in which each struct was
added to the kernel.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/hns/hns_roce_cq.c | 16 +--------------
drivers/infiniband/hw/hns/hns_roce_main.c | 4 ++--
drivers/infiniband/hw/hns/hns_roce_qp.c | 8 ++------
drivers/infiniband/hw/hns/hns_roce_srq.c | 25 +++--------------------
4 files changed, 8 insertions(+), 45 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 857a913326cd88..621568e114054b 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -350,20 +350,6 @@ static int verify_cq_create_attr(struct hns_roce_dev *hr_dev,
return 0;
}
-static int get_cq_ucmd(struct hns_roce_cq *hr_cq, struct ib_udata *udata,
- struct hns_roce_ib_create_cq *ucmd)
-{
- struct ib_device *ibdev = hr_cq->ib_cq.device;
- int ret;
-
- ret = ib_copy_from_udata(ucmd, udata, min(udata->inlen, sizeof(*ucmd)));
- if (ret) {
- ibdev_err(ibdev, "failed to copy CQ udata, ret = %d.\n", ret);
- return ret;
- }
-
- return 0;
-}
static void set_cq_param(struct hns_roce_cq *hr_cq, u32 cq_entries, int vector,
struct hns_roce_ib_create_cq *ucmd)
@@ -428,7 +414,7 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
goto err_out;
if (udata) {
- ret = get_cq_ucmd(hr_cq, udata, &ucmd);
+ ret = ib_copy_validate_udata_in(udata, ucmd, db_addr);
if (ret)
goto err_out;
}
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 1148d732f94fbf..ec6fb3f1177941 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -36,6 +36,7 @@
#include <rdma/ib_smi.h>
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_cache.h>
+#include <rdma/uverbs_ioctl.h>
#include "hns_roce_common.h"
#include "hns_roce_device.h"
#include "hns_roce_hem.h"
@@ -433,8 +434,7 @@ static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
resp.qp_tab_size = hr_dev->caps.num_qps;
resp.srq_tab_size = hr_dev->caps.num_srqs;
- ret = ib_copy_from_udata(&ucmd, udata,
- min(udata->inlen, sizeof(ucmd)));
+ ret = ib_copy_validate_udata_in(udata, ucmd, reserved);
if (ret)
goto error_out;
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 6a2dff4bd2d0fc..3d6eb22cbcd940 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1130,13 +1130,9 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
}
if (udata) {
- ret = ib_copy_from_udata(ucmd, udata,
- min(udata->inlen, sizeof(*ucmd)));
- if (ret) {
- ibdev_err(ibdev,
- "failed to copy QP ucmd, ret = %d\n", ret);
+ ret = ib_copy_validate_udata_in(udata, *ucmd, reserved);
+ if (ret)
return ret;
- }
uctx = rdma_udata_to_drv_context(udata, struct hns_roce_ucontext,
ibucontext);
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 8a6efb6b9c9eba..b37a76587aa868 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -346,14 +346,9 @@ static int alloc_srq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
int ret;
if (udata) {
- ret = ib_copy_from_udata(&ucmd, udata,
- min(udata->inlen, sizeof(ucmd)));
- if (ret) {
- ibdev_err(&hr_dev->ib_dev,
- "failed to copy SRQ udata, ret = %d.\n",
- ret);
+ ret = ib_copy_validate_udata_in(udata, ucmd, que_addr);
+ if (ret)
return ret;
- }
}
ret = alloc_srq_idx(hr_dev, srq, udata, ucmd.que_addr);
@@ -387,20 +382,6 @@ static void free_srq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
free_srq_idx(hr_dev, srq);
}
-static int get_srq_ucmd(struct hns_roce_srq *srq, struct ib_udata *udata,
- struct hns_roce_ib_create_srq *ucmd)
-{
- struct ib_device *ibdev = srq->ibsrq.device;
- int ret;
-
- ret = ib_copy_from_udata(ucmd, udata, min(udata->inlen, sizeof(*ucmd)));
- if (ret) {
- ibdev_err(ibdev, "failed to copy SRQ udata, ret = %d.\n", ret);
- return ret;
- }
-
- return 0;
-}
static void free_srq_db(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
struct ib_udata *udata)
@@ -430,7 +411,7 @@ static int alloc_srq_db(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
int ret;
if (udata) {
- ret = get_srq_ucmd(srq, udata, &ucmd);
+ ret = ib_copy_validate_udata_in(udata, ucmd, que_addr);
if (ret)
return ret;
--
2.43.0
* [PATCH v2 11/16] RDMA: Use ib_copy_validate_udata_in_cm() for zero comp_mask
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (9 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 10/16] RDMA/hns: Use ib_copy_validate_udata_in() Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 12/16] RDMA/mlx5: Pull comp_mask validation into ib_copy_validate_udata_in_cm() Jason Gunthorpe
` (4 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
All of these cases require a 0 comp_mask. Consolidate them to use
ib_copy_validate_udata_in_cm() and remove the open-coded comp_mask
tests.
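The _cm variant, as described across this series, can be modeled as the usual copy/validate step plus a check of comp_mask against an allowed bitmap. The sketch below is illustrative only (not the kernel implementation), and assumes the input must at least cover comp_mask:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Minimal hypothetical ucmd with a leading comp_mask, as the converted
 * driver structs have. */
struct fake_ucmd {
	unsigned int comp_mask;
	unsigned int reserved;
};

/* Model of ib_copy_validate_udata_in_cm(udata, ucmd, last, allowed):
 * copy and zero-fill as the non-_cm variant does, then reject any
 * comp_mask bits outside 'allowed'. Illustrative only. */
static int model_copy_validate_cm(const unsigned char *inbuf, size_t inlen,
				  struct fake_ucmd *ucmd,
				  unsigned int allowed)
{
	size_t copy = inlen < sizeof(*ucmd) ? inlen : sizeof(*ucmd);

	if (inlen < sizeof(ucmd->comp_mask))
		return -EINVAL;	/* must at least cover comp_mask */
	memset(ucmd, 0, sizeof(*ucmd));
	memcpy(ucmd, inbuf, copy);
	if (ucmd->comp_mask & ~allowed)
		return -EOPNOTSUPP; /* unknown comp_mask bits */
	return 0;
}
```

Passing allowed == 0, as this patch does, reproduces the removed open-coded "comp_mask must be zero" checks in one place.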
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 8 ++++----
drivers/infiniband/hw/mlx4/main.c | 5 +----
drivers/infiniband/hw/mlx4/qp.c | 13 ++++++-------
drivers/infiniband/hw/mlx5/qp.c | 4 ++--
4 files changed, 13 insertions(+), 17 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 8d9357e2d513bb..064d5136ba405d 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -699,11 +699,11 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
if (err)
goto err_out;
- err = ib_copy_validate_udata_in(udata, cmd, driver_qp_type);
+ err = ib_copy_validate_udata_in_cm(udata, cmd, driver_qp_type, 0);
if (err)
goto err_out;
- if (cmd.comp_mask || !is_reserved_cleared(cmd.reserved_98)) {
+ if (!is_reserved_cleared(cmd.reserved_98)) {
ibdev_dbg(&dev->ibdev,
"Incompatible ABI params, unknown fields in udata\n");
err = -EINVAL;
@@ -1140,11 +1140,11 @@ int efa_create_user_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
goto err_out;
}
- err = ib_copy_validate_udata_in(udata, cmd, num_sub_cqs);
+ err = ib_copy_validate_udata_in_cm(udata, cmd, num_sub_cqs, 0);
if (err)
goto err_out;
- if (cmd.comp_mask || !is_reserved_cleared(cmd.reserved_58)) {
+ if (!is_reserved_cleared(cmd.reserved_58)) {
ibdev_dbg(ibdev,
"Incompatible ABI params, unknown fields in udata\n");
err = -EINVAL;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 16e4cffbd7a84d..037f02b5f28fb5 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -446,13 +446,10 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
struct mlx4_clock_params clock_params;
if (uhw->inlen) {
- err = ib_copy_validate_udata_in(uhw, cmd, reserved);
+ err = ib_copy_validate_udata_in_cm(uhw, cmd, reserved, 0);
if (err)
return err;
- if (cmd.comp_mask)
- return -EINVAL;
-
if (cmd.reserved)
return -EINVAL;
}
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 40ddd723d7b549..cfb54ffcaac22c 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -720,7 +720,7 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
if (udata->outlen)
return -EOPNOTSUPP;
- err = ib_copy_validate_udata_in(udata, ucmd, reserved1);
+ err = ib_copy_validate_udata_in_cm(udata, ucmd, reserved1, 0);
if (err) {
pr_debug("copy failed\n");
return err;
@@ -729,7 +729,7 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
if (memchr_inv(ucmd.reserved, 0, sizeof(ucmd.reserved)))
return -EOPNOTSUPP;
- if (ucmd.comp_mask || ucmd.reserved1)
+ if (ucmd.reserved1)
return -EOPNOTSUPP;
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
@@ -866,12 +866,11 @@ static int create_rq(struct ib_pd *pd, struct ib_qp_init_attr *init_attr,
qp->state = IB_QPS_RESET;
- err = ib_copy_validate_udata_in(udata, wq, comp_mask);
+ err = ib_copy_validate_udata_in_cm(udata, wq, comp_mask, 0);
if (err)
goto err;
- if (wq.comp_mask || wq.reserved[0] || wq.reserved[1] ||
- wq.reserved[2]) {
+ if (wq.reserved[0] || wq.reserved[1] || wq.reserved[2]) {
pr_debug("user command isn't supported\n");
err = -EOPNOTSUPP;
goto err;
@@ -4235,11 +4234,11 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
enum ib_wq_state cur_state, new_state;
int err;
- err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ err = ib_copy_validate_udata_in_cm(udata, ucmd, reserved, 0);
if (err)
return err;
- if (ucmd.comp_mask || ucmd.reserved)
+ if (ucmd.reserved)
return -EOPNOTSUPP;
if (wq_attr_mask & IB_WQ_FLAGS)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index d4d5e0d457a0b5..68c6e107747693 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -5611,11 +5611,11 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
void *rqc;
void *in;
- err = ib_copy_validate_udata_in(udata, ucmd, reserved);
+ err = ib_copy_validate_udata_in_cm(udata, ucmd, reserved, 0);
if (err)
return err;
- if (ucmd.comp_mask || ucmd.reserved)
+ if (ucmd.reserved)
return -EOPNOTSUPP;
inlen = MLX5_ST_SZ_BYTES(modify_rq_in);
--
2.43.0
* [PATCH v2 12/16] RDMA/mlx5: Pull comp_mask validation into ib_copy_validate_udata_in_cm()
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (10 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 11/16] RDMA: Use ib_copy_validate_udata_in_cm() for zero comp_mask Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 13/16] RDMA/hns: Add missing comp_mask check in create_qp Jason Gunthorpe
` (3 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Directly check the supported comp_mask bitmap using
ib_copy_validate_udata_in_cm() and remove the open-coded checks.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx5/qp.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 68c6e107747693..3b602ed0a2dafc 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4707,12 +4707,12 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
return -ENOSYS;
if (udata && udata->inlen) {
- err = ib_copy_validate_udata_in(udata, ucmd, ece_options);
+ err = ib_copy_validate_udata_in_cm(udata, ucmd, ece_options,
+ MLX5_IB_MODIFY_QP_OOO_DP);
if (err)
return err;
- if (ucmd.comp_mask & ~MLX5_IB_MODIFY_QP_OOO_DP ||
- memchr_inv(&ucmd.burst_info.reserved, 0,
+ if (memchr_inv(&ucmd.burst_info.reserved, 0,
sizeof(ucmd.burst_info.reserved)))
return -EOPNOTSUPP;
@@ -5381,17 +5381,16 @@ static int prepare_user_rq(struct ib_pd *pd,
struct mlx5_ib_dev *dev = to_mdev(pd->device);
struct mlx5_ib_create_wq ucmd = {};
int err;
- err = ib_copy_validate_udata_in(udata, ucmd,
- single_stride_log_num_of_bytes);
+
+ err = ib_copy_validate_udata_in_cm(udata, ucmd,
+ single_stride_log_num_of_bytes,
+ MLX5_IB_CREATE_WQ_STRIDING_RQ);
if (err) {
mlx5_ib_dbg(dev, "copy failed\n");
return err;
}
- if (ucmd.comp_mask & (~MLX5_IB_CREATE_WQ_STRIDING_RQ)) {
- mlx5_ib_dbg(dev, "invalid comp mask\n");
- return -EOPNOTSUPP;
- } else if (ucmd.comp_mask & MLX5_IB_CREATE_WQ_STRIDING_RQ) {
+ if (ucmd.comp_mask & MLX5_IB_CREATE_WQ_STRIDING_RQ) {
if (!MLX5_CAP_GEN(dev->mdev, striding_rq)) {
mlx5_ib_dbg(dev, "Striding RQ is not supported\n");
return -EOPNOTSUPP;
--
2.43.0
* [PATCH v2 13/16] RDMA/hns: Add missing comp_mask check in create_qp
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (11 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 12/16] RDMA/mlx5: Pull comp_mask validation into ib_copy_validate_udata_in_cm() Jason Gunthorpe
@ 2026-03-25 21:26 ` Jason Gunthorpe
2026-03-25 21:27 ` [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext Jason Gunthorpe
` (2 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:26 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
hns has a comp_mask field that was never checked for validity; check
it.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/hns/hns_roce_qp.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 3d6eb22cbcd940..a27ea85bb06323 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1130,7 +1130,9 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
}
if (udata) {
- ret = ib_copy_validate_udata_in(udata, *ucmd, reserved);
+ ret = ib_copy_validate_udata_in_cm(
+ udata, *ucmd, reserved,
+ HNS_ROCE_CREATE_QP_MASK_CONGEST_TYPE);
if (ret)
return ret;
--
2.43.0
* [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (12 preceding siblings ...)
2026-03-25 21:26 ` [PATCH v2 13/16] RDMA/hns: Add missing comp_mask check in create_qp Jason Gunthorpe
@ 2026-03-25 21:27 ` Jason Gunthorpe
2026-03-25 22:16 ` Jacob Moroni
2026-03-25 21:27 ` [PATCH v2 15/16] RDMA: Remove redundant = {} for udata req structs Jason Gunthorpe
2026-03-25 21:27 ` [PATCH v2 16/16] RDMA/hns: Remove the duplicate calls to ib_copy_validate_udata_in() Jason Gunthorpe
15 siblings, 1 reply; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:27 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
irdma has a comp_mask field that was never checked for validity; check it.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/irdma/verbs.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index b2978632241900..d695130b187bdd 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -296,7 +296,9 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
if (udata->outlen < IRDMA_ALLOC_UCTX_MIN_RESP_LEN)
return -EINVAL;
- ret = ib_copy_validate_udata_in(udata, req, rsvd8);
+ ret = ib_copy_validate_udata_in_cm(udata, req, rsvd8,
+ IRDMA_ALLOC_UCTX_USE_RAW_ATTR |
+ IRDMA_SUPPORT_WQE_FORMAT_V2);
if (ret)
return ret;
--
2.43.0
* [PATCH v2 15/16] RDMA: Remove redundant = {} for udata req structs
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (13 preceding siblings ...)
2026-03-25 21:27 ` [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext Jason Gunthorpe
@ 2026-03-25 21:27 ` Jason Gunthorpe
2026-03-25 21:27 ` [PATCH v2 16/16] RDMA/hns: Remove the duplicate calls to ib_copy_validate_udata_in() Jason Gunthorpe
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:27 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
Now that all of the udata request structs are loaded with the helpers,
the callers no longer need to pre-zero them. The helpers guarantee that
the entire struct is fully initialized.
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 4 ++--
drivers/infiniband/hw/hns/hns_roce_main.c | 2 +-
drivers/infiniband/hw/hns/hns_roce_srq.c | 2 +-
drivers/infiniband/hw/mana/cq.c | 2 +-
drivers/infiniband/hw/mana/qp.c | 2 +-
drivers/infiniband/hw/mana/wq.c | 2 +-
drivers/infiniband/hw/mlx4/qp.c | 4 ++--
drivers/infiniband/hw/mlx5/cq.c | 2 +-
drivers/infiniband/hw/mlx5/main.c | 2 +-
drivers/infiniband/hw/mlx5/qp.c | 4 ++--
drivers/infiniband/hw/mlx5/srq.c | 2 +-
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 4 +++-
drivers/infiniband/hw/qedr/verbs.c | 8 ++++----
13 files changed, 21 insertions(+), 19 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 064d5136ba405d..69a8f373262c6b 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -682,7 +682,7 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
struct efa_com_create_qp_result create_qp_resp;
struct efa_dev *dev = to_edev(ibqp->device);
struct efa_ibv_create_qp_resp resp = {};
- struct efa_ibv_create_qp cmd = {};
+ struct efa_ibv_create_qp cmd;
struct efa_qp *qp = to_eqp(ibqp);
struct efa_ucontext *ucontext;
u16 supported_efa_flags = 0;
@@ -1121,7 +1121,7 @@ int efa_create_user_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct efa_com_create_cq_result result;
struct ib_device *ibdev = ibcq->device;
struct efa_dev *dev = to_edev(ibdev);
- struct efa_ibv_create_cq cmd = {};
+ struct efa_ibv_create_cq cmd;
struct efa_cq *cq = to_ecq(ibcq);
int entries = attr->cqe;
bool set_src_addr;
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index ec6fb3f1177941..0dbe99aab6ad21 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -425,7 +425,7 @@ static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
struct hns_roce_ucontext *context = to_hr_ucontext(uctx);
struct hns_roce_dev *hr_dev = to_hr_dev(uctx->device);
struct hns_roce_ib_alloc_ucontext_resp resp = {};
- struct hns_roce_ib_alloc_ucontext ucmd = {};
+ struct hns_roce_ib_alloc_ucontext ucmd;
int ret = -EAGAIN;
if (!hr_dev->active)
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index b37a76587aa868..601f8cdfce96a3 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -406,7 +406,7 @@ static int alloc_srq_db(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
struct ib_udata *udata,
struct hns_roce_ib_create_srq_resp *resp)
{
- struct hns_roce_ib_create_srq ucmd = {};
+ struct hns_roce_ib_create_srq ucmd;
struct hns_roce_ucontext *uctx;
int ret;
diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
index 3f932ef6e5fff6..f4cbe21763bf11 100644
--- a/drivers/infiniband/hw/mana/cq.c
+++ b/drivers/infiniband/hw/mana/cq.c
@@ -13,7 +13,7 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct mana_ib_create_cq_resp resp = {};
struct mana_ib_ucontext *mana_ucontext;
struct ib_device *ibdev = ibcq->device;
- struct mana_ib_create_cq ucmd = {};
+ struct mana_ib_create_cq ucmd;
struct mana_ib_dev *mdev;
bool is_rnic_cq;
u32 doorbell;
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index 69c8d4f7a1f46b..ddc30d37d715f6 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -97,7 +97,7 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
container_of(pd->device, struct mana_ib_dev, ib_dev);
struct ib_rwq_ind_table *ind_tbl = attr->rwq_ind_tbl;
struct mana_ib_create_qp_rss_resp resp = {};
- struct mana_ib_create_qp_rss ucmd = {};
+ struct mana_ib_create_qp_rss ucmd;
mana_handle_t *mana_ind_table;
struct mana_port_context *mpc;
unsigned int ind_tbl_size;
diff --git a/drivers/infiniband/hw/mana/wq.c b/drivers/infiniband/hw/mana/wq.c
index aceeea7f17b339..5c2134a0b1a196 100644
--- a/drivers/infiniband/hw/mana/wq.c
+++ b/drivers/infiniband/hw/mana/wq.c
@@ -11,7 +11,7 @@ struct ib_wq *mana_ib_create_wq(struct ib_pd *pd,
{
struct mana_ib_dev *mdev =
container_of(pd->device, struct mana_ib_dev, ib_dev);
- struct mana_ib_create_wq ucmd = {};
+ struct mana_ib_create_wq ucmd;
struct mana_ib_wq *wq;
int err;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index cfb54ffcaac22c..790be09d985a1a 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -709,7 +709,7 @@ static int _mlx4_ib_create_qp_rss(struct ib_pd *pd, struct mlx4_ib_qp *qp,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata)
{
- struct mlx4_ib_create_qp_rss ucmd = {};
+ struct mlx4_ib_create_qp_rss ucmd;
int err;
if (!udata) {
@@ -4230,7 +4230,7 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
u32 wq_attr_mask, struct ib_udata *udata)
{
struct mlx4_ib_qp *qp = to_mqp((struct ib_qp *)ibwq);
- struct mlx4_ib_modify_wq ucmd = {};
+ struct mlx4_ib_modify_wq ucmd;
enum ib_wq_state cur_state, new_state;
int err;
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index f5e75e51c6763f..1f94863e755cc7 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -720,7 +720,7 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
int *cqe_size, int *index, int *inlen,
struct uverbs_attr_bundle *attrs)
{
- struct mlx5_ib_create_cq ucmd = {};
+ struct mlx5_ib_create_cq ucmd;
unsigned long page_size;
unsigned int page_offset_quantized;
__be64 *pas;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index ff2c02c85625ce..fe3de414bfcad5 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2178,7 +2178,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
{
struct ib_device *ibdev = uctx->device;
struct mlx5_ib_dev *dev = to_mdev(ibdev);
- struct mlx5_ib_alloc_ucontext_req_v2 req = {};
+ struct mlx5_ib_alloc_ucontext_req_v2 req;
struct mlx5_ib_alloc_ucontext_resp resp = {};
struct mlx5_ib_ucontext *context = to_mucontext(uctx);
struct mlx5_bfreg_info *bfregi;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 3b602ed0a2dafc..8f50e7342a7694 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -4692,7 +4692,7 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
struct mlx5_ib_dev *dev = to_mdev(ibqp->device);
struct mlx5_ib_modify_qp_resp resp = {};
struct mlx5_ib_qp *qp = to_mqp(ibqp);
- struct mlx5_ib_modify_qp ucmd = {};
+ struct mlx5_ib_modify_qp ucmd;
enum ib_qp_type qp_type;
enum ib_qp_state cur_state, new_state;
int err = -EINVAL;
@@ -5379,7 +5379,7 @@ static int prepare_user_rq(struct ib_pd *pd,
struct mlx5_ib_rwq *rwq)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
- struct mlx5_ib_create_wq ucmd = {};
+ struct mlx5_ib_create_wq ucmd;
int err;
err = ib_copy_validate_udata_in_cm(udata, ucmd,
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 6d89c0242cab61..852f6f502d14d0 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -45,7 +45,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
struct ib_udata *udata, int buf_size)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
- struct mlx5_ib_create_srq ucmd = {};
+ struct mlx5_ib_create_srq ucmd;
struct mlx5_ib_ucontext *ucontext = rdma_udata_to_drv_context(
udata, struct mlx5_ib_ucontext, ibucontext);
int err;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 8b285fcc638701..eed149f7a942b8 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -1311,12 +1311,14 @@ int ocrdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
if (status)
goto gen_err;
- memset(&ureq, 0, sizeof(ureq));
if (udata) {
status = ib_copy_validate_udata_in(udata, ureq, rsvd1);
if (status)
return status;
+ } else {
+ memset(&ureq, 0, sizeof(ureq));
}
+
ocrdma_set_qp_init_params(qp, pd, attrs);
if (udata == NULL)
qp->cap_flags |= (OCRDMA_QP_MW_BIND | OCRDMA_QP_LKEY0 |
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 42d20b35ff3fe0..679aa6f3a63bc5 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -264,7 +264,7 @@ int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
int rc;
struct qedr_ucontext *ctx = get_qedr_ucontext(uctx);
struct qedr_alloc_ucontext_resp uresp = {};
- struct qedr_alloc_ucontext_req ureq = {};
+ struct qedr_alloc_ucontext_req ureq;
struct qedr_dev *dev = get_qedr_dev(ibdev);
struct qed_rdma_add_user_out_params oparams;
struct qedr_user_mmap_entry *entry;
@@ -913,7 +913,7 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
};
struct qedr_dev *dev = get_qedr_dev(ibdev);
struct qed_rdma_create_cq_in_params params;
- struct qedr_create_cq_ureq ureq = {};
+ struct qedr_create_cq_ureq ureq;
int vector = attr->comp_vector;
int entries = attr->cqe;
struct qedr_cq *cq = get_qedr_cq(ibcq);
@@ -1541,7 +1541,7 @@ int qedr_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init_attr,
struct qedr_dev *dev = get_qedr_dev(ibsrq->device);
struct qed_rdma_create_srq_out_params out_params;
struct qedr_pd *pd = get_qedr_pd(ibsrq->pd);
- struct qedr_create_srq_ureq ureq = {};
+ struct qedr_create_srq_ureq ureq;
u64 pbl_base_addr, phy_prod_pair_addr;
struct qedr_srq_hwq_info *hw_srq;
u32 page_cnt, page_size;
@@ -1837,7 +1837,7 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
struct qed_rdma_create_qp_in_params in_params;
struct qed_rdma_create_qp_out_params out_params;
struct qedr_create_qp_uresp uresp = {};
- struct qedr_create_qp_ureq ureq = {};
+ struct qedr_create_qp_ureq ureq;
int alloc_and_init = rdma_protocol_roce(&dev->ibdev, 1);
struct qedr_ucontext *ctx = NULL;
struct qedr_pd *pd = NULL;
--
2.43.0
* [PATCH v2 16/16] RDMA/hns: Remove the duplicate calls to ib_copy_validate_udata_in()
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
` (14 preceding siblings ...)
2026-03-25 21:27 ` [PATCH v2 15/16] RDMA: Remove redundant = {} for udata req structs Jason Gunthorpe
@ 2026-03-25 21:27 ` Jason Gunthorpe
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-03-25 21:27 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun
Cc: Long Li, patches
A udata should be read only once per ioctl, not multiple times.
Multiple reads make the effective content ambiguous, since userspace
can change the buffer between the reads.
Lift the ib_copy_validate_udata_in() out of
alloc_srq_buf()/alloc_srq_db() and into hns_roce_create_srq().
Found by AI.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/hns/hns_roce_srq.c | 35 +++++++++++-------------
1 file changed, 16 insertions(+), 19 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 601f8cdfce96a3..cb848e8e6bbd76 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -340,22 +340,16 @@ static int set_srq_param(struct hns_roce_srq *srq,
}
static int alloc_srq_buf(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
- struct ib_udata *udata)
+ struct ib_udata *udata,
+ struct hns_roce_ib_create_srq *ucmd)
{
- struct hns_roce_ib_create_srq ucmd = {};
int ret;
- if (udata) {
- ret = ib_copy_validate_udata_in(udata, ucmd, que_addr);
- if (ret)
- return ret;
- }
-
- ret = alloc_srq_idx(hr_dev, srq, udata, ucmd.que_addr);
+ ret = alloc_srq_idx(hr_dev, srq, udata, ucmd->que_addr);
if (ret)
return ret;
- ret = alloc_srq_wqe_buf(hr_dev, srq, udata, ucmd.buf_addr);
+ ret = alloc_srq_wqe_buf(hr_dev, srq, udata, ucmd->buf_addr);
if (ret)
goto err_idx;
@@ -404,22 +398,18 @@ static void free_srq_db(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
static int alloc_srq_db(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq,
struct ib_udata *udata,
+ struct hns_roce_ib_create_srq *ucmd,
struct hns_roce_ib_create_srq_resp *resp)
{
- struct hns_roce_ib_create_srq ucmd;
struct hns_roce_ucontext *uctx;
int ret;
if (udata) {
- ret = ib_copy_validate_udata_in(udata, ucmd, que_addr);
- if (ret)
- return ret;
-
if ((hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ_RECORD_DB) &&
- (ucmd.req_cap_flags & HNS_ROCE_SRQ_CAP_RECORD_DB)) {
+ (ucmd->req_cap_flags & HNS_ROCE_SRQ_CAP_RECORD_DB)) {
uctx = rdma_udata_to_drv_context(udata,
struct hns_roce_ucontext, ibucontext);
- ret = hns_roce_db_map_user(uctx, ucmd.db_addr,
+ ret = hns_roce_db_map_user(uctx, ucmd->db_addr,
&srq->rdb);
if (ret)
return ret;
@@ -448,6 +438,7 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
struct hns_roce_dev *hr_dev = to_hr_dev(ib_srq->device);
struct hns_roce_ib_create_srq_resp resp = {};
struct hns_roce_srq *srq = to_hr_srq(ib_srq);
+ struct hns_roce_ib_create_srq ucmd = {};
int ret;
mutex_init(&srq->mutex);
@@ -457,11 +448,17 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
if (ret)
goto err_out;
- ret = alloc_srq_buf(hr_dev, srq, udata);
+ if (udata) {
+ ret = ib_copy_validate_udata_in(udata, ucmd, que_addr);
+ if (ret)
+ goto err_out;
+ }
+
+ ret = alloc_srq_buf(hr_dev, srq, udata, &ucmd);
if (ret)
goto err_out;
- ret = alloc_srq_db(hr_dev, srq, udata, &resp);
+ ret = alloc_srq_db(hr_dev, srq, udata, &ucmd, &resp);
if (ret)
goto err_srq_buf;
--
2.43.0
* Re: [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext
2026-03-25 21:27 ` [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext Jason Gunthorpe
@ 2026-03-25 22:16 ` Jacob Moroni
0 siblings, 0 replies; 19+ messages in thread
From: Jacob Moroni @ 2026-03-25 22:16 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler, Bryan Tan,
Cheng Xu, Gal Pressman, Junxian Huang, Kai Shen,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Michal Kalderon, Michael Margolin,
Nelson Escobar, Satish Kharat, Selvin Xavier, Yossi Leybovich,
Chengchang Tang, Tatyana Nikolova, Vishnu Dasa, Yishai Hadas,
Zhu Yanjun, Long Li, patches
Reviewed-by: Jacob Moroni <jmoroni@google.com>
On Wed, Mar 25, 2026 at 5:27 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> irdma has a comp_mask field that was never checked for validity, check
> it.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/infiniband/hw/irdma/verbs.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
> index b2978632241900..d695130b187bdd 100644
> --- a/drivers/infiniband/hw/irdma/verbs.c
> +++ b/drivers/infiniband/hw/irdma/verbs.c
> @@ -296,7 +296,9 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
> if (udata->outlen < IRDMA_ALLOC_UCTX_MIN_RESP_LEN)
> return -EINVAL;
>
> - ret = ib_copy_validate_udata_in(udata, req, rsvd8);
> + ret = ib_copy_validate_udata_in_cm(udata, req, rsvd8,
> + IRDMA_ALLOC_UCTX_USE_RAW_ATTR |
> + IRDMA_SUPPORT_WQE_FORMAT_V2);
> if (ret)
> return ret;
>
> --
> 2.43.0
>
>
* Re: [PATCH v2 03/16] RDMA: Consolidate patterns with sizeof() to ib_copy_validate_udata_in()
2026-03-25 21:26 ` [PATCH v2 03/16] RDMA: Consolidate patterns with sizeof() " Jason Gunthorpe
@ 2026-03-29 11:59 ` Bernard Metzler
0 siblings, 0 replies; 19+ messages in thread
From: Bernard Metzler @ 2026-03-29 11:59 UTC (permalink / raw)
To: Jason Gunthorpe, Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bryan Tan, Cheng Xu,
Gal Pressman, Junxian Huang, Kai Shen, Konstantin Taranov,
Krzysztof Czurylo, Leon Romanovsky, linux-hyperv, linux-rdma,
Michal Kalderon, Michael Margolin, Nelson Escobar, Satish Kharat,
Selvin Xavier, Yossi Leybovich, Chengchang Tang, Tatyana Nikolova,
Vishnu Dasa, Yishai Hadas, Zhu Yanjun
Cc: Long Li, patches
On 25.03.2026 22:26, Jason Gunthorpe wrote:
> Similar to the prior patch, these patterns are open coding an
> offsetofend() using sizeof(), which targets the last member of the
> current struct.
>
> Reviewed-by: Long Li <longli@microsoft.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/infiniband/hw/mana/qp.c | 27 +++++++++------------------
> drivers/infiniband/hw/mana/wq.c | 10 ++--------
> drivers/infiniband/hw/mlx4/main.c | 6 ++----
> drivers/infiniband/hw/mlx5/cq.c | 2 +-
> drivers/infiniband/sw/rxe/rxe_verbs.c | 13 ++-----------
> drivers/infiniband/sw/siw/siw_verbs.c | 6 +-----
> 6 files changed, 17 insertions(+), 47 deletions(-)
>
> diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
> index 82f84f7ad37a90..69c8d4f7a1f46b 100644
> --- a/drivers/infiniband/hw/mana/qp.c
> +++ b/drivers/infiniband/hw/mana/qp.c
> @@ -111,16 +111,12 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
> u32 port;
> int ret;
>
> - if (!udata || udata->inlen < sizeof(ucmd))
> + if (!udata)
> return -EINVAL;
>
> - ret = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
> - if (ret) {
> - ibdev_dbg(&mdev->ib_dev,
> - "Failed copy from udata for create rss-qp, err %d\n",
> - ret);
> + ret = ib_copy_validate_udata_in(udata, ucmd, port);
> + if (ret)
> return ret;
> - }
>
> if (attr->cap.max_recv_wr > mdev->adapter_caps.max_qp_wr) {
> ibdev_dbg(&mdev->ib_dev,
> @@ -282,15 +278,12 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
> u32 port;
> int err;
>
> - if (!mana_ucontext || udata->inlen < sizeof(ucmd))
> + if (!mana_ucontext)
> return -EINVAL;
>
> - err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
> - if (err) {
> - ibdev_dbg(&mdev->ib_dev,
> - "Failed to copy from udata create qp-raw, %d\n", err);
> + err = ib_copy_validate_udata_in(udata, ucmd, port);
> + if (err)
> return err;
> - }
>
> if (attr->cap.max_send_wr > mdev->adapter_caps.max_qp_wr) {
> ibdev_dbg(&mdev->ib_dev,
> @@ -535,17 +528,15 @@ static int mana_ib_create_rc_qp(struct ib_qp *ibqp, struct ib_pd *ibpd,
> u64 flags = 0;
> u32 doorbell;
>
> - if (!udata || udata->inlen < sizeof(ucmd))
> + if (!udata)
> return -EINVAL;
>
> mana_ucontext = rdma_udata_to_drv_context(udata, struct mana_ib_ucontext, ibucontext);
> doorbell = mana_ucontext->doorbell;
> flags = MANA_RC_FLAG_NO_FMR;
> - err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
> - if (err) {
> - ibdev_dbg(&mdev->ib_dev, "Failed to copy from udata, %d\n", err);
> + err = ib_copy_validate_udata_in(udata, ucmd, queue_size);
> + if (err)
> return err;
> - }
>
> for (i = 0, j = 0; i < MANA_RC_QUEUE_TYPE_MAX; ++i) {
> /* skip FMR for user-level RC QPs */
> diff --git a/drivers/infiniband/hw/mana/wq.c b/drivers/infiniband/hw/mana/wq.c
> index 6206244f762e42..aceeea7f17b339 100644
> --- a/drivers/infiniband/hw/mana/wq.c
> +++ b/drivers/infiniband/hw/mana/wq.c
> @@ -15,15 +15,9 @@ struct ib_wq *mana_ib_create_wq(struct ib_pd *pd,
> struct mana_ib_wq *wq;
> int err;
>
> - if (udata->inlen < sizeof(ucmd))
> - return ERR_PTR(-EINVAL);
> -
> - err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
> - if (err) {
> - ibdev_dbg(&mdev->ib_dev,
> - "Failed to copy from udata for create wq, %d\n", err);
> + err = ib_copy_validate_udata_in(udata, ucmd, reserved);
> + if (err)
> return ERR_PTR(err);
> - }
>
> wq = kzalloc_obj(*wq);
> if (!wq)
> diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
> index 73e17b4339eb60..16e4cffbd7a84d 100644
> --- a/drivers/infiniband/hw/mlx4/main.c
> +++ b/drivers/infiniband/hw/mlx4/main.c
> @@ -50,6 +50,7 @@
> #include <rdma/ib_user_verbs.h>
> #include <rdma/ib_addr.h>
> #include <rdma/ib_cache.h>
> +#include <rdma/uverbs_ioctl.h>
>
> #include <net/bonding.h>
>
> @@ -445,10 +446,7 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
> struct mlx4_clock_params clock_params;
>
> if (uhw->inlen) {
> - if (uhw->inlen < sizeof(cmd))
> - return -EINVAL;
> -
> - err = ib_copy_from_udata(&cmd, uhw, sizeof(cmd));
> + err = ib_copy_validate_udata_in(uhw, cmd, reserved);
> if (err)
> return err;
>
> diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
> index 643b3b7d387834..f5e75e51c6763f 100644
> --- a/drivers/infiniband/hw/mlx5/cq.c
> +++ b/drivers/infiniband/hw/mlx5/cq.c
> @@ -1229,7 +1229,7 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
> struct ib_umem *umem;
> int err;
>
> - err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
> + err = ib_copy_validate_udata_in(udata, ucmd, reserved1);
> if (err)
> return err;
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
> index fe41362c51444c..c9fd40bfa09eb2 100644
> --- a/drivers/infiniband/sw/rxe/rxe_verbs.c
> +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
> @@ -452,18 +452,9 @@ static int rxe_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
> int err;
>
> if (udata) {
> - if (udata->inlen < sizeof(cmd)) {
> - err = -EINVAL;
> - rxe_dbg_srq(srq, "malformed udata\n");
> + err = ib_copy_validate_udata_in(udata, cmd, mmap_info_addr);
> + if (err)
> goto err_out;
> - }
> -
> - err = ib_copy_from_udata(&cmd, udata, sizeof(cmd));
> - if (err) {
> - err = -EFAULT;
> - rxe_dbg_srq(srq, "unable to read udata\n");
> - goto err_out;
> - }
> }
>
> err = rxe_srq_chk_attr(rxe, srq, attr, mask);
> diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
> index ef504db8f2b48b..1e1d262a4ae2db 100644
> --- a/drivers/infiniband/sw/siw/siw_verbs.c
> +++ b/drivers/infiniband/sw/siw/siw_verbs.c
> @@ -1373,11 +1373,7 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
> struct siw_uresp_reg_mr uresp = {};
> struct siw_mem *mem = mr->mem;
>
> - if (udata->inlen < sizeof(ureq)) {
> - rv = -EINVAL;
> - goto err_out;
> - }
> - rv = ib_copy_from_udata(&ureq, udata, sizeof(ureq));
> + rv = ib_copy_validate_udata_in(udata, ureq, pad);
> if (rv)
> goto err_out;
>
Looks good for siw driver. Thank you.
Reviewed-by: Bernard Metzler <bernard.metzler@linux.dev>
end of thread, other threads: [~2026-03-29 11:59 UTC | newest]
Thread overview: 19+ messages -- links below jump to the message on this page --
2026-03-25 21:26 [PATCH v2 00/16] Update drivers to use ib_copy_validate_udata_in() Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 01/16] RDMA: Consolidate patterns with offsetofend() to ib_copy_validate_udata_in() Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 02/16] RDMA: Consolidate patterns with offsetof() " Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 03/16] RDMA: Consolidate patterns with sizeof() " Jason Gunthorpe
2026-03-29 11:59 ` Bernard Metzler
2026-03-25 21:26 ` [PATCH v2 04/16] RDMA: Use ib_copy_validate_udata_in() for implicit full structs Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 05/16] RDMA/pvrdma: Use ib_copy_validate_udata_in() for srq Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 06/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for SRQ Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 07/16] RDMA/mlx5: Use ib_copy_validate_udata_in() for MW Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 08/16] RDMA/mlx4: Use ib_copy_validate_udata_in() Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 09/16] RDMA/mlx4: Use ib_copy_validate_udata_in() for QP Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 10/16] RDMA/hns: Use ib_copy_validate_udata_in() Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 11/16] RDMA: Use ib_copy_validate_udata_in_cm() for zero comp_mask Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 12/16] RDMA/mlx5: Pull comp_mask validation into ib_copy_validate_udata_in_cm() Jason Gunthorpe
2026-03-25 21:26 ` [PATCH v2 13/16] RDMA/hns: Add missing comp_mask check in create_qp Jason Gunthorpe
2026-03-25 21:27 ` [PATCH v2 14/16] RDMA/irdma: Add missing comp_mask check in alloc_ucontext Jason Gunthorpe
2026-03-25 22:16 ` Jacob Moroni
2026-03-25 21:27 ` [PATCH v2 15/16] RDMA: Remove redundant = {} for udata req structs Jason Gunthorpe
2026-03-25 21:27 ` [PATCH v2 16/16] RDMA/hns: Remove the duplicate calls to ib_copy_validate_udata_in() Jason Gunthorpe