* [PATCH v2 01/16] RDMA/mana: Fix error unwind in mana_ib_create_qp_rss()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 02/16] RDMA/ocrdma: Clarify the mm_head searching Jason Gunthorpe
` (14 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Sashiko points out that mana_ib_cfg_vport_steering() is leaked on the error
path; the normal destroy path cleans it up, but the error unwind does not.
Cc: stable@vger.kernel.org
Fixes: 0266a177631d ("RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter")
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=4
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mana/qp.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index 645581359cee0b..f503445a38f2d8 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -215,13 +215,15 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
ibdev_dbg(&mdev->ib_dev,
"Failed to copy to udata create rss-qp, %d\n",
ret);
- goto fail;
+ goto err_disable_vport_rx;
}
kfree(mana_ind_table);
return 0;
+err_disable_vport_rx:
+ mana_disable_vport_rx(mpc);
fail:
while (i-- > 0) {
ibwq = ind_tbl->ind_tbl[i];
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread

* [PATCH v2 02/16] RDMA/ocrdma: Clarify the mm_head searching
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 01/16] RDMA/mana: Fix error unwind in mana_ib_create_qp_rss() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 03/16] RDMA/ocrdma: Don't NULL deref uctx on errors in ocrdma_copy_pd_uresp() Jason Gunthorpe
` (13 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
The intention of this code is to find exactly matching entries. The driver
never creates phys_addrs with different lengths, so the current expression is
not a bug, but it is confusing and trips up review tooling.
Search for an exact match instead.
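The one-operator change can be sketched in plain userspace C; the struct and helper names below are illustrative stand-ins for the driver's mmap key, not the real ocrdma types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's mmap key; not the real ocrdma type. */
struct mm_key {
	uint64_t phy_addr;
	uint64_t len;
};

/* Old predicate: an entry is skipped only when BOTH fields differ, so an
 * entry agreeing in just one field is treated as a match. */
static bool matches_old(struct mm_key key, uint64_t phy_addr, uint64_t len)
{
	return !(len != key.len && phy_addr != key.phy_addr);
}

/* New predicate: an entry is skipped unless BOTH fields match exactly. */
static bool matches_new(struct mm_key key, uint64_t phy_addr, uint64_t len)
{
	return !(len != key.len || phy_addr != key.phy_addr);
}
```

With a same-address but different-length query the old predicate still reports a match; that asymmetry is exactly what the review tooling flags, even though the driver never generates such entries.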
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=4
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index c17e2a54dbcaf9..463c9a5703fc4e 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -215,7 +215,7 @@ static void ocrdma_del_mmap(struct ocrdma_ucontext *uctx, u64 phy_addr,
mutex_lock(&uctx->mm_list_lock);
list_for_each_entry_safe(mm, tmp, &uctx->mm_head, entry) {
- if (len != mm->key.len && phy_addr != mm->key.phy_addr)
+ if (len != mm->key.len || phy_addr != mm->key.phy_addr)
continue;
list_del(&mm->entry);
@@ -233,7 +233,7 @@ static bool ocrdma_search_mmap(struct ocrdma_ucontext *uctx, u64 phy_addr,
mutex_lock(&uctx->mm_list_lock);
list_for_each_entry(mm, &uctx->mm_head, entry) {
- if (len != mm->key.len && phy_addr != mm->key.phy_addr)
+ if (len != mm->key.len || phy_addr != mm->key.phy_addr)
continue;
found = true;
--
2.43.0
* [PATCH v2 03/16] RDMA/ocrdma: Don't NULL deref uctx on errors in ocrdma_copy_pd_uresp()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 01/16] RDMA/mana: Fix error unwind in mana_ib_create_qp_rss() Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 02/16] RDMA/ocrdma: Clarify the mm_head searching Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 04/16] RDMA/vmw_pvrdma: Fix double free on pvrdma_alloc_ucontext() error path Jason Gunthorpe
` (12 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Sashiko points out that pd->uctx isn't initialized until late in the
function, so all of these error-flow references are NULL and will crash. Use
the local uctx, which is never NULL.
Cc: stable@vger.kernel.org
Fixes: fe2caefcdf58 ("RDMA/ocrdma: Add driver for Emulex OneConnect IBoE RDMA adapter")
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=4
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 463c9a5703fc4e..a88cc5d84af828 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -620,9 +620,9 @@ static int ocrdma_copy_pd_uresp(struct ocrdma_dev *dev, struct ocrdma_pd *pd,
ucopy_err:
if (pd->dpp_enabled)
- ocrdma_del_mmap(pd->uctx, dpp_page_addr, PAGE_SIZE);
+ ocrdma_del_mmap(uctx, dpp_page_addr, PAGE_SIZE);
dpp_map_err:
- ocrdma_del_mmap(pd->uctx, db_page_addr, db_page_size);
+ ocrdma_del_mmap(uctx, db_page_addr, db_page_size);
return status;
}
--
2.43.0
* [PATCH v2 04/16] RDMA/vmw_pvrdma: Fix double free on pvrdma_alloc_ucontext() error path
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (2 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 03/16] RDMA/ocrdma: Don't NULL deref uctx on errors in ocrdma_copy_pd_uresp() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 05/16] RDMA/mlx4: Fix resource leak on error in mlx4_ib_create_srq() Jason Gunthorpe
` (11 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Sashiko points out that pvrdma_uar_free() is already called within
pvrdma_dealloc_ucontext(), so calling it beforehand triggers a double free.
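The ownership problem can be modeled in a few lines of userspace C; the counter and function names are hypothetical stand-ins for pvrdma_uar_free() and pvrdma_dealloc_ucontext():

```c
#include <assert.h>

/* Toy ownership model: dealloc_ucontext() owns the UAR and frees it. */
static int uar_free_count;

static void uar_free(void)
{
	uar_free_count++;	/* a second increment models a double free */
}

static void dealloc_ucontext(void)
{
	uar_free();		/* the dealloc path already frees the UAR */
}

static int error_path_old(void)
{
	uar_free_count = 0;
	uar_free();		/* the call the patch removes */
	dealloc_ucontext();
	return uar_free_count;	/* freed twice */
}

static int error_path_new(void)
{
	uar_free_count = 0;
	dealloc_ucontext();	/* the single owner frees exactly once */
	return uar_free_count;
}
```

The buggy error path frees the UAR twice; after the patch only dealloc_ucontext() touches it.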
Cc: stable@vger.kernel.org
Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver")
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=4
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index bcd43dc30e21c6..c7c2b41060e526 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -322,7 +322,7 @@ int pvrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
uresp.qp_tab_size = vdev->dsr->caps.max_qp;
ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
if (ret) {
- pvrdma_uar_free(vdev, &context->uar);
+ /* pvrdma_dealloc_ucontext() also frees the UAR */
pvrdma_dealloc_ucontext(&context->ibucontext);
return -EFAULT;
}
--
2.43.0
* [PATCH v2 05/16] RDMA/mlx4: Fix resource leak on error in mlx4_ib_create_srq()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (3 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 04/16] RDMA/vmw_pvrdma: Fix double free on pvrdma_alloc_ucontext() error path Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq() Jason Gunthorpe
` (10 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Sashiko points out that mlx4_srq_alloc() was not undone during error
unwind; add the missing call to mlx4_srq_free().
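The fix follows the usual kernel idiom of ordering cleanup labels inversely to allocation, so a later failure jumps to a later label and unwinds everything acquired so far. A userspace sketch of the idiom, with malloc/free standing in for the srq and wrid resources (names and error values are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for -ENOMEM / -EFAULT so the sketch needs no kernel headers. */
#define ERR_NOMEM -12
#define ERR_FAULT -14

static int create_sketch(int copy_fails)
{
	char *wrid, *srq;
	int err;

	wrid = malloc(16);
	if (!wrid)
		return ERR_NOMEM;

	srq = malloc(16);
	if (!srq) {
		err = ERR_NOMEM;
		goto err_wrid;		/* only wrid exists at this point */
	}

	if (copy_fails) {		/* models the failed ib_copy_to_udata() */
		err = ERR_FAULT;
		goto err_srq;		/* both allocations must be unwound */
	}

	/* success: in the real driver, ownership passes to the created object */
	free(srq);
	free(wrid);
	return 0;

err_srq:
	free(srq);			/* the cleanup the patch adds */
err_wrid:
	free(wrid);
	return err;
}
```

Jumping to the old err_wrid label from after the second allocation is exactly the leak: the later resource is never released.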
Cc: stable@vger.kernel.org
Fixes: 225c7b1feef1 ("IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters")
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=8
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx4/srq.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c
index 5b23e5f8b84aca..767840736d583b 100644
--- a/drivers/infiniband/hw/mlx4/srq.c
+++ b/drivers/infiniband/hw/mlx4/srq.c
@@ -194,13 +194,15 @@ int mlx4_ib_create_srq(struct ib_srq *ib_srq,
if (udata)
if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof (__u32))) {
err = -EFAULT;
- goto err_wrid;
+ goto err_srq;
}
init_attr->attr.max_wr = srq->msrq.max - 1;
return 0;
+err_srq:
+ mlx4_srq_free(dev->dev, &srq->msrq);
err_wrid:
if (udata)
mlx4_ib_db_unmap_user(ucontext, &srq->db);
--
2.43.0
* [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (4 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 05/16] RDMA/mlx4: Fix resource leak on error in mlx4_ib_create_srq() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-07 13:39 ` Junxian Huang
2026-04-06 17:40 ` [PATCH v2 07/16] RDMA: Use ib_is_udata_in_empty() for places calling ib_is_udata_cleared() Jason Gunthorpe
` (9 subsequent siblings)
15 siblings, 1 reply; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Sashiko points out that once the srq memory is stored into the xarray by
alloc_srqc() it can immediately be looked up by:
xa_lock(&srq_table->xa);
srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
if (srq)
refcount_inc(&srq->refcount);
xa_unlock(&srq_table->xa);
This will trip the refcount debugging, because the refcount is still 0, and
then crash in:
srq->event(srq, event_type);
because srq->event is still NULL.
Use refcount_inc_not_zero() instead to ensure a partially prepared srq is
never retrieved by the event handler, and fix the initialization ordering so
the refcount becomes 1 only after the srq is fully ready.
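A userspace analogue of refcount_inc_not_zero() shows why the reordering matters: a count of 0 marks an object that is not yet fully initialized (or already dying), so a concurrent lookup must treat it as absent rather than take a reference. This sketch uses C11 atomics and is not the kernel implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace sketch of the refcount_inc_not_zero() semantics. */
static bool ref_get_not_zero(atomic_int *ref)
{
	int old = atomic_load(ref);

	do {
		if (old == 0)
			return false;	/* object not live: behave as if absent */
	} while (!atomic_compare_exchange_weak(ref, &old, old + 1));

	return true;
}
```

With refcount_set(&srq->refcount, 1) moved to be the last initialization step, a lookup of this form can never observe a half-built srq: it either fails the zero check or sees a fully constructed object.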
Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=3
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/hns/hns_roce_srq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index cb848e8e6bbd76..d6201ddde0292a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -16,8 +16,8 @@ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type)
xa_lock(&srq_table->xa);
srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
- if (srq)
- refcount_inc(&srq->refcount);
+ if (srq && !refcount_inc_not_zero(&srq->refcount))
+ srq = NULL;
xa_unlock(&srq_table->xa);
if (!srq) {
@@ -481,8 +481,8 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
}
srq->event = hns_roce_ib_srq_event;
- refcount_set(&srq->refcount, 1);
init_completion(&srq->free);
+ refcount_set(&srq->refcount, 1);
return 0;
--
2.43.0
* Re: [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq()
2026-04-06 17:40 ` [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq() Jason Gunthorpe
@ 2026-04-07 13:39 ` Junxian Huang
2026-04-07 14:03 ` Jason Gunthorpe
0 siblings, 1 reply; 19+ messages in thread
From: Junxian Huang @ 2026-04-07 13:39 UTC (permalink / raw)
To: Jason Gunthorpe, Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Kai Shen, Kalesh AP, Konstantin Taranov,
Krzysztof Czurylo, Leon Romanovsky, linux-hyperv, linux-rdma,
Long Li, Michal Kalderon, Michael Margolin, Nelson Escobar,
Satish Kharat, Selvin Xavier, Yossi Leybovich, Chengchang Tang,
Tatyana Nikolova, Vishnu Dasa, Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
On 2026/4/7 1:40, Jason Gunthorpe wrote:
> Sashiko points out that once the srq memory is stored into the xarray by
> alloc_srqc() it can immediately be looked up by:
>
> xa_lock(&srq_table->xa);
> srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
> if (srq)
> refcount_inc(&srq->refcount);
> xa_unlock(&srq_table->xa);
>
> Which will fail refcount debug because the refcount is 0 and then crash:
>
> srq->event(srq, event_type);
>
> Because event is NULL.
I don't think this will actually happen because HW won't report an SRQ
event before the SRQ is fully ready and actually used.
From the perspective of coding, I'm fine with this change, but since
there is similar logic for QP event, could you also apply this change
to QP?
Junxian
>
> Use refcount_inc_not_zero() instead to ensure a partially prepared srq is
> never retrieved from the event handler and fix the ordering of the
> initialization so refcount becomes 1 only after it is fully ready.
>
> Link: https://sashiko.dev/#/patchset/0-v1-e911b76a94d1%2B65d95-rdma_udata_rep_jgg%40nvidia.com?part=3
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/infiniband/hw/hns/hns_roce_srq.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
> index cb848e8e6bbd76..d6201ddde0292a 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_srq.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
> @@ -16,8 +16,8 @@ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type)
>
> xa_lock(&srq_table->xa);
> srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
> - if (srq)
> - refcount_inc(&srq->refcount);
> + if (srq && !refcount_inc_not_zero(&srq->refcount))
> + srq = NULL;
> xa_unlock(&srq_table->xa);
>
> if (!srq) {
> @@ -481,8 +481,8 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
> }
>
> srq->event = hns_roce_ib_srq_event;
> - refcount_set(&srq->refcount, 1);
> init_completion(&srq->free);
> + refcount_set(&srq->refcount, 1);
>
> return 0;
>
* Re: [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq()
2026-04-07 13:39 ` Junxian Huang
@ 2026-04-07 14:03 ` Jason Gunthorpe
0 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-07 14:03 UTC (permalink / raw)
To: Junxian Huang
Cc: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Kai Shen, Kalesh AP, Konstantin Taranov,
Krzysztof Czurylo, Leon Romanovsky, linux-hyperv, linux-rdma,
Long Li, Michal Kalderon, Michael Margolin, Nelson Escobar,
Satish Kharat, Selvin Xavier, Yossi Leybovich, Chengchang Tang,
Tatyana Nikolova, Vishnu Dasa, Yishai Hadas, Adit Ranadive,
Aditya Sarwade, Bryan Tan, Dexuan Cui, Doug Ledford, George Zhang,
Jorgen Hansen, Leon Romanovsky, Parav Pandit, patches,
Roland Dreier, Roland Dreier, Ajay Sharma, stable
On Tue, Apr 07, 2026 at 09:39:52PM +0800, Junxian Huang wrote:
>
>
> On 2026/4/7 1:40, Jason Gunthorpe wrote:
> > Sashiko points out that once the srq memory is stored into the xarray by
> > alloc_srqc() it can immediately be looked up by:
> >
> > xa_lock(&srq_table->xa);
> > srq = xa_load(&srq_table->xa, srqn & (hr_dev->caps.num_srqs - 1));
> > if (srq)
> > refcount_inc(&srq->refcount);
> > xa_unlock(&srq_table->xa);
> >
> > Which will fail refcount debug because the refcount is 0 and then crash:
> >
> > srq->event(srq, event_type);
> >
> > Because event is NULL.
>
> I don't think this will actually happen because HW won't report an SRQ
> event before the SRQ is fully ready and actually used.
Probably, but there could also be some crazy race where an EQ event is
generated and the SRQ cycled before it is collected..
There is also a second bug here that Sashiko noticed on this patch: the
ordering is wrong, the goto unwind in create will call free_srqc() but
the completion hasn't been set up yet. I will fix that in a v3..
> From the perspective of coding, I'm fine with this change, but since
> there is similar logic for QP event, could you also apply this change
> to QP?
Sure
Jason
* [PATCH v2 07/16] RDMA: Use ib_is_udata_in_empty() for places calling ib_is_udata_cleared()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (5 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 06/16] RDMA/hns: Fix xarray race in hns_roce_create_srq() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 08/16] IB/rdmavt: Don't abuse udata and ib_respond_udata() Jason Gunthorpe
` (8 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Convert the pattern:
if (udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen))
Using Coccinelle:
virtual patch
virtual context
virtual report
@@
expression udata;
@@
(
- udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen)
+ !ib_is_udata_in_empty(udata)
|
- udata->inlen > 0 && !ib_is_udata_cleared(udata, 0, udata->inlen)
+ !ib_is_udata_in_empty(udata)
)
@@
expression udata;
@@
- udata && udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen)
+ !ib_is_udata_in_empty(udata)
These call sites are already checking that any input data the kernel does
not understand is zeroed.
Run another pass with AI to propagate the return code correctly and
remove redundant prints.
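The open-coded pattern being replaced can be modeled in userspace. The name ib_is_udata_in_empty() is this series' new helper, so the semantics below are an assumed approximation: return 0 for a missing or all-zero input area, an error otherwise.

```c
#include <assert.h>
#include <stddef.h>

#define ERR_OPNOTSUPP -95	/* stand-in for -EOPNOTSUPP */

/* Assumed behavior of the new helper: accept an empty or all-zero input
 * buffer, reject any nonzero byte the kernel would not understand. */
static int udata_in_empty(const unsigned char *inbuf, size_t inlen)
{
	size_t i;

	for (i = 0; i < inlen; i++)
		if (inbuf[i] != 0)
			return ERR_OPNOTSUPP;
	return 0;
}
```

Returning an errno directly is what lets the converted call sites collapse to `err = ib_is_udata_in_empty(udata); if (err) return err;`.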
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 43 +++++++++------------------
drivers/infiniband/hw/mlx4/main.c | 6 ++--
drivers/infiniband/hw/mlx4/qp.c | 7 ++---
drivers/infiniband/hw/mlx5/main.c | 5 ++--
drivers/infiniband/hw/mlx5/qp.c | 7 ++---
5 files changed, 26 insertions(+), 42 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 7bd0838ebc99e4..3ad5d6e27b1590 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -218,12 +218,9 @@ int efa_query_device(struct ib_device *ibdev,
struct efa_dev *dev = to_edev(ibdev);
int err;
- if (udata && udata->inlen &&
- !ib_is_udata_cleared(udata, 0, udata->inlen)) {
- ibdev_dbg(ibdev,
- "Incompatible ABI params, udata not cleared\n");
- return -EINVAL;
- }
+ err = ib_is_udata_in_empty(udata);
+ if (err)
+ return err;
dev_attr = &dev->dev_attr;
@@ -433,13 +430,9 @@ int efa_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
struct efa_pd *pd = to_epd(ibpd);
int err;
- if (udata->inlen &&
- !ib_is_udata_cleared(udata, 0, udata->inlen)) {
- ibdev_dbg(&dev->ibdev,
- "Incompatible ABI params, udata not cleared\n");
- err = -EINVAL;
+ err = ib_is_udata_in_empty(udata);
+ if (err)
goto err_out;
- }
err = efa_com_alloc_pd(&dev->edev, &result);
if (err)
@@ -982,12 +975,9 @@ int efa_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
if (qp_attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
return -EOPNOTSUPP;
- if (udata->inlen &&
- !ib_is_udata_cleared(udata, 0, udata->inlen)) {
- ibdev_dbg(&dev->ibdev,
- "Incompatible ABI params, udata not cleared\n");
- return -EINVAL;
- }
+ err = ib_is_udata_in_empty(udata);
+ if (err)
+ return err;
cur_state = qp_attr_mask & IB_QP_CUR_STATE ? qp_attr->cur_qp_state :
qp->state;
@@ -1612,13 +1602,11 @@ static struct efa_mr *efa_alloc_mr(struct ib_pd *ibpd, int access_flags,
struct efa_dev *dev = to_edev(ibpd->device);
int supp_access_flags;
struct efa_mr *mr;
+ int ret;
- if (udata && udata->inlen &&
- !ib_is_udata_cleared(udata, 0, udata->inlen)) {
- ibdev_dbg(&dev->ibdev,
- "Incompatible ABI params, udata not cleared\n");
- return ERR_PTR(-EINVAL);
- }
+ ret = ib_is_udata_in_empty(udata);
+ if (ret)
+ return ERR_PTR(ret);
supp_access_flags =
IB_ACCESS_LOCAL_WRITE |
@@ -2082,12 +2070,9 @@ int efa_create_ah(struct ib_ah *ibah,
goto err_out;
}
- if (udata->inlen &&
- !ib_is_udata_cleared(udata, 0, udata->inlen)) {
- ibdev_dbg(&dev->ibdev, "Incompatible ABI params\n");
- err = -EINVAL;
+ err = ib_is_udata_in_empty(udata);
+ if (err)
goto err_out;
- }
memcpy(params.dest_addr, ah_attr->grh.dgid.raw,
sizeof(params.dest_addr));
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 464c9ab4251636..16e9ce8138cb30 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1696,9 +1696,9 @@ static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
(flow_attr->type != IB_FLOW_ATTR_NORMAL))
return ERR_PTR(-EOPNOTSUPP);
- if (udata &&
- udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen))
- return ERR_PTR(-EOPNOTSUPP);
+ err = ib_is_udata_in_empty(udata);
+ if (err)
+ return ERR_PTR(err);
memset(type, 0, sizeof(type));
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 790be09d985a1a..aca8a985ce33cd 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -4297,10 +4297,9 @@ int mlx4_ib_create_rwq_ind_table(struct ib_rwq_ind_table *rwq_ind_table,
size_t min_resp_len;
int i, err = 0;
- if (udata->inlen > 0 &&
- !ib_is_udata_cleared(udata, 0,
- udata->inlen))
- return -EOPNOTSUPP;
+ err = ib_is_udata_in_empty(udata);
+ if (err)
+ return err;
min_resp_len = offsetof(typeof(resp), reserved) + sizeof(resp.reserved);
if (udata->outlen && udata->outlen < min_resp_len)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e02bfb1479f5c3..7d435cf5a2fdae 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -964,8 +964,9 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
resp.response_length = resp_len;
- if (uhw && uhw->inlen && !ib_is_udata_cleared(uhw, 0, uhw->inlen))
- return -EINVAL;
+ err = ib_is_udata_in_empty(uhw);
+ if (err)
+ return err;
memset(props, 0, sizeof(*props));
err = mlx5_query_system_image_guid(ibdev,
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8f50e7342a7694..81d98b5010f1ca 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -5533,10 +5533,9 @@ int mlx5_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table,
u32 *in;
void *rqtc;
- if (udata->inlen > 0 &&
- !ib_is_udata_cleared(udata, 0,
- udata->inlen))
- return -EOPNOTSUPP;
+ err = ib_is_udata_in_empty(udata);
+ if (err)
+ return err;
if (init_attr->log_ind_tbl_size >
MLX5_CAP_GEN(dev->mdev, log_max_rqt_size)) {
--
2.43.0
* [PATCH v2 08/16] IB/rdmavt: Don't abuse udata and ib_respond_udata()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (6 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 07/16] RDMA: Use ib_is_udata_in_empty() for places calling ib_is_udata_cleared() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 09/16] RDMA: Convert drivers using min to ib_respond_udata() Jason Gunthorpe
` (7 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Use copy_to_user() directly since the data is not being placed in the
udata response memory.
It is unclear why this is trying to do two copies, but leave it alone.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/sw/rdmavt/srq.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/sw/rdmavt/srq.c b/drivers/infiniband/sw/rdmavt/srq.c
index fe125bf85b2726..d022aa56c5bfd5 100644
--- a/drivers/infiniband/sw/rdmavt/srq.c
+++ b/drivers/infiniband/sw/rdmavt/srq.c
@@ -128,6 +128,7 @@ int rvt_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
struct rvt_srq *srq = ibsrq_to_rvtsrq(ibsrq);
struct rvt_dev_info *dev = ib_to_rvt(ibsrq->device);
struct rvt_rq tmp_rq = {};
+ __u64 offset_addr;
int ret = 0;
if (attr_mask & IB_SRQ_MAX_WR) {
@@ -149,19 +150,17 @@ int rvt_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
return -ENOMEM;
/* Check that we can write the offset to mmap. */
if (udata && udata->inlen >= sizeof(__u64)) {
- __u64 offset_addr;
__u64 offset = 0;
ret = ib_copy_from_udata(&offset_addr, udata,
sizeof(offset_addr));
if (ret)
goto bail_free;
- udata->outbuf = (void __user *)
- (unsigned long)offset_addr;
- ret = ib_copy_to_udata(udata, &offset,
- sizeof(offset));
- if (ret)
+ if (copy_to_user(u64_to_user_ptr(offset_addr), &offset,
+ sizeof(offset))) {
+ ret = -EFAULT;
goto bail_free;
+ }
}
spin_lock_irq(&srq->rq.kwq->c_lock);
@@ -236,10 +235,10 @@ int rvt_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
* See rvt_mmap() for details.
*/
if (udata && udata->inlen >= sizeof(__u64)) {
- ret = ib_copy_to_udata(udata, &ip->offset,
- sizeof(ip->offset));
- if (ret)
- return ret;
+ if (copy_to_user(u64_to_user_ptr(offset_addr),
+ &ip->offset,
+ sizeof(ip->offset)))
+ return -EFAULT;
}
/*
--
2.43.0
* [PATCH v2 09/16] RDMA: Convert drivers using min to ib_respond_udata()
2026-04-06 17:40 [PATCH v2 00/16] Convert all drivers to the new udata response flow Jason Gunthorpe
` (7 preceding siblings ...)
2026-04-06 17:40 ` [PATCH v2 08/16] IB/rdmavt: Don't abuse udata and ib_respond_udata() Jason Gunthorpe
@ 2026-04-06 17:40 ` Jason Gunthorpe
2026-04-06 17:40 ` [PATCH v2 10/16] RDMA: Convert drivers using sizeof() " Jason Gunthorpe
` (6 subsequent siblings)
15 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
To: Abhijit Gangurde, Allen Hubbe,
Broadcom internal kernel review list, Bernard Metzler,
Potnuri Bharat Teja, Bryan Tan, Cheng Xu, Dennis Dalessandro,
Gal Pressman, Junxian Huang, Kai Shen, Kalesh AP,
Konstantin Taranov, Krzysztof Czurylo, Leon Romanovsky,
linux-hyperv, linux-rdma, Long Li, Michal Kalderon,
Michael Margolin, Nelson Escobar, Satish Kharat, Selvin Xavier,
Yossi Leybovich, Chengchang Tang, Tatyana Nikolova, Vishnu Dasa,
Yishai Hadas
Cc: Adit Ranadive, Aditya Sarwade, Bryan Tan, Dexuan Cui,
Doug Ledford, George Zhang, Jorgen Hansen, Leon Romanovsky,
Parav Pandit, patches, Roland Dreier, Roland Dreier, Ajay Sharma,
stable
Convert the pattern:
ib_copy_to_udata(udata, &resp, min(sizeof(resp), udata->outlen));
Using Coccinelle:
@@
identifier resp;
expression udata;
@@
- ib_copy_to_udata(udata, &resp, min(sizeof(resp), udata->outlen))
+ ib_respond_udata(udata, resp)
@@
identifier resp;
expression udata;
@@
- ib_copy_to_udata(udata, &resp, min(udata->outlen, sizeof(resp)))
+ ib_respond_udata(udata, resp)
Run another pass with AI to propagate the return code correctly and
remove redundant prints.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 44 +++++-------------
drivers/infiniband/hw/erdma/erdma_verbs.c | 3 +-
drivers/infiniband/hw/hns/hns_roce_ah.c | 4 +-
drivers/infiniband/hw/hns/hns_roce_cq.c | 3 +-
drivers/infiniband/hw/hns/hns_roce_main.c | 3 +-
drivers/infiniband/hw/hns/hns_roce_pd.c | 8 ++--
drivers/infiniband/hw/hns/hns_roce_qp.c | 13 ++----
drivers/infiniband/hw/hns/hns_roce_srq.c | 6 +--
drivers/infiniband/hw/irdma/verbs.c | 48 +++++++-------------
drivers/infiniband/hw/mana/cq.c | 6 +--
drivers/infiniband/hw/mana/qp.c | 6 +--
drivers/infiniband/hw/mlx5/srq.c | 7 +--
drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 8 ++--
13 files changed, 49 insertions(+), 110 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 3ad5d6e27b1590..395290ab05847a 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -270,13 +270,9 @@ int efa_query_device(struct ib_device *ibdev,
if (dev->neqs)
resp.device_caps |= EFA_QUERY_DEVICE_CAPS_CQ_NOTIFICATIONS;
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(ibdev,
- "Failed to copy udata for query_device\n");
+ err = ib_respond_udata(udata, resp);
+ if (err)
return err;
- }
}
return 0;
@@ -442,13 +438,9 @@ int efa_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
resp.pdn = result.pdn;
if (udata->outlen) {
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(&dev->ibdev,
- "Failed to copy udata for alloc_pd\n");
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_dealloc_pd;
- }
}
ibdev_dbg(&dev->ibdev, "Allocated pd[%d]\n", pd->pdn);
@@ -782,14 +774,9 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
qp->max_inline_data = init_attr->cap.max_inline_data;
if (udata->outlen) {
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(&dev->ibdev,
- "Failed to copy udata for qp[%u]\n",
- create_qp_resp.qp_num);
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_remove_mmap_entries;
- }
}
ibdev_dbg(&dev->ibdev, "Created qp[%d]\n", qp->ibqp.qp_num);
@@ -1226,13 +1213,9 @@ int efa_create_user_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
}
if (udata->outlen) {
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(ibdev,
- "Failed to copy udata for create_cq\n");
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_xa_erase;
- }
}
ibdev_dbg(ibdev, "Created cq[%d], cq depth[%u]. dma[%pad] virt[0x%p]\n",
@@ -1935,8 +1918,7 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata)
resp.max_tx_batch = dev->dev_attr.max_tx_batch;
resp.min_sq_wr = dev->dev_attr.min_sq_depth;
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
+ err = ib_respond_udata(udata, resp);
if (err)
goto err_dealloc_uar;
@@ -2087,13 +2069,9 @@ int efa_create_ah(struct ib_ah *ibah,
resp.efa_address_handle = result.ah;
if (udata->outlen) {
- err = ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(&dev->ibdev,
- "Failed to copy udata for create_ah response\n");
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_destroy_ah;
- }
}
ibdev_dbg(&dev->ibdev, "Created ah[%d]\n", ah->ah);
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index 5523b4e151e1ff..9bba470c6e3257 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -1990,8 +1990,7 @@ int erdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
uresp.cq_id = cq->cqn;
uresp.num_cqe = depth;
- ret = ib_copy_to_udata(udata, &uresp,
- min(sizeof(uresp), udata->outlen));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_free_res;
} else {
diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
index 8a605da8a93c97..925ddf15b68102 100644
--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
+++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
@@ -32,6 +32,7 @@
#include <rdma/ib_addr.h>
#include <rdma/ib_cache.h>
+#include <rdma/uverbs_ioctl.h>
#include "hns_roce_device.h"
#include "hns_roce_hw_v2.h"
@@ -112,8 +113,7 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
resp.priority = ah->av.sl;
resp.tc_mode = tc_mode;
memcpy(resp.dmac, ah_attr->roce.dmac, ETH_ALEN);
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
+ ret = ib_respond_udata(udata, resp);
}
err_out:
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index 621568e114054b..24de651f735e03 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -452,8 +452,7 @@ int hns_roce_create_cq(struct ib_cq *ib_cq, const struct ib_cq_init_attr *attr,
if (udata) {
resp.cqn = hr_cq->cqn;
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
+ ret = ib_respond_udata(udata, resp);
if (ret)
goto err_cqc;
}
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index 0dbe99aab6ad21..c17ff5347a0147 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -477,8 +477,7 @@ static int hns_roce_alloc_ucontext(struct ib_ucontext *uctx,
resp.cqe_size = hr_dev->caps.cqe_sz;
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
+ ret = ib_respond_udata(udata, resp);
if (ret)
goto error_fail_copy_to_udata;
diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
index 225c3e328e0e08..73bb000574c50d 100644
--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
@@ -30,6 +30,7 @@
* SOFTWARE.
*/
+#include <rdma/uverbs_ioctl.h>
#include "hns_roce_device.h"
void hns_roce_init_pd_table(struct hns_roce_dev *hr_dev)
@@ -61,12 +62,9 @@ int hns_roce_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
if (udata) {
struct hns_roce_ib_alloc_pd_resp resp = {.pdn = pd->pdn};
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
- if (ret) {
+ ret = ib_respond_udata(udata, resp);
+ if (ret)
ida_free(&pd_ida->ida, id);
- ibdev_err(ib_dev, "failed to copy to udata, ret = %d\n", ret);
- }
}
return ret;
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index a27ea85bb06323..6d63613dcd5a9a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -1235,12 +1235,9 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
if (udata) {
resp.cap_flags = hr_qp->en_flags;
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
- if (ret) {
- ibdev_err(ibdev, "copy qp resp failed!\n");
+ ret = ib_respond_udata(udata, resp);
+ if (ret)
goto err_flow_ctrl;
- }
}
if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_FLOW_CTRL) {
@@ -1487,11 +1484,7 @@ int hns_roce_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
if (udata && udata->outlen) {
resp.tc_mode = hr_qp->tc_mode;
resp.priority = hr_qp->sl;
- ret = ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)));
- if (ret)
- ibdev_err_ratelimited(&hr_dev->ib_dev,
- "failed to copy modify qp resp.\n");
+ ret = ib_respond_udata(udata, resp);
}
out:
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index d6201ddde0292a..113037c83a0376 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -473,11 +473,9 @@ int hns_roce_create_srq(struct ib_srq *ib_srq,
if (udata) {
resp.cap_flags = srq->cap_flags;
resp.srqn = srq->srqn;
- if (ib_copy_to_udata(udata, &resp,
- min(udata->outlen, sizeof(resp)))) {
- ret = -EFAULT;
+ ret = ib_respond_udata(udata, resp);
+ if (ret)
goto err_srqc;
- }
}
srq->event = hns_roce_ib_srq_event;
diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index 17086048d2d7fc..79e72a457e7983 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -325,9 +325,9 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
uresp.max_pds = iwdev->rf->sc_dev.hw_attrs.max_hw_pds;
uresp.wq_size = iwdev->rf->sc_dev.hw_attrs.max_qp_wr * 2;
uresp.kernel_ver = req.userspace_ver;
- if (ib_copy_to_udata(udata, &uresp,
- min(sizeof(uresp), udata->outlen)))
- return -EFAULT;
+ ret = ib_respond_udata(udata, uresp);
+ if (ret)
+ return ret;
} else {
u64 bar_off = (uintptr_t)iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET];
@@ -354,10 +354,10 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx,
uresp.comp_mask |= IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE;
uresp.max_hw_srq_quanta = uk_attrs->max_hw_srq_quanta;
uresp.comp_mask |= IRDMA_ALLOC_UCTX_MAX_HW_SRQ_QUANTA;
- if (ib_copy_to_udata(udata, &uresp,
- min(sizeof(uresp), udata->outlen))) {
+ ret = ib_respond_udata(udata, uresp);
+ if (ret) {
rdma_user_mmap_entry_remove(ucontext->db_mmap_entry);
- return -EFAULT;
+ return ret;
}
}
@@ -420,11 +420,9 @@ static int irdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata)
ibucontext);
irdma_sc_pd_init(dev, sc_pd, pd_id, ucontext->abi_ver);
uresp.pd_id = pd_id;
- if (ib_copy_to_udata(udata, &uresp,
- min(sizeof(uresp), udata->outlen))) {
- err = -EFAULT;
+ err = ib_respond_udata(udata, uresp);
+ if (err)
goto error;
- }
} else {
irdma_sc_pd_init(dev, sc_pd, pd_id, IRDMA_ABI_VER);
}
@@ -1124,10 +1122,8 @@ static int irdma_create_qp(struct ib_qp *ibqp,
uresp.qp_id = qp_num;
uresp.qp_caps = qp->qp_uk.qp_caps;
- err_code = ib_copy_to_udata(udata, &uresp,
- min(sizeof(uresp), udata->outlen));
+ err_code = ib_respond_udata(udata, uresp);
if (err_code) {
- ibdev_dbg(&iwdev->ibdev, "VERBS: copy_to_udata failed\n");
irdma_destroy_qp(&iwqp->ibqp, udata);
return err_code;
}
@@ -1612,12 +1608,9 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr,
uresp.push_valid = 1;
uresp.push_offset = iwqp->sc_qp.push_offset;
}
- ret = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp),
- udata->outlen));
+ ret = ib_respond_udata(udata, uresp);
if (ret) {
irdma_remove_push_mmap_entries(iwqp);
- ibdev_dbg(&iwdev->ibdev,
- "VERBS: copy_to_udata failed\n");
return ret;
}
}
@@ -1860,12 +1853,9 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
uresp.push_offset = iwqp->sc_qp.push_offset;
}
- err = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp),
- udata->outlen));
+ err = ib_respond_udata(udata, uresp);
if (err) {
irdma_remove_push_mmap_entries(iwqp);
- ibdev_dbg(&iwdev->ibdev,
- "VERBS: copy_to_udata failed\n");
return err;
}
}
@@ -2418,11 +2408,9 @@ static int irdma_create_srq(struct ib_srq *ibsrq,
resp.srq_id = iwsrq->srq_num;
resp.srq_size = ukinfo->srq_size;
- if (ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen))) {
- err_code = -EPROTO;
+ err_code = ib_respond_udata(udata, resp);
+ if (err_code)
goto srq_destroy;
- }
}
return 0;
@@ -2664,13 +2652,9 @@ static int irdma_create_cq(struct ib_cq *ibcq,
resp.cq_id = info.cq_uk_init_info.cq_id;
resp.cq_size = info.cq_uk_init_info.cq_size;
- if (ib_copy_to_udata(udata, &resp,
- min(sizeof(resp), udata->outlen))) {
- ibdev_dbg(&iwdev->ibdev,
- "VERBS: copy to user data\n");
- err_code = -EPROTO;
+ err_code = ib_respond_udata(udata, resp);
+ if (err_code)
goto cq_destroy;
- }
}
init_completion(&iwcq->free_cq);
@@ -5330,7 +5314,7 @@ static int irdma_create_user_ah(struct ib_ah *ibah,
mutex_unlock(&iwdev->rf->ah_tbl_lock);
uresp.ah_id = ah->sc_ah.ah_info.ah_idx;
- err = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), udata->outlen));
+ err = ib_respond_udata(udata, uresp);
if (err)
irdma_destroy_ah(ibah, attr->flags);
diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
index f4cbe21763bf11..43b3ef65d3fc6d 100644
--- a/drivers/infiniband/hw/mana/cq.c
+++ b/drivers/infiniband/hw/mana/cq.c
@@ -79,11 +79,9 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
if (udata) {
resp.cqid = cq->queue.id;
- err = ib_copy_to_udata(udata, &resp, min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(&mdev->ib_dev, "Failed to copy to udata, %d\n", err);
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_remove_cq_cb;
- }
}
spin_lock_init(&cq->cq_lock);
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index f503445a38f2d8..afc1d0e299aaf4 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -555,11 +555,9 @@ static int mana_ib_create_rc_qp(struct ib_qp *ibqp, struct ib_pd *ibpd,
resp.queue_id[j] = qp->rc_qp.queues[i].id;
j++;
}
- err = ib_copy_to_udata(udata, &resp, min(sizeof(resp), udata->outlen));
- if (err) {
- ibdev_dbg(&mdev->ib_dev, "Failed to copy to udata, %d\n", err);
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto destroy_qp;
- }
}
err = mana_table_store_qp(mdev, qp);
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 852f6f502d14d0..3fb8519a4ce0d7 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -292,12 +292,9 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
.srqn = srq->msrq.srqn,
};
- if (ib_copy_to_udata(udata, &resp, min(udata->outlen,
- sizeof(resp)))) {
- mlx5_ib_dbg(dev, "copy to user failed\n");
- err = -EFAULT;
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_core;
- }
}
init_attr->attr.max_wr = srq->msrq.max - 1;
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
index 16aab967a20308..cefcb243c3a6f2 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
@@ -406,12 +406,10 @@ int pvrdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
qp_resp.qpn = qp->ibqp.qp_num;
qp_resp.qp_handle = qp->qp_handle;
- if (ib_copy_to_udata(udata, &qp_resp,
- min(udata->outlen, sizeof(qp_resp)))) {
- dev_warn(&dev->pdev->dev,
- "failed to copy back udata\n");
+ ret = ib_respond_udata(udata, qp_resp);
+ if (ret) {
__pvrdma_destroy_qp(dev, qp);
- return -EINVAL;
+ return ret;
}
}
--
2.43.0
* [PATCH v2 10/16] RDMA: Convert drivers using sizeof() to ib_respond_udata()
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
Convert the pattern:
ib_copy_to_udata(udata, &resp, sizeof(resp));
Using Coccinelle:
@@
identifier resp;
expression udata;
@@
- ib_copy_to_udata(udata, &resp, sizeof(resp))
+ ib_respond_udata(udata, resp)
Run another pass with AI to propagate the return code correctly and
remove redundant prints.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/cxgb4/provider.c | 9 ++++++---
drivers/infiniband/hw/cxgb4/qp.c | 4 ++--
drivers/infiniband/hw/erdma/erdma_verbs.c | 4 ++--
.../infiniband/hw/ionic/ionic_controlpath.c | 8 ++++----
drivers/infiniband/hw/mana/qp.c | 16 ++++------------
drivers/infiniband/hw/mlx4/main.c | 8 ++++----
drivers/infiniband/hw/mlx5/main.c | 5 +++--
drivers/infiniband/hw/mthca/mthca_provider.c | 5 +++--
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 19 +++++++------------
drivers/infiniband/hw/qedr/verbs.c | 7 +------
drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 9 ++-------
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c | 7 +++----
drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c | 6 +++---
.../infiniband/hw/vmw_pvrdma/pvrdma_verbs.c | 11 +++++------
drivers/infiniband/sw/rdmavt/cq.c | 2 +-
drivers/infiniband/sw/rdmavt/qp.c | 3 +--
drivers/infiniband/sw/siw/siw_verbs.c | 10 +++++-----
17 files changed, 56 insertions(+), 77 deletions(-)
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index 616019ac1da501..a119e8793aef40 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -52,6 +52,7 @@
#include <rdma/ib_smi.h>
#include <rdma/ib_umem.h>
#include <rdma/ib_user_verbs.h>
+#include <rdma/uverbs_ioctl.h>
#include "iw_cxgb4.h"
@@ -209,8 +210,9 @@ static int c4iw_allocate_pd(struct ib_pd *pd, struct ib_udata *udata)
{
struct c4iw_pd *php = to_c4iw_pd(pd);
struct ib_device *ibdev = pd->device;
- u32 pdid;
struct c4iw_dev *rhp;
+ u32 pdid;
+ int ret;
pr_debug("ibdev %p\n", ibdev);
rhp = (struct c4iw_dev *) ibdev;
@@ -223,9 +225,10 @@ static int c4iw_allocate_pd(struct ib_pd *pd, struct ib_udata *udata)
if (udata) {
struct c4iw_alloc_pd_resp uresp = {.pdid = php->pdid};
- if (ib_copy_to_udata(udata, &uresp, sizeof(uresp))) {
+ ret = ib_respond_udata(udata, uresp);
+ if (ret) {
c4iw_deallocate_pd(&php->ibpd, udata);
- return -EFAULT;
+ return ret;
}
}
mutex_lock(&rhp->rdev.stats.lock);
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index d9a86e4c546189..f9c7030ac6bfd0 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -2280,7 +2280,7 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
ucontext->key += PAGE_SIZE;
}
spin_unlock(&ucontext->mmap_lock);
- ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_free_ma_sync_key;
sq_key_mm->key = uresp.sq_key;
@@ -2777,7 +2777,7 @@ int c4iw_create_srq(struct ib_srq *ib_srq, struct ib_srq_init_attr *attrs,
uresp.srq_db_gts_key = ucontext->key;
ucontext->key += PAGE_SIZE;
spin_unlock(&ucontext->mmap_lock);
- ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_free_srq_db_key_mm;
srq_key_mm->key = uresp.srq_key;
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index 9bba470c6e3257..92a65970ab6fa1 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -1055,7 +1055,7 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
uresp.qp_id = QP_ID(qp);
uresp.rq_offset = qp->user_qp.rq_offset;
- ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_out_cmd;
} else {
@@ -1571,7 +1571,7 @@ int erdma_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata)
uresp.dev_id = dev->pdev->device;
- ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_put_mmap_entries;
diff --git a/drivers/infiniband/hw/ionic/ionic_controlpath.c b/drivers/infiniband/hw/ionic/ionic_controlpath.c
index 7051a81cca9420..2b01345848ddb7 100644
--- a/drivers/infiniband/hw/ionic/ionic_controlpath.c
+++ b/drivers/infiniband/hw/ionic/ionic_controlpath.c
@@ -414,7 +414,7 @@ int ionic_alloc_ucontext(struct ib_ucontext *ibctx, struct ib_udata *udata)
if (dev->lif_cfg.rq_expdb)
resp.expdb_qtypes |= IONIC_EXPDB_RQ;
- rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ rc = ib_respond_udata(udata, resp);
if (rc)
goto err_resp;
@@ -752,7 +752,7 @@ int ionic_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
if (udata) {
resp.ahid = ah->ahid;
- rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ rc = ib_respond_udata(udata, resp);
if (rc)
goto err_resp;
}
@@ -1263,7 +1263,7 @@ int ionic_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
if (udata) {
resp.udma_mask = vcq->udma_mask;
- rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ rc = ib_respond_udata(udata, resp);
if (rc)
goto err_resp;
}
@@ -2315,7 +2315,7 @@ int ionic_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
resp.rq_cmb = qp->rq_cmb;
}
- rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ rc = ib_respond_udata(udata, resp);
if (rc)
goto err_resp;
}
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index afc1d0e299aaf4..9758bb8533f155 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -210,13 +210,9 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
if (ret)
goto fail;
- ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
- if (ret) {
- ibdev_dbg(&mdev->ib_dev,
- "Failed to copy to udata create rss-qp, %d\n",
- ret);
+ ret = ib_respond_udata(udata, resp);
+ if (ret)
goto err_disable_vport_rx;
- }
kfree(mana_ind_table);
@@ -351,13 +347,9 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
resp.cqid = send_cq->queue.id;
resp.tx_vp_offset = pd->tx_vp_offset;
- err = ib_copy_to_udata(udata, &resp, sizeof(resp));
- if (err) {
- ibdev_dbg(&mdev->ib_dev,
- "Failed copy udata for create qp-raw, %d\n",
- err);
+ err = ib_respond_udata(udata, resp);
+ if (err)
goto err_remove_cq_cb;
- }
return 0;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 16e9ce8138cb30..ce77e893065c92 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1121,16 +1121,16 @@ static int mlx4_ib_alloc_ucontext(struct ib_ucontext *uctx,
mutex_init(&context->wqn_ranges_mutex);
if (ibdev->ops.uverbs_abi_ver == MLX4_IB_UVERBS_NO_DEV_CAPS_ABI_VERSION)
- err = ib_copy_to_udata(udata, &resp_v3, sizeof(resp_v3));
+ err = ib_respond_udata(udata, resp_v3);
else
- err = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ err = ib_respond_udata(udata, resp);
if (err) {
mlx4_uar_free(to_mdev(ibdev)->dev, &context->uar);
- return -EFAULT;
+ return err;
}
- return err;
+ return 0;
}
static void mlx4_ib_dealloc_ucontext(struct ib_ucontext *ibcontext)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 7d435cf5a2fdae..57d3b80e7550b6 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2791,9 +2791,10 @@ static int mlx5_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
pd->uid = uid;
if (udata) {
resp.pdn = pd->pdn;
- if (ib_copy_to_udata(udata, &resp, sizeof(resp))) {
+ err = ib_respond_udata(udata, resp);
+ if (err) {
mlx5_cmd_dealloc_pd(to_mdev(ibdev)->mdev, pd->pdn, uid);
- return -EFAULT;
+ return err;
}
}
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index e8d5d865c1f1f7..07c60797c86091 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -311,10 +311,11 @@ static int mthca_alloc_ucontext(struct ib_ucontext *uctx,
return err;
}
- if (ib_copy_to_udata(udata, &uresp, sizeof(uresp))) {
+ err = ib_respond_udata(udata, uresp);
+ if (err) {
mthca_cleanup_user_db_tab(to_mdev(ibdev), &context->uar, context->db_tab);
mthca_uar_free(to_mdev(ibdev), &context->uar);
- return -EFAULT;
+ return err;
}
context->reg_mr_warned = 0;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index a88cc5d84af828..2a174d0fe6ca1e 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -502,7 +502,7 @@ int ocrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
resp.dpp_wqe_size = dev->attr.wqe_size;
memcpy(resp.fw_ver, dev->attr.fw_ver, sizeof(resp.fw_ver));
- status = ib_copy_to_udata(udata, &resp, sizeof(resp));
+ status = ib_respond_udata(udata, resp);
if (status)
goto cpy_err;
return 0;
@@ -611,7 +611,7 @@ static int ocrdma_copy_pd_uresp(struct ocrdma_dev *dev, struct ocrdma_pd *pd,
rsp.dpp_page_addr_lo = dpp_page_addr;
}
- status = ib_copy_to_udata(udata, &rsp, sizeof(rsp));
+ status = ib_respond_udata(udata, rsp);
if (status)
goto ucopy_err;
@@ -945,12 +945,9 @@ static int ocrdma_copy_cq_uresp(struct ocrdma_dev *dev, struct ocrdma_cq *cq,
uresp.db_page_addr = ocrdma_get_db_addr(dev, uctx->cntxt_pd->id);
uresp.db_page_size = dev->nic_info.db_page_size;
uresp.phase_change = cq->phase_change ? 1 : 0;
- status = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
- if (status) {
- pr_err("%s(%d) copy error cqid=0x%x.\n",
- __func__, dev->id, cq->id);
+ status = ib_respond_udata(udata, uresp);
+ if (status)
goto err;
- }
status = ocrdma_add_mmap(uctx, uresp.db_page_addr, uresp.db_page_size);
if (status)
goto err;
@@ -1206,11 +1203,9 @@ static int ocrdma_copy_qp_uresp(struct ocrdma_qp *qp,
uresp.dpp_credit = dpp_credit_lmt;
uresp.dpp_offset = dpp_offset;
}
- status = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
- if (status) {
- pr_err("%s(%d) user copy error.\n", __func__, dev->id);
+ status = ib_respond_udata(udata, uresp);
+ if (status)
goto err;
- }
status = ocrdma_add_mmap(pd->uctx, uresp.sq_page_addr[0],
uresp.sq_page_size);
if (status)
@@ -1754,7 +1749,7 @@ static int ocrdma_copy_srq_uresp(struct ocrdma_dev *dev, struct ocrdma_srq *srq,
uresp.db_shift = 16;
}
- status = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ status = ib_respond_udata(udata, uresp);
if (status)
return status;
status = ocrdma_add_mmap(srq->pd->uctx, uresp.rq_page_addr[0],
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 679aa6f3a63bc5..3b86ea1cf88883 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -1251,15 +1251,10 @@ static int qedr_copy_srq_uresp(struct qedr_dev *dev,
struct qedr_srq *srq, struct ib_udata *udata)
{
struct qedr_create_srq_uresp uresp = {};
- int rc;
uresp.srq_id = srq->srq_id;
- rc = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
- if (rc)
- DP_ERR(dev, "create srq: problem copying data to user space\n");
-
- return rc;
+ return ib_respond_udata(udata, uresp);
}
static void qedr_copy_rq_uresp(struct qedr_dev *dev,
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index 615de9c4209bf1..e887f03a84d063 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -82,7 +82,6 @@ static void usnic_ib_fw_string_to_u64(char *fw_ver_str, u64 *fw_ver)
static int usnic_ib_fill_create_qp_resp(struct usnic_ib_qp_grp *qp_grp,
struct ib_udata *udata)
{
- struct usnic_ib_dev *us_ibdev;
struct usnic_ib_create_qp_resp resp;
struct pci_dev *pdev;
struct vnic_dev_bar *bar;
@@ -92,7 +91,6 @@ static int usnic_ib_fill_create_qp_resp(struct usnic_ib_qp_grp *qp_grp,
memset(&resp, 0, sizeof(resp));
- us_ibdev = qp_grp->vf->pf;
pdev = usnic_vnic_get_pdev(qp_grp->vf->vnic);
if (!pdev) {
usnic_err("Failed to get pdev of qp_grp %d\n",
@@ -157,12 +155,9 @@ static int usnic_ib_fill_create_qp_resp(struct usnic_ib_qp_grp *qp_grp,
struct usnic_ib_qp_grp_flow, link);
resp.transport = default_flow->trans_type;
- err = ib_copy_to_udata(udata, &resp, sizeof(resp));
- if (err) {
- usnic_err("Failed to copy udata for %s",
- dev_name(&us_ibdev->ib_dev.dev));
+ err = ib_respond_udata(udata, resp);
+ if (err)
return err;
- }
return 0;
}
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
index bc3adcc1ae67c2..d5bfdbfe1376d1 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
@@ -203,11 +203,10 @@ int pvrdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
cq->uar = &context->uar;
/* Copy udata back. */
- if (ib_copy_to_udata(udata, &cq_resp, sizeof(cq_resp))) {
- dev_warn(&dev->pdev->dev,
- "failed to copy back udata\n");
+ ret = ib_respond_udata(udata, cq_resp);
+ if (ret) {
pvrdma_destroy_cq(&cq->ibcq, udata);
- return -EINVAL;
+ return ret;
}
}
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
index d31fb692fcaafb..e69eadde6c26e9 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
@@ -195,10 +195,10 @@ int pvrdma_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *init_attr,
spin_unlock_irqrestore(&dev->srq_tbl_lock, flags);
/* Copy udata back. */
- if (ib_copy_to_udata(udata, &srq_resp, sizeof(srq_resp))) {
- dev_warn(&dev->pdev->dev, "failed to copy back udata\n");
+ ret = ib_respond_udata(udata, srq_resp);
+ if (ret) {
pvrdma_destroy_srq(&srq->ibsrq, udata);
- return -EINVAL;
+ return ret;
}
return 0;
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index c7c2b41060e526..b9c3202b9545e3 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -320,11 +320,11 @@ int pvrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
/* copy back to user */
uresp.qp_tab_size = vdev->dsr->caps.max_qp;
- ret = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret) {
/* pvrdma_dealloc_ucontext() also frees the UAR */
pvrdma_dealloc_ucontext(&context->ibucontext);
- return -EFAULT;
+ return ret;
}
return 0;
@@ -430,11 +430,10 @@ int pvrdma_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
pd_resp.pdn = resp->pd_handle;
if (udata) {
- if (ib_copy_to_udata(udata, &pd_resp, sizeof(pd_resp))) {
- dev_warn(&dev->pdev->dev,
- "failed to copy back protection domain\n");
+ ret = ib_respond_udata(udata, pd_resp);
+ if (ret) {
pvrdma_dealloc_pd(&pd->ibpd, udata);
- return -EFAULT;
+ return ret;
}
}
diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index 30904c6ae852db..45404611c9ce56 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -372,7 +372,7 @@ int rvt_resize_cq(struct ib_cq *ibcq, unsigned int cqe, struct ib_udata *udata)
if (udata && udata->outlen >= sizeof(__u64)) {
__u64 offset = 0;
- ret = ib_copy_to_udata(udata, &offset, sizeof(offset));
+ ret = ib_respond_udata(udata, offset);
if (ret)
goto bail_free;
}
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index b519d9d0e42913..5fdc37bd64e834 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -1194,8 +1194,7 @@ int rvt_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
if (!qp->r_rq.wq) {
__u64 offset = 0;
- ret = ib_copy_to_udata(udata, &offset,
- sizeof(offset));
+ ret = ib_respond_udata(udata, offset);
if (ret)
goto bail_qpn;
} else {
diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
index 1e1d262a4ae2db..b34f3d6547ffc7 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.c
+++ b/drivers/infiniband/sw/siw/siw_verbs.c
@@ -102,7 +102,7 @@ int siw_alloc_ucontext(struct ib_ucontext *base_ctx, struct ib_udata *udata)
rv = -EINVAL;
goto err_out;
}
- rv = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rv = ib_respond_udata(udata, uresp);
if (rv)
goto err_out;
@@ -472,7 +472,7 @@ int siw_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
rv = -EINVAL;
goto err_out_xa;
}
- rv = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rv = ib_respond_udata(udata, uresp);
if (rv)
goto err_out_xa;
}
@@ -1205,7 +1205,7 @@ int siw_create_cq(struct ib_cq *base_cq, const struct ib_cq_init_attr *attr,
rv = -EINVAL;
goto err_out;
}
- rv = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rv = ib_respond_udata(udata, uresp);
if (rv)
goto err_out;
}
@@ -1386,7 +1386,7 @@ struct ib_mr *siw_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
rv = -EINVAL;
goto err_out;
}
- rv = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rv = ib_respond_udata(udata, uresp);
if (rv)
goto err_out;
}
@@ -1646,7 +1646,7 @@ int siw_create_srq(struct ib_srq *base_srq,
rv = -EINVAL;
goto err_out;
}
- rv = ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rv = ib_respond_udata(udata, uresp);
if (rv)
goto err_out;
}
--
2.43.0
* [PATCH v2 11/16] RDMA/cxgb4: Convert to ib_respond_udata()
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
These cases carefully work around 32-bit unpadded response
structures, but the min() integrated into ib_respond_udata() handles
this automatically. Zero-initialize the data that would previously
not have been copied.
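The effect of that min() can be pictured with a small userspace model.
Everything below — mock_udata, mock_respond() and mock_cq_uresp — is an
illustrative stand-in, not the kernel's struct ib_udata,
ib_respond_udata() or the cxgb4 ABI:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for struct ib_udata: just an output buffer and its size. */
struct mock_udata {
	unsigned char out[64];
	size_t outlen;
};

/* Sketch of the capped copy: write at most outlen bytes of the
 * response, so old userspace with a short buffer only receives the
 * leading fields it knows about. */
static int mock_respond(struct mock_udata *udata, const void *resp,
			size_t resp_len)
{
	size_t n = resp_len < udata->outlen ? resp_len : udata->outlen;

	memcpy(udata->out, resp, n);
	return 0;
}

/* Response whose trailing 'flags' field is newer than some users. */
struct mock_cq_uresp {
	unsigned int qid_mask;
	unsigned int cqid;
	unsigned int flags;
};
```

With a userspace that only advertises offsetof(struct mock_cq_uresp,
flags) bytes of buffer, the cap drops the flags field by itself, so the
driver no longer needs the sizeof(uresp) - sizeof(uresp.flags)
arithmetic.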
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/cxgb4/cq.c | 8 +++-----
drivers/infiniband/hw/cxgb4/provider.c | 5 ++---
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
index e31fb9134aa818..47508df4cec023 100644
--- a/drivers/infiniband/hw/cxgb4/cq.c
+++ b/drivers/infiniband/hw/cxgb4/cq.c
@@ -1115,13 +1115,11 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
/* communicate to the userspace that
* kernel driver supports 64B CQE
*/
- uresp.flags |= C4IW_64B_CQE;
+ if (!ucontext->is_32b_cqe)
+ uresp.flags |= C4IW_64B_CQE;
spin_unlock(&ucontext->mmap_lock);
- ret = ib_copy_to_udata(udata, &uresp,
- ucontext->is_32b_cqe ?
- sizeof(uresp) - sizeof(uresp.flags) :
- sizeof(uresp));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_free_mm2;
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index a119e8793aef40..0e3827022c63da 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -80,7 +80,7 @@ static int c4iw_alloc_ucontext(struct ib_ucontext *ucontext,
struct ib_device *ibdev = ucontext->device;
struct c4iw_ucontext *context = to_c4iw_ucontext(ucontext);
struct c4iw_dev *rhp = to_c4iw_dev(ibdev);
- struct c4iw_alloc_ucontext_resp uresp;
+ struct c4iw_alloc_ucontext_resp uresp = {};
int ret = 0;
struct c4iw_mm_entry *mm = NULL;
@@ -106,8 +106,7 @@ static int c4iw_alloc_ucontext(struct ib_ucontext *ucontext,
context->key += PAGE_SIZE;
spin_unlock(&context->mmap_lock);
- ret = ib_copy_to_udata(udata, &uresp,
- sizeof(uresp) - sizeof(uresp.reserved));
+ ret = ib_respond_udata(udata, uresp);
if (ret)
goto err_mm;
--
2.43.0
* [PATCH v2 12/16] RDMA/qedr: Replace qedr_ib_copy_to_udata() with ib_respond_udata()
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
This is another instance of the min() pattern, so the driver-local
qedr_ib_copy_to_udata() helper can be deleted.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/qedr/verbs.c | 35 +++++-------------------------
1 file changed, 6 insertions(+), 29 deletions(-)
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 3b86ea1cf88883..79190c5b8b50b0 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -64,14 +64,6 @@ enum {
QEDR_USER_MMAP_PHYS_PAGE,
};
-static inline int qedr_ib_copy_to_udata(struct ib_udata *udata, void *src,
- size_t len)
-{
- size_t min_len = min_t(size_t, len, udata->outlen);
-
- return ib_copy_to_udata(udata, src, min_len);
-}
-
int qedr_query_pkey(struct ib_device *ibdev, u32 port, u16 index, u16 *pkey)
{
if (index >= QEDR_ROCE_PKEY_TABLE_LEN)
@@ -340,7 +332,7 @@ int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
uresp.sges_per_srq_wr = dev->attr.max_srq_sge;
uresp.max_cqes = QEDR_MAX_CQES;
- rc = qedr_ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rc = ib_respond_udata(udata, uresp);
if (rc)
goto err;
@@ -459,9 +451,8 @@ int qedr_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
struct qedr_ucontext *context = rdma_udata_to_drv_context(
udata, struct qedr_ucontext, ibucontext);
- rc = qedr_ib_copy_to_udata(udata, &uresp, sizeof(uresp));
+ rc = ib_respond_udata(udata, uresp);
if (rc) {
- DP_ERR(dev, "copy error pd_id=0x%x.\n", pd_id);
dev->ops->rdma_dealloc_pd(dev->rdma_ctx, pd_id);
return rc;
}
@@ -696,12 +687,10 @@ static void qedr_db_recovery_del(struct qedr_dev *dev,
dev->ops->common->db_recovery_del(dev->cdev, db_addr, db_data);
}
-static int qedr_copy_cq_uresp(struct qedr_dev *dev,
- struct qedr_cq *cq, struct ib_udata *udata,
+static int qedr_copy_cq_uresp(struct qedr_cq *cq, struct ib_udata *udata,
u32 db_offset)
{
struct qedr_create_cq_uresp uresp;
- int rc;
memset(&uresp, 0, sizeof(uresp));
@@ -711,11 +700,7 @@ static int qedr_copy_cq_uresp(struct qedr_dev *dev,
uresp.db_rec_addr =
rdma_user_mmap_get_offset(cq->q.db_mmap_entry);
- rc = qedr_ib_copy_to_udata(udata, &uresp, sizeof(uresp));
- if (rc)
- DP_ERR(dev, "copy error cqid=0x%x.\n", cq->icid);
-
- return rc;
+ return ib_respond_udata(udata, uresp);
}
static void consume_cqe(struct qedr_cq *cq)
@@ -994,7 +979,7 @@ int qedr_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
spin_lock_init(&cq->cq_lock);
if (udata) {
- rc = qedr_copy_cq_uresp(dev, cq, udata, db_offset);
+ rc = qedr_copy_cq_uresp(cq, udata, db_offset);
if (rc)
goto err2;
@@ -1298,8 +1283,6 @@ static int qedr_copy_qp_uresp(struct qedr_dev *dev,
struct qedr_qp *qp, struct ib_udata *udata,
struct qedr_create_qp_uresp *uresp)
{
- int rc;
-
memset(uresp, 0, sizeof(*uresp));
if (qedr_qp_has_sq(qp))
@@ -1311,13 +1294,7 @@ static int qedr_copy_qp_uresp(struct qedr_dev *dev,
uresp->atomic_supported = dev->atomic_cap != IB_ATOMIC_NONE;
uresp->qp_id = qp->qp_id;
- rc = qedr_ib_copy_to_udata(udata, uresp, sizeof(*uresp));
- if (rc)
- DP_ERR(dev,
- "create qp: failed a copy to user space with qp icid=0x%x.\n",
- qp->icid);
-
- return rc;
+ return ib_respond_udata(udata, *uresp);
}
static void qedr_reset_qp_hwq_info(struct qedr_qp_hwq_info *qph)
--
2.43.0
* [PATCH v2 13/16] RDMA/mlx: Replace response_len with ib_respond_udata()
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
The Mellanox drivers have a pattern where they compute the response
length they think they need based on what the user asked for, then
blindly write that many bytes, ignoring the size limit the user
provided for the response structure.
Drop this and just use ib_respond_udata(), which caps the response
struct to the user's memory; that is fine for what mlx5 is doing.
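The difference between the two patterns can be sketched in a few lines
of plain C — mock_uresp and both helpers are hypothetical models, not
the actual mlx4/mlx5 code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative response struct whose tail grew over time. */
struct mock_uresp {
	unsigned int response_length;
	unsigned int base_field;
	unsigned int newer_field;
};

/* Old pattern: the driver picks the length and the user-supplied
 * limit is never consulted - with a short buffer this overruns. */
static size_t old_copy_len(const struct mock_uresp *resp, size_t outlen)
{
	(void)outlen;		/* ignored: the problem */
	return resp->response_length;
}

/* ib_respond_udata()-style pattern: the copy is capped at the
 * smaller of the response size and the user's buffer. */
static size_t capped_copy_len(const struct mock_uresp *resp,
			      size_t outlen)
{
	return sizeof(*resp) < outlen ? sizeof(*resp) : outlen;
}
```

With a 4-byte user buffer, old_copy_len() still asks to write the full
struct, while capped_copy_len() never exceeds what the user offered.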
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx4/main.c | 2 +-
drivers/infiniband/hw/mlx4/qp.c | 2 +-
drivers/infiniband/hw/mlx5/ah.c | 2 +-
drivers/infiniband/hw/mlx5/main.c | 4 ++--
drivers/infiniband/hw/mlx5/mr.c | 2 +-
drivers/infiniband/hw/mlx5/qp.c | 10 +++++-----
6 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index ce77e893065c92..4b187ec9e01738 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -626,7 +626,7 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
}
if (uhw->outlen) {
- err = ib_copy_to_udata(uhw, &resp, resp.response_length);
+ err = ib_respond_udata(uhw, resp);
if (err)
goto out;
}
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index aca8a985ce33cd..8dc4196218bf05 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -4331,7 +4331,7 @@ int mlx4_ib_create_rwq_ind_table(struct ib_rwq_ind_table *rwq_ind_table,
if (udata->outlen) {
resp.response_length = offsetof(typeof(resp), response_length) +
sizeof(resp.response_length);
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
}
return err;
diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c
index 531a57f9ee7e8b..a3aa700d08355d 100644
--- a/drivers/infiniband/hw/mlx5/ah.c
+++ b/drivers/infiniband/hw/mlx5/ah.c
@@ -121,7 +121,7 @@ int mlx5_ib_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
resp.response_length = min_resp_len;
memcpy(resp.dmac, ah_attr->roce.dmac, ETH_ALEN);
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err)
return err;
}
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 57d3b80e7550b6..84dddaded6fdef 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1355,7 +1355,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
}
if (uhw_outlen) {
- err = ib_copy_to_udata(uhw, &resp, resp.response_length);
+ err = ib_respond_udata(uhw, resp);
if (err)
return err;
@@ -2280,7 +2280,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
goto out_mdev;
resp.response_length = min(udata->outlen, sizeof(resp));
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err)
goto out_mdev;
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3ef467ac9e3d15..8eb922bd3b663d 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1811,7 +1811,7 @@ int mlx5_ib_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata)
resp.response_length =
min(offsetofend(typeof(resp), response_length), udata->outlen);
if (resp.response_length) {
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err)
goto free_mkey;
}
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 81d98b5010f1ca..4a7363327d2a8e 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3327,7 +3327,7 @@ int mlx5_ib_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
* including MLX5_IB_QPT_DCT, which doesn't need it.
* In that case, resp will be filled with zeros.
*/
- err = ib_copy_to_udata(udata, ¶ms.resp, params.outlen);
+ err = ib_respond_udata(udata, params.resp);
if (err)
goto destroy_qp;
@@ -4626,7 +4626,7 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
resp.dctn = qp->dct.mdct.mqp.qpn;
if (MLX5_CAP_GEN(dev->mdev, ece_support))
resp.ece_options = MLX5_GET(create_dct_out, out, ece);
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err) {
mlx5_core_destroy_dct(dev, &qp->dct.mdct);
return err;
@@ -4785,7 +4785,7 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
if (!err && resp.response_length &&
udata->outlen >= resp.response_length)
/* Return -EFAULT to the user and expect him to destroy QP. */
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
out:
mutex_unlock(&qp->mutex);
@@ -5485,7 +5485,7 @@ struct ib_wq *mlx5_ib_create_wq(struct ib_pd *pd,
if (udata->outlen) {
resp.response_length = offsetofend(
struct mlx5_ib_create_wq_resp, response_length);
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err)
goto err_copy;
}
@@ -5576,7 +5576,7 @@ int mlx5_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table,
resp.response_length =
offsetofend(struct mlx5_ib_create_rwq_ind_tbl_resp,
response_length);
- err = ib_copy_to_udata(udata, &resp, resp.response_length);
+ err = ib_respond_udata(udata, resp);
if (err)
goto err_copy;
}
--
2.43.0
* [PATCH v2 14/16] RDMA: Use proper driver data response structs instead of open coding
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
At some point the response structs were added and rdma-core started
using them, but the kernel was never updated to use them as well.
Replace the open-coded copies with the proper structs and
ib_respond_udata().
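What the named struct buys over the bare __u32 can be shown with a toy
model — mock_create_cq_resp and both copy helpers are made up for
illustration; the real ABI structs live in the rdma uapi headers:

```c
#include <assert.h>
#include <string.h>

/* Toy mirror of a create-CQ response with a reserved tail. */
struct mock_create_cq_resp {
	unsigned int cqn;
	unsigned int reserved;
};

/* Old style: copy only the bare 32-bit cqn; whatever userspace had
 * in the rest of its response struct is left untouched. */
static void open_coded_copy(unsigned char *out, unsigned int cqn)
{
	memcpy(out, &cqn, sizeof(cqn));
}

/* New style: respond with the named struct, so the zeroed reserved
 * tail is written too and the layout is spelled out in one place. */
static void struct_copy(unsigned char *out, unsigned int cqn)
{
	struct mock_create_cq_resp uresp = { .cqn = cqn };

	memcpy(out, &uresp, sizeof(uresp));
}
```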
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/mlx4/cq.c | 7 ++--
drivers/infiniband/hw/mlx4/main.c | 11 ++++--
drivers/infiniband/hw/mlx4/srq.c | 12 ++++---
drivers/infiniband/hw/mlx5/cq.c | 7 ++--
drivers/infiniband/hw/mthca/mthca_provider.c | 35 ++++++++++++++------
5 files changed, 48 insertions(+), 24 deletions(-)
diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 7a6eb602d4a6de..7e4505f6c78b30 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -142,6 +142,7 @@ int mlx4_ib_create_user_cq(struct ib_cq *ibcq,
{
struct ib_udata *udata = &attrs->driver_udata;
struct ib_device *ibdev = ibcq->device;
+ struct mlx4_ib_create_cq_resp uresp = {};
int entries = attr->cqe;
int vector = attr->comp_vector;
struct mlx4_ib_dev *dev = to_mdev(ibdev);
@@ -219,10 +220,10 @@ int mlx4_ib_create_user_cq(struct ib_cq *ibcq,
cq->mcq.event = mlx4_ib_cq_event;
cq->mcq.usage = MLX4_RES_USAGE_USER_VERBS;
- if (ib_copy_to_udata(udata, &cq->mcq.cqn, sizeof(__u32))) {
- err = -EFAULT;
+ uresp.cqn = cq->mcq.cqn;
+ err = ib_respond_udata(udata, uresp);
+ if (err)
goto err_cq_free;
- }
return 0;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 4b187ec9e01738..25f9738bd77223 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1199,9 +1199,14 @@ static int mlx4_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
if (err)
return err;
- if (udata && ib_copy_to_udata(udata, &pd->pdn, sizeof(__u32))) {
- mlx4_pd_free(to_mdev(ibdev)->dev, pd->pdn);
- return -EFAULT;
+ if (udata) {
+ struct mlx4_ib_alloc_pd_resp uresp = { .pdn = pd->pdn };
+
+ err = ib_respond_udata(udata, uresp);
+ if (err) {
+ mlx4_pd_free(to_mdev(ibdev)->dev, pd->pdn);
+ return err;
+ }
}
return 0;
}
diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c
index 767840736d583b..dd868f9b893d70 100644
--- a/drivers/infiniband/hw/mlx4/srq.c
+++ b/drivers/infiniband/hw/mlx4/srq.c
@@ -191,11 +191,15 @@ int mlx4_ib_create_srq(struct ib_srq *ib_srq,
srq->msrq.event = mlx4_ib_srq_event;
srq->ibsrq.ext.xrc.srq_num = srq->msrq.srqn;
- if (udata)
- if (ib_copy_to_udata(udata, &srq->msrq.srqn, sizeof (__u32))) {
- err = -EFAULT;
+ if (udata) {
+ struct mlx4_ib_create_srq_resp uresp = {
+ .srqn = srq->msrq.srqn
+ };
+
+ err = ib_respond_udata(udata, uresp);
+ if (err)
goto err_srq;
- }
+ }
init_attr->attr.max_wr = srq->msrq.max - 1;
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index a76b7a36087d98..c548d4dfbbc96a 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -949,6 +949,7 @@ int mlx5_ib_create_user_cq(struct ib_cq *ibcq,
{
struct ib_udata *udata = &attrs->driver_udata;
struct ib_device *ibdev = ibcq->device;
+ struct mlx5_ib_create_cq_resp uresp = {};
int entries = attr->cqe;
int vector = attr->comp_vector;
struct mlx5_ib_dev *dev = to_mdev(ibdev);
@@ -1015,10 +1016,10 @@ int mlx5_ib_create_user_cq(struct ib_cq *ibcq,
INIT_LIST_HEAD(&cq->wc_list);
- if (ib_copy_to_udata(udata, &cq->mcq.cqn, sizeof(__u32))) {
- err = -EFAULT;
+ uresp.cqn = cq->mcq.cqn;
+ err = ib_respond_udata(udata, uresp);
+ if (err)
goto err_cmd;
- }
kvfree(cqb);
return 0;
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 07c60797c86091..afa97d3801f783 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -357,9 +357,12 @@ static int mthca_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
return err;
if (udata) {
- if (ib_copy_to_udata(udata, &pd->pd_num, sizeof (__u32))) {
+ struct mthca_alloc_pd_resp uresp = { .pdn = pd->pd_num };
+
+ err = ib_respond_udata(udata, uresp);
+ if (err) {
mthca_pd_free(to_mdev(ibdev), pd);
- return -EFAULT;
+ return err;
}
}
@@ -428,11 +431,17 @@ static int mthca_create_srq(struct ib_srq *ibsrq,
if (err)
return err;
- if (context && ib_copy_to_udata(udata, &srq->srqn, sizeof(__u32))) {
- mthca_free_srq(to_mdev(ibsrq->device), srq);
- mthca_unmap_user_db(to_mdev(ibsrq->device), &context->uar,
- context->db_tab, ucmd.db_index);
- return -EFAULT;
+ if (context) {
+ struct mthca_create_srq_resp uresp = { .srqn = srq->srqn };
+
+ err = ib_respond_udata(udata, uresp);
+ if (err) {
+ mthca_free_srq(to_mdev(ibsrq->device), srq);
+ mthca_unmap_user_db(to_mdev(ibsrq->device),
+ &context->uar, context->db_tab,
+ ucmd.db_index);
+ return err;
+ }
}
return 0;
@@ -631,10 +640,14 @@ static int mthca_create_cq(struct ib_cq *ibcq,
if (err)
goto err_unmap_arm;
- if (udata && ib_copy_to_udata(udata, &cq->cqn, sizeof(__u32))) {
- mthca_free_cq(to_mdev(ibdev), cq);
- err = -EFAULT;
- goto err_unmap_arm;
+ if (udata) {
+ struct mthca_create_cq_resp uresp = { .cqn = cq->cqn };
+
+ err = ib_respond_udata(udata, uresp);
+ if (err) {
+ mthca_free_cq(to_mdev(ibdev), cq);
+ goto err_unmap_arm;
+ }
}
cq->resize_buf = NULL;
--
2.43.0
* [PATCH v2 15/16] RDMA: Add missed = {} initialization to uresp structs
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
All of these structs are already fully initialized, so no bugs are
being fixed. Add the missing initializers as a precaution against
future changes.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 2 +-
drivers/infiniband/hw/erdma/erdma_verbs.c | 2 +-
drivers/infiniband/hw/mlx4/main.c | 4 ++--
drivers/infiniband/hw/mlx5/main.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 7ed294516b7edb..ccb362d6d2e669 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -1884,7 +1884,7 @@ int bnxt_re_create_qp(struct ib_qp *ib_qp, struct ib_qp_init_attr *qp_init_attr,
}
if (udata) {
- struct bnxt_re_qp_resp resp;
+ struct bnxt_re_qp_resp resp = {};
resp.qpid = qp->qplib_qp.id;
resp.rsvd = 0;
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index 92a65970ab6fa1..c8a35337ba51e8 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -1977,7 +1977,7 @@ int erdma_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
if (!rdma_is_kernel_res(&ibcq->res)) {
struct erdma_ureq_create_cq ureq;
- struct erdma_uresp_create_cq uresp;
+ struct erdma_uresp_create_cq uresp = {};
ret = ib_copy_validate_udata_in(udata, ureq, rsvd0);
if (ret)
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 25f9738bd77223..d50743f090bf21 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1090,8 +1090,8 @@ static int mlx4_ib_alloc_ucontext(struct ib_ucontext *uctx,
struct ib_device *ibdev = uctx->device;
struct mlx4_ib_dev *dev = to_mdev(ibdev);
struct mlx4_ib_ucontext *context = to_mucontext(uctx);
- struct mlx4_ib_alloc_ucontext_resp_v3 resp_v3;
- struct mlx4_ib_alloc_ucontext_resp resp;
+ struct mlx4_ib_alloc_ucontext_resp_v3 resp_v3 = {};
+ struct mlx4_ib_alloc_ucontext_resp resp = {};
int err;
if (!dev->ib_active)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 84dddaded6fdef..a6a696864f9e0a 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2772,7 +2772,7 @@ static int mlx5_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
{
struct mlx5_ib_pd *pd = to_mpd(ibpd);
struct ib_device *ibdev = ibpd->device;
- struct mlx5_ib_alloc_pd_resp resp;
+ struct mlx5_ib_alloc_pd_resp resp = {};
int err;
u32 out[MLX5_ST_SZ_DW(alloc_pd_out)] = {};
u32 in[MLX5_ST_SZ_DW(alloc_pd_in)] = {};
--
2.43.0
* [PATCH v2 16/16] RDMA: Replace memset with = {} pattern for ib_respond_udata()
From: Jason Gunthorpe @ 2026-04-06 17:40 UTC (permalink / raw)
Most drivers do this already, but some open-code a memset. Switch
all instances found. qedr_copy_qp_uresp() is already called with
zeroed memory, so its memset is redundant.
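The pattern being standardized on looks like this in miniature —
mock_srq_uresp and make_uresp() are illustrative only, not a real
driver struct:

```c
#include <assert.h>

/* Illustrative uresp; real ones are ABI structs in uapi headers. */
struct mock_srq_uresp {
	unsigned int flags;
	unsigned int qid_mask;
	unsigned int srqid;
	unsigned int reserved;
};

/* Zero at the declaration instead of a separate memset(), then fill
 * in only the meaningful fields; reserved stays zero. */
static struct mock_srq_uresp make_uresp(unsigned int flags,
					unsigned int qid_mask,
					unsigned int srqid)
{
	struct mock_srq_uresp uresp = {};

	uresp.flags = flags;
	uresp.qid_mask = qid_mask;
	uresp.srqid = srqid;
	return uresp;
}
```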
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/infiniband/hw/cxgb4/cq.c | 3 +--
drivers/infiniband/hw/cxgb4/qp.c | 6 ++----
drivers/infiniband/hw/erdma/erdma_verbs.c | 4 +---
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c | 12 ++++--------
drivers/infiniband/hw/qedr/verbs.c | 6 +-----
drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 4 +---
6 files changed, 10 insertions(+), 25 deletions(-)
diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
index 47508df4cec023..d1517f2560b981 100644
--- a/drivers/infiniband/hw/cxgb4/cq.c
+++ b/drivers/infiniband/hw/cxgb4/cq.c
@@ -1004,7 +1004,7 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct c4iw_dev *rhp = to_c4iw_dev(ibcq->device);
struct c4iw_cq *chp = to_c4iw_cq(ibcq);
struct c4iw_create_cq ucmd;
- struct c4iw_create_cq_resp uresp;
+ struct c4iw_create_cq_resp uresp = {};
int ret, wr_len;
size_t memsize, hwentries;
struct c4iw_mm_entry *mm, *mm2;
@@ -1102,7 +1102,6 @@ int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
if (!mm2)
goto err_free_mm;
- memset(&uresp, 0, sizeof(uresp));
uresp.qid_mask = rhp->rdev.cqmask;
uresp.cqid = chp->cq.cqid;
uresp.size = chp->cq.size;
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index f9c7030ac6bfd0..e295f79e0cd3e5 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -2120,7 +2120,7 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
struct c4iw_pd *php;
struct c4iw_cq *schp;
struct c4iw_cq *rchp;
- struct c4iw_create_qp_resp uresp;
+ struct c4iw_create_qp_resp uresp = {};
unsigned int sqsize, rqsize = 0;
struct c4iw_ucontext *ucontext = rdma_udata_to_drv_context(
udata, struct c4iw_ucontext, ibucontext);
@@ -2242,7 +2242,6 @@ int c4iw_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *attrs,
goto err_free_sq_db_key;
}
}
- memset(&uresp, 0, sizeof(uresp));
if (t4_sq_onchip(&qhp->wq.sq)) {
ma_sync_key_mm = kmalloc_obj(*ma_sync_key_mm);
if (!ma_sync_key_mm) {
@@ -2686,7 +2685,7 @@ int c4iw_create_srq(struct ib_srq *ib_srq, struct ib_srq_init_attr *attrs,
struct c4iw_dev *rhp;
struct c4iw_srq *srq = to_c4iw_srq(ib_srq);
struct c4iw_pd *php;
- struct c4iw_create_srq_resp uresp;
+ struct c4iw_create_srq_resp uresp = {};
struct c4iw_ucontext *ucontext;
struct c4iw_mm_entry *srq_key_mm, *srq_db_key_mm;
int rqsize;
@@ -2764,7 +2763,6 @@ int c4iw_create_srq(struct ib_srq *ib_srq, struct ib_srq_init_attr *attrs,
ret = -ENOMEM;
goto err_free_srq_key_mm;
}
- memset(&uresp, 0, sizeof(uresp));
uresp.flags = srq->flags;
uresp.qid_mask = rhp->rdev.qpmask;
uresp.srqid = srq->wq.qid;
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index c8a35337ba51e8..b59c2e3a5306d1 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -996,7 +996,7 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
struct erdma_ucontext *uctx = rdma_udata_to_drv_context(
udata, struct erdma_ucontext, ibucontext);
struct erdma_ureq_create_qp ureq;
- struct erdma_uresp_create_qp uresp;
+ struct erdma_uresp_create_qp uresp = {};
void *old_entry;
int ret = 0;
@@ -1048,8 +1048,6 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
if (ret)
goto err_out_xa;
- memset(&uresp, 0, sizeof(uresp));
-
uresp.num_sqe = qp->attrs.sq_size;
uresp.num_rqe = qp->attrs.rq_size;
uresp.qp_id = QP_ID(qp);
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 2a174d0fe6ca1e..383f1d9c15d151 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -586,11 +586,10 @@ static int ocrdma_copy_pd_uresp(struct ocrdma_dev *dev, struct ocrdma_pd *pd,
u64 db_page_addr;
u64 dpp_page_addr = 0;
u32 db_page_size;
- struct ocrdma_alloc_pd_uresp rsp;
+ struct ocrdma_alloc_pd_uresp rsp = {};
struct ocrdma_ucontext *uctx = rdma_udata_to_drv_context(
udata, struct ocrdma_ucontext, ibucontext);
- memset(&rsp, 0, sizeof(rsp));
rsp.id = pd->id;
rsp.dpp_enabled = pd->dpp_enabled;
db_page_addr = ocrdma_get_db_addr(dev, pd->id);
@@ -930,13 +929,12 @@ static int ocrdma_copy_cq_uresp(struct ocrdma_dev *dev, struct ocrdma_cq *cq,
int status;
struct ocrdma_ucontext *uctx = rdma_udata_to_drv_context(
udata, struct ocrdma_ucontext, ibucontext);
- struct ocrdma_create_cq_uresp uresp;
+ struct ocrdma_create_cq_uresp uresp = {};
/* this must be user flow! */
if (!udata)
return -EINVAL;
- memset(&uresp, 0, sizeof(uresp));
uresp.cq_id = cq->id;
uresp.page_size = PAGE_ALIGN(cq->len);
uresp.num_pages = 1;
@@ -1173,11 +1171,10 @@ static int ocrdma_copy_qp_uresp(struct ocrdma_qp *qp,
{
int status;
u64 usr_db;
- struct ocrdma_create_qp_uresp uresp;
+ struct ocrdma_create_qp_uresp uresp = {};
struct ocrdma_pd *pd = qp->pd;
struct ocrdma_dev *dev = get_ocrdma_dev(pd->ibpd.device);
- memset(&uresp, 0, sizeof(uresp));
usr_db = dev->nic_info.unmapped_db +
(pd->id * dev->nic_info.db_page_size);
uresp.qp_id = qp->id;
@@ -1730,9 +1727,8 @@ static int ocrdma_copy_srq_uresp(struct ocrdma_dev *dev, struct ocrdma_srq *srq,
struct ib_udata *udata)
{
int status;
- struct ocrdma_create_srq_uresp uresp;
+ struct ocrdma_create_srq_uresp uresp = {};
- memset(&uresp, 0, sizeof(uresp));
uresp.rq_dbid = srq->rq.dbid;
uresp.num_rq_pages = 1;
uresp.rq_page_addr[0] = virt_to_phys(srq->rq.va);
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 79190c5b8b50b0..1af908275ca729 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -690,9 +690,7 @@ static void qedr_db_recovery_del(struct qedr_dev *dev,
static int qedr_copy_cq_uresp(struct qedr_cq *cq, struct ib_udata *udata,
u32 db_offset)
{
- struct qedr_create_cq_uresp uresp;
-
- memset(&uresp, 0, sizeof(uresp));
+ struct qedr_create_cq_uresp uresp = {};
uresp.db_offset = db_offset;
uresp.icid = cq->icid;
@@ -1283,8 +1281,6 @@ static int qedr_copy_qp_uresp(struct qedr_dev *dev,
struct qedr_qp *qp, struct ib_udata *udata,
struct qedr_create_qp_uresp *uresp)
{
- memset(uresp, 0, sizeof(*uresp));
-
if (qedr_qp_has_sq(qp))
qedr_copy_sq_uresp(dev, uresp, qp);
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index e887f03a84d063..261f18a8368543 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -82,15 +82,13 @@ static void usnic_ib_fw_string_to_u64(char *fw_ver_str, u64 *fw_ver)
static int usnic_ib_fill_create_qp_resp(struct usnic_ib_qp_grp *qp_grp,
struct ib_udata *udata)
{
- struct usnic_ib_create_qp_resp resp;
+ struct usnic_ib_create_qp_resp resp = {};
struct pci_dev *pdev;
struct vnic_dev_bar *bar;
struct usnic_vnic_res_chunk *chunk;
struct usnic_ib_qp_grp_flow *default_flow;
int i, err;
- memset(&resp, 0, sizeof(resp));
-
pdev = usnic_vnic_get_pdev(qp_grp->vf->vnic);
if (!pdev) {
usnic_err("Failed to get pdev of qp_grp %d\n",
--
2.43.0