* [PATCH for-next v2] RDMA/rxe: Remove redundant page presence check
@ 2025-06-08 9:59 Daisuke Matsuda
From: Daisuke Matsuda @ 2025-06-08 9:59 UTC (permalink / raw)
To: linux-rdma, leon, jgg, zyjzyj2000; +Cc: Daisuke Matsuda
hmm_pfn_to_page() does not return NULL. ib_umem_odp_map_dma_and_lock()
returns an error if the target pages cannot be mapped before the timeout
expires, so these checks can safely be removed.
Signed-off-by: Daisuke Matsuda <dskmtsd@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_odp.c | 13 +------------
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
index dbc5a5600eb7..02841346e30c 100644
--- a/drivers/infiniband/sw/rxe/rxe_odp.c
+++ b/drivers/infiniband/sw/rxe/rxe_odp.c
@@ -203,8 +203,6 @@ static int __rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
user_va = kmap_local_page(page);
- if (!user_va)
- return -EFAULT;
src = (dir == RXE_TO_MR_OBJ) ? addr : user_va;
dest = (dir == RXE_TO_MR_OBJ) ? user_va : addr;
@@ -286,8 +284,6 @@ static enum resp_states rxe_odp_do_atomic_op(struct rxe_mr *mr, u64 iova,
idx = rxe_odp_iova_to_index(umem_odp, iova);
page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
- if (!page)
- return RESPST_ERR_RKEY_VIOLATION;
if (unlikely(page_offset & 0x7)) {
rxe_dbg_mr(mr, "iova not aligned\n");
@@ -352,10 +348,6 @@ int rxe_odp_flush_pmem_iova(struct rxe_mr *mr, u64 iova,
page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
- if (!page) {
- mutex_unlock(&umem_odp->umem_mutex);
- return -EFAULT;
- }
bytes = min_t(unsigned int, length,
mr_page_size(mr) - page_offset);
@@ -398,10 +390,7 @@ enum resp_states rxe_odp_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
index = rxe_odp_iova_to_index(umem_odp, iova);
page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
- if (!page) {
- mutex_unlock(&umem_odp->umem_mutex);
- return RESPST_ERR_RKEY_VIOLATION;
- }
+
/* See IBA A19.4.2 */
if (unlikely(page_offset & 0x7)) {
mutex_unlock(&umem_odp->umem_mutex);
--
2.43.0
* Re: [PATCH for-next v2] RDMA/rxe: Remove redundant page presence check
@ 2025-06-08 11:23 ` Zhu Yanjun
From: Zhu Yanjun @ 2025-06-08 11:23 UTC (permalink / raw)
To: Daisuke Matsuda, linux-rdma, leon, jgg, zyjzyj2000
On 2025/6/8 11:59, Daisuke Matsuda wrote:
> hmm_pfn_to_page() does not return NULL. ib_umem_odp_map_dma_and_lock()
> returns an error if the target pages cannot be mapped before the timeout
> expires, so these checks can safely be removed.
>
> Signed-off-by: Daisuke Matsuda <dskmtsd@gmail.com>
> ---
> drivers/infiniband/sw/rxe/rxe_odp.c | 13 +------------
> 1 file changed, 1 insertion(+), 12 deletions(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
> index dbc5a5600eb7..02841346e30c 100644
> --- a/drivers/infiniband/sw/rxe/rxe_odp.c
> +++ b/drivers/infiniband/sw/rxe/rxe_odp.c
> @@ -203,8 +203,6 @@ static int __rxe_odp_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
>
> page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
> user_va = kmap_local_page(page);
> - if (!user_va)
> - return -EFAULT;
>
> src = (dir == RXE_TO_MR_OBJ) ? addr : user_va;
> dest = (dir == RXE_TO_MR_OBJ) ? user_va : addr;
> @@ -286,8 +284,6 @@ static enum resp_states rxe_odp_do_atomic_op(struct rxe_mr *mr, u64 iova,
> idx = rxe_odp_iova_to_index(umem_odp, iova);
> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
> page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
hmm_pfn_to_page() ultimately expands to "(mem_map + ((pfn) - ARCH_PFN_OFFSET))".
The call chain is:
hmm_pfn_to_page() -> pfn_to_page() -> __pfn_to_page() -> (mem_map + ((pfn) - ARCH_PFN_OFFSET))
Thus, I am fine with it.
> - if (!page)
> - return RESPST_ERR_RKEY_VIOLATION;
>
> if (unlikely(page_offset & 0x7)) {
Normally the page_offset error check should come right after the line
"page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);".
Why is this check placed after hmm_pfn_to_page()?
> rxe_dbg_mr(mr, "iova not aligned\n");
> @@ -352,10 +348,6 @@ int rxe_odp_flush_pmem_iova(struct rxe_mr *mr, u64 iova,
> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>
> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
> - if (!page) {
> - mutex_unlock(&umem_odp->umem_mutex);
> - return -EFAULT;
> - }
>
> bytes = min_t(unsigned int, length,
> mr_page_size(mr) - page_offset);
> @@ -398,10 +390,7 @@ enum resp_states rxe_odp_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
> index = rxe_odp_iova_to_index(umem_odp, iova);
> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
> - if (!page) {
> - mutex_unlock(&umem_odp->umem_mutex);
> - return RESPST_ERR_RKEY_VIOLATION;
> - }
> +
> /* See IBA A19.4.2 */
> if (unlikely(page_offset & 0x7)) {
Ditto; shouldn't the page_offset error check come right after the line
"page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);"?
Thanks
Yanjun.Zhu
> mutex_unlock(&umem_odp->umem_mutex);
* Re: [PATCH for-next v2] RDMA/rxe: Remove redundant page presence check
@ 2025-06-09 7:43 ` Zhu Yanjun
From: Zhu Yanjun @ 2025-06-09 7:43 UTC (permalink / raw)
To: Daisuke Matsuda, linux-rdma, leon, jgg, zyjzyj2000
On 2025/6/8 13:23, Zhu Yanjun wrote:
> On 2025/6/8 11:59, Daisuke Matsuda wrote:
>> hmm_pfn_to_page() does not return NULL. ib_umem_odp_map_dma_and_lock()
>> returns an error if the target pages cannot be mapped before the timeout
>> expires, so these checks can safely be removed.
>>
>> Signed-off-by: Daisuke Matsuda <dskmtsd@gmail.com>
>> ---
>> drivers/infiniband/sw/rxe/rxe_odp.c | 13 +------------
>> 1 file changed, 1 insertion(+), 12 deletions(-)
>>
>> diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/
>> sw/rxe/rxe_odp.c
>> index dbc5a5600eb7..02841346e30c 100644
>> --- a/drivers/infiniband/sw/rxe/rxe_odp.c
>> +++ b/drivers/infiniband/sw/rxe/rxe_odp.c
>> @@ -203,8 +203,6 @@ static int __rxe_odp_mr_copy(struct rxe_mr *mr,
>> u64 iova, void *addr,
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
>> user_va = kmap_local_page(page);
>> - if (!user_va)
>> - return -EFAULT;
>> src = (dir == RXE_TO_MR_OBJ) ? addr : user_va;
>> dest = (dir == RXE_TO_MR_OBJ) ? user_va : addr;
>> @@ -286,8 +284,6 @@ static enum resp_states
>> rxe_odp_do_atomic_op(struct rxe_mr *mr, u64 iova,
>> idx = rxe_odp_iova_to_index(umem_odp, iova);
>> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[idx]);
>
> hmm_pfn_to_page() ultimately expands to "(mem_map + ((pfn) -
> ARCH_PFN_OFFSET))".
>
> The call chain is:
>
> hmm_pfn_to_page() -> pfn_to_page() -> __pfn_to_page() -> (mem_map +
> ((pfn) - ARCH_PFN_OFFSET))
>
> Thus, I am fine with it.
>
>> - if (!page)
>> - return RESPST_ERR_RKEY_VIOLATION;
>> if (unlikely(page_offset & 0x7)) {
>
> Normally the page_offset error check should come right after the line
> "page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);".
>
> Why is this check placed after hmm_pfn_to_page()?
>
>> rxe_dbg_mr(mr, "iova not aligned\n");
>> @@ -352,10 +348,6 @@ int rxe_odp_flush_pmem_iova(struct rxe_mr *mr,
>> u64 iova,
>> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
>> - if (!page) {
>> - mutex_unlock(&umem_odp->umem_mutex);
>> - return -EFAULT;
>> - }
>> bytes = min_t(unsigned int, length,
>> mr_page_size(mr) - page_offset);
>> @@ -398,10 +390,7 @@ enum resp_states rxe_odp_do_atomic_write(struct
>> rxe_mr *mr, u64 iova, u64 value)
>> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>> index = rxe_odp_iova_to_index(umem_odp, iova);
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
>> - if (!page) {
>> - mutex_unlock(&umem_odp->umem_mutex);
>> - return RESPST_ERR_RKEY_VIOLATION;
>> - }
>> +
>> /* See IBA A19.4.2 */
>> if (unlikely(page_offset & 0x7)) {
>
> Ditto; shouldn't the page_offset error check come right after the line
> "page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);"?
Other than that, I am fine with this commit.
Thanks,
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Zhu Yanjun
>
> Thanks
> Yanjun.Zhu
>
>> mutex_unlock(&umem_odp->umem_mutex);
>
--
Best Regards,
Yanjun.Zhu
* Re: [PATCH for-next v2] RDMA/rxe: Remove redundant page presence check
@ 2025-06-11 15:12 ` Daisuke Matsuda
From: Daisuke Matsuda @ 2025-06-11 15:12 UTC (permalink / raw)
To: Zhu Yanjun, linux-rdma, leon, jgg, zyjzyj2000
On 2025/06/08 20:23, Zhu Yanjun wrote:
> On 2025/6/8 11:59, Daisuke Matsuda wrote:
>> hmm_pfn_to_page() does not return NULL. ib_umem_odp_map_dma_and_lock()
>> returns an error if the target pages cannot be mapped before the timeout
>> expires, so these checks can safely be removed.
>>
>> Signed-off-by: Daisuke Matsuda <dskmtsd@gmail.com>
>> ---
<...>
>
>> - if (!page)
>> - return RESPST_ERR_RKEY_VIOLATION;
>> if (unlikely(page_offset & 0x7)) {
>
> Normally the page_offset error check should come right after the line "page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);".
>
> Why is this check placed after hmm_pfn_to_page()?
>
>> rxe_dbg_mr(mr, "iova not aligned\n");
>> @@ -352,10 +348,6 @@ int rxe_odp_flush_pmem_iova(struct rxe_mr *mr, u64 iova,
>> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
>> - if (!page) {
>> - mutex_unlock(&umem_odp->umem_mutex);
>> - return -EFAULT;
>> - }
>> bytes = min_t(unsigned int, length,
>> mr_page_size(mr) - page_offset);
>> @@ -398,10 +390,7 @@ enum resp_states rxe_odp_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
>> page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
>> index = rxe_odp_iova_to_index(umem_odp, iova);
>> page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
>> - if (!page) {
>> - mutex_unlock(&umem_odp->umem_mutex);
>> - return RESPST_ERR_RKEY_VIOLATION;
>> - }
>> +
>> /* See IBA A19.4.2 */
>> if (unlikely(page_offset & 0x7)) {
>
> Ditto; shouldn't the page_offset error check come right after the line "page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);"?
That is a sensible question.
I will submit a v3 patch to move the error checks.
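For illustration, the reordering might look like the following in rxe_odp_do_atomic_write(); this is a hypothetical sketch against the code as shown in v2, not the actual v3 patch, and the surrounding context lines may differ:

```diff
 	page_offset = rxe_odp_iova_to_page_offset(umem_odp, iova);
+
+	/* See IBA A19.4.2: reject a misaligned iova before the page lookup */
+	if (unlikely(page_offset & 0x7)) {
+		mutex_unlock(&umem_odp->umem_mutex);
+		return RESPST_ERR_RKEY_VIOLATION;
+	}
+
 	index = rxe_odp_iova_to_index(umem_odp, iova);
 	page = hmm_pfn_to_page(umem_odp->map.pfn_list[index]);
 
-	/* See IBA A19.4.2 */
-	if (unlikely(page_offset & 0x7)) {
-		mutex_unlock(&umem_odp->umem_mutex);
-		return RESPST_ERR_RKEY_VIOLATION;
-	}
```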
Thank you,
Daisuke