public inbox for linux-rdma@vger.kernel.org
 help / color / mirror / Atom feed
* [bug report] [rdma] RXE ODP test hangs with new DMA map API
@ 2025-05-21 12:48 Daisuke Matsuda
  2025-05-21 13:16 ` Daisuke Matsuda
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Daisuke Matsuda @ 2025-05-21 12:48 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, leon, jgg, zyjzyj2000

Hi,

After these two patches were merged into the for-next tree, the RXE ODP test always hangs:
   RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
   RDMA/umem: Store ODP access mask information in PFN
cf. https://lore.kernel.org/linux-rdma/cover.1745831017.git.leon@kernel.org/

Here is the console log:
```
$ ./build/bin/run_tests.py -v -k odp
test_odp_dc_traffic (tests.test_mlx5_dc.DCTest.test_odp_dc_traffic) ... skipped 'Can not run the test over non MLX5 device'
test_devx_rc_qp_odp_traffic (tests.test_mlx5_devx.Mlx5DevxRcTrafficTest.test_devx_rc_qp_odp_traffic) ... skipped 'Can not run the test over non MLX5 device'
test_odp_mkey_list_new_api (tests.test_mlx5_mkey.Mlx5MkeyTest.test_odp_mkey_list_new_api)
Create Mkeys above ODP MR, configure it with memory layout using the new API and ... skipped 'Could not open mlx5 context (This is not an MLX5 device)'
test_odp_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_async_prefetch_rc_traffic) ...


```

It looks like the python process is somehow stuck in uverbs_destroy_ufile_hw():
```
$ sudo cat /proc/1845/task/1845/stack
[<0>] uverbs_destroy_ufile_hw+0x24/0x100 [ib_uverbs]
[<0>] ib_uverbs_close+0x1b/0xc0 [ib_uverbs]
[<0>] __fput+0xea/0x2d0
[<0>] ____fput+0x15/0x20
[<0>] task_work_run+0x5d/0xa0
[<0>] do_exit+0x316/0xa50
[<0>] make_task_dead+0x81/0x160
[<0>] rewind_stack_and_make_dead+0x16/0x20
```

I am not sure about the root cause, but I hope we can fix this before the next merge window.

Thanks,
Daisuke

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [bug report] [rdma] RXE ODP test hangs with new DMA map API
  2025-05-21 12:48 [bug report] [rdma] RXE ODP test hangs with new DMA map API Daisuke Matsuda
@ 2025-05-21 13:16 ` Daisuke Matsuda
  2025-05-22  8:32 ` Leon Romanovsky
  2025-05-23 12:51 ` Daisuke Matsuda
  2 siblings, 0 replies; 4+ messages in thread
From: Daisuke Matsuda @ 2025-05-21 13:16 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, leon, jgg, zyjzyj2000


>    RDMA/umem: Store ODP access mask information in PFN

This one generates a build error. I fixed it manually and tried running the test again.
It still hung, this time with an ERROR:
```
$ ./build/bin/run_tests.py -v -k odp
test_odp_dc_traffic (tests.test_mlx5_dc.DCTest.test_odp_dc_traffic) ... skipped 'Can not run the test over non MLX5 device'
test_devx_rc_qp_odp_traffic (tests.test_mlx5_devx.Mlx5DevxRcTrafficTest.test_devx_rc_qp_odp_traffic) ... skipped 'Can not run the test over non MLX5 device'
test_odp_mkey_list_new_api (tests.test_mlx5_mkey.Mlx5MkeyTest.test_odp_mkey_list_new_api)
Create Mkeys above ODP MR, configure it with memory layout using the new API and ... skipped 'Could not open mlx5 context (This is not an MLX5 device)'
test_odp_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_async_prefetch_rc_traffic) ... skipped 'Advise MR with flags (0) and advice (0) is not supported'
test_odp_implicit_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_implicit_async_prefetch_rc_traffic) ... skipped 'ODP implicit is not supported'
test_odp_implicit_rc_traffic (tests.test_odp.OdpTestCase.test_odp_implicit_rc_traffic) ... skipped 'ODP implicit is not supported'
test_odp_implicit_sync_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_implicit_sync_prefetch_rc_traffic) ... skipped 'ODP implicit is not supported'
test_odp_prefetch_async_no_page_fault_rc_traffic (tests.test_odp.OdpTestCase.test_odp_prefetch_async_no_page_fault_rc_traffic) ... skipped 'Advise MR with flags (0) and advice (2) is not supported'
test_odp_prefetch_sync_no_page_fault_rc_traffic (tests.test_odp.OdpTestCase.test_odp_prefetch_sync_no_page_fault_rc_traffic) ... skipped 'Advise MR with flags (1) and advice (2) is not supported'
test_odp_qp_ex_rc_atomic_write (tests.test_odp.OdpTestCase.test_odp_qp_ex_rc_atomic_write) ... ERROR


```

Here is the stack of the process:
```
[<0>] rxe_ib_invalidate_range+0x3e/0xa0 [rdma_rxe]
[<0>] __mmu_notifier_invalidate_range_start+0x197/0x200
[<0>] unmap_vmas+0x184/0x190
[<0>] vms_clear_ptes+0x12c/0x190
[<0>] vms_complete_munmap_vmas+0x83/0x1d0
[<0>] do_vmi_align_munmap+0x17f/0x1b0
[<0>] do_vmi_munmap+0xd3/0x190
[<0>] __vm_munmap+0xbb/0x190
[<0>] __x64_sys_munmap+0x1b/0x30
[<0>] x64_sys_call+0x1ea8/0x2660
[<0>] do_syscall_64+0x7e/0x170
[<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
```
I think this one is related to the umem mutex.

So it looks like there are two problems:
the hang in rxe_ib_invalidate_range() comes from "RDMA/umem: Store ODP access mask information in PFN",
and the hang in uverbs_destroy_ufile_hw() comes from "RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage".

I'd welcome your help in fixing them.

Thanks,
Daisuke

On 2025/05/21 21:48, Daisuke Matsuda wrote:
> Hi,
> 
> After these two patches are merged to the for-next tree, RXE ODP test always hangs:
>    RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
>    RDMA/umem: Store ODP access mask information in PFN
> cf. https://lore.kernel.org/linux-rdma/cover.1745831017.git.leon@kernel.org/
> 
> Here is the console log:
> ```
> $ ./build/bin/run_tests.py -v -k odp
> test_odp_dc_traffic (tests.test_mlx5_dc.DCTest.test_odp_dc_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_devx_rc_qp_odp_traffic (tests.test_mlx5_devx.Mlx5DevxRcTrafficTest.test_devx_rc_qp_odp_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_odp_mkey_list_new_api (tests.test_mlx5_mkey.Mlx5MkeyTest.test_odp_mkey_list_new_api)
> Create Mkeys above ODP MR, configure it with memory layout using the new API and ... skipped 'Could not open mlx5 context (This is not an MLX5 device)'
> test_odp_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_async_prefetch_rc_traffic) ...
> 
> 
> ```
> 
> It looks that the python process is somehow stuck in uverbs_destroy_ufile_hw():
> ```
> $ sudo cat /proc/1845/task/1845/stack
> [<0>] uverbs_destroy_ufile_hw+0x24/0x100 [ib_uverbs]
> [<0>] ib_uverbs_close+0x1b/0xc0 [ib_uverbs]
> [<0>] __fput+0xea/0x2d0
> [<0>] ____fput+0x15/0x20
> [<0>] task_work_run+0x5d/0xa0
> [<0>] do_exit+0x316/0xa50
> [<0>] make_task_dead+0x81/0x160
> [<0>] rewind_stack_and_make_dead+0x16/0x20
> ```
> 
> I am not sure about the root cause but hope we can fix this before the next merge window.
> 
> Thanks,
> Daisuke


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [bug report] [rdma] RXE ODP test hangs with new DMA map API
  2025-05-21 12:48 [bug report] [rdma] RXE ODP test hangs with new DMA map API Daisuke Matsuda
  2025-05-21 13:16 ` Daisuke Matsuda
@ 2025-05-22  8:32 ` Leon Romanovsky
  2025-05-23 12:51 ` Daisuke Matsuda
  2 siblings, 0 replies; 4+ messages in thread
From: Leon Romanovsky @ 2025-05-22  8:32 UTC (permalink / raw)
  To: Daisuke Matsuda; +Cc: linux-rdma, linux-kernel, jgg, zyjzyj2000

On Wed, May 21, 2025 at 09:48:27PM +0900, Daisuke Matsuda wrote:
> Hi,
> 
> After these two patches are merged to the for-next tree, RXE ODP test always hangs:
>   RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
>   RDMA/umem: Store ODP access mask information in PFN
> cf. https://lore.kernel.org/linux-rdma/cover.1745831017.git.leon@kernel.org/
> 
> Here is the console log:
> ```
> $ ./build/bin/run_tests.py -v -k odp
> test_odp_dc_traffic (tests.test_mlx5_dc.DCTest.test_odp_dc_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_devx_rc_qp_odp_traffic (tests.test_mlx5_devx.Mlx5DevxRcTrafficTest.test_devx_rc_qp_odp_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_odp_mkey_list_new_api (tests.test_mlx5_mkey.Mlx5MkeyTest.test_odp_mkey_list_new_api)
> Create Mkeys above ODP MR, configure it with memory layout using the new API and ... skipped 'Could not open mlx5 context (This is not an MLX5 device)'
> test_odp_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_async_prefetch_rc_traffic) ...
> 
> 
> ```
> 
> It looks that the python process is somehow stuck in uverbs_destroy_ufile_hw():
> ```
> $ sudo cat /proc/1845/task/1845/stack
> [<0>] uverbs_destroy_ufile_hw+0x24/0x100 [ib_uverbs]
> [<0>] ib_uverbs_close+0x1b/0xc0 [ib_uverbs]
> [<0>] __fput+0xea/0x2d0
> [<0>] ____fput+0x15/0x20
> [<0>] task_work_run+0x5d/0xa0
> [<0>] do_exit+0x316/0xa50
> [<0>] make_task_dead+0x81/0x160
> [<0>] rewind_stack_and_make_dead+0x16/0x20
> ```
> 
> I am not sure about the root cause but hope we can fix this before the next merge window.

Can you please try this fix?

diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
index a1416626f61a5..0f67167ddddd1 100644
--- a/drivers/infiniband/sw/rxe/rxe_odp.c
+++ b/drivers/infiniband/sw/rxe/rxe_odp.c
@@ -137,7 +137,7 @@ static inline bool rxe_check_pagefault(struct ib_umem_odp *umem_odp,
        while (addr < iova + length) {
                idx = (addr - ib_umem_start(umem_odp)) >> umem_odp->page_shift;
 
-               if (!(umem_odp->map.pfn_list[idx] & perm)) {
+               if (!(umem_odp->map.pfn_list[idx] & HMM_PFN_VALID)) {
                        need_fault = true;
                        break;
               

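For context, the behavioral change in this hunk can be modeled outside the kernel. In the sketch below the bit positions are invented for illustration (the real HMM_PFN_* flags live in include/linux/hmm.h): an entry that is mapped but lacks the requested permission bits no longer forces a fault, which plausibly explains why the old check could refault forever.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented bit positions, for illustration only; not the real values. */
#define HMM_PFN_VALID  (UINT64_C(1) << 63)
#define ODP_READ_PERM  (UINT64_C(1) << 62)
#define ODP_WRITE_PERM (UINT64_C(1) << 61)

/* Old check: fault whenever the requested permission bits are absent
 * from the pfn_list entry. */
static bool need_fault_old(uint64_t pfn_entry, uint64_t perm)
{
	return !(pfn_entry & perm);
}

/* New check (the fix above): fault only when the page is not mapped. */
static bool need_fault_new(uint64_t pfn_entry)
{
	return !(pfn_entry & HMM_PFN_VALID);
}
```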
> 
> Thanks,
> Daisuke

^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [bug report] [rdma] RXE ODP test hangs with new DMA map API
  2025-05-21 12:48 [bug report] [rdma] RXE ODP test hangs with new DMA map API Daisuke Matsuda
  2025-05-21 13:16 ` Daisuke Matsuda
  2025-05-22  8:32 ` Leon Romanovsky
@ 2025-05-23 12:51 ` Daisuke Matsuda
  2 siblings, 0 replies; 4+ messages in thread
From: Daisuke Matsuda @ 2025-05-23 12:51 UTC (permalink / raw)
  To: linux-rdma, linux-kernel, leon, jgg, zyjzyj2000

Some additional information:

A NULL pointer dereference is observed while calling ibv_reg_mr(3) to allocate an MR for ODP,
and it is this oops that left the process stuck in uverbs_destroy_ufile_hw().
==========
[  488.242907] BUG: kernel NULL pointer dereference, address: 00000000000002fc
[  488.242923] #PF: supervisor read access in kernel mode
[  488.242932] #PF: error_code(0x0000) - not-present page
[  488.242940] PGD 1028eb067 P4D 1028eb067 PUD 105da0067 PMD 0
[  488.242951] Oops: Oops: 0000 [#1] SMP NOPTI
[  488.242960] CPU: 3 UID: 1000 PID: 1854 Comm: python3 Tainted: G        W           6.15.0-rc1+ #11 PREEMPT(voluntary)
[  488.242976] Tainted: [W]=WARN
[  488.242981] Hardware name: Trigkey Key N/Key N, BIOS KEYN101 09/02/2024
[  488.242990] RIP: 0010:hmm_dma_map_alloc+0x25/0x100
[  488.243002] Code: 90 90 90 90 90 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 49 89 d6 49 c1 e6 0c 41 55 41 54 53 49 39 ce 0f 82 c6 00 00 00 49 89 fc <f6> 87 fc 02 00 00 20 0f 84 af 00 00 00 49 89 f5 48 89 d3 49 89 cf
[  488.243024] RSP: 0018:ffffd3d3420eb830 EFLAGS: 00010246
[  488.243032] RAX: 0000000000001000 RBX: ffff8b727c7f7400 RCX: 0000000000001000
[  488.243042] RDX: 0000000000000001 RSI: ffff8b727c7f74b0 RDI: 0000000000000000
[  488.243052] RBP: ffffd3d3420eb858 R08: 0000000000000000 R09: 0000000000000000
[  488.243062] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[  488.243072] R13: 00007262a622a000 R14: 0000000000001000 R15: ffff8b727c7f74b0
[  488.243082] FS:  00007262a62a1080(0000) GS:ffff8b762ac3e000(0000) knlGS:0000000000000000
[  488.243094] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  488.243102] CR2: 00000000000002fc CR3: 000000010a1f0004 CR4: 0000000000f72ef0
[  488.243113] PKRU: 55555554
[  488.243117] Call Trace:
[  488.243123]  <TASK>
[  488.243131]  ib_init_umem_odp+0xb6/0x110 [ib_uverbs]
[  488.243151]  ib_umem_odp_get+0xf0/0x150 [ib_uverbs]
[  488.243166]  rxe_odp_mr_init_user+0x71/0x170 [rdma_rxe]
[  488.243181]  rxe_reg_user_mr+0x217/0x2e0 [rdma_rxe]
[  488.243194]  ib_uverbs_reg_mr+0x19e/0x2e0 [ib_uverbs]
[  488.243209]  ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xd9/0x150 [ib_uverbs]
[  488.243227]  ib_uverbs_cmd_verbs+0xd19/0xee0 [ib_uverbs]
[  488.243243]  ? mmap_region+0x63/0xd0
[  488.243251]  ? __pfx_ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x10/0x10 [ib_uverbs]
[  488.243272]  ib_uverbs_ioctl+0xba/0x130 [ib_uverbs]
[  488.243287]  __x64_sys_ioctl+0xa4/0xe0
[  488.243295]  x64_sys_call+0x1178/0x2660
[  488.243303]  do_syscall_64+0x7e/0x170
[  488.243310]  ? syscall_exit_to_user_mode+0x4e/0x250
[  488.243319]  ? do_syscall_64+0x8a/0x170
[  488.243326]  ? do_syscall_64+0x8a/0x170
[  488.243332]  ? syscall_exit_to_user_mode+0x4e/0x250
[  488.243340]  ? do_syscall_64+0x8a/0x170
[  488.243347]  ? syscall_exit_to_user_mode+0x4e/0x250
[  488.243355]  ? do_syscall_64+0x8a/0x170
[  488.243361]  ? do_user_addr_fault+0x1d2/0x8d0
[  488.243369]  ? irqentry_exit_to_user_mode+0x43/0x250
[  488.243378]  ? irqentry_exit+0x43/0x50
[  488.243384]  ? exc_page_fault+0x93/0x1d0
[  488.243391]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  488.243400] RIP: 0033:0x7262a6124ded
[  488.243406] Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
[  488.243428] RSP: 002b:00007fffd08c3960 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  488.243439] RAX: ffffffffffffffda RBX: 00007fffd08c39f0 RCX: 00007262a6124ded
[  488.243449] RDX: 00007fffd08c3a10 RSI: 00000000c0181b01 RDI: 0000000000000007
[  488.243459] RBP: 00007fffd08c39b0 R08: 0000000014107820 R09: 00007fffd08c3b44
[  488.243469] R10: 000000000000000c R11: 0000000000000246 R12: 00007fffd08c3b44
[  488.243478] R13: 000000000000000c R14: 00007fffd08c3b58 R15: 0000000014107960
[  488.243489]  </TASK>
[  488.243492] Modules linked in: rdma_rxe ip6_udp_tunnel udp_tunnel ib_uverbs ib_core xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat nf_tables qrtr xe snd_hda_codec_realtek drm_ttm_helper snd_hda_codec_generic drm_suballoc_helper snd_hda_scodec_component gpu_sched drm_gpuvm drm_exec drm_gpusvm snd_sof_pci_intel_tgl snd_sof_pci_intel_cnl snd_sof_intel_hda_generic snd_sof_pci snd_sof_xtensa_dsp snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda snd_hda_codec_hdmi rtw88_8821ce rtw88_8821c snd_sof rtw88_pci i915 rtw88_core intel_rapl_msr snd_sof_utils intel_rapl_common snd_soc_acpi_intel_match snd_soc_acpi x86_pkg_temp_thermal snd_soc_acpi_intel_sdca_quirks intel_powerclamp snd_sof_intel_hda_mlink mac80211 snd_hda_ext_core snd_soc_sdca binfmt_misc snd_soc_core snd_compress coretemp kvm_intel snd_hda_intel nls_iso8859_1 snd_intel_dspcfg btusb kvm snd_hda_codec btrtl btintel cmdlinepart snd_hwdep libarc4 btbcm snd_hda_core spi_nor
[  488.243531]  btmtk cfg80211 bluetooth snd_pcm mtd i2c_algo_bit ee1004 snd_timer rapl wmi_bmof drm_buddy snd i2c_i801 mei_me ttm intel_pmc_core mei intel_cstate spi_intel_pci i2c_smbus soundcore spi_intel drm_display_helper pmt_telemetry igen6_edac video pmt_class intel_vsec wmi acpi_tad acpi_pad mac_hid sch_fq_codel overlay iptable_filter ip6table_filter ip6_tables br_netfilter bridge stp llc arp_tables dm_multipath msr efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 spi_pxa2xx_platform dw_dmac dw_dmac_core 8250_dw spi_pxa2xx_core polyval_clmulni polyval_generic ghash_clmulni_intel r8169 sha256_ssse3 intel_lpss_pci sha1_ssse3 intel_lpss ahci idma64 realtek libahci pinctrl_alderlake aesni_intel crypto_simd cryptd
[  488.245175] CR2: 00000000000002fc
[  488.245901] ---[ end trace 0000000000000000 ]---
[  488.345979] RIP: 0010:hmm_dma_map_alloc+0x25/0x100
[  488.346754] Code: 90 90 90 90 90 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 49 89 d6 49 c1 e6 0c 41 55 41 54 53 49 39 ce 0f 82 c6 00 00 00 49 89 fc <f6> 87 fc 02 00 00 20 0f 84 af 00 00 00 49 89 f5 48 89 d3 49 89 cf
[  488.347530] RSP: 0018:ffffd3d3420eb830 EFLAGS: 00010246
[  488.348320] RAX: 0000000000001000 RBX: ffff8b727c7f7400 RCX: 0000000000001000
[  488.349112] RDX: 0000000000000001 RSI: ffff8b727c7f74b0 RDI: 0000000000000000
[  488.349894] RBP: ffffd3d3420eb858 R08: 0000000000000000 R09: 0000000000000000
[  488.350669] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[  488.351439] R13: 00007262a622a000 R14: 0000000000001000 R15: ffff8b727c7f74b0
[  488.352210] FS:  00007262a62a1080(0000) GS:ffff8b762ac3e000(0000) knlGS:0000000000000000
[  488.352963] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  488.353699] CR2: 00000000000002fc CR3: 000000010a1f0004 CR4: 0000000000f72ef0
[  488.354436] PKRU: 55555554
[  488.355173] note: python3[1854] exited with irqs disabled
==========

The root cause is that dev is NULL in hmm_dma_map_alloc(). RXE passes NULL
for dma_device when calling ib_register_device(), which is allowed
according to the comment in ib_register_device():
==========
int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map,
                       size_t nr_entries, size_t dma_entry_size)
{
         bool dma_need_sync = false;
         bool use_iova;

         if (!(nr_entries * PAGE_SIZE / dma_entry_size))
                 return -EINVAL;

         /*
          * The HMM API violates our normal DMA buffer ownership rules and can't
          * transfer buffer ownership.  The dma_addressing_limited() check is a
          * best approximation to ensure no swiotlb buffering happens.
          */
#ifdef CONFIG_DMA_NEED_SYNC
         dma_need_sync = !dev->dma_skip_sync; // <--- NULL pointer dereference
#endif /* CONFIG_DMA_NEED_SYNC */
         if (dma_need_sync || dma_addressing_limited(dev)) // <--- NULL pointer dereference
                 return -EOPNOTSUPP;
==========

I have found that the issue can be resolved by adding some NULL checks in hmm.c,
so I will make a patch and post the fix in a day or so.
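As a rough stand-alone model of that approach (simplified stand-in types and constants, not the real kernel definitions, and omitting the CONFIG_DMA_NEED_SYNC ifdef and dma_addressing_limited() path), the kind of NULL check described above would look like this:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096 /* stand-in for the kernel constant */

/* Simplified stand-in for struct device; only the field relevant here. */
struct device {
	bool dma_skip_sync;
};

/*
 * Sketch of hmm_dma_map_alloc() with a hypothetical NULL-device guard:
 * a NULL dev (as rxe ends up with via ib_register_device()) skips the
 * dev->dma_skip_sync dereference instead of oopsing on it.
 */
static int hmm_dma_map_alloc_sketch(struct device *dev, size_t nr_entries,
				    size_t dma_entry_size)
{
	bool dma_need_sync = false;

	if (!(nr_entries * PAGE_SIZE / dma_entry_size))
		return -EINVAL;

	if (dev) /* hypothetical guard */
		dma_need_sync = !dev->dma_skip_sync;

	if (dma_need_sync)
		return -EOPNOTSUPP;

	return 0; /* would proceed to allocate the pfn/dma lists */
}
```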

Thanks,
Daisuke

On 2025/05/21 21:48, Daisuke Matsuda wrote:
> Hi,
> 
> After these two patches are merged to the for-next tree, RXE ODP test always hangs:
>    RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage
>    RDMA/umem: Store ODP access mask information in PFN
> cf. https://lore.kernel.org/linux-rdma/cover.1745831017.git.leon@kernel.org/
> 
> Here is the console log:
> ```
> $ ./build/bin/run_tests.py -v -k odp
> test_odp_dc_traffic (tests.test_mlx5_dc.DCTest.test_odp_dc_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_devx_rc_qp_odp_traffic (tests.test_mlx5_devx.Mlx5DevxRcTrafficTest.test_devx_rc_qp_odp_traffic) ... skipped 'Can not run the test over non MLX5 device'
> test_odp_mkey_list_new_api (tests.test_mlx5_mkey.Mlx5MkeyTest.test_odp_mkey_list_new_api)
> Create Mkeys above ODP MR, configure it with memory layout using the new API and ... skipped 'Could not open mlx5 context (This is not an MLX5 device)'
> test_odp_async_prefetch_rc_traffic (tests.test_odp.OdpTestCase.test_odp_async_prefetch_rc_traffic) ...
> 
> 
> ```
> 
> It looks that the python process is somehow stuck in uverbs_destroy_ufile_hw():
> ```
> $ sudo cat /proc/1845/task/1845/stack
> [<0>] uverbs_destroy_ufile_hw+0x24/0x100 [ib_uverbs]
> [<0>] ib_uverbs_close+0x1b/0xc0 [ib_uverbs]
> [<0>] __fput+0xea/0x2d0
> [<0>] ____fput+0x15/0x20
> [<0>] task_work_run+0x5d/0xa0
> [<0>] do_exit+0x316/0xa50
> [<0>] make_task_dead+0x81/0x160
> [<0>] rewind_stack_and_make_dead+0x16/0x20

> ```
> 
> I am not sure about the root cause but hope we can fix this before the next merge window.
> 
> Thanks,
> Daisuke


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2025-05-23 12:51 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-21 12:48 [bug report] [rdma] RXE ODP test hangs with new DMA map API Daisuke Matsuda
2025-05-21 13:16 ` Daisuke Matsuda
2025-05-22  8:32 ` Leon Romanovsky
2025-05-23 12:51 ` Daisuke Matsuda

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox