From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 24 May 2025 09:52:48 +0200
Subject: Re: [PATCH for-next v1] RDMA/core: Avoid hmm_dma_map_alloc() for virtual DMA devices
From: Zhu Yanjun
To: Daisuke Matsuda, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, leon@kernel.org, jgg@ziepe.ca, zyjzyj2000@gmail.com
Cc: hch@infradead.org
References: <20250523184701.11004-1-dskmtsd@gmail.com>
In-Reply-To:
X-Mailing-List: linux-rdma@vger.kernel.org

On 2025/5/24 9:31, Zhu Yanjun wrote:
> On 2025/5/23 20:47, Daisuke Matsuda wrote:
>> Drivers such as rxe, which use virtual DMA, must not call into the DMA
>> mapping core since they lack physical DMA capabilities. Otherwise, a NULL
>> pointer dereference is observed as shown below. This patch ensures the
>> RDMA core handles virtual and physical DMA paths appropriately.
>>
>> This fixes the following kernel oops:
>>
>>   BUG: kernel NULL pointer dereference, address: 00000000000002fc
>>   #PF: supervisor read access in kernel mode
>>   #PF: error_code(0x0000) - not-present page
>>   PGD 1028eb067 P4D 1028eb067 PUD 105da0067 PMD 0
>>   Oops: Oops: 0000 [#1] SMP NOPTI
>>   CPU: 3 UID: 1000 PID: 1854 Comm: python3 Tainted: G        W           6.15.0-rc1+ #11 PREEMPT(voluntary)
>>   Tainted: [W]=WARN
>>   Hardware name: Trigkey Key N/Key N, BIOS KEYN101 09/02/2024
>>   RIP: 0010:hmm_dma_map_alloc+0x25/0x100
>>   Code: 90 90 90 90 90 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 49 89 d6 49 c1 e6 0c 41 55 41 54 53 49 39 ce 0f 82 c6 00 00 00 49 89 fc 87 fc 02 00 00 20 0f 84 af 00 00 00 49 89 f5 48 89 d3 49 89 cf
>>   RSP: 0018:ffffd3d3420eb830 EFLAGS: 00010246
>>   RAX: 0000000000001000 RBX: ffff8b727c7f7400 RCX: 0000000000001000
>>   RDX: 0000000000000001 RSI: ffff8b727c7f74b0 RDI: 0000000000000000
>>   RBP: ffffd3d3420eb858 R08: 0000000000000000 R09: 0000000000000000
>>   R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
>>   R13: 00007262a622a000 R14: 0000000000001000 R15: ffff8b727c7f74b0
>>   FS:  00007262a62a1080(0000) GS:ffff8b762ac3e000(0000) knlGS:0000000000000000
>>   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>   CR2: 00000000000002fc CR3: 000000010a1f0004 CR4: 0000000000f72ef0
>>   PKRU: 55555554
>>   Call Trace:
>>    <TASK>
>>    ib_init_umem_odp+0xb6/0x110 [ib_uverbs]
>>    ib_umem_odp_get+0xf0/0x150 [ib_uverbs]
>>    rxe_odp_mr_init_user+0x71/0x170 [rdma_rxe]
>>    rxe_reg_user_mr+0x217/0x2e0 [rdma_rxe]
>>    ib_uverbs_reg_mr+0x19e/0x2e0 [ib_uverbs]
>>    ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xd9/0x150 [ib_uverbs]
>>    ib_uverbs_cmd_verbs+0xd19/0xee0 [ib_uverbs]
>>    ? mmap_region+0x63/0xd0
>>    ? __pfx_ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x10/0x10 [ib_uverbs]
>>    ib_uverbs_ioctl+0xba/0x130 [ib_uverbs]
>>    __x64_sys_ioctl+0xa4/0xe0
>>    x64_sys_call+0x1178/0x2660
>>    do_syscall_64+0x7e/0x170
>>    ? syscall_exit_to_user_mode+0x4e/0x250
>>    ? do_syscall_64+0x8a/0x170
>>    ? do_syscall_64+0x8a/0x170
>>    ? syscall_exit_to_user_mode+0x4e/0x250
>>    ? do_syscall_64+0x8a/0x170
>>    ? syscall_exit_to_user_mode+0x4e/0x250
>>    ? do_syscall_64+0x8a/0x170
>>    ? do_user_addr_fault+0x1d2/0x8d0
>>    ? irqentry_exit_to_user_mode+0x43/0x250
>>    ? irqentry_exit+0x43/0x50
>>    ? exc_page_fault+0x93/0x1d0
>>    entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>   RIP: 0033:0x7262a6124ded
>>   Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
>>   RSP: 002b:00007fffd08c3960 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
>>   RAX: ffffffffffffffda RBX: 00007fffd08c39f0 RCX: 00007262a6124ded
>>   RDX: 00007fffd08c3a10 RSI: 00000000c0181b01 RDI: 0000000000000007
>>   RBP: 00007fffd08c39b0 R08: 0000000014107820 R09: 00007fffd08c3b44
>>   R10: 000000000000000c R11: 0000000000000246 R12: 00007fffd08c3b44
>>   R13: 000000000000000c R14: 00007fffd08c3b58 R15: 0000000014107960
>>    </TASK>
>>
>> Fixes: 1efe8c0670d6 ("RDMA/core: Convert UMEM ODP DMA mapping to caching IOVA and page linkage")
>> Closes: https://lore.kernel.org/all/3e8f343f-7d66-4f7a-9f08-3910623e322f@gmail.com/
>> Signed-off-by: Daisuke Matsuda
>> ---
>>  drivers/infiniband/core/device.c   | 24 ++++++++++++++++++++++++
>>  drivers/infiniband/core/umem_odp.c |  6 +++---
>>  include/rdma/ib_verbs.h            | 12 ++++++++++++
>>  3 files changed, 39 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
>> index b4e3e4beb7f4..8be4797c66ec 100644
>> --- a/drivers/infiniband/core/device.c
>> +++ b/drivers/infiniband/core/device.c
>> @@ -2864,6 +2864,30 @@ int ib_dma_virt_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents)
>>      return nents;
>>  }
>>  EXPORT_SYMBOL(ib_dma_virt_map_sg);
>> +int ib_dma_virt_map_alloc(struct device *dev, struct hmm_dma_map *map,
>> +              size_t nr_entries, size_t dma_entry_size)
>> +{
>> +    if (!(nr_entries * PAGE_SIZE / dma_entry_size))
>> +        return -EINVAL;
>> +
>> +    map->dma_entry_size = dma_entry_size;
>> +    map->pfn_list = kvcalloc(nr_entries, sizeof(*map->pfn_list),
>> +                 GFP_KERNEL | __GFP_NOWARN);
>> +    if (!map->pfn_list)
>> +        return -ENOMEM;
>> +
>> +    map->dma_list = kvcalloc(nr_entries, sizeof(*map->dma_list),
>> +                 GFP_KERNEL | __GFP_NOWARN);
>> +    if (!map->dma_list)
>> +        goto err_dma;
>> +
>> +    return 0;
>> +
>> +err_dma:
>> +    kvfree(map->pfn_list);
>> +    return -ENOMEM;
>> +}
>> +EXPORT_SYMBOL(ib_dma_virt_map_alloc);
>>
>>  #endif /* CONFIG_INFINIBAND_VIRT_DMA */
>>
>>  static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = {
>> diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
>> index 51d518989914..aa03f3fc84d0 100644
>> --- a/drivers/infiniband/core/umem_odp.c
>> +++ b/drivers/infiniband/core/umem_odp.c
>> @@ -75,9 +75,9 @@ static int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
>>      if (unlikely(end < page_size))
>>          return -EOVERFLOW;
>>
>> -    ret = hmm_dma_map_alloc(dev->dma_device, &umem_odp->map,
>> -                (end - start) >> PAGE_SHIFT,
>> -                1 << umem_odp->page_shift);
>> +    ret = ib_dma_map_alloc(dev, &umem_odp->map,
>> +                   (end - start) >> PAGE_SHIFT,
>> +                   1 << umem_odp->page_shift);
>>      if (ret)
>>          return ret;
>>
>> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
>> index b06a0ed81bdd..10813f348b99 100644
>> --- a/include/rdma/ib_verbs.h
>> +++ b/include/rdma/ib_verbs.h
>> @@ -36,6 +36,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>  #include
>>  #include
>>  #include
>> @@ -4221,6 +4222,17 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
>>                     dma_attrs);
>>  }
>>
>> +int ib_dma_virt_map_alloc(struct device *dev, struct hmm_dma_map *map,
>> +              size_t nr_entries, size_t dma_entry_size);
>> +static inline int ib_dma_map_alloc(struct ib_device *dev, struct hmm_dma_map *map,
>> +                   size_t nr_entries, size_t dma_entry_size)
>> +{
>> +    if (ib_uses_virt_dma(dev))
>> +        return ib_dma_virt_map_alloc(dev->dma_device, map, nr_entries,
>> +                         dma_entry_size);
>
> Will other emulated RDMA device drivers also call ib_dma_virt_map_alloc(), or only rxe?

As far as I know, other emulated RDMA drivers have also implemented ODP, and they work well with the current hmm_dma_map_alloc(). So perhaps the check above, "if (ib_uses_virt_dma(dev))", should be changed to test for the rxe device specifically, and only then call ib_dma_virt_map_alloc().

Yanjun.Zhu

>
> Zhu Yanjun
>
>> +    return hmm_dma_map_alloc(dev->dma_device, map, nr_entries, dma_entry_size);
>> +}
>> +
>>  /**
>>   * ib_dma_map_sgtable_attrs - Map a scatter/gather table to DMA addresses
>>    * @dev: The device for which the DMA addresses are to be created

-- 
Best Regards,
Yanjun.Zhu