From: Leon Romanovsky <leon@kernel.org>
To: lirongqing <lirongqing@baidu.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>,
	huangjunxian6@hisilicon.com, linux-rdma@vger.kernel.org
Subject: Re: [PATCH][v3] RDMA/core: Prevent soft lockup during large user memory region cleanup
Date: Mon, 24 Nov 2025 12:07:34 +0200	[thread overview]
Message-ID: <20251124100734.GA12483@unreal> (raw)
In-Reply-To: <20251124050621.2622-1-lirongqing@baidu.com>

On Mon, Nov 24, 2025 at 01:06:21PM +0800, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
> 
> When a process exits with numerous large, pinned memory regions consisting
> of 4KB pages, cleaning up the memory regions through __ib_umem_release()
> may cause soft lockups. This is because unpin_user_page_range_dirty_lock()
> is called in a tight loop to unpin and release the pages without yielding
> the CPU.
> 
>  watchdog: BUG: soft lockup - CPU#44 stuck for 26s! [python3:73464]
>  Kernel panic - not syncing: softlockup: hung tasks
>  CPU: 44 PID: 73464 Comm: python3 Tainted: G           OEL
> 
>  asm_sysvec_apic_timer_interrupt+0x1b/0x20
>  RIP: 0010:free_unref_page+0xff/0x190
> 
>   ? free_unref_page+0xe3/0x190
>   __put_page+0x77/0xe0
>   put_compound_head+0xed/0x100
>   unpin_user_page_range_dirty_lock+0xb2/0x180
>   __ib_umem_release+0x57/0xb0 [ib_core]
>   ib_umem_release+0x3f/0xd0 [ib_core]
>   mlx5_ib_dereg_mr+0x2e9/0x440 [mlx5_ib]
>   ib_dereg_mr_user+0x43/0xb0 [ib_core]
>   uverbs_free_mr+0x15/0x20 [ib_uverbs]
>   destroy_hw_idr_uobject+0x21/0x60 [ib_uverbs]
>   uverbs_destroy_uobject+0x38/0x1b0 [ib_uverbs]
>   __uverbs_cleanup_ufile+0xd1/0x150 [ib_uverbs]
>   uverbs_destroy_ufile_hw+0x3f/0x100 [ib_uverbs]
>   ib_uverbs_close+0x1f/0xb0 [ib_uverbs]
>   __fput+0x9c/0x280
>   ____fput+0xe/0x20
>   task_work_run+0x6a/0xb0
>   do_exit+0x217/0x3c0
>   do_group_exit+0x3b/0xb0
>   get_signal+0x150/0x900
>   arch_do_signal_or_restart+0xde/0x100
>   exit_to_user_mode_loop+0xc4/0x160
>   exit_to_user_mode_prepare+0xa0/0xb0
>   syscall_exit_to_user_mode+0x27/0x50
>   do_syscall_64+0x63/0xb0
> 
> Fix the soft lockup by adding cond_resched() calls in __ib_umem_release().
> To minimize the performance impact on releasing memory regions, introduce
> RESCHED_LOOP_CNT_THRESHOLD and call cond_resched() only once per that many
> loop iterations; since the loop counter starts at zero, cond_resched() is
> also called on the very first iteration.
> 
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
> Changes since v2: limit calling cond_resched() to once per 0x1000 iterations

It is too late for a v3, because v2 was already merged.
Please send a separate patch.

Thanks

> 
>  drivers/infiniband/core/umem.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index c5b6863..ff540a2 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -45,6 +45,8 @@
>  
>  #include "uverbs.h"
>  
> +#define RESCHED_LOOP_CNT_THRESHOLD 0x1000
> +
>  static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
>  {
>  	bool make_dirty = umem->writable && dirty;
> @@ -55,10 +57,14 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
>  		ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
>  					   DMA_BIDIRECTIONAL, 0);
>  
> -	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i)
> +	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) {
>  		unpin_user_page_range_dirty_lock(sg_page(sg),
>  			DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);
>  
> +		if (!(i % RESCHED_LOOP_CNT_THRESHOLD))
> +			cond_resched();
> +	}
> +
>  	sg_free_append_table(&umem->sgt_append);
>  }
>  
> -- 
> 2.9.4
> 
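
For reference, a minimal sketch of __ib_umem_release() with the quoted hunk
applied; the declarations and the if (dirty) check outside the diff context
are assumed from the upstream umem.c:

#define RESCHED_LOOP_CNT_THRESHOLD 0x1000

static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
{
	bool make_dirty = umem->writable && dirty;
	struct scatterlist *sg;
	unsigned int i;

	if (dirty)
		ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
					   DMA_BIDIRECTIONAL, 0);

	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) {
		unpin_user_page_range_dirty_lock(sg_page(sg),
			DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);

		/* Yield once every 0x1000 scatterlist entries (and on the
		 * first iteration, since i starts at 0) so that huge
		 * regions do not trip the soft-lockup watchdog.
		 */
		if (!(i % RESCHED_LOOP_CNT_THRESHOLD))
			cond_resched();
	}

	sg_free_append_table(&umem->sgt_append);
}

The modulo check is cheap relative to the unpin work, and cond_resched()
itself runs at most once per 0x1000 scatterlist entries.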
