From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 5 Sep 2022 14:48:38 +0300
From: Leon Romanovsky
To: Jason Gunthorpe, Andrew Morton, linux-mm@kvack.org
Cc: Yishai Hadas, linux-rdma@vger.kernel.org, Maor Gottlieb, linux-kernel@vger.kernel.org, Thomas Gleixner
Subject: Re: [PATCH RESEND rdma-rc] IB/core: Fix a nested dead lock as part of ODP flow
References: <74d93541ea533ef7daec6f126deb1072500aeb16.1661251841.git.leonro@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Aug 29, 2022 at 11:15:56AM +0300, Leon Romanovsky wrote:
> On Wed, Aug 24, 2022 at 09:10:36AM +0300, Leon Romanovsky wrote:
> > From: Yishai Hadas
> >
> > Fix a nested deadlock in the ODP flow by using mmput_async().
> >
> > From the call trace [1] below one can see that calling mmput() while
> > holding umem_odp->umem_mutex, as required by
> > ib_umem_odp_map_dma_and_lock(), might trigger, in the same task,
> > exit_mmap()->__mmu_notifier_release()->mlx5_ib_invalidate_range(),
> > which may deadlock when trying to lock the same mutex.
> >
> > Moving to mmput_async() solves the problem: the exit_mmap() flow
> > above is then run from another task, and executes once the lock
> > becomes available.
> >
> > [1]
> > [64843.077665] task:kworker/u133:2 state:D stack:    0 pid:80906 ppid:     2 flags:0x00004000
> > [64843.077672] Workqueue: mlx5_ib_page_fault mlx5_ib_eqe_pf_action [mlx5_ib]
> > [64843.077719] Call Trace:
> > [64843.077724]  __schedule+0x23d/0x590
> > [64843.077729]  schedule+0x4e/0xb0
> > [64843.077735]  schedule_preempt_disabled+0xe/0x10
> > [64843.077740]  __mutex_lock.constprop.0+0x263/0x490
> > [64843.077747]  __mutex_lock_slowpath+0x13/0x20
> > [64843.077752]  mutex_lock+0x34/0x40
> > [64843.077758]  mlx5_ib_invalidate_range+0x48/0x270 [mlx5_ib]
> > [64843.077808]  __mmu_notifier_release+0x1a4/0x200
> > [64843.077816]  exit_mmap+0x1bc/0x200
> > [64843.077822]  ? walk_page_range+0x9c/0x120
> > [64843.077828]  ? __cond_resched+0x1a/0x50
> > [64843.077833]  ? mutex_lock+0x13/0x40
> > [64843.077839]  ? uprobe_clear_state+0xac/0x120
> > [64843.077860]  mmput+0x5f/0x140
> > [64843.077867]  ib_umem_odp_map_dma_and_lock+0x21b/0x580 [ib_core]
> > [64843.077931]  pagefault_real_mr+0x9a/0x140 [mlx5_ib]
> > [64843.077962]  pagefault_mr+0xb4/0x550 [mlx5_ib]
> > [64843.077992]  pagefault_single_data_segment.constprop.0+0x2ac/0x560 [mlx5_ib]
> > [64843.078022]  mlx5_ib_eqe_pf_action+0x528/0x780 [mlx5_ib]
> > [64843.078051]  process_one_work+0x22b/0x3d0
> > [64843.078059]  worker_thread+0x53/0x410
> > [64843.078065]  ? process_one_work+0x3d0/0x3d0
> > [64843.078073]  kthread+0x12a/0x150
> > [64843.078079]  ? set_kthread_struct+0x50/0x50
> > [64843.078085]  ret_from_fork+0x22/0x30
> >
> > Fixes: 36f30e486dce ("IB/core: Improve ODP to use hmm_range_fault()")
> > Reviewed-by: Maor Gottlieb
> > Signed-off-by: Yishai Hadas
> > Signed-off-by: Leon Romanovsky
> > ---
> > Resend to larger forum.
> > https://lore.kernel.org/all/74d93541ea533ef7daec6f126deb1072500aeb16.1661251841.git.leonro@nvidia.com
> > ---
> >  drivers/infiniband/core/umem_odp.c | 2 +-
> >  kernel/fork.c                      | 1 +
> >  2 files changed, 2 insertions(+), 1 deletion(-)
>
> Any objections?

I didn't hear any.

Applied to rdma-rc.

Thanks
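
The patch body itself is not quoted in this reply. Judging from the description and the diffstat (one line changed in umem_odp.c, one line added in fork.c), the change is presumably of the following shape — this is a sketch inferred from the thread, not the applied diff; the exact hunk context and variable name are assumptions:

```diff
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ ib_umem_odp_map_dma_and_lock() @@
-	mmput(owning_mm);
+	mmput_async(owning_mm);

--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ mmput_async() @@
 }
+EXPORT_SYMBOL_GPL(mmput_async);
```

The fork.c hunk would be needed because ib_core is typically built as a module and mmput_async() was not previously exported.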