From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Changwei Ge, Joseph Qi,
    Mark Fasheh, Joel Becker, Junxiao Bi, Andrew Morton, Linus Torvalds,
    Sasha Levin
Subject: [PATCH 4.9 28/47] ocfs2: wait for recovering done after direct unlock request
Date: Sun, 6 Oct 2019 19:21:15 +0200
Message-Id: <20191006172018.374535051@linuxfoundation.org>
In-Reply-To: <20191006172016.873463083@linuxfoundation.org>
References: <20191006172016.873463083@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
User-Agent: quilt/0.66

From: Changwei Ge

[ Upstream commit 0a3775e4f883912944481cf2ef36eb6383a9cc74 ]

There is a scenario that causes an ocfs2 umount to hang when multiple
hosts are rebooting at the same time:

NODE1                                  NODE2                NODE3
send unlock request to NODE2
                                       dies
                                                            become recovery master
                                                            recover NODE2
find NODE2 dead
mark resource RECOVERING
directly remove lock from grant list
calculate usage but RECOVERING marked
**miss the window of purging
                                                            clear RECOVERING

To reproduce this issue, crash a host and then umount ocfs2 from
another node.

To solve this, just let the unlock path wait until recovery is done.
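Reading the hunks below together with the comment they add: lock->unlock_pending
is set before the unlock request is sent to the owner and is normally cleared
once the reply comes back. If the cleanup path finds it already clear, the
recovery code raced in after the owner died, committed the pending unlock, and
took the lock off the grant list directly. The fix records this in a new
recovery_wait flag and, before returning, blocks until
DLM_LOCK_RES_RECOVERING is cleared, so the usage calculation that follows the
unlock no longer sees a RECOVERING resource and the purge is not skipped.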
Link: http://lkml.kernel.org/r/1550124866-20367-1-git-send-email-gechangwei@live.cn
Signed-off-by: Changwei Ge
Reviewed-by: Joseph Qi
Cc: Mark Fasheh
Cc: Joel Becker
Cc: Junxiao Bi
Cc: Changwei Ge
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 fs/ocfs2/dlm/dlmunlock.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c
index 1082b2c3014be..5f2a120240e51 100644
--- a/fs/ocfs2/dlm/dlmunlock.c
+++ b/fs/ocfs2/dlm/dlmunlock.c
@@ -105,7 +105,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
 	enum dlm_status status;
 	int actions = 0;
 	int in_use;
-        u8 owner;
+	u8 owner;
+	int recovery_wait = 0;
 
 	mlog(0, "master_node = %d, valblk = %d\n", master_node,
 	     flags & LKM_VALBLK);
@@ -208,9 +209,12 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm,
 		}
 		if (flags & LKM_CANCEL)
 			lock->cancel_pending = 0;
-		else
-			lock->unlock_pending = 0;
-
+		else {
+			if (!lock->unlock_pending)
+				recovery_wait = 1;
+			else
+				lock->unlock_pending = 0;
+		}
 	}
 
 	/* get an extra ref on lock.  if we are just switching
@@ -244,6 +248,17 @@ leave:
 	spin_unlock(&res->spinlock);
 	wake_up(&res->wq);
 
+	if (recovery_wait) {
+		spin_lock(&res->spinlock);
+		/* Unlock request will directly succeed after owner dies,
+		 * and the lock is already removed from grant list. We have to
+		 * wait for RECOVERING done or we miss the chance to purge it
+		 * since the removement is much faster than RECOVERING proc.
+		 */
+		__dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_RECOVERING);
+		spin_unlock(&res->spinlock);
+	}
+
 	/* let the caller's final dlm_lock_put handle the actual kfree */
 	if (actions & DLM_UNLOCK_FREE_LOCK) {
 		/* this should always be coupled with list removal */
-- 
2.20.1
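For readers who want to experiment with the synchronization pattern outside
the kernel, below is a minimal userspace model of the fix. It is a sketch
under stated assumptions, not kernel code: lockres, RES_RECOVERING,
wait_on_lockres_flags() and unlock_common() are hypothetical stand-ins for
the kernel's res->spinlock, res->wq, DLM_LOCK_RES_RECOVERING and
__dlm_wait_on_lockres_flags(), with a pthread mutex/condvar pair in place of
the spinlock/waitqueue and the owner-death race reduced to a single flag.

/* Hypothetical userspace model of the wait-for-recovery pattern above.
 * Build with: cc -o model model.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define RES_RECOVERING 0x1	/* stands in for DLM_LOCK_RES_RECOVERING */

struct lockres {
	pthread_mutex_t lock;	/* stands in for res->spinlock */
	pthread_cond_t wq;	/* stands in for res->wq */
	int flags;
};

/* Stands in for __dlm_wait_on_lockres_flags(): caller holds res->lock;
 * sleep until all bits in `flags` are clear. */
static void wait_on_lockres_flags(struct lockres *res, int flags)
{
	while (res->flags & flags)
		pthread_cond_wait(&res->wq, &res->lock);
}

/* Models the fixed unlock path: when the owner died mid-request, the
 * lock is already off the grant list, so do not return until the
 * RECOVERING bit clears; otherwise the purge window is missed. */
static void unlock_common(struct lockres *res, int owner_died)
{
	int recovery_wait = 0;

	pthread_mutex_lock(&res->lock);
	if (owner_died)		/* i.e. unlock_pending already cleared */
		recovery_wait = 1;
	pthread_mutex_unlock(&res->lock);

	if (recovery_wait) {
		pthread_mutex_lock(&res->lock);
		wait_on_lockres_flags(res, RES_RECOVERING);
		pthread_mutex_unlock(&res->lock);
	}
	printf("unlock returned; resource may now be purged\n");
}

/* Models the recovery master clearing RECOVERING when it is done. */
static void *recovery_thread(void *arg)
{
	struct lockres *res = arg;

	sleep(1);		/* recovery takes a while */
	pthread_mutex_lock(&res->lock);
	res->flags &= ~RES_RECOVERING;
	pthread_cond_broadcast(&res->wq);
	pthread_mutex_unlock(&res->lock);
	return NULL;
}

int main(void)
{
	struct lockres res = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wq = PTHREAD_COND_INITIALIZER,
		.flags = RES_RECOVERING,	/* NODE3 marked the resource */
	};
	pthread_t t;

	pthread_create(&t, NULL, recovery_thread, &res);
	unlock_common(&res, 1);			/* the owner (NODE2) died */
	pthread_join(t, NULL);
	return 0;
}

The sketch keeps the shape of the real fix: the decision to wait is made
under the lock, but the wait itself happens on a separate re-acquisition at
the end of the function, after other waiters have been woken via
wake_up(&res->wq). The kernel's __dlm_wait_on_lockres_flags() likewise
expects res->spinlock held on entry and, since it sleeps, must drop and
retake that spinlock internally.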