From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Ilya Dryomov, Dongsheng Yang
Subject: [PATCH 5.10 067/379] rbd: don't move requests to the running list on errors
Date: Wed, 21 Feb 2024 14:04:06 +0100
Message-ID: <20240221125956.898145178@linuxfoundation.org>
In-Reply-To: <20240221125954.917878865@linuxfoundation.org>
References: <20240221125954.917878865@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ilya Dryomov

commit ded080c86b3f99683774af0441a58fc2e3d60cae upstream.

The running list is supposed to contain requests that are pinning the
exclusive lock, i.e. those that must be flushed before the exclusive
lock is released.  When wake_lock_waiters() is called to handle an
error, requests on the acquiring list are failed with that error and no
flushing takes place.  Briefly moving them to the running list is not
only pointless but also harmful: if the exclusive lock is acquired
before all of their state machines are scheduled and go through
rbd_lock_del_request(), we trigger

    rbd_assert(list_empty(&rbd_dev->running_list));

in rbd_try_acquire_lock().
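To illustrate (a minimal userspace sketch, not driver code; the list
helpers are stand-ins assumed to match include/linux/list.h semantics,
and the request nodes and error value are hypothetical): on an error,
the fixed wake_lock_waiters() loop drops each request from the
acquiring list instead of moving it, so the running list stays empty
and the assertion above holds.

    #include <assert.h>
    #include <stdio.h>

    struct list_head { struct list_head *prev, *next; };

    static void INIT_LIST_HEAD(struct list_head *head)
    {
            head->prev = head->next = head;
    }

    static int list_empty(const struct list_head *head)
    {
            return head->next == head;
    }

    static void list_add_tail(struct list_head *entry, struct list_head *head)
    {
            entry->prev = head->prev;
            entry->next = head;
            head->prev->next = entry;
            head->prev = entry;
    }

    static void list_del_init(struct list_head *entry)
    {
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
            INIT_LIST_HEAD(entry);
    }

    static void list_move_tail(struct list_head *entry, struct list_head *head)
    {
            list_del_init(entry);
            list_add_tail(entry, head);
    }

    int main(void)
    {
            struct list_head acquiring_list, running_list;
            struct list_head req1, req2;   /* stand-ins for lock_item */
            int result = -110;             /* hypothetical error code */

            INIT_LIST_HEAD(&acquiring_list);
            INIT_LIST_HEAD(&running_list);
            list_add_tail(&req1, &acquiring_list);
            list_add_tail(&req2, &acquiring_list);

            /* Fixed error path: failed requests are dropped, not moved. */
            while (!list_empty(&acquiring_list)) {
                    struct list_head *item = acquiring_list.next;

                    if (!result)
                            list_move_tail(item, &running_list);
                    else
                            list_del_init(item);
            }

            /* The invariant rbd_try_acquire_lock() asserts now holds. */
            assert(list_empty(&running_list));
            printf("running list is empty after the error\n");
            return 0;
    }

With the old list_splice_tail_init() behavior, both failed requests
would still sit on running_list at the point of the assert.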
Cc: stable@vger.kernel.org
Fixes: 637cd060537d ("rbd: new exclusive lock wait/wake code")
Signed-off-by: Ilya Dryomov
Reviewed-by: Dongsheng Yang
Signed-off-by: Greg Kroah-Hartman
---
 drivers/block/rbd.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -3517,14 +3517,15 @@ static bool rbd_lock_add_request(struct
 static void rbd_lock_del_request(struct rbd_img_request *img_req)
 {
         struct rbd_device *rbd_dev = img_req->rbd_dev;
-        bool need_wakeup;
+        bool need_wakeup = false;
 
         lockdep_assert_held(&rbd_dev->lock_rwsem);
         spin_lock(&rbd_dev->lock_lists_lock);
-        rbd_assert(!list_empty(&img_req->lock_item));
-        list_del_init(&img_req->lock_item);
-        need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
-                       list_empty(&rbd_dev->running_list));
+        if (!list_empty(&img_req->lock_item)) {
+                list_del_init(&img_req->lock_item);
+                need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
+                               list_empty(&rbd_dev->running_list));
+        }
         spin_unlock(&rbd_dev->lock_lists_lock);
         if (need_wakeup)
                 complete(&rbd_dev->releasing_wait);
@@ -3907,14 +3908,19 @@ static void wake_lock_waiters(struct rbd
                 return;
         }
 
-        list_for_each_entry(img_req, &rbd_dev->acquiring_list, lock_item) {
+        while (!list_empty(&rbd_dev->acquiring_list)) {
+                img_req = list_first_entry(&rbd_dev->acquiring_list,
+                                           struct rbd_img_request, lock_item);
                 mutex_lock(&img_req->state_mutex);
                 rbd_assert(img_req->state == RBD_IMG_EXCLUSIVE_LOCK);
+                if (!result)
+                        list_move_tail(&img_req->lock_item,
+                                       &rbd_dev->running_list);
+                else
+                        list_del_init(&img_req->lock_item);
                 rbd_img_schedule(img_req, result);
                 mutex_unlock(&img_req->state_mutex);
         }
-
-        list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
 }
 
 static bool locker_equal(const struct ceph_locker *lhs,
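A note on the rbd_lock_del_request() hunk: the assert can become a
list_empty() check because list_del_init() re-initializes the entry to
point at itself, so a request already dropped by wake_lock_waiters()
reads as "on no list".  A minimal userspace sketch of that property
(stand-in helpers again, assumed to match include/linux/list.h
semantics):

    #include <assert.h>

    struct list_head { struct list_head *prev, *next; };

    static void INIT_LIST_HEAD(struct list_head *head)
    {
            head->prev = head->next = head;
    }

    static int list_empty(const struct list_head *head)
    {
            return head->next == head;
    }

    static void list_add_tail(struct list_head *entry, struct list_head *head)
    {
            entry->prev = head->prev;
            entry->next = head;
            head->prev->next = entry;
            head->prev = entry;
    }

    static void list_del_init(struct list_head *entry)
    {
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
            INIT_LIST_HEAD(entry);
    }

    int main(void)
    {
            struct list_head lock_list, lock_item;

            INIT_LIST_HEAD(&lock_list);
            list_add_tail(&lock_item, &lock_list);
            assert(!list_empty(&lock_item));  /* linked into a list */

            list_del_init(&lock_item);        /* dropped on error   */
            assert(list_empty(&lock_item));   /* self-linked now    */

            /* The guarded delete in rbd_lock_del_request() is a no-op
             * for such a request, so no wakeup bookkeeping runs.    */
            if (!list_empty(&lock_item))
                    list_del_init(&lock_item);
            return 0;
    }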