From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: John Garry, Yu Kuai, Hannes Reinecke, Jens Axboe, Sasha Levin,
	song@kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH AUTOSEL 6.12 16/19] md/raid1: Handle bio_split() errors
Date: Sun, 24 Nov 2024 07:38:51 -0500
Message-ID: <20241124123912.3335344-16-sashal@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241124123912.3335344-1-sashal@kernel.org>
References: <20241124123912.3335344-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: linux-raid@vger.kernel.org
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
X-stable-base: Linux 6.12.1
Content-Transfer-Encoding: 8bit

From: John Garry

[ Upstream commit b1a7ad8b5c4fa28325ee7b369a2d545d3e16ccde ]

Add proper bio_split() error handling. For any error, call
raid_end_bio_io() and return. For the case of an error in the write
path, we need to undo the increment in the rdev pending count and
NULLify the r1_bio->bios[] pointers. For read path failure, we need to
undo the rdev pending count increment from the earlier read_balance()
call.
Reviewed-by: Yu Kuai
Reviewed-by: Hannes Reinecke
Signed-off-by: John Garry
Link: https://lore.kernel.org/r/20241111112150.3756529-6-john.g.garry@oracle.com
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
Note: a short standalone sketch of the IS_ERR()/PTR_ERR() pattern this
patch relies on follows after the diff.

 drivers/md/raid1.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 6c9d24203f39f..7e023e9303c8a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1322,7 +1322,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	const enum req_op op = bio_op(bio);
 	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
 	int max_sectors;
-	int rdisk;
+	int rdisk, error;
 	bool r1bio_existed = !!r1_bio;
 
 	/*
@@ -1383,6 +1383,11 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	if (max_sectors < bio_sectors(bio)) {
 		struct bio *split = bio_split(bio, max_sectors,
 					      gfp, &conf->bio_split);
+
+		if (IS_ERR(split)) {
+			error = PTR_ERR(split);
+			goto err_handle;
+		}
 		bio_chain(split, bio);
 		submit_bio_noacct(bio);
 		bio = split;
@@ -1410,6 +1415,13 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	read_bio->bi_private = r1_bio;
 	mddev_trace_remap(mddev, read_bio, r1_bio->sector);
 	submit_bio_noacct(read_bio);
+	return;
+
+err_handle:
+	atomic_dec(&mirror->rdev->nr_pending);
+	bio->bi_status = errno_to_blk_status(error);
+	set_bit(R1BIO_Uptodate, &r1_bio->state);
+	raid_end_bio_io(r1_bio);
 }
 
 static void raid1_write_request(struct mddev *mddev, struct bio *bio,
@@ -1417,7 +1429,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 {
 	struct r1conf *conf = mddev->private;
 	struct r1bio *r1_bio;
-	int i, disks;
+	int i, disks, k, error;
 	unsigned long flags;
 	struct md_rdev *blocked_rdev;
 	int first_clone;
@@ -1576,6 +1588,11 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 	if (max_sectors < bio_sectors(bio)) {
 		struct bio *split = bio_split(bio, max_sectors,
 					      GFP_NOIO, &conf->bio_split);
+
+		if (IS_ERR(split)) {
+			error = PTR_ERR(split);
+			goto err_handle;
+		}
 		bio_chain(split, bio);
 		submit_bio_noacct(bio);
 		bio = split;
@@ -1660,6 +1677,18 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 
 	/* In case raid1d snuck in to freeze_array */
 	wake_up_barrier(conf);
+	return;
+err_handle:
+	for (k = 0; k < i; k++) {
+		if (r1_bio->bios[k]) {
+			rdev_dec_pending(conf->mirrors[k].rdev, mddev);
+			r1_bio->bios[k] = NULL;
+		}
+	}
+
+	bio->bi_status = errno_to_blk_status(error);
+	set_bit(R1BIO_Uptodate, &r1_bio->state);
+	raid_end_bio_io(r1_bio);
 }
 
 static bool raid1_make_request(struct mddev *mddev, struct bio *bio)
-- 
2.43.0
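
A note for readers less familiar with the convention the hunks above
rely on: bio_split() reports failure by encoding a negative errno into
the returned pointer, which callers test with IS_ERR() and decode with
PTR_ERR(). Below is a minimal userspace sketch of that error-pointer
pattern, assuming the kernel's usual MAX_ERRNO of 4095; fake_bio_split()
and its malloc()'d "bio" are hypothetical stand-ins for illustration,
not the kernel implementation.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Largest errno value packed into a pointer (kernel convention). */
#define MAX_ERRNO 4095

/* Encode a negative errno in a pointer; the result lands in the top
 * page of the address space, which is never a valid allocation. */
static inline void *ERR_PTR(long error) { return (void *)error; }

/* Recover the errno from an error pointer. */
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }

/* True if the pointer is actually an encoded errno. */
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for bio_split(): fail on a zero-length split. */
static void *fake_bio_split(int sectors)
{
	if (sectors <= 0)
		return ERR_PTR(-EINVAL);
	return malloc(16); /* pretend this is the split bio */
}

int main(void)
{
	void *split = fake_bio_split(0);

	if (IS_ERR(split)) {
		/* Mirrors the patch: capture errno, take the error path. */
		int error = (int)PTR_ERR(split);
		fprintf(stderr, "split failed: %d\n", error);
		return 1;
	}
	free(split);
	return 0;
}

Compiled with e.g. "gcc -Wall sketch.c", this prints "split failed: -22"
(-EINVAL), matching how raid1_read_request() and raid1_write_request()
now capture the error before jumping to their err_handle labels.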