From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:35382 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S2992735AbbHHVGb (ORCPT );
	Sat, 8 Aug 2015 17:06:31 -0400
Subject: Patch "md/raid1: fix test for 'was read error from last working device'." has been added to the 3.10-stable tree
To: neilb@suse.com, alex.bolshoy@gmail.com, gregkh@linuxfoundation.org
Cc: ,
From:
Date: Sat, 08 Aug 2015 14:06:30 -0700
Message-ID: <143906799010081@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

This is a note to let you know that I've just added the patch titled

    md/raid1: fix test for 'was read error from last working device'.

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     md-raid1-fix-test-for-was-read-error-from-last-working-device.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 34cab6f42003cb06f48f86a86652984dec338ae9 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@suse.com>
Date: Fri, 24 Jul 2015 09:22:16 +1000
Subject: md/raid1: fix test for 'was read error from last working device'.

From: NeilBrown <neilb@suse.com>

commit 34cab6f42003cb06f48f86a86652984dec338ae9 upstream.

When we get a read error from the last working device, we don't try to
repair it, and don't fail the device.  We simply report a read error to
the caller.

However the current test for 'is this the last working device' is wrong.
When there is only one fully working device, it assumes that a non-faulty
device is that device.  However a spare which is rebuilding would be
non-faulty but still not the only working device.

So change the test from "!Faulty" to "In_sync".  If ->degraded says there
is only one fully working device and this device is in_sync, this must be
the one.

This bug has existed since we allowed read_balance to read from a
recovering spare in v3.0.

Reported-and-tested-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Fixes: 76073054c95b ("md/raid1: clean up read_balance.")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/md/raid1.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -327,7 +327,7 @@ static void raid1_end_read_request(struc
 		spin_lock_irqsave(&conf->device_lock, flags);
 		if (r1_bio->mddev->degraded == conf->raid_disks ||
 		    (r1_bio->mddev->degraded == conf->raid_disks-1 &&
-		     !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
+		     test_bit(In_sync, &conf->mirrors[mirror].rdev->flags)))
 			uptodate = 1;
 		spin_unlock_irqrestore(&conf->device_lock, flags);
 	}


Patches currently in stable-queue which might be from neilb@suse.com are

queue-3.10/md-raid1-fix-test-for-was-read-error-from-last-working-device.patch
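
To see why the one-character change matters, here is a minimal user-space
sketch of the two tests. The Faulty and In_sync names mirror the md flag
bits in drivers/md/md.h, but the test_bit() helper, the flag layout and the
two example devices below are simplified stand-ins for illustration, not
the kernel implementation.

#include <stdio.h>

/* Simplified stand-ins for the md flag bits (see drivers/md/md.h). */
enum flag_bits { Faulty, In_sync };

/* Toy test_bit(): the real kernel helper operates on a bitmap pointer. */
static int test_bit(int nr, unsigned long flags)
{
	return (int)((flags >> nr) & 1UL);
}

int main(void)
{
	/* A rebuilding spare: not Faulty (yet), but not In_sync either. */
	unsigned long spare = 0;
	/* A fully working member: In_sync set, Faulty clear. */
	unsigned long healthy = 1UL << In_sync;

	/* Old test: the rebuilding spare also passes "!Faulty", so a read
	 * error from it would be treated as coming from the last working
	 * device and handed straight back to the caller instead of being
	 * retried on the device that is actually in sync. */
	printf("old (!Faulty): spare=%d healthy=%d\n",
	       !test_bit(Faulty, spare), !test_bit(Faulty, healthy));

	/* New test: only a device that is really In_sync qualifies. */
	printf("new (In_sync): spare=%d healthy=%d\n",
	       test_bit(In_sync, spare), test_bit(In_sync, healthy));
	return 0;
}

With the old test both devices look like "the last working device"
(spare=1 healthy=1); with the new test only the in-sync member does
(spare=0 healthy=1), which is the distinction the patch relies on.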