From: Anand Jain <Anand.Jain@oracle.com>
To: Suman C <schakrava@gmail.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: what is the best way to monitor raid1 drive failures?
Date: Mon, 13 Oct 2014 10:21:35 +0800
Message-ID: <543B372F.10509@oracle.com>
In-Reply-To: <CAPF83msBVh_EqHAFxzmicnQjZKe7k4ioNEd8q3vwTxLNDx5prA@mail.gmail.com>
Suman,
> To simulate the failure, I detached one of the drives from the system.
> After that, I see no sign of a problem except for these errors:
Are you physically pulling out the device? I wonder whether lsblk or
blkid still shows it. The logic that reports a device as missing is in
the progs (so have the latest), and it works provided user-space tools
such as blkid/lsblk also report the device as gone. Or, for soft-detach
tests, you could use devmgt at http://github.com/anajain/devmgt
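For reference, a minimal check along those lines might look like the
sketch below (the device name, mount point and exact "missing" wording
are only examples and depend on your btrfs-progs version):

    # is the detached device still visible to the block layer?
    lsblk /dev/sdb || echo "/dev/sdb no longer visible to lsblk"
    blkid /dev/sdb || echo "/dev/sdb no longer visible to blkid"

    # with recent btrfs-progs a detached device should be flagged as missing
    btrfs fi show | grep -i missing

    # soft-detach a SCSI disk for testing, roughly what devmgt automates
    echo 1 > /sys/block/sdb/device/delete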
Also, I am working on a device management framework for btrfs that
should provide better device management and reporting.
Thanks, Anand
On 10/13/14 07:50, Suman C wrote:
> Hi,
>
> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
> They are 4GB each, virtual SATA drives inside virtualbox.
>
> To simulate the failure, I detached one of the drives from the system.
> After that, I see no sign of a problem except for these errors:
>
> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
> 0, flush 1, corrupt 0, gen 0
> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on /dev/sdb
>
> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>
> Label: raid1pool uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
> Total devices 2 FS bytes used 1.46GiB
> devid 1 size 4.00GiB used 2.45GiB path /dev/sdb
> devid 2 size 4.00GiB used 2.43GiB path /dev/sdc
>
> I am able to read and write just fine, but do see the above errors in dmesg.
>
> What is the best way to find out that one of the drives has gone bad?
>
> Suman
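(As a rough monitoring sketch along those lines: the per-device error
counters from the kernel messages above can also be polled from user
space; the mount point below is only an example.)

    # list any non-zero write/read/flush/corruption/generation counters
    btrfs device stats /mnt/raid1pool | grep -vE ' 0$'

    # watch the kernel log for btrfs I/O errors on the pool's devices
    dmesg | grep -E 'btrfs.*errs|lost page write'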
Thread overview: 12+ messages
2014-10-12 23:50 what is the best way to monitor raid1 drive failures? Suman C
2014-10-13 2:21 ` Anand Jain [this message]
2014-10-13 19:50 ` Suman C
2014-10-14 2:13 ` Anand Jain
2014-10-14 14:48 ` Suman C
2014-10-14 14:52 ` Rich Freeman
2014-10-14 15:05 ` Suman C
2014-10-14 19:15 ` Chris Murphy
2014-10-14 20:11 ` Suman C
2014-10-24 16:13 ` Chris Murphy
2014-10-14 22:00 ` Duncan
2014-10-15 4:11 ` Anand Jain