linux-btrfs.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* what is the best way to monitor raid1 drive failures?
@ 2014-10-12 23:50 Suman C
  2014-10-13  2:21 ` Anand Jain
  0 siblings, 1 reply; 12+ messages in thread
From: Suman C @ 2014-10-12 23:50 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I am testing some disk failure scenarios in a 2 drive raid1 mirror.
They are 4GB each, virtual SATA drives inside virtualbox.

To simulate the failure, I detached one of the drives from the system.
After that, I see no sign of a problem except for these errors:

Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
0, flush 1, corrupt 0, gen 0
Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on /dev/sdb

/dev/sdb is gone from the system, but btrfs fi show still lists it.

Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
    Total devices 2 FS bytes used 1.46GiB
    devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
    devid    2 size 4.00GiB used 2.43GiB path /dev/sdc

I am able to read and write just fine, but do see the above errors in dmesg.

What is the best way to find out that one of the drives has gone bad?

Suman

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-12 23:50 what is the best way to monitor raid1 drive failures? Suman C
@ 2014-10-13  2:21 ` Anand Jain
  2014-10-13 19:50   ` Suman C
  0 siblings, 1 reply; 12+ messages in thread
From: Anand Jain @ 2014-10-13  2:21 UTC (permalink / raw)
  To: Suman C; +Cc: linux-btrfs


Suman,

 > To simulate the failure, I detached one of the drives from the system.
 > After that, I see no sign of a problem except for these errors:

  Are you physically pulling out the device ? I wonder if lsblk or blkid
  shows the error ? reporting device missing logic is in the progs (so
  have that latest) and it works provided user script such as blkid/lsblk
  also reports the problem. OR for soft-detach tests you could use
  devmgt at http://github.com/anajain/devmgt

  Also I am trying to get the device management framework for the btrfs
  with a more better device management and reporting.

Thanks,  Anand


On 10/13/14 07:50, Suman C wrote:
> Hi,
>
> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
> They are 4GB each, virtual SATA drives inside virtualbox.
>
> To simulate the failure, I detached one of the drives from the system.
> After that, I see no sign of a problem except for these errors:
>
> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
> 0, flush 1, corrupt 0, gen 0
> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on /dev/sdb
>
> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>
> Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
>      Total devices 2 FS bytes used 1.46GiB
>      devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
>      devid    2 size 4.00GiB used 2.43GiB path /dev/sdc
>
> I am able to read and write just fine, but do see the above errors in dmesg.
>
> What is the best way to find out that one of the drives has gone bad?
>
> Suman
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-13  2:21 ` Anand Jain
@ 2014-10-13 19:50   ` Suman C
  2014-10-14  2:13     ` Anand Jain
  0 siblings, 1 reply; 12+ messages in thread
From: Suman C @ 2014-10-13 19:50 UTC (permalink / raw)
  To: Anand Jain; +Cc: linux-btrfs

I had progs 3.12 and updated to the latest from git(3.16). With this
update, btrfs fi show reports there is a missing device immediately
after i pull it out. Thanks!

I am using virtualbox to test this. So, I am detaching the drive like so:

vboxmanage storageattach <vm> --storagectl <controller> --port <port>
--device <device> --medium none

Next I am going to try and test a more realistic scenario where a
harddrive is not pulled out, but is damaged.

Can/does btrfs mark a filesystem(say, 2 drive raid1) degraded or
unhealthy automatically when one drive is damaged badly enough that it
cannot be written to or read from reliably?

Suman

On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>
> Suman,
>
>> To simulate the failure, I detached one of the drives from the system.
>> After that, I see no sign of a problem except for these errors:
>
>  Are you physically pulling out the device ? I wonder if lsblk or blkid
>  shows the error ? reporting device missing logic is in the progs (so
>  have that latest) and it works provided user script such as blkid/lsblk
>  also reports the problem. OR for soft-detach tests you could use
>  devmgt at http://github.com/anajain/devmgt
>
>  Also I am trying to get the device management framework for the btrfs
>  with a more better device management and reporting.
>
> Thanks,  Anand
>
>
>
> On 10/13/14 07:50, Suman C wrote:
>>
>> Hi,
>>
>> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
>> They are 4GB each, virtual SATA drives inside virtualbox.
>>
>> To simulate the failure, I detached one of the drives from the system.
>> After that, I see no sign of a problem except for these errors:
>>
>> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
>> 0, flush 1, corrupt 0, gen 0
>> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
>> /dev/sdb
>>
>> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>>
>> Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
>>      Total devices 2 FS bytes used 1.46GiB
>>      devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
>>      devid    2 size 4.00GiB used 2.43GiB path /dev/sdc
>>
>> I am able to read and write just fine, but do see the above errors in
>> dmesg.
>>
>> What is the best way to find out that one of the drives has gone bad?
>>
>> Suman
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-13 19:50   ` Suman C
@ 2014-10-14  2:13     ` Anand Jain
  2014-10-14 14:48       ` Suman C
  0 siblings, 1 reply; 12+ messages in thread
From: Anand Jain @ 2014-10-14  2:13 UTC (permalink / raw)
  To: Suman C; +Cc: linux-btrfs




On 10/14/14 03:50, Suman C wrote:
> I had progs 3.12 and updated to the latest from git(3.16). With this
> update, btrfs fi show reports there is a missing device immediately
> after i pull it out. Thanks!
>
> I am using virtualbox to test this. So, I am detaching the drive like so:
>
> vboxmanage storageattach <vm> --storagectl <controller> --port <port>
> --device <device> --medium none
>
> Next I am going to try and test a more realistic scenario where a
> harddrive is not pulled out, but is damaged.


> Can/does btrfs mark a filesystem(say, 2 drive raid1) degraded or
> unhealthy automatically when one drive is damaged badly enough that it
> cannot be written to or read from reliably?

  There are some gaps as directly compared to an enterprise volume
  manager, which is being fixed.  but pls do report what you find.

Thanks, Anand


> Suman
>
> On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>>
>> Suman,
>>
>>> To simulate the failure, I detached one of the drives from the system.
>>> After that, I see no sign of a problem except for these errors:
>>
>>   Are you physically pulling out the device ? I wonder if lsblk or blkid
>>   shows the error ? reporting device missing logic is in the progs (so
>>   have that latest) and it works provided user script such as blkid/lsblk
>>   also reports the problem. OR for soft-detach tests you could use
>>   devmgt at http://github.com/anajain/devmgt
>>
>>   Also I am trying to get the device management framework for the btrfs
>>   with a more better device management and reporting.
>>
>> Thanks,  Anand
>>
>>
>>
>> On 10/13/14 07:50, Suman C wrote:
>>>
>>> Hi,
>>>
>>> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
>>> They are 4GB each, virtual SATA drives inside virtualbox.
>>>
>>> To simulate the failure, I detached one of the drives from the system.
>>> After that, I see no sign of a problem except for these errors:
>>>
>>> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
>>> 0, flush 1, corrupt 0, gen 0
>>> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
>>> /dev/sdb
>>>
>>> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>>>
>>> Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
>>>       Total devices 2 FS bytes used 1.46GiB
>>>       devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
>>>       devid    2 size 4.00GiB used 2.43GiB path /dev/sdc
>>>
>>> I am able to read and write just fine, but do see the above errors in
>>> dmesg.
>>>
>>> What is the best way to find out that one of the drives has gone bad?
>>>
>>> Suman
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14  2:13     ` Anand Jain
@ 2014-10-14 14:48       ` Suman C
  2014-10-14 14:52         ` Rich Freeman
                           ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Suman C @ 2014-10-14 14:48 UTC (permalink / raw)
  To: Anand Jain; +Cc: linux-btrfs

Hi,

Here's a simple raid1 recovery experiment that's not working as expected.

kernel: 3.17, latest mainline
progs: 3.16.1

I started with a simple raid1 mirror of 2 drives (sda and sdb). The
filesystem is functional, I created one subvol, put some data,
read/write tested etc..

yanked the sdb out. (this is physical/hardware). btrfs fi show prints
drive missing, as expected.

powered the machine down. removed the "bad"(yanked out sdb) drive and
replaced it with a new drive. Powered up the machine.

The new drive shows up as sdb. btrfs fi show still prints drive missing.

mounted the filesystem with ro,degraded

tried adding the "new" sdb drive which results in the following error.
(-f because the new drive has a fs from past)

# btrfs device add -f /dev/sdb /mnt2/raid1pool
/dev/sdb is mounted

Unless I am missing something, this looks like a bug.

Let me know, I can retest.

Thanks
Suman

On Mon, Oct 13, 2014 at 7:13 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>
>
>
> On 10/14/14 03:50, Suman C wrote:
>>
>> I had progs 3.12 and updated to the latest from git(3.16). With this
>> update, btrfs fi show reports there is a missing device immediately
>> after i pull it out. Thanks!
>>
>> I am using virtualbox to test this. So, I am detaching the drive like so:
>>
>> vboxmanage storageattach <vm> --storagectl <controller> --port <port>
>> --device <device> --medium none
>>
>> Next I am going to try and test a more realistic scenario where a
>> harddrive is not pulled out, but is damaged.
>
>
>
>> Can/does btrfs mark a filesystem(say, 2 drive raid1) degraded or
>> unhealthy automatically when one drive is damaged badly enough that it
>> cannot be written to or read from reliably?
>
>
>  There are some gaps as directly compared to an enterprise volume
>  manager, which is being fixed.  but pls do report what you find.
>
> Thanks, Anand
>
>
>
>> Suman
>>
>> On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>>>
>>>
>>> Suman,
>>>
>>>> To simulate the failure, I detached one of the drives from the system.
>>>> After that, I see no sign of a problem except for these errors:
>>>
>>>
>>>   Are you physically pulling out the device ? I wonder if lsblk or blkid
>>>   shows the error ? reporting device missing logic is in the progs (so
>>>   have that latest) and it works provided user script such as blkid/lsblk
>>>   also reports the problem. OR for soft-detach tests you could use
>>>   devmgt at http://github.com/anajain/devmgt
>>>
>>>   Also I am trying to get the device management framework for the btrfs
>>>   with a more better device management and reporting.
>>>
>>> Thanks,  Anand
>>>
>>>
>>>
>>> On 10/13/14 07:50, Suman C wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
>>>> They are 4GB each, virtual SATA drives inside virtualbox.
>>>>
>>>> To simulate the failure, I detached one of the drives from the system.
>>>> After that, I see no sign of a problem except for these errors:
>>>>
>>>> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
>>>> 0, flush 1, corrupt 0, gen 0
>>>> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
>>>> /dev/sdb
>>>>
>>>> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>>>>
>>>> Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
>>>>       Total devices 2 FS bytes used 1.46GiB
>>>>       devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
>>>>       devid    2 size 4.00GiB used 2.43GiB path /dev/sdc
>>>>
>>>> I am able to read and write just fine, but do see the above errors in
>>>> dmesg.
>>>>
>>>> What is the best way to find out that one of the drives has gone bad?
>>>>
>>>> Suman
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
>>>> in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 14:48       ` Suman C
@ 2014-10-14 14:52         ` Rich Freeman
  2014-10-14 15:05           ` Suman C
  2014-10-14 19:15         ` Chris Murphy
                           ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Rich Freeman @ 2014-10-14 14:52 UTC (permalink / raw)
  To: Suman C; +Cc: Anand Jain, linux-btrfs

On Tue, Oct 14, 2014 at 10:48 AM, Suman C <schakrava@gmail.com> wrote:
>
> The new drive shows up as sdb. btrfs fi show still prints drive missing.
>
> mounted the filesystem with ro,degraded
>
> tried adding the "new" sdb drive which results in the following error.
> (-f because the new drive has a fs from past)
>
> # btrfs device add -f /dev/sdb /mnt2/raid1pool
> /dev/sdb is mounted
>
> Unless I am missing something, this looks like a bug.
>

You need to first run btrfs device delete missing /mnt2/raid1pool I
believe (missing is a keyword for a missing device in the array - if
the device were still present you could specify it by /dev/sdX).

--
Rich

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 14:52         ` Rich Freeman
@ 2014-10-14 15:05           ` Suman C
  0 siblings, 0 replies; 12+ messages in thread
From: Suman C @ 2014-10-14 15:05 UTC (permalink / raw)
  To: Rich Freeman; +Cc: Anand Jain, linux-btrfs

I cannot delete that way because it would mean going below the minimum
number of devices and it fails, as explained in the wiki.

The solution from wiki is to add a new device and then delete the old
one, but the problem here may be due to the new device appearing with
the same name(sdb)?

Suman

On Tue, Oct 14, 2014 at 7:52 AM, Rich Freeman
<r-btrfs@thefreemanclan.net> wrote:
> On Tue, Oct 14, 2014 at 10:48 AM, Suman C <schakrava@gmail.com> wrote:
>>
>> The new drive shows up as sdb. btrfs fi show still prints drive missing.
>>
>> mounted the filesystem with ro,degraded
>>
>> tried adding the "new" sdb drive which results in the following error.
>> (-f because the new drive has a fs from past)
>>
>> # btrfs device add -f /dev/sdb /mnt2/raid1pool
>> /dev/sdb is mounted
>>
>> Unless I am missing something, this looks like a bug.
>>
>
> You need to first run btrfs device delete missing /mnt2/raid1pool I
> believe (missing is a keyword for a missing device in the array - if
> the device were still present you could specify it by /dev/sdX).
>
> --
> Rich

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 14:48       ` Suman C
  2014-10-14 14:52         ` Rich Freeman
@ 2014-10-14 19:15         ` Chris Murphy
  2014-10-14 20:11           ` Suman C
  2014-10-14 22:00         ` Duncan
  2014-10-15  4:11         ` Anand Jain
  3 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2014-10-14 19:15 UTC (permalink / raw)
  To: linux-btrfs


On Oct 14, 2014, at 10:48 AM, Suman C <schakrava@gmail.com> wrote:
> 
> mounted the filesystem with ro,degraded
> 
> tried adding the "new" sdb drive which results in the following error.
> (-f because the new drive has a fs from past)
> 
> # btrfs device add -f /dev/sdb /mnt2/raid1pool
> /dev/sdb is mounted
> 
> Unless I am missing something, this looks like a bug.

Strange message. I expect a device can't be added to a volume mounted ro. If the device add command works on a volume mounted rw, then the bug is the message '/dev/sdb is mounted' when adding device to ro mounted volume.


Chris Murphy


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 19:15         ` Chris Murphy
@ 2014-10-14 20:11           ` Suman C
  2014-10-24 16:13             ` Chris Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: Suman C @ 2014-10-14 20:11 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

After the reboot step, where I indicated that I mounted ro, I was
unable to mount rw or rw,degraded. I get the "mount: wrong fs type,
bad option, bad superblock...." error if I try to mount it rw.

What might be the reason for that?

Suman

On Tue, Oct 14, 2014 at 12:15 PM, Chris Murphy <lists@colorremedies.com> wrote:
>
> On Oct 14, 2014, at 10:48 AM, Suman C <schakrava@gmail.com> wrote:
>>
>> mounted the filesystem with ro,degraded
>>
>> tried adding the "new" sdb drive which results in the following error.
>> (-f because the new drive has a fs from past)
>>
>> # btrfs device add -f /dev/sdb /mnt2/raid1pool
>> /dev/sdb is mounted
>>
>> Unless I am missing something, this looks like a bug.
>
> Strange message. I expect a device can't be added to a volume mounted ro. If the device add command works on a volume mounted rw, then the bug is the message '/dev/sdb is mounted' when adding device to ro mounted volume.
>
>
> Chris Murphy
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 14:48       ` Suman C
  2014-10-14 14:52         ` Rich Freeman
  2014-10-14 19:15         ` Chris Murphy
@ 2014-10-14 22:00         ` Duncan
  2014-10-15  4:11         ` Anand Jain
  3 siblings, 0 replies; 12+ messages in thread
From: Duncan @ 2014-10-14 22:00 UTC (permalink / raw)
  To: linux-btrfs

Suman C posted on Tue, 14 Oct 2014 07:48:01 -0700 as excerpted:

> Here's a simple raid1 recovery experiment that's not working as
> expected.
> 
> kernel: 3.17, latest mainline progs: 3.16.1
> 
> I started with a simple raid1 mirror of 2 drives (sda and sdb). The
> filesystem is functional, I created one subvol, put some data,
> read/write tested etc..
> 
> yanked the sdb out. (this is physical/hardware). btrfs fi show prints
> drive missing, as expected.
> 
> powered the machine down. removed the "bad"(yanked out sdb) drive and
> replaced it with a new drive. Powered up the machine.
> 
> The new drive shows up as sdb. btrfs fi show still prints drive missing.
> 
> mounted the filesystem with ro,degraded
> 
> tried adding the "new" sdb drive which results in the following error.
> (-f because the new drive has a fs from past)
> 
> # btrfs device add -f /dev/sdb /mnt2/raid1pool /dev/sdb is mounted

While I'm not sure it'll get you past the error, did you try...

# btrfs replace ...

That's the new way to /replace/ a missing device, adding a new one and 
deleting the old one (which can be missing) at the same time.  See the 
btrfs-replace manpage.

While the btrfs-replace manpage says that you have to use the <devid> 
format if the device is missing, it isn't particularly helpful in telling 
what that format actually is.  Do a btrfs fi show and use the appropriate 
devid /number/ from there. =:^)

Please report back as I'm using btrfs raid1 as well, but my own tests are 
rather stale by this point and I'd have to figure it out as I went.  So 
I'm highly interested in your results. =:^)

(FWIW, personally I'd have made that btrfs device replace, instead of 
btrfs replace, to keep it grouped with the other device operations, but 
whatever, it's its own top-level command, now.  Tho at least the
btrfs-device manpage mentions btrfs replace and its manpage as well.  But 
I still think having replace as its own top-level command is confusing.)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 14:48       ` Suman C
                           ` (2 preceding siblings ...)
  2014-10-14 22:00         ` Duncan
@ 2014-10-15  4:11         ` Anand Jain
  3 siblings, 0 replies; 12+ messages in thread
From: Anand Jain @ 2014-10-15  4:11 UTC (permalink / raw)
  To: Suman C; +Cc: linux-btrfs



On 10/14/14 22:48, Suman C wrote:
> Hi,
>
> Here's a simple raid1 recovery experiment that's not working as expected.
>
> kernel: 3.17, latest mainline
> progs: 3.16.1
>
> I started with a simple raid1 mirror of 2 drives (sda and sdb). The
> filesystem is functional, I created one subvol, put some data,
> read/write tested etc..
>
> yanked the sdb out. (this is physical/hardware). btrfs fi show prints
> drive missing, as expected.


> powered the machine down. removed the "bad"(yanked out sdb) drive and
> replaced it with a new drive. Powered up the machine.

  Does new drive at sdb contain some stale btrfs SB ? and was that
  FS mounted during boot ?

  Or simply unmount; wipefs -a /dev/sdb; reboot. That will help to
  achieve your test case.

> The new drive shows up as sdb. btrfs fi show still prints drive missing.

  good in fact.

> mounted the filesystem with ro,degraded

  ro wasn't required. And now you should be able to replace using
  devid/missing string.

> tried adding the "new" sdb drive which results in the following error.
> (-f because the new drive has a fs from past)
>
> # btrfs device add -f /dev/sdb /mnt2/raid1pool
> /dev/sdb is mounted
>
> Unless I am missing something, this looks like a bug.

  Our progs check_mounted() is not too strong, there might be bug.
  But as of now some info as above is missing.


  If you could have experimental patch:
     [PATCH RFC] btrfs: introduce procfs interface for the device list

  And read /proc/fs/btrfs/devlist
    That will tell you the "real" status of devices inside the kernel.
    progs tries to act intelligent at times not required.


Anand


> Let me know, I can retest.
>
> Thanks
> Suman
>
> On Mon, Oct 13, 2014 at 7:13 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>>
>>
>>
>> On 10/14/14 03:50, Suman C wrote:
>>>
>>> I had progs 3.12 and updated to the latest from git(3.16). With this
>>> update, btrfs fi show reports there is a missing device immediately
>>> after i pull it out. Thanks!
>>>
>>> I am using virtualbox to test this. So, I am detaching the drive like so:
>>>
>>> vboxmanage storageattach <vm> --storagectl <controller> --port <port>
>>> --device <device> --medium none
>>>
>>> Next I am going to try and test a more realistic scenario where a
>>> harddrive is not pulled out, but is damaged.
>>
>>
>>
>>> Can/does btrfs mark a filesystem(say, 2 drive raid1) degraded or
>>> unhealthy automatically when one drive is damaged badly enough that it
>>> cannot be written to or read from reliably?
>>
>>
>>   There are some gaps as directly compared to an enterprise volume
>>   manager, which is being fixed.  but pls do report what you find.
>>
>> Thanks, Anand
>>
>>
>>
>>> Suman
>>>
>>> On Sun, Oct 12, 2014 at 7:21 PM, Anand Jain <Anand.Jain@oracle.com> wrote:
>>>>
>>>>
>>>> Suman,
>>>>
>>>>> To simulate the failure, I detached one of the drives from the system.
>>>>> After that, I see no sign of a problem except for these errors:
>>>>
>>>>
>>>>    Are you physically pulling out the device ? I wonder if lsblk or blkid
>>>>    shows the error ? reporting device missing logic is in the progs (so
>>>>    have that latest) and it works provided user script such as blkid/lsblk
>>>>    also reports the problem. OR for soft-detach tests you could use
>>>>    devmgt at http://github.com/anajain/devmgt
>>>>
>>>>    Also I am trying to get the device management framework for the btrfs
>>>>    with a more better device management and reporting.
>>>>
>>>> Thanks,  Anand
>>>>
>>>>
>>>>
>>>> On 10/13/14 07:50, Suman C wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> I am testing some disk failure scenarios in a 2 drive raid1 mirror.
>>>>> They are 4GB each, virtual SATA drives inside virtualbox.
>>>>>
>>>>> To simulate the failure, I detached one of the drives from the system.
>>>>> After that, I see no sign of a problem except for these errors:
>>>>>
>>>>> Oct 12 15:37:14 rock-dev kernel: btrfs: bdev /dev/sdb errs: wr 0, rd
>>>>> 0, flush 1, corrupt 0, gen 0
>>>>> Oct 12 15:37:14 rock-dev kernel: lost page write due to I/O error on
>>>>> /dev/sdb
>>>>>
>>>>> /dev/sdb is gone from the system, but btrfs fi show still lists it.
>>>>>
>>>>> Label: raid1pool  uuid: 4e5d8b43-1d34-4672-8057-99c51649b7c6
>>>>>        Total devices 2 FS bytes used 1.46GiB
>>>>>        devid    1 size 4.00GiB used 2.45GiB path /dev/sdb
>>>>>        devid    2 size 4.00GiB used 2.43GiB path /dev/sdc
>>>>>
>>>>> I am able to read and write just fine, but do see the above errors in
>>>>> dmesg.
>>>>>
>>>>> What is the best way to find out that one of the drives has gone bad?
>>>>>
>>>>> Suman
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
>>>>> in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: what is the best way to monitor raid1 drive failures?
  2014-10-14 20:11           ` Suman C
@ 2014-10-24 16:13             ` Chris Murphy
  0 siblings, 0 replies; 12+ messages in thread
From: Chris Murphy @ 2014-10-24 16:13 UTC (permalink / raw)
  To: Suman C; +Cc: linux-btrfs


On Oct 14, 2014, at 2:11 PM, Suman C <schakrava@gmail.com> wrote:

> After the reboot step, where I indicated that I mounted ro, I was
> unable to mount rw or rw,degraded. I get the "mount: wrong fs type,
> bad option, bad superblock…." error if I try to mount it rw.

I would test with kernel 3.17 at least, but better to use 3.18rc1 now. If you can reproduce it, then I'd file a bug. I'm a bit haphazard in my bug filing. It seems more effective to start a new subject with the word bug in brackets, and subject something like "can't mount rw,degraded" and then go through the reproduce steps being clear how the fs was created and all that. Also note not only what does happen, but what you expect to happen instead. Seems obvious but it should be done that way. I'd also file it as a bug the way we're asked to on the wiki which is bugzilla.kernel.org.

Chris Murphy

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2014-10-24 16:13 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2014-10-12 23:50 what is the best way to monitor raid1 drive failures? Suman C
2014-10-13  2:21 ` Anand Jain
2014-10-13 19:50   ` Suman C
2014-10-14  2:13     ` Anand Jain
2014-10-14 14:48       ` Suman C
2014-10-14 14:52         ` Rich Freeman
2014-10-14 15:05           ` Suman C
2014-10-14 19:15         ` Chris Murphy
2014-10-14 20:11           ` Suman C
2014-10-24 16:13             ` Chris Murphy
2014-10-14 22:00         ` Duncan
2014-10-15  4:11         ` Anand Jain

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).