From: Anand Jain <anand.jain@oracle.com>
To: Martin <develop@imagmbh.de>
Cc: Duncan <1i5t5.duncan@cox.net>, linux-btrfs@vger.kernel.org
Subject: Re: rw-mount-problem after raid1-failure
Date: Wed, 10 Jun 2015 15:46:52 +0800 [thread overview]
Message-ID: <5577EB6C.6090405@oracle.com> (raw)
In-Reply-To: <1631897.Efqz0Ef53G@malu-aspire-v3-771>
On 06/10/2015 02:58 PM, Martin wrote:
> Hello Anand,
>
> the
>
>> mount -o degraded <good-disk> <-- this should work
>
> is my problem. The first few times it worked, but suddenly, after a reboot, it fails
> with the message "BTRFS: too many missing devices, writeable mount is not allowed"
> in the kernel log.
Is the failed (or failing) disk still physically in the system?
When btrfs hits EIO on the intermittently failing disk,
ro-mode kicks in (there are some opportunities for fixes, which
I am working on). To recover, the approach is to turn the failing
disk into a missing disk instead: pull the failing disk out of
the system and reboot. When the system finds the disk missing
(rather than getting EIO) it should mount rw,degraded (from the VM part
at least), and then a replace (with a new disk) should work.
Thanks, Anand
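[Editor's sketch: assuming the rw,degraded mount succeeds once the failing disk has been physically pulled, the replace step described above could look roughly like this. The device names, mount point, and missing devid are hypothetical, loosely based on the `btrfs fi show` output quoted in this thread.]

```shell
# /dev/sdb2 = surviving disk, /dev/sdc = new replacement disk,
# devid 3 = the missing device (all hypothetical).
mount -o degraded /dev/sdb2 /backup2
btrfs fi show /backup2            # note which devid is reported missing
# Replace the missing devid with the new disk; -r tells replace to read
# from the source device only if no other good mirror exists (the source
# is absent here anyway, so all data comes from the surviving mirror).
btrfs replace start -r 3 /dev/sdc /backup2
btrfs replace status /backup2     # monitor the rebuild
```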
> "btrfs fi show /backup2" shows:
> Label: none uuid: 6d755db5-f8bb-494e-9bdc-cf524ff99512
> Total devices 2 FS bytes used 3.50TiB
> devid 4 size 7.19TiB used 4.02TiB path /dev/sdb2
> *** Some devices missing
>
> I suppose there is a "marker", telling the system only to mount in ro-mode?
>
> Due to the ro-mount I can't replace the missing one because all the btrfs-
> commands need rw-access ...
>
> Martin
>
> Am Mittwoch, 10. Juni 2015, 14:38:38 schrieb Anand Jain:
>> Ah, thanks David. So it's a 2-disk RAID1.
>>
>> Martin,
>>
>> Disk pool error handling is primitive as of now; read-only is the only
>> action it takes, and the rest of the recovery is manual. That's
>> unacceptable for data-center solutions, so I don't recommend btrfs
>> volume management (VM) in production yet. But we are working to make
>> the VM complete.
>>
>> For now, for your pool recovery, please try this:
>>
>> - After reboot.
>> - Unload and reload the btrfs module (so that the kernel device list is empty).
>> - mount -o degraded <good-disk> <-- this should work.
>> - btrfs fi show -m <-- should show the device as missing; if it doesn't, let me know.
>> - Do a replace of the missing disk without reading the source disk.
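[Editor's sketch: as concrete commands, the quoted steps might look like the following. The device names, mount point, and devid are assumptions for this thread, not confirmed values; run as root.]

```shell
# 1. After reboot, make sure the filesystem is not mounted, then reload the
#    btrfs module so the kernel forgets its stale device list.
umount /backup2 2>/dev/null || true
rmmod btrfs && modprobe btrfs
# 2. Mount the surviving disk degraded.
mount -o degraded /dev/sdb2 /backup2
# 3. Confirm the pool reports a missing device (-m limits output to
#    mounted filesystems).
btrfs fi show -m
# 4. Replace the missing devid (placeholder: 3) with the new disk
#    (placeholder: /dev/sdc) without reading from the absent source (-r).
btrfs replace start -r 3 /dev/sdc /backup2
```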
>>
>> Good luck.
>>
>> Thanks, Anand
>>
>> On 06/10/2015 11:58 AM, Duncan wrote:
>>> Anand Jain posted on Wed, 10 Jun 2015 09:19:37 +0800 as excerpted:
>>>> On 06/09/2015 01:10 AM, Martin wrote:
>>>>> Hello!
>>>>>
>>>>> I have a raid1-btrfs-system (Kernel 3.19.0-18-generic, Ubuntu Vivid
>>>>> Vervet, btrfs-tools 3.17-1.1). One disk failed some days ago. I could
>>>>> remount the remaining one with "-o degraded". After one day and some
>>>>> write-operations (with no errors) I had to reboot the system. And now
>>>>> I can not mount "rw" anymore, only "-o degraded,ro" is possible.
>>>>>
>>>>> In the kernel log I found BTRFS: too many missing devices, writeable
>>>>> mount is not allowed.
>>>>>
>>>>> I read about https://bugzilla.kernel.org/show_bug.cgi?id=60594 but I
>>>>> did no conversion to a single drive.
>>>>>
>>>>> How can I mount the disk "rw" to remove the "missing" drive and add a
>>>>> new one?
>>>>> Because there are many snapshots of the filesystem, copying the system
>>>>> would be only the last alternative ;-)
>>>>
>>>> How many disks did you have in the RAID1? How many have failed?
>>>
>>> The answer is (a bit indirectly) in what you quoted. Repeating:
>>>>> One disk failed[.] I could remount the remaining one[.]
>>>
>>> So it was a two-device raid1, one failed device, one remaining, unfailed.
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
Thread overview: 12+ messages
2015-06-08 17:10 rw-mount-problem after raid1-failure Martin
2015-06-10 1:19 ` Anand Jain
2015-06-10 3:58 ` Duncan
2015-06-10 6:38 ` Anand Jain
2015-06-10 6:58 ` Martin
2015-06-10 7:46 ` Anand Jain [this message]
2015-06-10 12:05 ` Martin
2015-06-11 0:04 ` Anand Jain
2015-06-11 13:03 ` Martin
2015-06-12 10:38 ` Anand Jain
2015-06-14 18:24 ` Martin
2015-06-15 0:58 ` Anand Jain