From: "Lluís Batlle i Rossell" <viric@viric.name>
To: Btrfs mailing list <linux-btrfs@vger.kernel.org>
Subject: Can't replace a faulty disk of raid1
Date: Fri, 26 Oct 2012 12:57:22 +0200
Message-ID: <20121026105721.GU2052@vicerveza.homeunix.net>

Hello,

I had a raid1 btrfs filesystem (540GB) on vanilla 3.6.3. A disk failed, so I
removed it while the machine was powered off, plugged in a new one, partitioned
it (to only 110GB, by mistake), and added it to btrfs.
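
For reference, the add step was roughly this (device name as it appears now,
partitioning details omitted):

# btrfs device add /dev/sdb1 /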

I tried to remove the missing device, and after a while it failed with
"Input/output error". Subsequent attempts simply gave "Invalid argument".
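
Concretely, with / mounted degraded, the remove attempt was:

# btrfs device delete missing /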

I repartitioned the disk to its full size, rebooted the system, and grew the
filesystem onto the new space: "btrfs fi resize 3:max /"

# btrfs fi show
Label: 'mainbtrfs'  uuid: 2ebf9e90-104c-47a4-adff-fada1ce3b682
    Total devices 3 FS bytes used 445.06GB
    devid    1 size 539.95GB used 539.95GB path /dev/sda5
    devid    3 size 539.95GB used 96.90GB path /dev/sdb1   <= New disk
    *** Some devices missing

The size appeared fine (I checked it down to the byte, to make sure I had not
made it e.g. 4K smaller). But attempting 'btrfs device delete missing /' again
gave the same outcome.
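
For the size check I compared raw byte counts, with something along the lines
of (assuming blockdev from util-linux; exact invocation from memory):

# blockdev --getsize64 /dev/sda5
# blockdev --getsize64 /dev/sdb1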

I tried "btrfs balance start /", and after a while, also ends with "Input/output
error". In any of the cases above, I have an error message in dmesg. dmesg only
shows usual 'relocating block...' and 'found 4 extents'.

I see that /dev/sdb1 never goes beyond that 'used 96.90GB', no matter which of
the operations above I run. So I'm stuck with a degraded mount, unable to get
back to a full raid1.

Some data:

# btrfs fi df /
Data, RAID1: total=507.62GB, used=417.08GB
Data: total=25.32GB, used=22.48GB
System, RAID1: total=32.00MB, used=92.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=19.97GB, used=5.50GB

Mount log:
[   10.939163] device label mainbtrfs devid 1 transid 194548 /dev/sda5
[   10.939856] btrfs: allowing degraded mounts
[   10.939939] btrfs: disk space caching is enabled
[   10.940652] warning devid 2 missing
[   10.987500] btrfs: bdev (null) errs: wr 6702, rd 2632, flush 312, corrupt 1970, gen 573
[   10.987636] btrfs: bdev /dev/sda5 errs: wr 52, rd 13, flush 0, corrupt 2, gen 8
[   14.391309] btrfs: unlinked 1 orphans
[   22.319849] btrfs: use lzo compression
[   22.319937] btrfs: disk space caching is enabled
[   27.481405] udevd[1451]: starting version 173
[   28.493786] device label mainbtrfs devid 3 transid 194549 /dev/sdb1
[   28.930870] device fsid 30781650-3053-4273-b640-ec86a442c945 devid 1 transid 2272 /dev/sda3
[   28.947632] device label mainbtrfs devid 1 transid 194549 /dev/sda5


Any help?

Thank you,
Lluís.
