linux-btrfs.vger.kernel.org archive mirror
From: Wolfgang Mader <Wolfgang_Mader@brain-frog.de>
To: BTRFS <linux-btrfs@vger.kernel.org>
Subject: Read i/o errs and disk replacement
Date: Tue, 18 Feb 2014 14:19:47 +0100	[thread overview]
Message-ID: <8051054.BLVnDBVVi7@fuckup> (raw)

Hi all,

well, I hit the first incident where I really have to work on my btrfs 
setup. To get things straight, I want to double-check here so I don't screw 
things up right from the start. We are talking about a home server: there is 
no time or user pressure involved, and there are backups, too.


Software
-------------
Linux 3.13.3
Btrfs v3.12


Hardware
---------------
Five 1 TB hard drives configured as RAID10 for both data and metadata:
    Data, RAID10: total=282.00GiB, used=273.33GiB
    System, RAID10: total=64.00MiB, used=36.00KiB
    Metadata, RAID10: total=1.00GiB, used=660.48MiB


Error
--------
This is not btrfs' fault but due to a hard-drive error. I saw in the system logs
    btrfs: bdev /dev/sdb errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
and a subsequent check of the btrfs device statistics showed
    [/dev/sdb].write_io_errs   0
    [/dev/sdb].read_io_errs    2
    [/dev/sdb].flush_io_errs   0
    [/dev/sdb].corruption_errs 0
    [/dev/sdb].generation_errs 0

So, I have a read error on sdb.
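For reference, these counters can be read back at any time with the device
stats subcommand; a minimal sketch, assuming the filesystem is mounted at
/mnt/pool (a placeholder path, substitute your own mount point):

```
# Per-device error counters for all members of the mounted filesystem
# (needs root):
btrfs device stats /mnt/pool

# Or query a single member device directly:
btrfs device stats /dev/sdb
```

The counters are cumulative since the last reset, so a value that stays at 2
across reboots means no new errors have occurred since the original incident.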


Questions
---------------
1)
Do I have to take action immediately (shut down the system, unmount the file 
system)? Can I even ignore the error? Unfortunately, I cannot access SMART 
information through the SATA interface of the enclosure which hosts the drives.

2)
I can only replace the disk, not add a new one and then swap over; there is no 
space left in the disk enclosure I am using. I also cannot guarantee that, if 
I remove sdb and start the system up again, all the other disks will keep 
their current names, or that the newly added disk will be named sdb again. 
Is this an issue?
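On the naming point: btrfs stamps each member with the filesystem UUID and a
stable per-device id (devid) in its superblock, so the kernel names like
/dev/sdb are not what ties the array together. A sketch of how to see the
stable identifiers, with /mnt/pool again as a placeholder mount point:

```
# List member devices with their devids and the filesystem UUID:
btrfs filesystem show /mnt/pool

# udev's by-id links also give stable, name-independent device paths:
ls -l /dev/disk/by-id/
```

Because devices are matched by UUID at mount time, a reshuffle of /dev/sdX
names after a reboot should not by itself confuse the filesystem.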

3)
I know that btrfs can handle disks of different sizes. Is there a downside if I 
add a 3 TB disk to the 1 TB disks? For example, is more data stored on the 
3 TB disk, and if this one fails do I lose redundancy? Is a soft transition to 
3 TB, where I replace every dying 1 TB disk with a 3 TB disk, advisable?
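For what it is worth, with two copies of every chunk (as in btrfs raid1/raid10)
a common back-of-the-envelope estimate for usable capacity is
min(total / 2, total - largest disk). A sketch of that arithmetic for four
1 TB disks plus one 3 TB disk; the numbers are illustrative only, not a
statement about the allocator's exact chunk placement:

```shell
# Rough usable-capacity estimate for 2-copy btrfs profiles (raid1/raid10).
# usable = min(total / 2, total - largest); all sizes in GB here.
disks="1000 1000 1000 1000 3000"

total=0
largest=0
for d in $disks; do
    total=$((total + d))
    if [ "$d" -gt "$largest" ]; then largest=$d; fi
done

half=$((total / 2))
rest=$((total - largest))
usable=$(( half < rest ? half : rest ))

echo "total=${total}GB largest=${largest}GB usable=${usable}GB"
```

With these numbers the estimate comes out to 3.5 TB usable: the 3 TB disk
does end up holding more chunks than any single 1 TB disk, but each chunk
still has its second copy on another device, so single-disk redundancy is
preserved; losing the 3 TB disk just means more data to rebuild.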


Proposed solution for the current issue
--------------------------------------------------------------
1)
Delete the faulted drive using
    btrfs device delete /dev/sdb /path/to/pool
2)
Format the new disk with btrfs
    mkfs.btrfs /dev/newdiskname
3)
Add the new disk to the filesystem using
    btrfs device add /dev/newdiskname /path/to/pool
4)
Balance the file system
    btrfs filesystem balance /path/to/pool

Is this the proper way to deal with the situation?
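As a possible alternative to the delete/add/balance sequence above, kernels of
this vintage (3.8 and later, so including 3.13) also offer btrfs replace,
which copies the old device's contents directly onto the new one and can
address the source by its devid even after the old disk has been pulled. A
hedged sketch, with /dev/sdnew, a devid of 2, and the mount point /mnt/pool
all as placeholder assumptions:

```
# One-step replacement while the failing disk is still readable:
btrfs replace start /dev/sdb /dev/sdnew /mnt/pool

# If the old disk has already been removed (no free slot in the
# enclosure), mount degraded and address it by devid instead:
mount -o degraded /dev/sdc /mnt/pool
btrfs replace start 2 /dev/sdnew /mnt/pool

# Check progress:
btrfs replace status /mnt/pool
```

The degraded variant seems to fit the no-free-slot situation described above,
since it never needs the old and new disks connected at the same time.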


Thank you for your advice.
Best,
Wolfgang

Thread overview: 8+ messages
2014-02-18 13:19 Wolfgang Mader [this message]
2014-02-18 18:48 ` Read i/o errs and disk replacement Chris Murphy
2014-02-18 21:33   ` Wolfgang Mader
2014-02-18 22:02     ` Chris Murphy
2014-02-18 22:45       ` Duncan
2014-02-18 23:12         ` Chris Murphy
2014-02-19 20:05       ` Wolfgang Mader
2014-02-18 22:54   ` Duncan
