Linux Btrfs filesystem development
From: Andrei Borzenkov <arvidjaar@gmail.com>
To: Stefan Malte Schumacher <s.schumacher@netcologne.de>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Drives failures in irregular RAID1-Pool
Date: Fri, 28 Jul 2023 20:00:56 +0300
Message-ID: <333b028f-a916-ccf9-8339-408b8963fd8e@gmail.com>
In-Reply-To: <CAA3ktqmUXi3phYodmV7q8HQ4XvDvWo8q59z0UbR5TkQWcf5a=w@mail.gmail.com>

On 28.07.2023 18:26, Stefan Malte Schumacher wrote:
> Thanks for the quick reply. Is there any way for me to verify that the
> filesystem has redundant copies of all my files on different drives?

I can only think of "btrfs scrub".
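
For instance (the mount point below is hypothetical), a scrub walks all
data and metadata, verifies checksums on every copy, and repairs bad
blocks from the good mirror where it can:

  # start a background scrub on the mounted filesystem
  btrfs scrub start /mnt/pool
  # check progress and whether any uncorrectable errors were found
  btrfs scrub status /mnt/pool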

> I read that it was suggested to do a full rebalance when adding a
> drive to a RAID5 array. Should I do the same when adding a new disk to
> my array?
> 

I do not know where you read that, and I cannot comment on something I
have not seen. But for RAID1 I see no reason to do it.
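
If you do add a disk and want some existing data moved onto it, a
filtered balance is a lighter option than a full rebalance (a sketch
only; device name and mount point are hypothetical):

  # add the new device to the mounted pool
  btrfs device add /dev/sdX /mnt/pool
  # optionally relocate data chunks that are at most 75% full;
  # the usage filter keeps the balance from rewriting everything
  btrfs balance start -dusage=75 /mnt/pool
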

> Yours sincerely
> Stefan
> 
> Am Fr., 28. Juli 2023 um 17:22 Uhr schrieb Andrei Borzenkov
> <arvidjaar@gmail.com>:
>>
>> On 28.07.2023 16:59, Stefan Malte Schumacher wrote:
>>> Hello,
>>>
>>> I recently read something about raidz and TrueNAS, which led me to
>>> realize that, despite using btrfs for years as my main file storage,
>>> I could not answer the same question about it. Here it comes:
>>>
>>> I have a pool of harddisks of different sizes using RAID1 for Data and
>>> Metadata. Can the largest drive fail without causing any data loss? I
>>> always assumed that the data would be distributed in a way that would
>>> prevent data loss regardless of the drive size, but now I realize I
>>> have never experienced this before and should prepare for this
>>> scenario.
>>>
>>
>> RAID1 should store each data copy on a different drive, which means all
>> data on a failed disk must have another copy on some other disk.
>>
>>> Total devices 6 FS bytes used 27.72TiB
>>> devid    7 size 9.10TiB used 6.89TiB path /dev/sdb
>>> devid    8 size 16.37TiB used 14.15TiB path /dev/sdf
>>> devid    9 size 9.10TiB used 6.90TiB path /dev/sda
>>> devid   10 size 12.73TiB used 10.53TiB path /dev/sdd
>>> devid   11 size 12.73TiB used 10.54TiB path /dev/sde
>>> devid   12 size 9.10TiB used 6.90TiB path /dev/sdc
>>>
>>> Yours sincerely
>>> Stefan Schumacher
>>


Thread overview: 6+ messages
2023-07-28 13:59 Drives failures in irregular RAID1-Pool Stefan Malte Schumacher
2023-07-28 15:22 ` Andrei Borzenkov
2023-07-28 15:26   ` Stefan Malte Schumacher
2023-07-28 15:53     ` joshua
2023-07-28 17:00     ` Andrei Borzenkov [this message]
2023-07-28 18:52       ` Forza
