From: Forza <forza@tnonline.net>
To: Andrei Borzenkov <arvidjaar@gmail.com>,
Stefan Malte Schumacher <s.schumacher@netcologne.de>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Drives failures in irregular RAID1-Pool
Date: Fri, 28 Jul 2023 20:52:18 +0200 (GMT+02:00)
Message-ID: <ed04c54.104d2cbb.1899dd83132@tnonline.net>
In-Reply-To: <333b028f-a916-ccf9-8339-408b8963fd8e@gmail.com>
---- From: Andrei Borzenkov <arvidjaar@gmail.com> -- Sent: 2023-07-28 - 19:00 ----
> On 28.07.2023 18:26, Stefan Malte Schumacher wrote:
>> Thanks for the quick reply. Is there any way for me to validate if the
>> filesystem has redundant copies of all my files on different drives?
>
> I can only think of "btrfs scrub".
>
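A scrub reads every copy of data and metadata on all devices and verifies it against the checksums, so it will flag any copy that is corrupted or unreadable. For example (the mount point is just a placeholder):

  # btrfs scrub start /mnt/pool
  # btrfs scrub status /mnt/pool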
>> I read that it was suggested to do a full rebalance when adding a
>> drive to a RAID5 array. Should I do the same when adding a new disk to
>> my array?
>>
>
> I do not know where you read it and I cannot comment on something I have
> not seen. But for RAID1 I do not see any reason to do it.
It may be needed if the allocation across the devices becomes too unbalanced, otherwise you may end up with ENOSPC (out of space) errors even while the filesystem still shows free space, since RAID1 needs unallocated space on at least two devices for every new chunk.
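If the allocation does get lopsided, a filtered balance usually sorts it out (the usage thresholds and mount point below are only examples):

  # btrfs balance start -dusage=75 -musage=75 /mnt/pool
  # btrfs balance status /mnt/pool

A full balance after adding a device works too, it just rewrites much more data.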
There are also size restrictions to think about when mixing drives of different sizes. Use the btrfs size calculator to see whether you would end up with unusable space with the sizes you have:
https://carfax.org.uk/btrfs-usage/?c=2&slo=1&shi=1&p=0&dg=1&d=910&d=1273&d=1273&d=910&d=1637&d=910
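For reference, the calculator basically simulates the chunk allocator: every RAID1 chunk takes space from the two devices that currently have the most unallocated room. A minimal Python sketch of that idea, with your drive sizes rounded to GiB and a simplified 1 GiB chunk size (an approximation, not the real allocator):

#!/usr/bin/env python3
# Greedy RAID1 (two copies) allocation: each chunk lands on the two
# devices with the most unallocated space remaining.

def raid1_usable(free_gib, chunk=1):
    free = list(free_gib)
    usable = 0
    while True:
        free.sort()
        if free[-2] < chunk:       # fewer than two devices have room left
            break
        free[-1] -= chunk          # one copy on each of the two devices
        free[-2] -= chunk          # with the most free space
        usable += chunk
    return usable, sum(free)       # usable data space, unusable leftover

drives = [9318, 16763, 9318, 13036, 13036, 9318]   # ~9.10/16.37/12.73 TiB in GiB
usable, wasted = raid1_usable(drives)
print(f"usable: {usable/1024:.2f} TiB  unusable: {wasted/1024:.2f} TiB")

With your sizes the largest drive (16.37 TiB) is smaller than the sum of all the others, so it should come out with essentially no unusable space.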
>
>> Yours sincerely
>> Stefan
>>
>> Am Fr., 28. Juli 2023 um 17:22 Uhr schrieb Andrei Borzenkov
>> <arvidjaar@gmail.com>:
>>>
>>> On 28.07.2023 16:59, Stefan Malte Schumacher wrote:
>>>> Hello,
>>>>
>>>> I recently read something about raidz and truenas, which led to me
>>>> realizing that despite using it for years as my main file storage I
>>>> couldn't answer the same question regarding btrfs. Here it comes:
>>>>
>>>> I have a pool of harddisks of different sizes using RAID1 for Data and
>>>> Metadata. Can the largest drive fail without causing any data loss? I
>>>> always assumed that the data would be distributed in a way that would
>>>> prevent data loss regardless of the drive size, but now I realize I
>>>> have never experienced this before and should prepare for this
>>>> scenario.
>>>>
>>>
>>> RAID1 should store each data copy on a different drive, which means all
>>> data on a failed disk must have another copy on some other disk.
>>>
>>>> Total devices 6 FS bytes used 27.72TiB
>>>> devid 7 size 9.10TiB used 6.89TiB path /dev/sdb
>>>> devid 8 size 16.37TiB used 14.15TiB path /dev/sdf
>>>> devid 9 size 9.10TiB used 6.90TiB path /dev/sda
>>>> devid 10 size 12.73TiB used 10.53TiB path /dev/sdd
>>>> devid 11 size 12.73TiB used 10.54TiB path /dev/sde
>>>> devid 12 size 9.10TiB used 6.90TiB path /dev/sdc
>>>>
>>>> Yours sincerely
>>>> Stefan Schumacher
>>>
>
Thread overview:
2023-07-28 13:59 Drives failures in irregular RAID1-Pool Stefan Malte Schumacher
2023-07-28 15:22 ` Andrei Borzenkov
2023-07-28 15:26 ` Stefan Malte Schumacher
2023-07-28 15:53 ` joshua
2023-07-28 17:00 ` Andrei Borzenkov
2023-07-28 18:52 ` Forza [this message]