From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: btrfs RAID5 or btrfs on md RAID5?
Date: Mon, 22 Sep 2025 18:36:03 +0930
Message-ID: <95ece5d8-0e5a-4db9-8603-c819980c3a3b@suse.com>
In-Reply-To: <20250922082854.GD2624931@tik.uni-stuttgart.de>



On 2025/9/22 17:58, Ulli Horlacher wrote:
> On Mon 2025-09-22 (17:11), Qu Wenruo wrote:
> 
>>> Is btrfs RAID5 ready for production usage or shall I use non-RAID btrfs on
>>> top of an md RAID5?
>>
>> Neither is perfect.
> 
> We live in a non-perfect world :-}
> 
> 
>> Btrfs RAID56 has no journal to protect against write hole.
> 
> What does this mean?
> What is a write hole and what is the danger with it?

Write hole means that if a power loss hits in the middle of a partial 
stripe update, the parity block may be left out-of-sync with the data 
blocks.

That stripe then no longer gets the full protection of RAID5.

E.g. if one device is lost after that power loss, btrfs may not be able 
to rebuild the correct data, because the stale parity no longer matches 
the surviving data blocks.
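
As a toy illustration (plain shell arithmetic only, nothing to do with 
how btrfs actually lays out stripes):

  # Toy model: 3 data blocks + 1 XOR parity block, as on a 4-disk RAID5.
  d0=$((0x11)); d1=$((0x22)); d2=$((0x33))
  parity=$(( d0 ^ d1 ^ d2 ))     # on-disk parity covers the old data

  d0=$((0x99))                   # partial stripe update: new d0 hits disk,
                                 # then power loss -- parity never rewritten

  # Later the device holding d1 dies; RAID5 rebuilds d1 from the rest:
  rebuilt=$(( parity ^ d0 ^ d2 ))
  printf 'rebuilt d1 = 0x%x, expected 0x22\n' "$rebuilt"   # prints 0xaa

With up-to-date parity the rebuild would return 0x22. Btrfs checksums 
will at least detect the bad rebuild, but the data itself is still gone.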

> 
> 
>> So you either run RAID5 for data only
> 
> This is a mkfs.btrfs option?
> Shall I use "mkfs.btrfs -m dup" or "mkfs.btrfs -m raid1"?

If you go RAID5 for data, then RAID1 is preferred for metadata (the -m 
option). DUP keeps both copies on the same device, so a single device 
failure can still take out both copies.
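
E.g. something like this (the device names are just placeholders for 
your four SSDs):

  mkfs.btrfs -d raid5 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd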

> 
> 
>> and run a full scrub after every unexpected power loss (slow, and no
>> further writes until the scrub is done, which is a further maintenance
>> burden).
> 
> Ubuntu has (like most Linux distributions) systemd.
> How can I detect a previous power loss and force full scrub on booting?

Not sure. You may have to dig into the systemd docs to find that out.
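
One untested idea (purely my assumption, I have not verified it): keep 
a marker file that only a clean shutdown removes, and scrub at boot if 
the marker is still there. Roughly:

  # run at boot, e.g. from a systemd oneshot unit; /mnt/data is a placeholder
  if [ -e /var/lib/scrub-on-unclean ]; then
      btrfs scrub start -B /mnt/data   # -B waits until the scrub finishes
  fi
  touch /var/lib/scrub-on-unclean      # re-arm the marker for this boot
  # a clean shutdown (e.g. ExecStop= of the same unit) would then run:
  #   rm -f /var/lib/scrub-on-unclean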

> 
> 
>> Or just don't use RAID5 at all.
> 
> You suggest btrfs RAID0?

I'd suggest RAID10. But that means you're "wasting" half of your capacity.
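
E.g. (again, placeholder device names):

  mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd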

Thanks,
Qu

> As I wrote: I have 4 x 4 TB SAS SSD (enterprise hardware, very reliable).
> 
> Another disk layout option for me could be:
> 
> 64 GB / filesystem RAID1
> 32 GB swap RAID1
> 3.9 TB /home
> 3.9 TB /data
> 3.9 TB /VM
> 3.9 TB /backup
> 
> In case of an SSD failure I have to recover from (external) backup.
> 



Thread overview: 21+ messages
2025-09-22  7:09 btrfs RAID5 or btrfs on md RAID5? Ulli Horlacher
2025-09-22  7:41 ` Qu Wenruo
2025-09-22  8:28   ` Ulli Horlacher
2025-09-22  9:06     ` Qu Wenruo [this message]
2025-09-22  9:23       ` Ulli Horlacher
2025-09-22  9:27         ` Qu Wenruo
2025-10-20  9:00           ` Ulli Horlacher
2025-10-20  9:31             ` Andrei Borzenkov
2025-09-22  9:43   ` Ulli Horlacher
2025-09-22 10:41     ` Qu Wenruo
2025-10-21  1:02   ` DanglingPointer
2025-10-21 15:46     ` Mark Harmstone
2025-10-21 15:53       ` Christoph Anton Mitterer
2025-10-21 16:15         ` Jukka Larja
2025-10-21 16:45         ` Mark Harmstone
2025-10-21 17:32           ` Andrei Borzenkov
2025-10-21 17:43             ` Mark Harmstone
2025-10-21 19:32           ` Goffredo Baroncelli
2025-10-21 22:19             ` DanglingPointer
2025-09-22  8:07 ` Lukas Straub
2025-09-22  8:50   ` Ulli Horlacher
