* Does Linux-6.12 have the missing dev = single/degraded chunk bug?
@ 2025-04-17 20:53 Nicholas D Steeves
From: Nicholas D Steeves @ 2025-04-17 20:53 UTC (permalink / raw)
To: linux-btrfs
Hi,
I've started to review documentation of old bugs and workarounds for the
upcoming Debian 13 (Trixie) release, and I'm wondering what the state of the
single/degraded chunk bug is for Linux-6.12.
Specifically, I seem to remember that users/sysadmins had to resolve a
raid1 with a missing device before the system rebooted a second time. The
reproducer was:
1. Device disappears.
2. Reboot occurs.
3. Sysadmin doesn't notice degraded raid1 (or raid10), and btrfs writes
profile=single chunks.
4. System reboots a second time.
5. The btrfs volume is now permanently read-only.
I vaguely remember that profile=degraded chunks may have been introduced
some time between 2022 and now. Does this mean state #5 no longer
occurs, because profile=degraded chunks are written at state #3, so that
a sysadmin can simply add a new device and rebalance?
I also seem to remember that there's a "mount -o degraded" state, and
that it's not automatic, but I can't remember whether it's a state that
can be recovered from with a device add and rebalance.
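For reference, the recovery path I have in mind looks roughly like the
following sketch (device names and mount point are illustrative; this
assumes a raid1 volume with one missing device and requires root):

```shell
# Mount despite the missing device (not automatic; must be requested):
mount -o degraded /dev/sda /mnt

# Add a replacement device, then drop the record of the missing one:
btrfs device add /dev/sdc /mnt
btrfs device remove missing /mnt

# Convert any chunks that were written with a reduced profile back to
# raid1; the "soft" filter skips chunks that are already raid1:
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```

Whether this sequence is sufficient after the degraded mount has written
reduced-redundancy chunks is exactly what I'm asking about.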
Finally, are raid1, raid10, raid1c3, and raid1c4 all comparably mature
at this point? Particularly with respect to this old and surprising
"gotcha"?
Kind regards,
Nicholas
* Does Linux-6.12 have the missing dev = single/degraded chunk bug?
@ 2025-12-02 2:01 Nicholas D Steeves
From: Nicholas D Steeves @ 2025-12-02 2:01 UTC (permalink / raw)
To: linux-btrfs
Hi,
I'm reviewing documentation of outstanding bugs and workarounds, and I'm
wondering what the state of the single/degraded chunk bug is for
Linux-6.12.
Specifically, I seem to remember that users/sysadmins had to resolve a
raid1 with a missing device before the system rebooted a second time. The
reproducer was:
1. Device disappears.
2. Reboot occurs.
3. Filesystem fails to mount due to the missing device.
4. Sysadmin therefore mounts with "-o degraded", and btrfs writes
profile=single chunks.
5. For whatever reason, the sysadmin doesn't succeed in replacing the
missing device and rebalancing both data and metadata.
6. System reboots a second time.
7. The btrfs volume is now permanently read-only.
I vaguely remember that profile=degraded chunks may have been introduced
some time between 2022 and now. Does this mean state #7 no longer
occurs, because profile=degraded chunks are written at state #4, so that
a sysadmin can still add a new device and rebalance after the second
reboot?
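Put differently, after step 4 above, a sketch of how one might check
whether the dangerous profile=single chunks were actually written (mount
point is illustrative; requires root):

```shell
# Per-profile allocation summary; a "Data,single" or "Metadata,single"
# line on a raid1 volume would indicate chunks written while degraded:
btrfs filesystem df /mnt

# More detailed per-device breakdown of the same information:
btrfs filesystem usage /mnt
```

If profile=degraded chunks replaced profile=single chunks at some point,
I assume the output at step 4 would differ accordingly, but I don't know
what the current kernel actually writes.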
Finally, are raid1, raid10, raid1c3, and raid1c4 all comparably mature
at this point?
Regards,
Nicholas