public inbox for linux-btrfs@vger.kernel.org
From: "Aaron W. Swenson" <aaron@grandmasfridge.org>
To: linux-btrfs@vger.kernel.org
Subject: RAID1 array failed to read chunk root
Date: Mon, 22 Jan 2024 09:53:04 -0500
Message-ID: <87zfwxe7vf.fsf@grandmasfridge.org>

After moving residences, I've finally got my computer set up, only 
to find that the array fails to mount. When I try to mount the 
RAID array, I get:

	root # mount -o compress=lzo,noatime,degraded /dev/sdc /srv
	mount: /srv: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error.
	       dmesg(1) may have more information after failed mount system call.

And in dmesg I see:

	[394680.895543] BTRFS info (device sdd): using crc32c (crc32c-generic) checksum algorithm
	[394680.895555] BTRFS info (device sdd): use lzo compression, level 0
	[394680.895557] BTRFS info (device sdd): allowing degraded mounts
	[394680.895558] BTRFS info (device sdd): disk space caching is enabled
	[394680.895802] BTRFS error (device sdd): failed to read chunk root
	[394680.895903] BTRFS error (device sdd): open_ctree failed

Running the command:

	root # btrfs rescue chunk-recover -v /dev/sdd

takes a few hours (there are eight 4 TB drives in the array). It 
selected three devices from the RAID1 array (I think they were 
sdc, sdd, and sde, but that bit got purged from the scrollback 
buffer), and ultimately ended with:

	Invalid mapping for 17983143280640-17983143297024, got 23995541880832-23996615622656
	Couldn't map the block 17983143280640
	Couldn't read tree root
	open with broken chunk error
	Chunk tree recovery failed

Here are some stats about my machine:

# uname -a
Gentoo Linux martineau 6.1.28-gentoo #1 SMP Sat May 27 19:30:38 EDT 2023 x86_64 Intel(R) Core(TM) i3-4160 CPU @ 3.60GHz GenuineIntel GNU/Linux
# btrfs version
btrfs-progs v6.6.3

Here's a table of my drives (it excludes the new Seagate Ironwolf 
4 TB that's still in the packaging). All drives in Bay 1 and Bay 2 
are part of the same RAID1 array. The Crucial drive is an SSD 
that's set up in the boring, typical fashion as a root drive. A 
couple of drives have failed in the past, but I had been able to 
mount degraded and replace the failed drive. I should note that a 
couple of days ago the array reported /dev/sdg as missing (the 
same experience I've had twice before), which is why I have the 
spare drive. Now it isn't reporting anything about that drive.

| ID | Path     | Bay | Slot | Make    | Model                                 | Size  |
|----+----------+-----+------+---------+---------------------------------------+-------|
|  1 | /dev/sda |   0 |    0 | Crucial | BX100 (CT250BX100SSD1)                | 250GB |
| 10 | /dev/sdc |   1 |    1 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  9 | /dev/sdb |   1 |    2 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  7 | /dev/sdi |   1 |    3 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  8 | /dev/sdh |   1 |    4 | Seagate | Ironwolf (ST4000VN008-2DR1)           | 4TB   |
|  5 | /dev/sdf |   2 |    1 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  6 | /dev/sdd |   2 |    2 | Seagate | Ironwolf (ST4000VN008-2DR1)           | 4TB   |
|  4 | /dev/sdg |   2 |    3 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  3 | /dev/sde |   2 |    4 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |

It isn't the end of the world if I lose the data, but some of the 
videos and photos are sentimental. There's no time crunch on my 
end, so if this takes a long time to work through, I have the time 
to do so.

WKR,
Aaron
