From: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
To: "Scott E. Blomquist" <sb@techsquare.com>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: trouble mounting btrfs filesystem....
Date: Tue, 14 Aug 2018 17:13:54 +0200
Message-ID: <ed6f2563-8bfc-10bb-4c5c-405894d2807c@mendix.com>
In-Reply-To: <23408.34902.778845.675960@techsquare.com>

On 08/12/2018 09:19 PM, Scott E. Blomquist wrote:
> 
> Hi All,
> 
> Early this morning there was a power glitch that affected our system.
> 
> The second enclosure went offline but the file system stayed up for a
> bit before rebooting and recovering the 2 missing arrays sdb1 and
> sdc1.
> 
> When mounting we get....
> 
>     Aug 12 14:52:43 localhost kernel: [ 8536.649270] BTRFS info (device sda1): has skinny extents
>     Aug 12 14:54:52 localhost kernel: [ 8665.900321] BTRFS error (device sda1): parent transid verify failed on 177443463479296 wanted 2159304 found 2159295
>     Aug 12 14:54:52 localhost kernel: [ 8665.985512] BTRFS error (device sda1): parent transid verify failed on 177443463479296 wanted 2159304 found 2159295
>     Aug 12 14:54:52 localhost kernel: [ 8666.056845] BTRFS error (device sda1): failed to read block groups: -5
>     Aug 12 14:54:52 localhost kernel: [ 8666.254178] BTRFS error (device sda1): open_ctree failed
> 
> We are here...
> 
>     # uname -a
>     Linux localhost 4.17.14-custom #1 SMP Sun Aug 12 11:54:00 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
> 
>     # btrfs --version
>     btrfs-progs v4.17.1
>     
>     # btrfs filesystem show
>     Label: none  uuid: 8337c837-58cb-430a-a929-7f6d2f50bdbb
>             Total devices 3 FS bytes used 75.05TiB
>             devid    1 size 47.30TiB used 42.07TiB path /dev/sda1
>             devid    2 size 21.83TiB used 16.61TiB path /dev/sdb1
>             devid    3 size 21.83TiB used 16.61TiB path /dev/sdc1

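The "wanted 2159304 found 2159295" means that tree block on disk is 9
transactions older than what its parent points at, so a chunk of
recently written metadata apparently never made it to stable storage.
If you want to see the current generation of each superblock yourself,
something like this should do (just a sketch, device names taken from
your fs show output above):

    # btrfs inspect-internal dump-super /dev/sda1 | grep -i generation
    # btrfs inspect-internal dump-super /dev/sdb1 | grep -i generation
    # btrfs inspect-internal dump-super /dev/sdc1 | grep -i generation
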
What kind of devices are these? You mention an enclosure... is it a
bunch of disks doing their own RAID, with btrfs on top?
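
If there is hardware RAID inside those enclosures, it would also help
to see what the OS side actually looks like. Roughly (columns and
device names may differ on your system, and smartctl may need a -d
option depending on the controller):

    # lsblk -o NAME,SIZE,TYPE,MODEL,TRAN
    # smartctl -i /dev/sdb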

Do you have RAID1 metadata on top of that, or single?
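
If you can't mount to check with btrfs fi df, the chunk tree usually
still reads fine even when the extent tree is damaged, so something
like this should show which profiles are in use (rough sketch, output
format may vary a bit between progs versions):

    # btrfs inspect-internal dump-tree -t chunk /dev/sda1 | grep -o 'type [A-Z0-9|]*' | sort | uniq -c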

If you do end up going the mkfs route (I read the other replies), then
also find out what actually happened. If your storage loses data in a
situation like this even after telling btrfs the data was safely on
disk, you're running a dangerous setup.
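
Before wiping it, it's probably worth trying a read-only mount from an
older tree root and a dry run of btrfs restore, to get an idea how
much is still reachable. Roughly (mount point and target path are just
examples, and -D means nothing actually gets written):

    # mount -o ro,usebackuproot /dev/sda1 /mnt
    # btrfs restore -D -v /dev/sda1 /mnt/restore-test

And for finding out how the writes got lost: check whether the arrays
have volatile write cache enabled while ignoring flushes. Something
like this for the exported devices (SATA vs SAS tooling differs, and a
RAID controller may hide the real per-disk cache settings, so check
its own configuration as well):

    # hdparm -W /dev/sdb
    # sdparm --get=WCE /dev/sdb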

-- 
Hans van Kranenburg
