From: Phil Turmel <philip@turmel.org>
To: Cooper tron <cooper@cobhc.ca>, linux-raid@vger.kernel.org
Subject: Re: raid 6 4 disk failure, improper --create leads to bad superblock
Date: Sun, 29 Dec 2013 15:33:21 -0500
Message-ID: <52C08711.2010200@turmel.org>
In-Reply-To: <CAG9GHm3_UNO_jZguFtGpMwMs=a3ut2FXjNWfFV9MU1nC-YJYWQ@mail.gmail.com>

On 12/28/2013 06:02 PM, Cooper tron wrote:
> If you could please CC me directly. TIA
> 
> I have a RAID array that I've recently realized is riddled with flaws.
> I'd like to be able to mount it one last time to get a current backup
> of user-generated data, then rebuild it with proper hardware.

[trim /]

> I recently added one more drive, going from 9 to 10. Here is where
> things get murky. We just had a killer ice storm, with brownouts and
> power issues for days, right as I was growing the array. One drive
> (sde, at the time) failed during the grow. While investigating, I was
> forced to shut down because my UPS was screaming at me. Once power was
> back, I booted up and there was a second drive marked faulty (I don't
> recall which). Smartctl told me both drives were OK, so I re-added
> them; as they were resyncing, two more got marked faulty....  There I
> sat with four drives out of the array (when I should have come for
> help). No amount of --assemble would start the array. I did not try
> any --force. All the drives tested as relatively healthy, so I took
> a chance.
> 
> I finally got the array to start with --create --raid-devices=10 /dev/sda (etc.)

--force --assemble was your only hope.
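
For the archives, a forced assemble looks something like this (the
array name and device list are illustrative, not taken from your
report):

  # Stop any half-assembled remains first.
  mdadm --stop /dev/md0

  # Force assembly from the surviving members; mdadm kicks out the
  # stalest event counts and, where possible, resumes the interrupted
  # reshape (add --backup-file=... if the grow was started with one).
  mdadm --assemble --force /dev/md0 /dev/sd[a-j]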

You did a --create while the devices were in an incomplete --grow
state.  You also did nothing to maintain the original metadata version,
data offset, or chunk size.  Your description implies you also left off
--assume-clean.
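
To make that concrete: a re-create over an existing array only stands a
chance if *every* layout parameter is pinned to its old value.  A
sketch, with placeholder numbers you would have needed to recover from
saved --examine output:

  # All values below are examples, NOT your array's real parameters.
  # Device order must match the original role order exactly; the brace
  # expansion assumes it happened to be alphabetical.
  mdadm --create /dev/md0 --assume-clean \
        --level=6 --raid-devices=10 \
        --metadata=1.2 --chunk=64 \
        --data-offset=2048 \
        /dev/sd{a..j}

  # --chunk and --data-offset default to Kibibytes; check the units
  # against your own metadata before trusting any of this.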

Your data is *gone*.

[trim /]

> I found an almost exact scenario in some old list emails, where it
> was suggested to --create again with the proper permutations and the
> RAID should rebuild with, hopefully, some data intact. So I tried
> again, this time just specifying a 64K chunk. After an 11-hour
> resync, I still have a bad superblock when trying to mount/fsck.

There is no resync at all when using --assume-clean, and that is vital
to performing such a parameter search successfully.  Leave it out even
*once* and *poof*, your data is gone.
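
The search only works if every trial is non-destructive.  One iteration
looks roughly like this (candidate order and chunk are placeholders):

  # Create with --assume-clean so md never writes to the members.
  mdadm --create /dev/md0 --assume-clean \
        --level=6 --raid-devices=10 --chunk=64 \
        /dev/sd{a..j}

  fsck -n /dev/md0         # -n: check only, never modify

  mdadm --stop /dev/md0    # tear down and try the next permutation

Safer still is to run the whole search against copy-on-write overlays
of the disks, so that even a slip cannot touch the originals.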

> Without any records of the order of failures, or even an old
> --examine or --detail to show me how the RAID was shaped when it was
> running, or its last 'sane' state, is there any chance I will see
> that data again?

Nope, sorry.  Even if you had used --assume-clean, you disrupted a
--grow operation, losing the information MD needed to continue reshaping
from one layout to another.
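
For anyone reading along later: capture the array's geometry while it
is healthy, after every reshape, and keep it somewhere off the array:

  mdadm --detail /dev/md0 > md0-detail.txt
  mdadm --examine /dev/sd{a..j} > md0-examine.txt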

> Happy Holidays!

Merry Christmas and Happy New Year.  My condolences on your lost data.

Phil

