linux-raid.vger.kernel.org archive mirror
From: NeilBrown <nfbrown@novell.com>
To: George Rapp <george.rapp@gmail.com>,
	Mikael Abrahamsson <swmike@swm.pp.se>
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID 6 "Failed to restore critical section for reshape, sorry." - recovery advice?
Date: Mon, 21 Dec 2015 12:35:44 +1100	[thread overview]
Message-ID: <87d1u060sv.fsf@notabene.neil.brown.name> (raw)
In-Reply-To: <CAF-KpgYe9Vxgcy2E5T1_9HdEM0YMfATMzN7WcjHra_EE+sOOTg@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 1641 bytes --]

On Fri, Dec 11 2015, George Rapp wrote:
>
> I appear to be too early in the reshape for auto-recovery, but too far
> along to just say "never mind on that whole reshape business". Any
> other thoughts?
>

What this means is that you've hit a corner case that was never thought
through properly and isn't handled correctly.

The current state of the array is (I think) that it looks like a reshape
to reduce the number of devices in the array has very nearly completed.
Only the first stripe needs to be completed.  Whether that first stripe
is still in the old "N+1" device layout or the new "N" device layout is
unknown to the kernel - this information is only in the backup file
(which doesn't exist).
By passing --invalid-backup, you tell mdadm that there is nothing useful
in the backup file, so it should conclude that the reshape has actually
completed.  But it has no way to tell the kernel that.
What it should do in this case is (I think) rewrite the metadata to
record that the reshape is complete.  But it doesn't.
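For context, an assembly attempt with --invalid-backup typically looks like the sketch below. The array and member device names are placeholders, not the poster's actual devices:

```shell
# Hypothetical sketch: assemble an array whose reshape backup file is
# missing or useless.  Device names and the backup-file path are
# placeholders -- substitute your own.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 \
      --invalid-backup --backup-file=/tmp/md0-backup \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```

--backup-file must still be given alongside --invalid-backup; the flag only tells mdadm not to trust the file's contents.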

It shouldn't be too hard to fix, but it isn't trivial either and I'm
unlikely to get anywhere before the Christmas break.

If you can get the reshape to work at all (disable selinux?) you could
try --update=revert-reshape, let the reshape to more devices progress
for a while, and then revert it again.
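That suggestion might be sketched as follows; again the device names are hypothetical, and this should only be attempted after saving the metadata (see below), since revert-reshape rewrites it:

```shell
# Hypothetical sketch: reverse the direction of the stuck reshape at
# assembly time, then watch it make progress.  Device names are
# placeholders; take metadata backups first.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=revert-reshape \
      --invalid-backup --backup-file=/tmp/md0-backup \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Monitor reshape progress before deciding whether to revert again:
cat /proc/mdstat
```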

If you cannot get anywhere, then use
  "mdadm --dump=/tmp/whatever /dev/mdthing"

to create a copy of the metadata in some sparse files.
Then tar those up (a compressed tar archive should be tiny) and email them.
Then I can try and see if I can make something work on exactly the array
you have.
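The dump-and-tar step might look like this (paths are examples; the actual array device comes from the thread above):

```shell
# Sketch: copy the md metadata into sparse files for offline analysis.
# /tmp/md-metadata and /dev/md0 are example paths.
mkdir /tmp/md-metadata
mdadm --dump=/tmp/md-metadata /dev/md0

# The dumped files are sparse; GNU tar's -S flag preserves sparseness,
# so the compressed archive stays tiny.
tar -czSf md-metadata.tar.gz -C /tmp md-metadata
```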

NeilBrown


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 818 bytes --]

  parent reply	other threads:[~2015-12-21  1:35 UTC|newest]

Thread overview: 10+ messages
2015-12-09 23:12 RAID 6 "Failed to restore critical section for reshape, sorry." - recovery advice? George Rapp
2015-12-10  8:22 ` Mikael Abrahamsson
2015-12-10 22:05   ` George Rapp
2015-12-10 22:34     ` George Rapp
2015-12-15 14:11       ` Mikael Abrahamsson
2015-12-21  1:35       ` NeilBrown [this message]
2015-12-23  2:04         ` George Rapp
2015-12-23  2:18           ` NeilBrown
     [not found]             ` <CAF-KpgZ=HY_HKvj5buFOKseUV0GLeOLR1m3B0EYxrYcD3R5ieA@mail.gmail.com>
2016-01-04  2:16               ` NeilBrown
2016-01-22 19:24                 ` George Rapp
