linux-raid.vger.kernel.org archive mirror
* Panicked and deleted superblock
@ 2016-10-30 18:23 Peter Hoffmann
  2016-10-30 19:43 ` Andreas Klauer
  2016-11-04  4:34 ` NeilBrown
  0 siblings, 2 replies; 7+ messages in thread
From: Peter Hoffmann @ 2016-10-30 18:23 UTC (permalink / raw)
  To: linux-raid

My problem is the result of working late and not informing myself
beforehand. I'm fully aware that I should have had a backup and been
less spontaneous and more cautious.

The initial situation is a RAID-5 array with three disks. I assume it
looked as follows:

| Disk 1   | Disk 2   | Disk 3   |
|----------|----------|----------|
|    out   | Block 2  | P(1,2)   |
|    of    | P(3,4)   | Block 4  |	degraded but working
|   sync   | Block 5  | Block 6  |


Then I started the re-sync:

| Disk 1   | Disk 2   | Disk 3   |
|----------|----------|----------|
| Block 1  | Block 2  | P(1,2)   |
| Block 3  | P(3,4)   | Block 4  |   	already synced
| P(5,6)   | Block 5  | Block 6  |
               . . .
|    out   | Block b  | P(a,b)   |
|    of    | P(c,d)   | Block d  |	not yet synced
|   sync   | Block e  | Block f  |

But I didn't wait for it to finish, as I actually wanted to add a fourth
disk, and so I started a grow process. However, I only changed the size
of the array; I didn't actually add the fourth disk (don't ask why - I
can't recall). I assume that both processes - re-sync and grow - raced
through the array and did their jobs.

| Disk 1   | Disk 2   | Disk 3   |
|----------|----------|----------|
| Block 1  | Block 2  | Block 3  |
| Block 4  | Block 5  | P(4,5,6) |	with four disks but degraded
| Block 7  | P(7,8,9) | Block 8  |
               . . .
| Block a  | Block b  | P(a,b)   |
| Block c  | P(c,d)   | Block d  |	not yet grown but synced
| P(e,f)   | Block e  | Block f  |
               . . .
|    out   | Block V  | P(U,V)   |
|    of    | P(W,X)   | Block X  |		not yet synced
|   sync   | Block Y  | Block Z  |

And after running for a while - my NAS is very slow (partly because all
disks are LUKS'd), mdstat showed around 1GiB of data processed - we had
a blackout. Water dripped into a power strip and *poff*. After a reboot
I wanted to reassemble everything, didn't know what I was doing, and so
the RAID superblock is now lost and I failed to reassemble (this is the
part I really can't recall, I panicked). I never wrote anything to the
actual array, so I assume - or rather hope - that no actual data is lost.

I have a plan but wanted to check with you before doing anything stupid
again.
My idea is to look for the magic number of the ext4 filesystem to find
the beginning of Block 1 on Disk 1. Then I would copy a reasonable
amount of data and try to figure out how big Block 1, and hence the
chunk size, is - perhaps fsck.ext4 can help with that? After that I
would copy another reasonable amount of data from Disks 1-3 to figure
out the border between the grown stripes and the merely synced stripes.
From there on I'd have my data in a defined state from which I can save
the whole file system.
One thing I'm wondering is whether I got the layout right. The other
question might rather be one for the ext4 mailing list, but I'll ask it
anyway: how can I figure out where the file system starts to be
corrupted?
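
A minimal sketch of the check I have in mind - the device name and the
candidate offset are just placeholders; ext4 keeps its little-endian
magic 0xEF53 at byte 0x438 of the filesystem:

    # hypothetical candidate for the member data offset, in bytes
    OFFSET=$((0x100000))
    # the two bytes at OFFSET+0x438 should read "53ef" if the guess is right
    dd if=/dev/mapper/HDD_0 bs=1 skip=$((OFFSET + 0x438)) count=2 \
      2>/dev/null | xxd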

embarrassed Greetings,
Peter Hoffmann



* Re: Panicked and deleted superblock
  2016-10-30 18:23 Panicked and deleted superblock Peter Hoffmann
@ 2016-10-30 19:43 ` Andreas Klauer
  2016-10-30 20:45   ` Peter Hoffmann
  2016-11-04  4:34 ` NeilBrown
  1 sibling, 1 reply; 7+ messages in thread
From: Andreas Klauer @ 2016-10-30 19:43 UTC (permalink / raw)
  To: Peter Hoffmann; +Cc: linux-raid

On Sun, Oct 30, 2016 at 07:23:00PM +0100, Peter Hoffmann wrote:
> I assume that both processes - re-sync  and grow - raced
> through the array and did their job.

Oi oi oi, it's still one process per raid, no races. Isn't it? 
I'm not a kernel developer so I don't really *know* what happens 
in this case, but in my imagination it should go something like 
- disk that is not fully synced, treat as unsynced/degraded, repopulate.

Either that or it's actually smart enough to remember it synced up to X 
and just does the right thing(tm), whatever that is. But that sounds 
like having to write out a lot of special cases instead of handling 
the degraded case you must be able to cope with anyhow.

You have to re-write it and recalculate all parity anyway since the grow 
changes everything.

As long as it didn't consider your half-synced disk to be fully synced, 
your data should be completely fine. The only question is - where. ;)

> And after running for a while - my NAS is very slow (partly because all
> disks are LUKS'd), mdstat showed around 1GiB of Data processed - we had
> a blackout.

Stop trying to scare me! I'm not scared. 
You you you and your spine-chilling halloween horror story.

Slow because of LUKS? You don't have LUKS below your RAID layer, right?
Right? (Right?)

> the RAID superblock is now lost

Other people have proved Murphy's Law before, you know, 
why bother doing it again?

> My idea is to look for that magic number of the ext4-fs to find the
> beginning of Block 1 on Disk 1, then I would copy an reasonable amount
> of data and try to figure out how big Block 1 and hence chunk-size is -
> perhaps fsck.ext4 can help do that?

Determining the data offset, that's fine - only one thing to consider:
growing a RAID changes the very offset you're looking for, so even if
you find it, it's still wrong.

> One thing I'm wondering is if I got the layout right. And the other
> might be rather a case for the ext4-mailing list but I'd ask it anyway:
> how can I figure where the file system starts to be corrupted?

Let's not care about your filesystem for now. Also forget fsck.

It's dangerous to go alone. Take this.

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

Create two overlays. Two. Okay?

Overlay #1: You create your not-fully-grown 4 disk raid.

You have to figure out the disk order, raid level, metadata version, 
data offset, chunk size, layout, and some things I don't remember. 
If you got it right there should be a filesystem on the raid device. 
Or a LUKS header. Or something that makes any sense whatsoever at 
least for however far the reshape actually progressed.

Overlay #2: You create your not-fully-synced 3 disk raid.
            Leaving the not-fully-synced disk as missing.

Basically this is the same thing as #1, except the data offset 
might be different, there's obviously no 4th disk, and one of 
the other three is missing.

There probably WON'T be a filesystem on this one because it's 
already grown over. So the beginning of this device is garbage, 
it only starts making sense after the area that wasn't reshaped.

If it was unencrypted... oh well. It wasn't. Was it?
Now you've done it, I'm confused.

Then you find the point where data overlaps and create a linear mapping. 
It overlaps because 4 disks hold more per stripe than 3, so writing 1GB 
in the new layout consumes more than 1GB of the old layout without 
overwriting all of it yet - there is an overlapping zone.
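
For the linear mapping itself, a rough sketch - assuming overlay #1 got 
assembled as /dev/md0, overlay #2 as /dev/md1, and SPLIT (in 512-byte 
sectors) is a placeholder you pick inside the overlapping zone:

    SPLIT=2097152                          # placeholder, somewhere in the overlap
    TOTAL=$(blockdev --getsize /dev/md1)   # old 3-disk array size, in sectors
    printf '%s\n' \
      "0 $SPLIT linear /dev/md0 0" \
      "$SPLIT $((TOTAL - SPLIT)) linear /dev/md1 $SPLIT" | \
      dmsetup create rescued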

And you're done. At least in terms of having access to the whole thing.

Easy peasy.

Regards
Andreas Klauer

PS: Do you _really_ not have anything left? Logfiles? Anything?
    Maybe you asked something about your raid somewhere before 
    and posted --examine output along with it, tucked away in some 
    linux forum or chat you might have perused...

    Please check. Your story is really interesting but nothing 
    beats hard facts such as actual output of your crap.


* Re: Panicked and deleted superblock
  2016-10-30 19:43 ` Andreas Klauer
@ 2016-10-30 20:45   ` Peter Hoffmann
  2016-10-30 21:11     ` Andreas Klauer
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Hoffmann @ 2016-10-30 20:45 UTC (permalink / raw)
  To: linux-raid

On Sun, Oct 30, 2016 at 08:43:00PM +0100, Andreas Klauer wrote:
> On Sun, Oct 30, 2016 at 07:23:00PM +0100, Peter Hoffmann wrote:
>> I assume that both processes - re-sync  and grow - raced
>> through the array and did their job.
> 
> Oi oi oi, it's still one process per raid, no races. Isn't it? 
> I'm not a kernel developer so I don't really *know* what happens 
> in this case, but in my imagination it should go something like 
> - disk that is not fully synced, treat as unsynced/degraded, repopulate.
> 
> Either that or it's actually smart enough to remember it synced up to X 
> and just does the right thing(tm), whatever that is. But that sounds 
> like having to write out a lot of special cases instead of handling 
> the degraded case you must be able to cope with anyhow.
> 
> You have to re-write it and recalculate all parity anyway since the grow 
> changes everything.
> 
> As long as it didn't consider your half a disk to be fully synced, 
> your data should be completely fine. The only question is - where. ;)
All right, but even if the re-sync stopped as soon as I started growing,
like you wrote, nothing should be lost, as growing consumes more than it
writes, stripe-wise speaking:

| (D1) |  D2  |  D3  |      |  D1  |  D2  |  D3  | (D4) |
|------|------|------|      |------|------|------|------|
| (B1) |  B2  | P1,2 | ->   |  B1  |  B2  |  B3  |(P123)|
| (B4) | P3,4 |  B3  |      |  B5  |  B6  | P456 | (B4) |
| (P)  |  B5  |  B6  |      |  ?   | [B5] | [B6] |      |
Where () marks blocks that don't exist yet but should be there, and
      [] marks blocks that still exist but shouldn't

>> And after running for a while - my NAS is very slow (partly because all
>> disks are LUKS'd), mdstat showed around 1GiB of Data processed - we had
>> a blackout.
> 
> Stop trying to scare me! I'm not scared. 
> You you you and your spine-chilling halloween horror story.
> 
> Slow because of LUKS? You don't have LUKS below your RAID layer, right?
> Right? (Right?)
Ehm, ehm, may I call my lawyer? ;-) Yes,

/dev/sda2 --luks--> /dev/mapper/HDD_0 \
/dev/sdb2 --luks--> /dev/mapper/HDD_1 --raid--> /dev/md127 -ext4-> /raid
/dev/sdc2 --luks--> /dev/mapper/HDD_2 /

>> the RAID superblock is now lost
> 
> Other people have proved Murphy's Law before, you know, 
> why bother doing it again?
> 
>> My idea is to look for that magic number of the ext4-fs to find the
>> beginning of Block 1 on Disk 1, then I would copy an reasonable amount
>> of data and try to figure out how big Block 1 and hence chunk-size is -
>> perhaps fsck.ext4 can help do that?
> 
> Determining the data offset, that's fine, only one thing to consider.
> Growing RAIDs changes that very offset you're looking for, so.
> Even if you find it, it's still wrong.
> 
>> One thing I'm wondering is if I got the layout right. And the other
>> might be rather a case for the ext4-mailing list but I'd ask it anyway:
>> how can I figure where the file system starts to be corrupted?
> 
> Let's not care about your filesystem for now. Also forget fsck.
> 
> It's dangerous to go alone. Take this.
> 
> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
> 
> Create two overlays. Two. Okay?
> 
> Overlay #1: You create your not-fully-grown 4 disk raid.
> 
> You have to figure out the disk order, raid level, metadata version,
* disk order seems pretty obvious to me: _UU and later UUU_
* raid level is 5
* 1) the data offset on the grown system seems to be 100000
     (at least I find ext4's magic signature at 100000+400+38)
  2) no idea where it might be for the unsynced version
* chunk size: no idea, I might have adjusted it from the default
  value for better alignment with the file system
* layout should be the default left-symmetric
  (all diagrams in my original mail are wrong, as the data blocks in a
  stripe start after the parity block, not with the first disk)
* anything else?


> data offset, chunk size, layout, and some things I don't remember. 
> If you got it right there should be a filesystem on the raid device. 
> Or a LUKS header. Or something that makes any sense whatsoever at 
> least for however far the reshape actually progressed.
> 
> Overlay #2: You create your not-fully-synced 3 disk raid.
>             Leaving the not-fully-synced disk as missing.
> 
> Basically this is the same thing as #1, except the data offset 
> might be different, there's obviously no 4th disk, and one of 
> the other three missing.
> 
> There probably WON'T be a filesystem on this one because it's 
> already grown over. So the beginning of this device is garbage, 
> it only starts making sense after the area that wasn't reshaped.
So I'm looking for a sequence of bytes that is duplicated on both
overlays. This way I find the border between both parts.

> If it was unencrypted... oh well. It wasn't. Was it?
> Now you've done it, I'm confused.
> 
> Then you find the point where data overlaps and create a linear mapping. 
> It overlaps because 4 disk more space than 3 so 1GB on 4 won't overwrite 
> 1GB on 3 so there is an overlapping zone.
> 
> And you're done. At least in terms of having access to the whole thing.
> 
> Easy peasy.
> 
> Regards
> Andreas Klauer
Thank you, the overlay approach is the way to go.

> PS: Do you _really_ not have anything left. Logfiles? Anything?
>     Maybe you asked anything about your raid anywhere before 
>     and posted examine along with it, tucked away in some 
>     linux forum or chat you might have perused...
> 
>     Please check. Your story is really interesting but nothing 
>     beats hard facts such as actual output of your crap.
I'd be happy to have any such thing, but I never had any trouble before.


* Re: Panicked and deleted superblock
  2016-10-30 20:45   ` Peter Hoffmann
@ 2016-10-30 21:11     ` Andreas Klauer
  2016-10-31 22:36       ` Peter Hoffmann
  0 siblings, 1 reply; 7+ messages in thread
From: Andreas Klauer @ 2016-10-30 21:11 UTC (permalink / raw)
  To: Peter Hoffmann; +Cc: linux-raid

On Sun, Oct 30, 2016 at 09:45:27PM +0100, Peter Hoffmann wrote:
> there shouldn't anything be lost as growing consumes more
> than it writes, stripe wise speaking

That's what I meant by 'overlap' - it's the wrong word I guess.

> /dev/sda2 --luks--> /dev/mapper/HDD_0 \
> /dev/sdb2 --luks--> /dev/mapper/HDD_1 --raid--> /dev/md127 -ext4-> /raid
> /dev/sdc2 --luks--> /dev/mapper/HDD_2 /

You're hoping it'd be faster with three encryption threads instead of one?
Adds the overhead of encrypting parity. Not sure if worth it.
This idea belongs to another era (before AES-NI).

But it's good, that way, you have "unencrypted" data on your RAID and can 
make deductions from that raw data as to chunk size and such things. 

> * anything else?

This is where I can't provide specific help, since you did not provide 
specific data I can work with. Your data offset sounds strange to me, 
but with the overlays it's faster to just go ahead and try.

You'll have to figure out the details by yourself, pretty much.

Once you have the correct offset you might be able to deduce the other 
offset. Create 4 loop devices the size of your disks (sparse files in 
tmpfs: truncate -s, losetup), create a 3-disk raid, grow it to 4 disks, 
and check with mdadm --examine if and how the data offset changed.
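
Something along these lines - a sketch only, sizes and names are 
placeholders, and I'd use --assume-clean plus a quick --stop so the 
throwaway arrays never write much into the sparse files:

    cd /dev/shm                          # or anywhere with room for sparse files
    for i in 0 1 2 3; do
        truncate -s 1850G member-$i      # sparse file, takes (almost) no space
        losetup -f --show member-$i      # prints /dev/loopN
    done
    # assuming the loop devices came back as /dev/loop0../dev/loop3:
    mdadm --create /dev/md100 --assume-clean --level=5 --raid-devices=3 \
          /dev/loop0 /dev/loop1 /dev/loop2
    mdadm --examine /dev/loop0 | grep -i offset   # data offset with 3 disks
    mdadm --add /dev/md100 /dev/loop3
    mdadm --grow /dev/md100 --raid-devices=4
    mdadm --examine /dev/loop0 | grep -i offset   # did the reshape move it?
    mdadm --stop /dev/md100              # stop before the reshape fills tmpfs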

> So I'm looking for a sequence of bytes that is duplicated on both
> overlays. This way I find the border between both parts.

Yes, there should be an identical region (let's hope not zeroes)
and you should roughly determine the end of that region and that's 
your entry point for a linear device mapping.
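
If you want to script that search, something like this - the device 
names are assumptions (overlay #1 assembled as /dev/md0, overlay #2 as 
/dev/md1), and watch out for all-zero regions, they match trivially:

    # compare both assembled arrays in 1MiB windows and print where the
    # match/mismatch state flips
    prev=none
    for ((mb = 0; mb < 8192; mb++)); do
        a=$(dd if=/dev/md0 bs=1M skip=$mb count=1 2>/dev/null | md5sum)
        b=$(dd if=/dev/md1 bs=1M skip=$mb count=1 2>/dev/null | md5sum)
        cur=mismatch; [ "$a" = "$b" ] && cur=match
        [ "$cur" != "$prev" ] && echo "$cur starting at ${mb} MiB"
        prev=$cur
    done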

Regards
Andreas Klauer


* Re: Panicked and deleted superblock
  2016-10-30 21:11     ` Andreas Klauer
@ 2016-10-31 22:36       ` Peter Hoffmann
  2016-10-31 23:03         ` Andreas Klauer
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Hoffmann @ 2016-10-31 22:36 UTC (permalink / raw)
  To: linux-raid

I'm a bit confused about using the overlays.

(A) If I follow the manual on the wiki [1], the new raid device md0
seemingly just contains random chunks of bytes, nothing like the header
of the original decrypted device.
(B) But if I copy, say, 400M of each original drive, the hexdump at
least looks like the head of an ext4 file system, which is what I
expected from looking at the original decrypted device.

Steps for (A):

    for ((i=0; $i < 3; i++ )); do
      dev=/dev/mapper/HDD_$i

      dd if=/dev/zero of=overlay-$i bs=4M count=100 # alternative 1
      # truncate -s1850G overlay-$i # alternative 2

      loop=$(losetup -f --show -- overlay-$i)
      echo 0 $(blockdev --getsize $dev) snapshot $dev $loop P 8 | \
        dmsetup create HDD_overlay_0_$i
    done
    mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md0 \
      /dev/mapper/HDD_overlay_0_[012] missing

Steps for (B):

    for ((i=0; $i < 3; i++ )); do
      dd if=/dev/mapper/HDD_$i of=copy-$i bs=4M count=100
      loops="$loops $(losetup -f --show -- copy-$i"
    done
    mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md1 \
      $loops missing
	
Both ways should look exactly the same at least for the first 1200M,
shouldn't they?

Greetings
P. Hoffmann


1)
https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

On 30.10.2016 at 22:11, Andreas Klauer wrote:
> On Sun, Oct 30, 2016 at 09:45:27PM +0100, Peter Hoffmann wrote:
>> there shouldn't anything be lost as growing consumes more
>> than it writes, stripe wise speaking
> 
> That's what I meant by 'overlap' - it's the wrong word I guess.
> 
>> /dev/sda2 --luks--> /dev/mapper/HDD_0 \
>> /dev/sdb2 --luks--> /dev/mapper/HDD_1 --raid--> /dev/md127 -ext4-> /raid
>> /dev/sdc2 --luks--> /dev/mapper/HDD_2 /
> 
> You're hoping it be faster since three threads instead of one?
> Adds the overhead of encrypting parity. Not sure if worth it.
> This idea belongs to another era (before AES-NI).
> 
> But it's good, that way, you have "unencrypted" data on your RAID and can 
> make deductions from that raw data as to chunk size and such things. 
> 
>> * anything else?
> 
> This is where I don't know how to provide specific help.
> Since you did not provide specific data I can work with.
> Your data offset sounds strange to me but with overlay, 
> it's faster to just go ahead and try.
> 
> You'll have to figure out the details by yourself, pretty much.
> 
> Once you have the correct offset you might be able to deduct the other 
> offset. Create 4 loop devices size of your disks (sparse files in tmpfs, 
> truncate -s thefile, losetup), create a 3 disk raid, grow to 4 disks, 
> check with mdadm --examine if & how the data offset changed.
> 
>> So I'm looking for a sequence of bytes that is duplicated on both
>> overlays. This way I find the border between both parts.
> 
> Yes, there should be an identical region (let's hope not zeroes)
> and you should roughly determine the end of that region and that's 
> your entry point for a linear device mapping.
> 
> Regards
> Andreas Klauer


* Re: Panicked and deleted superblock
  2016-10-31 22:36       ` Peter Hoffmann
@ 2016-10-31 23:03         ` Andreas Klauer
  0 siblings, 0 replies; 7+ messages in thread
From: Andreas Klauer @ 2016-10-31 23:03 UTC (permalink / raw)
  To: Peter Hoffmann; +Cc: linux-raid

On Mon, Oct 31, 2016 at 11:36:39PM +0100, Peter Hoffmann wrote:
> I'm a bit confused using the overlay function.

There really needs to be a standard utility for this.

Well, the "overlay manipulation functions" on that wiki page come close. 
You can paste/source those once in the shell and use them like commands. 
You need two sets of overlays, so you'd also have to create two sets of 
those functions (using different mapper names).

You should verify that the overlay itself is good by comparing the size 
(blockdev --getsize64 /dev/thing /dev/mapper/overlaything) as well as 
the content (cmp /dev/thing /dev/mapper/thething, no output if there is 
no difference, you can ctrl-c).
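
For example (names as in your earlier mail, purely illustrative):

    blockdev --getsize64 /dev/mapper/HDD_0 /dev/mapper/HDD_overlay_0_0
    cmp /dev/mapper/HDD_0 /dev/mapper/HDD_overlay_0_0   # silent while identical, ctrl-c at some point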

>     mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md0

This is wrong, or rather, when re-creating RAID you must be much more 
verbose with the options. http://unix.stackexchange.com/a/131927/30851
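
To show what I mean by verbose - a sketch only, every single value here 
(metadata version, chunk size, layout, data offset, device order) is a 
placeholder you have to determine yourself, not something I know:

    mdadm --create /dev/md0 --assume-clean --verbose \
          --metadata=1.2 --level=5 --raid-devices=4 \
          --chunk=512 --layout=left-symmetric \
          --data-offset=1024 \
          /dev/mapper/HDD_overlay_0_0 /dev/mapper/HDD_overlay_0_1 \
          /dev/mapper/HDD_overlay_0_2 missing
    # --data-offset is a placeholder; see mdadm(8) for its unit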

> Both ways should look exactly the same at least for the first 1200M,
> shouldn't they?

If you deliberately created smaller overlays: mdadm uses different 
defaults depending on device size, too. If you expect the result to be 
identical, also check that --examine output (data offset etc.) looks identical.

There's no point in using overlays that are not identical to the 
devices they're overlaying.

Regards
Andreas Klauer


* Re: Panicked and deleted superblock
  2016-10-30 18:23 Panicked and deleted superblock Peter Hoffmann
  2016-10-30 19:43 ` Andreas Klauer
@ 2016-11-04  4:34 ` NeilBrown
  1 sibling, 0 replies; 7+ messages in thread
From: NeilBrown @ 2016-11-04  4:34 UTC (permalink / raw)
  To: Peter Hoffmann, linux-raid


On Mon, Oct 31 2016, Peter Hoffmann wrote:

> My problem is the result of working late and not informing myself
> previously, I'm fully aware that I should have had a backup, be less
> spontaneous and more cautious.
>
> The initial situation is a RAID-5 array with three disks. I assume it to
> look follows:
>
> | Disk 1   | Disk 2   | Disk 3   |
> |----------|----------|----------|
> |    out   | Block 2  | P(1,2)   |
> |    of    | P(3,4)   | Block 4  |	degenerated but working
> |   sync   | Block 5  | Block 6  |

The default RAID5 layout (there are 4 to choose from) is
#define ALGORITHM_LEFT_SYMMETRIC	2 /* Rotating Parity N with Data Continuation */

The first data block on a stripe is always located after the parity
block.
So if data is D0 D1 D2 D3.... then

   D0   D1   P01
   D3   P23  D2
   P45  D4   D5

>
>
> Then I started the re-sync:
>
> | Disk 1   | Disk 2   | Disk 3   |
> |----------|----------|----------|
> | Block 1  | Block 2  | P(1,2)   |
> | Block 3  | P(3,4)   | Block 4  |   	already synced
> | P(5,6)   | Block 5  | Block 6  |
>                . . .
> |    out   | Block b  | P(a,b)   |
> |    of    | P(c,d)   | Block d  |	not yet synced
> |   sync   | Block e  | Block f  |
>
> But I didn't wait for it to finish as I actually wanted to add a fourth
> disk and so started a grow process. But I just changed the size of the
> array, I didn't actually add the fourth disk (don't ask why I cannot
> recall it). I assume that both processes - re-sync  and grow - raced
> through the array and did their job.

So you ran
  mdadm --grow /dev/md0 --raid-disks 4 --force

???
You would need --force or mdadm would refuse to do such a silly thing.

Also, the kernel would refuse to let a reshape start while a resync was
on-going, so the reshape attempt should have been rejected anyway.

>
> | Disk 1   | Disk 2   | Disk 3   |
> |----------|----------|----------|
> | Block 1  | Block 2  | Block 3  |
> | Block 4  | Block 5  | P(4,5,6) |	with four disks but degenerated
> | Block 7  | P(7,8,9) | Block 8  |
>                . . .
> | Block a  | Block b  | P(a,b)   |
> | Block c  | P(c,d)   | Block d  |	not yet grown but synced
> | P(e,f)   | Block e  | Block f  |
>                . . .
> |    out   | Block V  | P(U,V)   |
> |    of    | P(W,X)   | Block X  |		not yet synced
> |   sync   | Block Y  | Block Z  |
>
> And after running for a while - my NAS is very slow (partly because all
> disks are LUKS'd), mdstat showed around 1GiB of Data processed - we had
> a blackout. Water dropped in a distribution socket and *poff*. After a
> reboot I wanted to resemble everything, didn't know what I was doing so
> the RAID superblock is now lost and I failed to reassemble (this is the
> part I really can't recall, I panicked). I never wrote anything to the
> actual array so I assume, better hope that no actual data is lost.

So you deliberately erased the RAID superblock?  Presumably not.
Maybe you ran "mdadm --create ...." to try to create a new array?  That
would do it.

If the reshape hadn't actually started, then you have some chance of
recovering your data.  If it had, then recovery is virtually impossible
because you don't know how far it got.

>
> I have a plan but wanted to check with you before doing anything stupid
> again.
> My idea is to look for that magic number of the ext4-fs to find the
> beginning of Block 1 on Disk 1, then I would copy an reasonable amount
> of data and try to figure out how big Block 1 and hence chunk-size is -
> perhaps fsck.ext4 can help do that? After that I copy another reasonable
> amount of data from Disks 1-3 to figure out the border between the grown
> Stripes and the synced Stripes. And from there on I'd have my data in a
> defined state from which I can save the whole file system.
> One thing I'm wondering is if I got the layout right. And the other
> might be rather a case for the ext4-mailing list but I'd ask it anyway:
> how can I figure where the file system starts to be corrupted?

You might be able to make something like this work ... if the reshape
hadn't started.  But if you can live without recovering the data, then
that is probably the more cost-effective option.

NeilBrown



Thread overview: 7+ messages
2016-10-30 18:23 Panicked and deleted superblock Peter Hoffmann
2016-10-30 19:43 ` Andreas Klauer
2016-10-30 20:45   ` Peter Hoffmann
2016-10-30 21:11     ` Andreas Klauer
2016-10-31 22:36       ` Peter Hoffmann
2016-10-31 23:03         ` Andreas Klauer
2016-11-04  4:34 ` NeilBrown
