From: "Patrik Dahlström" <risca@powerlamerz.org>
To: Andreas Klauer <Andreas.Klauer@metamorpher.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: Recover array after I panicked
Date: Sun, 23 Apr 2017 15:49:16 +0200 [thread overview]
Message-ID: <9fcdece2-68b7-8e5b-5995-caf15af18bf3@powerlamerz.org> (raw)
In-Reply-To: <20170423131605.GA9955@metamorpher.de>
On 04/23/2017 03:16 PM, Andreas Klauer wrote:
> On Sun, Apr 23, 2017 at 01:12:54PM +0200, Patrik Dahlström wrote:
>> I got some of that!
>
>> [ 3.100700] RAID conf printout:
>> [ 3.100700] --- level:5 rd:5 wd:5
>> [ 3.100700] disk 0, o:1, dev:sda
>> [ 3.100700] disk 1, o:1, dev:sdb
>> [ 3.100701] disk 2, o:1, dev:sdd
>> [ 3.100701] disk 3, o:1, dev:sdc
>> [ 3.100701] disk 4, o:1, dev:sde
>> [ 3.101006] created bitmap (44 pages) for device md1
>> [ 3.102245] md1: bitmap initialized from disk: read 3 pages, set 0 of 89423 bits
>> [ 3.159019] md1: detected capacity change from 0 to 24004163272704
>
> Fairly standard, RAID5, presumably 1.2 metadata with 128M data offset,
> which is the default mdadm uses lately. Older RAIDs would have smaller
> data offsets.
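For anyone double-checking the offset: mdadm --examine reports "Data Offset" in 512-byte sectors, so a 128 MiB data offset should show up as 262144 sectors. A quick sanity check:

```python
# mdadm (1.2 metadata) reports "Data Offset" in 512-byte sectors.
# A 128 MiB data offset therefore corresponds to:
offset_bytes = 128 * 1024 * 1024   # 128 MiB
sector_size = 512                  # bytes per sector
print(offset_bytes // sector_size)  # → 262144
```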
>
> So... ...the output above really is from before any of your accidents?
Yes, it is from before adding /dev/sdf and starting the reshape.
> How old is your raid ...?
The raid is roughly 1 year old. It started as a combination of raids:
md0: 4x2TB raid5
md1: 2x6TB + md0 raid5
A few months after that, md0 was replaced with a 6 TB drive (/dev/sdd).
Last August I added /dev/sdc and this January I added /dev/sde.
Yesterday I tried to add /dev/sdf.
>
> Tested with loop devices:
>
> # truncate -s 6001175126016 0 1 2 3 4
> # for f in 0 1 2 3 4; do losetup --find --show "$f"; done
> # mdadm --create /dev/md42 --assume-clean --data-offset=128M --level=5 --raid-devices=5 /dev/loop[01234]
>
> | [14580.373999] md/raid:md42: device loop4 operational as raid disk 4
> | [14580.373999] md/raid:md42: device loop3 operational as raid disk 3
> | [14580.374000] md/raid:md42: device loop2 operational as raid disk 2
> | [14580.374000] md/raid:md42: device loop1 operational as raid disk 1
> | [14580.374001] md/raid:md42: device loop0 operational as raid disk 0
> | [14580.374308] md/raid:md42: raid level 5 active with 5 out of 5 devices, algorithm 2
> | [14580.377043] md42: detected capacity change from 0 to 24004163272704
>
> (Results in a capacity identical to yours, so it's the most likely match.)
>
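The capacity figure checks out arithmetically, too: with five members, RAID5 gives (5 - 1) disks' worth of data, and the Used Dev Size that mdadm --detail reports (in KiB) multiplies out to exactly the capacity from the kernel log. A minimal check, using the figures from this thread:

```python
# RAID5 usable capacity = (members - 1) * per-member data size.
# "Used Dev Size" from mdadm --detail is given in KiB.
raid_devices = 5
used_dev_size_kib = 5860391424          # per-device data size, KiB
capacity = (raid_devices - 1) * used_dev_size_kib * 1024
print(capacity)  # → 24004163272704, matching "detected capacity change"
```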
> Again, you'd do this with overlays only...
I did:
$ mdadm --create /dev/md1 --assume-clean --data-offset=128M --level=5 --raid-devices=5 /dev/mapper/sd[abdce]
$ dmesg | tail
[10079.442770] md: bind<dm-2>
[10079.442835] md: bind<dm-5>
[10079.442889] md: bind<dm-1>
[10079.442954] md: bind<dm-3>
[10079.443015] md: bind<dm-4>
[10079.443814] md/raid:md1: device dm-4 operational as raid disk 4
[10079.443815] md/raid:md1: device dm-3 operational as raid disk 3
[10079.443816] md/raid:md1: device dm-1 operational as raid disk 2
[10079.443830] md/raid:md1: device dm-5 operational as raid disk 1
[10079.443830] md/raid:md1: device dm-2 operational as raid disk 0
[10079.444123] md/raid:md1: allocated 5432kB
[10079.444168] md/raid:md1: raid level 5 active with 5 out of 5 devices, algorithm 2
[10079.444169] RAID conf printout:
[10079.444170] --- level:5 rd:5 wd:5
[10079.444171] disk 0, o:1, dev:dm-2
[10079.444171] disk 1, o:1, dev:dm-5
[10079.444172] disk 2, o:1, dev:dm-1
[10079.444173] disk 3, o:1, dev:dm-3
[10079.444173] disk 4, o:1, dev:dm-4
[10079.444237] created bitmap (44 pages) for device md1
[10079.446272] md1: bitmap initialized from disk: read 3 pages, set 89423 of 89423 bits
[10079.451821] md1: detected capacity change from 0 to 24004163272704
$ mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sun Apr 23 15:40:15 2017
     Raid Level : raid5
     Array Size : 23441565696 (22355.62 GiB 24004.16 GB)
  Used Dev Size : 5860391424 (5588.90 GiB 6001.04 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Apr 23 15:40:15 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rack-server-1:1  (local to host rack-server-1)
           UUID : 6beee843:59371bd6:c9278c83:1eb89111
         Events : 0

    Number   Major   Minor   RaidDevice State
       0     252        2        0      active sync   /dev/dm-2
       1     252        5        1      active sync   /dev/dm-5
       2     252        1        2      active sync   /dev/dm-1
       3     252        3        3      active sync   /dev/dm-3
       4     252        4        4      active sync   /dev/dm-4
$ mount /dev/md1 /storage
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
Still no luck. Were the drives added in the wrong order? I notice the shell expands /dev/mapper/sd[abdce] alphabetically, so the create above actually used the order a,b,c,d,e - not the a,b,d,c,e order from the original conf printout.
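If the order really is wrong, the usual brute-force recovery approach (against overlays only, so nothing touches the real disks) is to retry mdadm --create for each permutation of the members and test-mount each result. A sketch that just generates the candidate orderings, using the device names from this thread:

```python
from itertools import permutations

# Candidate member orders for repeated "mdadm --create" attempts
# against overlays. With 5 members there are 5! = 120 orderings.
devices = ["sda", "sdb", "sdc", "sdd", "sde"]
orders = list(permutations(devices))
print(len(orders))  # → 120

# Example command line for one candidate order:
order = orders[0]
cmd = ("mdadm --create /dev/md1 --assume-clean --data-offset=128M "
       "--level=5 --raid-devices=5 "
       + " ".join(f"/dev/mapper/{d}" for d in order))
print(cmd)
```

In practice one would loop over `orders`, create the array on the overlay devices, and stop at the first permutation where `fsck -n` or a read-only mount succeeds.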
>
> Regards
> Andreas Klauer
>