From: "Patrik Dahlström" <risca@powerlamerz.org>
To: Wols Lists <antlists@youngman.org.uk>, Roman Mamedov <rm@romanrm.net>
Cc: Andreas Klauer <Andreas.Klauer@metamorpher.de>,
linux-raid@vger.kernel.org
Subject: Re: Recover array after I panicked
Date: Sun, 23 Apr 2017 14:15:45 +0200
Message-ID: <e4c2acc6-420a-e454-fe74-c82737a1bbf0@powerlamerz.org>
In-Reply-To: <58FC99ED.7080705@youngman.org.uk>
On 04/23/2017 02:11 PM, Wols Lists wrote:
> On 23/04/17 12:58, Roman Mamedov wrote:
>> On Sun, 23 Apr 2017 12:36:24 +0100
>> Wols Lists <antlists@youngman.org.uk> wrote:
>>
>>> And, as the raid wiki tells you, download lspci and run that
>>
>> Maybe you meant lsdrv. https://github.com/pturmel/lsdrv
>>
> Sorry, yes I did ... (too many ls_xxx commands :-)
Ok, I had to patch lsdrv a bit to make it run. Diff:
diff --git a/lsdrv b/lsdrv
index fe6e77d..e868dbc 100755
--- a/lsdrv
+++ b/lsdrv
@@ -386,7 +386,8 @@ def probe_block(blocklink):
 			peers = " (w/ %s)" % ",".join(peers)
 		else:
 			peers = ""
-		blk.FS = "MD %s (%s/%s)%s %s" % (blk.array.md.LEVEL, blk.slave.slot, blk.array.md.raid_disks, peers, blk.slave.state)
+		if blk.array.md:
+			blk.FS = "MD %s (%s/%s)%s %s" % (blk.array.md.LEVEL, blk.slave.slot, blk.array.md.raid_disks, peers, blk.slave.state)
 	else:
 		blk.__dict__.update(extractvars(runx(['mdadm', '--export', '--examine', '/dev/block/'+blk.dev])))
 		blk.FS = "MD %s (%s) inactive" % (blk.MD_LEVEL, blk.MD_DEVICES)
@@ -402,9 +403,11 @@ def probe_block(blocklink):
 	else:
 		blk.FS = "Empty/Unknown"
 	if blk.ID_FS_LABEL:
-		blk.FS += " '%s'" % blk.ID_FS_LABEL
+		if blk.FS:
+			blk.FS += " '%s'" % blk.ID_FS_LABEL
 	if blk.ID_FS_UUID:
-		blk.FS += " {%s}" % blk.ID_FS_UUID
+		if blk.FS:
+			blk.FS += " {%s}" % blk.ID_FS_UUID
 	for part in blk.partitions:
 		probe_block(blkpath+'/'+part)
 	return blk
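To show why the guards are needed: for an inactive array there is no
active-array data in sysfs, so blk.array.md is None and blk.FS never gets
assigned before the label/UUID code runs. A minimal illustration of the two
crashes (this mimics the failure mode, it is not lsdrv's own code):

md = None   # like blk.array.md: nothing parsed for an inactive array
FS = None   # like blk.FS: never assigned, the MD branch bailed out first

try:
    print("MD %s" % md.LEVEL)             # first hunk, unpatched
except AttributeError as e:
    print("without 'if blk.array.md:': %s" % e)

try:
    FS += " '%s'" % "mylabel"             # second hunk, unpatched
except TypeError as e:
    print("without 'if blk.FS:': %s" % e)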
Here's the output of a run, with overlays enabled (a rough sketch of how the overlays were set up follows the listing):
PCI [ahci] 00:17.0 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
├scsi 0:0:0:0 ATA WDC WD60EFRX-68M {WD-WX91D6535N7Y}
│└sda 5.46t [8:0] None
│ └dm-2 5.46t [252:2] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├scsi 1:0:0:0 ATA WDC WD600PF4PZ-4 {WD-WX11D741AE8K}
│└sdb 5.64t [8:16] None
│ └dm-5 5.64t [252:5] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├scsi 2:0:0:0 ATA WDC WD60EFRX-68M {WD-WX11DC449Y02}
│└sdc 5.46t [8:32] None
│ └dm-1 5.46t [252:1] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├scsi 3:0:0:0 ATA WDC WD60EFRX-68L {WD-WX11DA53427A}
│└sdd 5.46t [8:48] None
│ └dm-3 5.46t [252:3] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├scsi 4:0:0:0 ATA WDC WD60EFRX-68L {WD-WXB1HB4W238J}
│└sde 5.46t [8:64] None
│ └dm-4 5.46t [252:4] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
└scsi 5:0:0:0 ATA WDC WD60EFRX-68L {WD-WX41D75LN7CK}
└sdf 5.46t [8:80] None
└dm-6 5.46t [252:6] MD raid5 (6) inactive 'rack-server-1:1' {18cd5b54-707a-36df-36be-8f01e8a77122}
USB [usb-storage] Bus 001 Device 003: ID 152d:2338 JMicron Technology Corp. / JMicron USA Technology Corp. JM20337 Hi-Speed USB to SATA & PATA Combo Bridge {77C301992933}
└scsi 6:0:0:0 WDC WD20 WD-WMC301992933 {WD-WMC301992933}
└sdg 1.82t [8:96] Partitioned (dos)
├sdg1 1.80t [8:97] ext4 {eb94342f-2eea-4318-9f79-3517ae1ccaad}
│└Mounted as /dev/sdg1 @ /
├sdg2 1.00k [8:98] Partitioned (dos)
└sdg5 15.93g [8:101] swap {568ea822-2f0c-42a8-a355-1a2e856728a0}
└dm-0 15.93g [252:0] swap {fac64c73-bb78-417d-9323-a5dd381178bf}
USB [usb-storage] Bus 001 Device 006: ID 0781:5567 SanDisk Corp. Cruzer Blade {2005224340054080F2CD}
└scsi 9:0:0:0 SanDisk Cruzer Blade {2005224340054080F2CD}
└sdh 3.73g [8:112] Partitioned (dos)
└sdh1 3.73g [8:113] vfat {6E17-F675}
└Mounted as /dev/sdh1 @ /media/cdrom
Other Block Devices
├loop0 5.86t [7:0] DM_snapshot_cow
│└dm-4 5.46t [252:4] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├loop1 5.86t [7:1] DM_snapshot_cow
│└dm-1 5.46t [252:1] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├loop2 5.86t [7:2] Empty/Unknown
│└dm-2 5.46t [252:2] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├loop3 5.86t [7:3] DM_snapshot_cow
│└dm-3 5.46t [252:3] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├loop4 5.86t [7:4] DM_snapshot_cow
│└dm-5 5.64t [252:5] MD raid5 (6) inactive 'rack-server-1:1' {510d9668-d30c-b4cd-cc76-9fcace98c3b1}
├loop5 5.86t [7:5] DM_snapshot_cow
│└dm-6 5.46t [252:6] MD raid5 (6) inactive 'rack-server-1:1' {18cd5b54-707a-36df-36be-8f01e8a77122}
├loop6 0.00k [7:6] Empty/Unknown
└loop7 0.00k [7:7] Empty/Unknown
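For reference, the overlays follow the usual recipe: one sparse
copy-on-write file per disk, attached to a loop device and layered over the
disk as a dm snapshot, so all writes land in the COW file and the real
disks stay untouched. Roughly, in Python (paths, names and the 7T size are
illustrative, not my exact commands):

import subprocess

def run(*cmd):
    return subprocess.check_output(cmd).decode().strip()

for name in ['sd%s' % c for c in 'abcdef']:
    disk = '/dev/' + name
    cow = '/root/overlay-%s.cow' % name
    sectors = run('blockdev', '--getsz', disk)  # disk size in 512-byte sectors
    run('truncate', '-s', '7T', cow)            # sparse: only diverged writes use space
    loop = run('losetup', '-f', '--show', cow)  # attach COW file, prints loop device
    run('dmsetup', 'create', 'overlay-' + name,
        '--table', '0 %s snapshot %s %s P 8' % (sectors, disk, loop))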
Please note that the superblocks have probably been trashed by the "Permute arrays" step.
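(For anyone following along: that step re-creates the array with mdadm
--create in every drive order until the data checks out, and each --create
rewrites the superblocks. A rough sketch of the idea, run against the
overlays; the level, chunk, metadata version and device names here are
illustrative guesses, not the exact script:)

import itertools, subprocess

overlays = ['/dev/mapper/overlay-sd%s' % c for c in 'abcdef']

for order in itertools.permutations(overlays):
    subprocess.call(['mdadm', '--stop', '/dev/md1'])
    # --assume-clean skips the resync but still writes new superblocks
    subprocess.check_call(
        ['mdadm', '--create', '/dev/md1', '--run', '--assume-clean',
         '--level=5', '--raid-devices=6', '--chunk=512',
         '--metadata=1.2'] + list(order))
    if subprocess.call(['fsck', '-n', '/dev/md1']) == 0:  # read-only check
        print('plausible drive order: %s' % ' '.join(order))
        break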
>
> Cheers,
> Wol
>