From: "L.M.J" <linuxmasterjedi@free.fr>
To: Scott D'Vileskis <sdvileskis@gmail.com>
Cc: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: Re: Corrupted ext4 filesystem after mdadm manipulation error
Date: Fri, 25 Apr 2014 16:43:56 +0200
Message-ID: <20140425164356.368e9026@netstation>
In-Reply-To: <CAK_KU4a0FCwG2fjAQYX4jM2xC9SbMc=qp_7T_v4hC5eztDFqOA@mail.gmail.com>

On Fri, 25 Apr 2014 09:36:12 -0400,
"Scott D'Vileskis" <sdvileskis@gmail.com> wrote:

> As a last-ditch effort, try the --create again, but with the two
> potentially good disks in the right order:
> 
> mdadm --create /dev/md0 --level=5 --raid-devices=3 missing /dev/sdc1 /dev/sdd1


root@gateway:~# mdadm --create /dev/md0 --level=5 --raid-devices=3 missing /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Fri Apr 25 16:20:32 2014
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Fri Apr 25 16:20:32 2014
Continue creating array? y
mdadm: array /dev/md0 started.
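
Before writing anything to the freshly created array, it may be worth
checking that the recreated geometry (chunk size, layout, device order)
matches what the old array used. A minimal verification sketch, using
only standard mdadm query commands on the devices already named above:

  # Inspect the superblocks mdadm just wrote on the two members
  mdadm --examine /dev/sdc1 /dev/sdd1

  # Show the assembled (degraded) array's geometry
  mdadm --detail /dev/md0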


root@gateway:~# ls -l /dev/md*
brw-rw---- 1 root disk   9, 0 2014-04-25 16:34 /dev/md0
brw-rw---- 1 root disk 254, 0 2014-04-25 16:19 /dev/md_d0
lrwxrwxrwx 1 root root      7 2014-04-25 16:04 /dev/md_d0p1 -> md/d0p1
lrwxrwxrwx 1 root root      7 2014-04-25 16:04 /dev/md_d0p2 -> md/d0p2
lrwxrwxrwx 1 root root      7 2014-04-25 16:04 /dev/md_d0p3 -> md/d0p3
lrwxrwxrwx 1 root root      7 2014-04-25 16:04 /dev/md_d0p4 -> md/d0p4

/dev/md:
total 0
brw------- 1 root root 254, 0 2014-04-25 16:04 d0
brw------- 1 root root 254, 1 2014-04-25 16:04 d0p1
brw------- 1 root root 254, 2 2014-04-25 16:04 d0p2
brw------- 1 root root 254, 3 2014-04-25 16:04 d0p3
brw------- 1 root root 254, 4 2014-04-25 16:04 d0p4


root@gateway:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdd1[2] sdc1[1]
      3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      
unused devices: <none>


root@gateway:~# pvscan 
  No matching physical volumes found


root@gateway:~# pvdisplay 
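
pvscan and pvdisplay both coming up empty suggests that the LVM2 label
at the start of /dev/md0 may have been overwritten, even if the metadata
area further in survived. That is an assumption, but it is easy to
check: LVM2 stores the magic string "LABELONE" in one of the first four
512-byte sectors of a PV:

  # Look for the LVM2 label magic at the start of the device
  dd if=/dev/md0 bs=512 count=4 2>/dev/null | hexdump -C | grep LABELONE

If nothing comes back, the label is gone, but the data behind it may
still be intact.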



root@gateway:~# dd if=/dev/md0 of=/tmp/md0.dd count=10 bs=1M
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.271947 s, 38.6 MB/s
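
To pull the readable fragments out of the dump, plain strings should do;
the -t d switch prints decimal byte offsets, which helps locate the
metadata area again later:

  strings -t d /tmp/md0.dd | less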


I can see a lot of binary data in /tmp/md0.dd, and here and there some readable text:

physical_volumes {

pv0 {
id = "5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW"
device = "/dev/md0"

status = ["ALLOCATABLE"]
flags = []
dev_size = 7814047360
pe_start = 384
pe_count = 953863
}
}

logical_volumes {

lvdata {
id = "JiwAjc-qkvI-58Ru-RO8n-r63Z-ll3E-SJazO7"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1

segment1 {
start_extent = 0
extent_count = 115200

type = "striped"
stripe_count = 1        # linear

stripes = [

[...]

lvdata_snapshot_J5 {
id = "Mcvgul-Qo2L-1sPB-LvtI-KuME-fiiM-6DXeph"
status = ["READ"]
flags = []
segment_count = 1

segment1 {
start_extent = 0
extent_count = 25600

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 284160
]
}
}


[...]

lvdata_snapshot_J5 is a snapshot I created a few days before my mdadm chaos, so I'm pretty sure some data is
still on the drives... Am I wrong?
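
If the metadata text above is complete, one possible recovery route is
to recreate the PV label with its original UUID and restore the VG
configuration from that text. This is a sketch only: "vg0" and the file
path below are placeholders, since the volume group name is not visible
in the excerpt, and the carved text must be a full metadata section in
the same format as the files under /etc/lvm/backup:

  # 1) Carve the full metadata section out of the dump into a file,
  #    e.g. /tmp/vg0-metadata.txt (use the offsets from strings -t d)

  # 2) Recreate the PV label, reusing the UUID found in the dump
  pvcreate --uuid 5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW \
           --restorefile /tmp/vg0-metadata.txt /dev/md0

  # 3) Restore the VG metadata and reactivate the LVs
  vgcfgrestore -f /tmp/vg0-metadata.txt vg0
  vgchange -ay vg0

Nothing should be written to the array until the filesystems inside have
been checked read-only.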

Thanks



