linux-raid.vger.kernel.org archive mirror
From: "L.M.J" <linuxmasterjedi@free.fr>
To: Scott D'Vileskis <sdvileskis@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Corrupted ext4 filesystem after mdadm manipulation error
Date: Thu, 24 Apr 2014 20:35:06 +0200
Message-ID: <20140424203506.5fdee0d3@netstation>
In-Reply-To: <CAK_KU4a+Ep7=F=NSbb-hqN6Rvayx4QPWm-M2403OHn5-LVaNZw@mail.gmail.com>

Hello Scott,

  Do you think I've lost my data 100% for sure? fsck recovered 50% of the files; don't you think there is
  still something to save?

 Thanks


On Thu, 24 Apr 2014 14:13:05 -0400,
"Scott D'Vileskis" <sdvileskis@gmail.com> wrote:

> NEVER USE "CREATE" ON FILESYSTEMS OR RAID ARRAYS UNLESS YOU KNOW WHAT YOU
> ARE DOING!
> CREATE destroys things in the creation process, especially with the --force
> option.
> 
> The create argument is only meant to create a new array: it will treat the
> first two drives as 'good' drives and the last one will likely be treated as
> the degraded member, so it will start resyncing and blowing away data on
> that last drive.  If you used the --assume-clean argument, and it DID NOT
> resync the drives, you might be able to recreate the array with the two good
> disks, provided you know the original order.
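> 
> A minimal sketch of that kind of re-create, assuming sdc1 and sdd1 really
> were the first two members in the original order, and assuming the original
> chunk size and metadata version are matched (check with mdadm --examine
> before trying anything):
> mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 missing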
> 
> If you used the --create option, and didn't have your disks in the same
> order they were originally in, you probably lost your data.
> 
> Since you replaced a disk with no data (or worse, with bad data), you
> should simply have assembled the array in degraded mode, WITHOUT re-creating
> it with the --assume-clean argument.
> 
> If C & D contain your data, and B used to:
> mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
> (with --assemble you simply leave the missing member out; "missing" is only
> valid with --create)
> You might have to --force the assembly. If it works, and it runs in
> degraded mode, mount your filesystem and take a backup.
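> 
> A hedged sketch of that forced, degraded start (--run tells mdadm to start
> the array even though one member is missing; a sketch only, not a guaranteed
> fix):
> mdadm --assemble --force --run /dev/md0 /dev/sdc1 /dev/sdd1
> cat /proc/mdstat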
> 
> Next, add your replacement drive back in:
> mdadm --add /dev/md0 /dev/sdb1
> (Note, if sdb1 has some superblock data, you might have to
> --zero-superblock first)
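> 
> A sketch of that re-add sequence, assuming sdb1 really does carry a stale
> superblock (verify with mdadm --examine /dev/sdb1 before wiping anything):
> mdadm --zero-superblock /dev/sdb1
> mdadm --add /dev/md0 /dev/sdb1
> The rebuild onto sdb1 can then be watched in /proc/mdstat.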
> 
> 
> Good luck.
> 
> 
> On Thu, Apr 24, 2014 at 1:48 PM, L.M.J <linuxmasterjedi@free.fr> wrote:
> 
> > Up please :-(
> >
> > On Thu, 24 Apr 2014 07:05:48 +0200,
> > "L.M.J" <linuxmasterjedi@free.fr> wrote:
> >
> > > Hi,
> > >
> > > For the third time, I had to change a failed drive in my home Linux
> > > RAID5 box. The previous swaps went fine, but this time I don't know what
> > > I did wrong and I broke my RAID5. At least, it would not start.
> > > /dev/sdb was the failed drive; /dev/sdc and /dev/sdd are OK.
> > >
> > > I tried to reassemble the RAID with this command, after I replaced sdb
> > > and created a new partition on it:
> > >
> > >  ~# mdadm -Cv /dev/md0 --assume-clean --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sdb1
> > > -> '-C' was not a good idea here
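> > >
> > > (One way to see, after the fact, what that command changed is to compare
> > > the member superblocks, e.g.:
> > >  ~# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
> > > the creation time and event counts show whether the metadata was rewritten.)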
> > >
> > > Well, I guess I made another mistake here; I should have assembled it in
> > > degraded mode instead:
> > >  ~# mdadm -Av /dev/md0 /dev/sdc1 /dev/sdd1
> > >
> > > Maybe this wiped out my data...
> > > Going further: pvdisplay, pvscan and vgdisplay all return empty
> > > information.
> > >
> > > Google helped me, and I did this (it dumps the on-disk LVM metadata area
> > > near the start of the PV as text):
> > >  ~# dd if=/dev/md0 bs=512 count=255 skip=1 of=/tmp/md0.txt
> > >
> > >       [..]
> > >       physical_volumes {
> > >               pv0 {
> > >                       id = "5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW"
> > >                       device = "/dev/md0"
> > >                       status = ["ALLOCATABLE"]
> > >                       flags = []
> > >                       dev_size = 7814047360
> > >                       pe_start = 384
> > >                       pe_count = 953863
> > >               }
> > >       }
> > >       logical_volumes {
> > >
> > >               lvdata {
> > >                       id = "JiwAjc-qkvI-58Ru-RO8n-r63Z-ll3E-SJazO7"
> > >                       status = ["READ", "WRITE", "VISIBLE"]
> > >                       flags = []
> > >                       segment_count = 1
> > >       [..]
> > >
> > > Since I saw LVM information, I guess I haven't lost everything yet...
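> > >
> > > (If LVM's own metadata archives survived under /etc/lvm/archive/, they
> > > can be listed too; a sketch, assuming the VG is really named lvm-raid:
> > >  ~# vgcfgrestore --list lvm-raid )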
> > >
> > > I tried a last-chance command:
> > >  ~# pvcreate --uuid "5DZit9-6o5V-a1vu-1D1q-fnc0-syEj-kVwAnW" --restorefile /etc/lvm/archive/lvm-raid_00302.vg /dev/md0
> > >
> > > Then:
> > >
> > >  ~# vgcfgrestore lvm-raid
> > >
> > >  ~# lvs -a -o +devices
> > >    LV     VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
> > >    lvdata lvm-raid -wi-a- 450,00g                                       /dev/md0(148480)
> > >    lvmp   lvm-raid -wi-a-  80,00g                                       /dev/md0(263680)
> > >
> > > Then:
> > >  ~# lvchange -ay /dev/lvm-raid/lv*
> > >
> > > I was quite happy until now.
> > > The problem appears when I try to mount those 2 LVs (lvdata & lvmp) as
> > > ext4 partitions:
> > >  ~# mount /home/foo/RAID_mp/
> > >
> > >  ~# mount | grep -i mp
> > >       /dev/mapper/lvm--raid-lvmp on /home/foo/RAID_mp type ext4 (rw)
> > >
> > >  ~# df -h /home/foo/RAID_mp
> > >       Filesystem                   Size  Used Avail Use% Mounted on
> > >       /dev/mapper/lvm--raid-lvmp    79G   61G   19G  77% /home/foo/RAID_mp
> > >
> > > Here is the big problem:
> > >  ~# ls -la /home/foo/RAID_mp
> > >       total 0
> > >
> > > I took an LVM R/W snapshot of the /dev/mapper/lvm--raid-lvmp LV and ran
> > > fsck on it. It recovered only 50% of the files, all of them ending up in
> > > the lost+found/ directory with names starting with #xxxxx.
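> > >
> > > (For reference, a sketch of that snapshot-and-fsck sequence; the snapshot
> > > name and its 10G size are made up here for illustration:
> > >  ~# lvcreate -s -L 10G -n lvmp_snap /dev/lvm-raid/lvmp
> > >  ~# fsck.ext4 -fy /dev/lvm-raid/lvmp_snap
> > >  ~# mount /dev/lvm-raid/lvmp_snap /mnt )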
> > >
> > > I would like to know if there is a last chance to recover my data?
> > >
> > > Thanks
> >


Thread overview: 19+ messages
2014-04-24  5:05 Corrupted ext4 filesystem after mdadm manipulation error L.M.J
2014-04-24 17:48 ` L.M.J
     [not found]   ` <CAK_KU4a+Ep7=F=NSbb-hqN6Rvayx4QPWm-M2403OHn5-LVaNZw@mail.gmail.com>
2014-04-24 18:35     ` L.M.J [this message]
     [not found]       ` <CAK_KU4Zh-azXEEzW4f1m=boCZDKevqaSHxW0XoAgRdrCbm2PkA@mail.gmail.com>
2014-04-24 19:53         ` L.M.J
     [not found]         ` <CAK_KU4aDDaUSGgcGBwCeO+yE0Qa_pUmMdAHMu7pqO7dqEEC71g@mail.gmail.com>
2014-04-24 19:56           ` L.M.J
2014-04-24 20:31             ` Scott D'Vileskis
2014-04-24 22:25               ` Why would a recreation cause a different number of blocks?? Jeff Wiegley
2014-04-25  3:34                 ` Mikael Abrahamsson
2014-04-25  5:02                   ` Jeff Wiegley
2014-04-25  6:01                     ` Mikael Abrahamsson
2014-04-25  6:45                       ` Jeff Wiegley
2014-04-25  7:25                         ` Mikael Abrahamsson
2014-04-25  7:05                       ` Jeff Wiegley
     [not found]             ` <CAK_KU4YUejncX9yQk4HM5HE=1-qPPxOibuRauFheo3jaBc8SaQ@mail.gmail.com>
2014-04-25  5:13               ` Corrupted ext4 filesystem after mdadm manipulation error L.M.J
2014-04-25  6:04                 ` Mikael Abrahamsson
2014-04-25 11:43                   ` L. M. J
2014-04-25 13:36                     ` Scott D'Vileskis
2014-04-25 14:43                       ` L.M.J
2014-04-25 18:37                       ` Is disk order relative or are the numbers absolute? Jeff Wiegley
