* RAID 5 inaccessible - continued
@ 2006-02-13 15:08 Krekna Mektek
2006-02-14 8:35 ` Krekna Mektek
0 siblings, 1 reply; 13+ messages in thread
From: Krekna Mektek @ 2006-02-13 15:08 UTC (permalink / raw)
To: linux-raid
All right, this weekend I was able to use dd to create an image file
out of the disk.
I did the following:
dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
I edited mdadm.conf, replacing /dev/hdd1 with /dev/loop0.
But it did not work out (yet).
mdadm -E /dev/loop0
mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc,
got 00000000)
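As a first check (a minimal sketch; blockdev and stat are standard tools,
the paths are the ones above), I suppose I should verify that the image
even covers the whole partition:

blockdev --getsize64 /dev/hdd1             # source partition size, in bytes
stat -c %s /mnt/hdb1/Faulty-RAIDDisk.img   # image size; should match exactly

If the image is shorter, the superblock near the end of the device was
never copied, which would explain the missing magic.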
What is the best way to continue?
- mdadm -A --force /dev/md0 (sketched below)
or
- can I restore the superblock from the hdd1 disk (which is still alive)
or
- can I configure mdadm.conf other than this:
(/dev/hdc1 is spare, probably out of date)
DEVICE /dev/hdb1 /dev/hdc1 /dev/loop0
ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/loop0
or
- some other solution?
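For the first option, this is roughly what I have in mind (untested;
assuming mdadm accepts the loop device like any other member):

mdadm --stop /dev/md0
mdadm -A --force /dev/md0 /dev/hdb1 /dev/loop0

That should forcibly assemble a degraded array from the good disk and the
image, leaving the stale spare (/dev/hdc1) out entirely.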
Krekna
2006/2/8, Krekna Mektek <krekna@gmail.com>:
> Hi,
>
> I found out that my storage drive was gone and I went to my server to
> check out what was wrong.
> I've got three 400GB disks which form the array.
>
> I found out I had one spare and one faulty drive, and the RAID 5 array
> was not able to recover.
> After a reboot because of some stuff with Xen, my main rootdisk (hda)
> was also failing, and the whole machine was not able to boot anymore.
> And there I was...
> After I tried to commit suicide and did not succeed, I went back to my
> server to try something out.
> I booted with Knoppix 4.02 and edited the mdadm.conf as follows:
>
> DEVICE /dev/hd[bcd]1
> ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/hdd1
>
>
> I executed mdrun and the following messages appeared:
>
> Forcing event count in /dev/hdd1(2) from 81190986 upto 88231796
> clearing FAULTY flag for device 2 in /dev/md0 for /dev/hdd1
> /dev/md0 has been started with 2 drives (out of 3) and 1 spare.
>
> So I thought I was lucky enough to get my data back, perhaps with a
> little loss from the missing events. Am I right?
>
> But when I tried to mount it the next day, that did not work either.
> I ended up with one faulty, one spare and one active disk. After
> stopping and starting the array a few times, it started rebuilding
> again. I found out that the disk it needs to rebuild the array (hdd1,
> that is) is getting read errors and falls back to faulty again.
>
>
>
> Number  Major  Minor  RaidDevice  State
>      0      3     65           0  active sync
>      1      0      0           -  removed
>      2     22     65           2  active sync
>
>      3     22      1           1  spare rebuilding
>
>
> and then this:
>
> Rebuild Status : 1% complete
>
> Number  Major  Minor  RaidDevice  State
>      0      3     65           0  active sync
>      1      0      0           -  removed
>      2      0      0           -  removed
>
>      3     22      1           1  spare rebuilding
>      4     22     65           2  faulty
>
> And my dmesg is full of these errors coming from the faulty hdd:
> end_request: I/O error, dev hdd, sector 13614775
> hdd: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> hdd: dma_intr: error=0x40 { UncorrectableError }, LBAsect=13615063,
> high=0, low=13615063, sector=13614783
> ide: failed opcode was: unknown
> end_request: I/O error, dev hdd, sector 13614783
>
>
> I guess this will never succeed...
>
> Is there a way to get this data back from the individual disks perhaps?
>
>
> FYI:
>
>
> root@6[~]# cat /proc/mdstat
> Personalities : [raid5]
> md0 : active raid5 hdb1[0] hdc1[3] hdd1[4](F)
> 781417472 blocks level 5, 64k chunk, algorithm 2 [3/1] [U__]
> [>....................] recovery = 1.7% (6807460/390708736)
> finish=3626.9min speed=1764K/sec
> unused devices: <none>
>
> Krekna
>
* Re: RAID 5 inaccessible - continued
2006-02-13 15:08 RAID 5 inaccessible - continued Krekna Mektek
@ 2006-02-14 8:35 ` Krekna Mektek
2006-02-14 9:40 ` Neil Brown
0 siblings, 1 reply; 13+ messages in thread
From: Krekna Mektek @ 2006-02-14 8:35 UTC (permalink / raw)
To: linux-raid
Krekna is crying out loud in the empty wilderness....
No one there to help me?
Krekna
2006/2/13, Krekna Mektek <krekna@gmail.com>:
> All right, this weekend I was able to use dd to create an image file
> out of the disk.
> I did the following:
>
> dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
> losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
>
> I edited mdadm.conf, replacing /dev/hdd1 with /dev/loop0.
>
> But it did not work out (yet).
>
> mdadm -E /dev/loop0
> mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc,
> got 00000000)
>
>
> What is the best way to continue?
>
> - mdadm -A --force /dev/md0
>
> or
>
> - can I restore the superblock from the hdd1 disk (which is still alive)
>
> or
>
> - can I configure mdadm.conf other than this:
> (/dev/hdc1 is spare, probably out of date)
>
> DEVICE /dev/hdb1 /dev/hdc1 /dev/loop0
> ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/loop0
>
> or
> - some other solution?
>
> Krekna
>
> 2006/2/8, Krekna Mektek <krekna@gmail.com>:
> > Hi,
> >
> > I found out that my storage drive was gone and I went to my server to
> > check out what was wrong.
> > I've got three 400GB disks which form the array.
> >
> > I found out I had one spare and one faulty drive, and the RAID 5 array
> > was not able to recover.
> > After a reboot because of some stuff with Xen, my main rootdisk (hda)
> > was also failing, and the whole machine was not able to boot anymore.
> > And there I was...
> > After I tried to commit suicide and did not succeed, I went back to my
> > server to try something out.
> > I booted with Knoppix 4.02 and edited the mdadm.conf as follows:
> >
> > DEVICE /dev/hd[bcd]1
> > ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/hdd1
> >
> >
> > I executed mdrun and the following messages appeared:
> >
> > Forcing event count in /dev/hdd1(2) from 81190986 upto 88231796
> > clearing FAULTY flag for device 2 in /dev/md0 for /dev/hdd1
> > /dev/md0 has been started with 2 drives (out of 3) and 1 spare.
> >
> > So I thought I was lucky enough to get my data back, perhaps with a
> > little loss from the missing events. Am I right?
> >
> > But when I tried to mount it the next day, that did not work either.
> > I ended up with one faulty, one spare and one active disk. After
> > stopping and starting the array a few times, it started rebuilding
> > again. I found out that the disk it needs to rebuild the array (hdd1,
> > that is) is getting read errors and falls back to faulty again.
> >
> >
> >
> > Number  Major  Minor  RaidDevice  State
> >      0      3     65           0  active sync
> >      1      0      0           -  removed
> >      2     22     65           2  active sync
> >
> >      3     22      1           1  spare rebuilding
> >
> >
> > and then this:
> >
> > Rebuild Status : 1% complete
> >
> > Number  Major  Minor  RaidDevice  State
> >      0      3     65           0  active sync
> >      1      0      0           -  removed
> >      2      0      0           -  removed
> >
> >      3     22      1           1  spare rebuilding
> >      4     22     65           2  faulty
> >
> > And my dmesg is full of these errors coming from the faulty hdd:
> > end_request: I/O error, dev hdd, sector 13614775
> > hdd: dma_intr: status=0x51 { DriveReady SeekComplete Error }
> > hdd: dma_intr: error=0x40 { UncorrectableError }, LBAsect=13615063,
> > high=0, low=13615063, sector=13614783
> > ide: failed opcode was: unknown
> > end_request: I/O error, dev hdd, sector 13614783
> >
> >
> > I guess this will never succeed...
> >
> > Is there a way to get this data back from the individual disks perhaps?
> >
> >
> > FYI:
> >
> >
> > root@6[~]# cat /proc/mdstat
> > Personalities : [raid5]
> > md0 : active raid5 hdb1[0] hdc1[3] hdd1[4](F)
> > 781417472 blocks level 5, 64k chunk, algorithm 2 [3/1] [U__]
> > [>....................] recovery = 1.7% (6807460/390708736)
> > finish=3626.9min speed=1764K/sec
> > unused devices: <none>
> >
> > Krekna
> >
>
* Re: RAID 5 inaccessible - continued
2006-02-14 8:35 ` Krekna Mektek
@ 2006-02-14 9:40 ` Neil Brown
2006-02-14 10:35 ` Krekna Mektek
0 siblings, 1 reply; 13+ messages in thread
From: Neil Brown @ 2006-02-14 9:40 UTC (permalink / raw)
To: Krekna Mektek; +Cc: linux-raid
On Tuesday February 14, krekna@gmail.com wrote:
> Krekna is crying out loud in the empty wilderness....
> No one there to help me?
Nope :-)
> > I did the following:
> >
> > dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
> > losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
..
> >
> > But it did not work out (yet).
> >
> > mdadm -E /dev/loop0
> > mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc,
> > got 00000000)
...
> >
> > - can I restore the superblock from the hdd1 disk (which is still alive)
> >
If mdadm -E /dev/hdd1 shows a valid superblock, and mdadm -E
/dev/loop0 doesn't, then your 'dd' wasn't very successful.
What is the size of /mnt/hdb1/Faulty-RAIDDisk.img ?? What is the size
of /dev/hdd1?
BTW, you don't need to edit mdadm.conf to try things out. Just
mdadm -A /dev/md0 /dev/hdb1 /dev/hdc1 /dev/loop0
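and check what you end up with (standard commands, nothing here is
array-specific):

cat /proc/mdstat
mdadm -D /dev/md0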
NeilBrown
* Re: RAID 5 inaccessible - continued
2006-02-14 9:40 ` Neil Brown
@ 2006-02-14 10:35 ` Krekna Mektek
[not found] ` <43F1FFE1.2010107@h3c.com>
2006-02-15 9:59 ` Burkhard Carstens
0 siblings, 2 replies; 13+ messages in thread
From: Krekna Mektek @ 2006-02-14 10:35 UTC (permalink / raw)
To: linux-raid
2006/2/14, Neil Brown <neilb@suse.de>:
> On Tuesday February 14, krekna@gmail.com wrote:
> > Krekna is crying out loud in the empty wilderness....
> > No one there to help me?
>
> Nope :-)
>
> > > I did the following:
> > >
> > > dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
> > > losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
> ..
> > >
> > > But it did not work out (yet).
> > >
> > > mdadm -E /dev/loop0
> > > mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc,
> > > got 00000000)
> ...
> > >
> > > - can I restore the superblock from the hdd1 disk (which is still alive)
> > >
>
> If mdadm -E /dev/hdd1 shows a valid superblock, and mdadm -E
> /dev/loop0 doesn't, then your 'dd' wasn't very successful.
Hi,
Actually, the /dev/hdd1 partition is 390708801 blocks; the disk itself is
a Hitachi 400GB disk.
So this is 390708801 blocks of 1024 bytes, which according to my
calculation is 400085812224 bytes.
The Faulty-RAIDDisk.img is, according to my ls -l, 400085771264 bytes.
So they are nearly the same; the difference between
the two is 40960 bytes. That is 40 blocks, so 36 are missing?
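Checking the numbers (plain shell arithmetic, nothing assumed beyond the
sizes above):

echo $((390708801 * 1024))              # 400085812224, the partition size
echo $((400085812224 - 400085771264))   # 40960, so the image is 40 KiB short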
The dd actually succeeded, and finished the job in about one day.
The bad blocks were found after about the first 7 GB.
Is there no conv=noerror equivalent for mdadm, to just continue?
Can I restore the superblock on the .img file somehow?
Is it perhaps safe to --zero-superblock all three disks, so that the
RAID array will create two new superblocks (leaving the spare out,
because it is probably out of date)?
I can do the dd again, but I think it will do the same thing, because
it finished 'successfully'.
The superblock is at the end of the disk, I have read, within about the
last 64-128K.
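If it is the old 0.90 superblock format (which a superblock at the end of
the disk suggests), it should sit in the last 64KiB-aligned 64KiB block of
the device. A sketch of how I could look for it in the image (the offset
formula is my assumption about the 0.90 layout):

SIZE=$(blockdev --getsize64 /dev/hdd1)     # real partition size, in bytes
OFFSET=$((SIZE / 65536 * 65536 - 65536))   # assumed 0.90 superblock offset
dd if=/mnt/hdb1/Faulty-RAIDDisk.img bs=1 skip=$OFFSET count=16 2>/dev/null | od -A d -t x1

The first four bytes should read fc 4e 2b a9 (the magic a92b4efc,
little-endian) if the superblock made it into the image.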
Krekna
>
> What is the size of /mnt/hdb1/Faulty-RAIDDisk.img ?? What is the size
> of /dev/hdd1?
>
> BTW, you don't need to edit mdadm.conf to try things out. Just
>
> mdadm -A /dev/md0 /dev/hdb1 /dev/hdc1 /dev/loop0
>
> NeilBrown
>
* Re: RAID 5 inaccessible - continued
[not found] ` <43F1FFE1.2010107@h3c.com>
@ 2006-02-14 17:18 ` Krekna Mektek
2006-02-14 17:41 ` David Greaves
2006-02-15 9:35 ` Krekna Mektek
1 sibling, 1 reply; 13+ messages in thread
From: Krekna Mektek @ 2006-02-14 17:18 UTC (permalink / raw)
To: Mike Hardy, linux-raid
2006/2/14, Mike Hardy <mhardy@h3c.com>:
>
>
> Krekna Mektek wrote:
>
> > The dd actually succeeded, and finished the job in about one day.
> > The bad blocks were found after about the first 7 GB.
>
> Is this a 3-disk raid5 array? With two healthy disks and one bad disk?
Hi Mike!
It is a 3-disk array with one spare (which was set as spare by
Linux after previously being marked as faulty, I guess; that disk is
probably out of date, because I don't know when this happened, and I
don't have the logs anymore).
One disk is okay, and the faulty is probably also okay, except for the
76 bad sectors.
I want to rebuild from the good one and the faulty one. That's why I
wanted to dd the disk to an image file, but mdadm complains it has no
superblock.
In this case, I actually *can* try the 2.6.15+ kernel then? Because
the rebuild *IS* working, except for the fact that it stops at 1.7%,
which happens to be right at the bad-block area.
So there actually IS now the possibility I was asking for: md skipping
over the bad-block area? That means I can try again, but now with
2.6.15?
And if not, I still don't understand why this did not work:
<quote>
> > > I did the following:
> > >
> > > dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
> > > losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
> ..
> > >
> > > But it did not work out (yet).
> > >
> > > mdadm -E /dev/loop0
> > > mdadm: No super block found on /dev/loop0 (Expected magic a92b4efc,
> > > got 00000000)
</quote>
Thanks for your help!
Krekna
>
> If so, then what you really want is a new kernel (2.6.15+? 2.6.14+?)
> that has raid5 read-error-handling code in it. Neil just coded that up.
>
> If it's missing a disk and has a disk with bad sectors, then you've
> already lost data, but you could use a combination of smart tests and dd
> to zero out those specific sectors (and only those sectors...) then sync
> a new disk up with the array...
>
> -Mike
>
* Re: RAID 5 inaccessible - continued
2006-02-14 17:18 ` Krekna Mektek
@ 2006-02-14 17:41 ` David Greaves
2006-02-15 9:36 ` Krekna Mektek
[not found] ` <8b24c8b10602151218i43886b75h@mail.gmail.com>
0 siblings, 2 replies; 13+ messages in thread
From: David Greaves @ 2006-02-14 17:41 UTC (permalink / raw)
To: Krekna Mektek; +Cc: Mike Hardy, linux-raid
Krekna Mektek wrote:
>I want to rebuild from the good one and the faulty one. That's why I
>wanted to dd the disk to an image file, but mdadm complains it has no
>superblock.
>
>
>
>>>>I did the following:
>>>>
>>>>dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
>>>>losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
>>>>
>>>>
You could try doing this again using ddrescue (google if you need to
install it):
ddrescue /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img /mnt/hdb1/Faulty-RAIDDisk.log
Then do it again using -r10 (to increase the retries on the faulty sectors)
ddrescue -r10 /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img /mnt/hdb1/Faulty-RAIDDisk.log
This will be much quicker because the log file contains details of the
faulty sectors.
With luck (mucho luck) you may not even lose data.
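Once the image is complete, the earlier steps should work against it;
roughly (a sketch, untested here, same paths as before):

losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
mdadm -E /dev/loop0                      # the superblock should show up now
mdadm -A /dev/md0 /dev/hdb1 /dev/loop0   # assemble degraded, without the stale spare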
David
--
* Re: RAID 5 inaccessible - continued
2006-02-14 17:41 ` David Greaves
@ 2006-02-15 9:36 ` Krekna Mektek
[not found] ` <8b24c8b10602151218i43886b75h@mail.gmail.com>
1 sibling, 0 replies; 13+ messages in thread
From: Krekna Mektek @ 2006-02-15 9:36 UTC (permalink / raw)
To: David Greaves; +Cc: Mike Hardy, linux-raid
Hi David,
Thank you.
I will give ddrescue a shot then. But how is it different from dd
with the noerror conversion?
Krekna
2006/2/14, David Greaves <david@dgreaves.com>:
> Krekna Mektek wrote:
>
> >I want to rebuild from the good one and the faulty one. That's why I
> >wanted to dd the disk to an image file, but mdadm complains it has no
> >superblock.
> >
> >
> >
> >>>>I did the following:
> >>>>
> >>>>dd conv=noerror if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img
> >>>>losetup /dev/loop0 /mnt/hdb1/Faulty-RAIDDisk.img
> >>>>
> >>>>
> You could try doing this again using ddrescue (google if you need to
> install it):
>
> ddrescue /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img /mnt/hdb1/Faulty-RAIDDisk.log
>
> Then do it again using -r10 (to increase the retries on the faulty sectors)
>
> ddrescue -r10 /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img /mnt/hdb1/Faulty-RAIDDisk.log
>
> This will be much quicker because the log file contains details of the
> faulty sectors.
> With luck (mucho luck) you may not even lose data.
>
> David
>
> --
>
>
* Re: RAID 5 inaccessible - continued
[not found] ` <43F1FFE1.2010107@h3c.com>
2006-02-14 17:18 ` Krekna Mektek
@ 2006-02-15 9:35 ` Krekna Mektek
1 sibling, 0 replies; 13+ messages in thread
From: Krekna Mektek @ 2006-02-15 9:35 UTC (permalink / raw)
To: Mike Hardy, linux-raid
I tried this yesterday, with 2.6.15 that is. The same thing
happened: at the bad sectors it tried and skipped some, but after
a minute or two it stopped again.
Hmmmz.
Krekna
2006/2/14, Mike Hardy <mhardy@h3c.com>:
>
>
> Krekna Mektek wrote:
>
> > The dd actually succeeded, and finished the job in about one day.
> > The bad blocks were found after about the first 7 GB.
>
> Is this a 3-disk raid5 array? With two healthy disks and one bad disk?
>
> If so, then what you really want is a new kernel (2.6.15+? 2.6.14+?)
> that has raid5 read-error-handling code in it. Neil just coded that up.
>
> If it's missing a disk and has a disk with bad sectors, then you've
> already lost data, but you could use a combination of smart tests and dd
> to zero out those specific sectors (and only those sectors...) then sync
> a new disk up with the array...
>
> -Mike
>
* Re: RAID 5 inaccessible - continued
2006-02-14 10:35 ` Krekna Mektek
[not found] ` <43F1FFE1.2010107@h3c.com>
@ 2006-02-15 9:59 ` Burkhard Carstens
2006-02-15 15:09 ` Krekna Mektek
1 sibling, 1 reply; 13+ messages in thread
From: Burkhard Carstens @ 2006-02-15 9:59 UTC (permalink / raw)
To: linux-raid
On Tuesday, 14 February 2006 11:35, Krekna Mektek wrote:
[...]
> Actually, the /dev/hdd1 partition is 390708801 blocks; the disk itself is
> a Hitachi 400GB disk.
> So this is 390708801 blocks of 1024 bytes, which according to my
> calculation is 400085812224 bytes.
> The Faulty-RAIDDisk.img is, according to my ls -l, 400085771264 bytes.
>
> So they are nearly the same; the difference between
> the two is 40960 bytes. That is 40 blocks, so 36 are missing?
>
> The dd actually succeeded, and finished the job in about one day.
> The bad blocks were found after about the first 7 GB.
>
> Is there no conv=noerror equivalent for mdadm, to just continue?
> Can I restore the superblock on the .img file somehow?
> Is it perhaps safe to --zero-superblock all three disks, so that the
> RAID array will create two new superblocks (leaving the spare
> out, because it is probably out of date)?
>
> I can do the dd again, but I think it will do the same thing, because
> it finished 'successfully'.
> The superblock is at the end of the disk, I have read, within about the
> last 64-128K.
My experience is that dd conv=noerror doesn't do the job correctly!! It
still won't write a block that it cannot read.
Please use "dd_rescue -A /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img"
instead. See "dd_rescue --help".
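If you stick with plain dd anyway, add sync to the conversion: noerror
alone drops the unreadable blocks, so everything behind the first error
lands at the wrong offset (which would also explain your missing
superblock). A sketch, with a small bs to limit the zeroed span per error:

dd if=/dev/hdd1 of=/mnt/hdb1/Faulty-RAIDDisk.img bs=4096 conv=noerror,sync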
> DEVICE /dev/hdb1 /dev/hdc1 /dev/loop0
> ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/loop0
Another thing: /mnt/hdb1/ is not the same hdb1 you are using in the
raid5, is it?
It might be a bad idea to mount /dev/hdb1, write to it,
and afterwards assemble the array with hdb1 being part of it ... extra
bad if loop0 points to a file on hdb1. However, if you did dd to a
file on a partition that is supposed to be part of the degraded raid5
array, I guess your data is already gone ...
Good luck
Burkhard
* Re: RAID 5 inaccessible - continued
2006-02-15 9:59 ` Burkhard Carstens
@ 2006-02-15 15:09 ` Krekna Mektek
0 siblings, 0 replies; 13+ messages in thread
From: Krekna Mektek @ 2006-02-15 15:09 UTC (permalink / raw)
To: Burkhard Carstens; +Cc: linux-raid
Er... no :)
This was on another machine; luckily, I am not that stupid.
I was using /dev/hdb in that one, sorry to be a bit unclear about that.
Also, I usually sit on my hands for a sec before I run such powerful commands.
Good, so I stand some chance then with dd_rescue; I'll let you know
how it goes, I'll have to try this tonight.
Krekna
2006/2/15, Burkhard Carstens <suse-ml@onlinehome.de>:
> On Tuesday, 14 February 2006 11:35, Krekna Mektek wrote:
> [...]
> > Actually, the /dev/hdd1 partition is 390708801 blocks; the disk itself is
> > a Hitachi 400GB disk.
> > So this is 390708801 blocks of 1024 bytes, which according to my
> > calculation is 400085812224 bytes.
> > The Faulty-RAIDDisk.img is, according to my ls -l, 400085771264 bytes.
> >
> > So they are nearly the same; the difference between
> > the two is 40960 bytes. That is 40 blocks, so 36 are missing?
> >
> > The dd actually succeeded, and finished the job in about one day.
> > The bad blocks were found after about the first 7 GB.
> >
> > Is there no conv=noerror equivalent for mdadm, to just continue?
> > Can I restore the superblock on the .img file somehow?
> > Is it perhaps safe to --zero-superblock all three disks, so that the
> > RAID array will create two new superblocks (leaving the spare
> > out, because it is probably out of date)?
> >
> > I can do the dd again, but I think it will do the same thing, because
> > it finished 'successfully'.
> > The superblock is at the end of the disk, I have read, within about the
> > last 64-128K.
>
> My experience is that dd conv=noerror doesn't do the job correctly!! It
> still won't write a block that it cannot read.
> Please use "dd_rescue -A /dev/hdd1 /mnt/hdb1/Faulty-RAIDDisk.img"
> instead. See "dd_rescue --help".
>
> > DEVICE /dev/hdb1 /dev/hdc1 /dev/loop0
> > ARRAY /dev/md0 devices=/dev/hdb1,/dev/hdc1,/dev/loop0
>
> Another thing: /mnt/hdb1/ is not the same hdb1 you are using in the
> raid5, is it?
>
> It might be a bad idea to mount /dev/hdb1, write to it,
> and afterwards assemble the array with hdb1 being part of it ... extra
> bad if loop0 points to a file on hdb1. However, if you did dd to a
> file on a partition that is supposed to be part of the degraded raid5
> array, I guess your data is already gone ...
>
> Good luck
> Burkhard
>