From: "Janos Haar" <janos.haar@netcenter.hu>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: questions about softraid limitations
Date: Fri, 16 May 2008 12:00:35 +0200
Message-ID: <029b01c8b73b$b0c2a980$9300a8c0@dcccs>
In-Reply-To: 18476.58840.602946.872085@notabene.brown
Hello Neil,
(Sorry for the duplication, Neil; I forgot to CC the list the first time.)
----- Original Message -----
From: "Neil Brown" <neilb@suse.de>
To: "Janos Haar" <djani22@netcenter.hu>
Cc: "David Greaves" <david@dgreaves.com>; <linux-raid@vger.kernel.org>
Sent: Friday, May 16, 2008 3:39 AM
Subject: Re: questions about softraid limitations
> On Thursday May 15, djani22@netcenter.hu wrote:
>> Hello David,
>>
>> ----- Original Message -----
>> From: "David Greaves" <david@dgreaves.com>
>> To: "Janos Haar" <djani22@netcenter.hu>
>> Cc: <linux-raid@vger.kernel.org>
>> Sent: Wednesday, May 14, 2008 12:45 PM
>> Subject: Re: questions about softraid limitations
>>
>>
>> > Janos Haar wrote:
>> >> Hello list, Neil,
>> >
>> > Hi Janos
>> >
>> >> I have worked on a faulty hw raid card data recovery some days before.
>> >> The project is already successfully done, but i run into some
>> >> limitations.
>> >
>> > Firstly, are you aware that Linux SW raid will not understand disks
>> > written by
>> > hardware raid.
>>
>> Yes, I know, but Linux RAID is a great tool to try, and if the user
>> knows what he is doing, it is safe too. :-)
>
> As long as the user also knows what the kernel is doing .....
>
> If you build an md array on top of a read-only device, the array is
> still writable, and the device gets written too!!
>
> Yes, it is a bug. I hadn't thought about that case before. I will
> look into it.
Ooops. :-)
>
>>
>> >
>> Then I tried to build "old fashioned" linear arrays from each disk plus another
>> 64k block device (to store the superblock).
>> But mdadm refused to _build_ the array, because the source SCSI
>> drive is jumpered read-only. Why? :-)
>> > This will not allow md to write superblocks to the disks.
>>
>> I was thinking of exactly these steps:
>>
>> dd if=/dev/zero of=suberblock.bin bs=64k count=1
> ^p
>> losetup /dev/loop0 superblock.bin
>> blockdev --setro /dev/sda
>> mdadm --build -l linear /dev/md0 /dev/sda /dev/loop0
> ^ --raid-disks=2
>
>>
>> The superblock area is writable.
>> And this would be enough to assemble the array and do the recovery, but this
>> step is refused.
>
> What error message do you get? It worked for me (once I added
> --raid-disks=2).
The previous example was just typed on the fly; in reality, the read-only
jumpered SCSI sda gives an error message!
>
> You probably want superblock.bin to be more than 64K. The superblock
> is located between 64K and 128K from the end of the device, depending
> on device size. It is always a multiple of 64K from the start of the
> device.
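A quick worked example, if I follow you correctly: for a 156250000 K device (the
size used further down), the last multiple of 64 K is 156249984 K, so the superblock
would sit at 156249920 K from the start, which is 80 K before the end of the device.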
Usually I use 128MB (!) disk partitions for this.
That is safe enough.
And sometimes we need more than 8 loop devices.... :-)
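So with your corrections folded in, the sequence I had in mind looks roughly like
this (untested as typed here; the 128MB file and loop0 are just what I usually use):

dd if=/dev/zero of=superblock.bin bs=64k count=2048   # 2048 x 64k = 128MB of spare space
losetup /dev/loop0 superblock.bin
blockdev --setro /dev/sda                             # keep the source disk read-only
mdadm --build /dev/md0 --level=linear --raid-disks=2 /dev/sda /dev/loop0

(Keeping in mind your warning above that, until the bug is fixed, the kernel may
still write to the read-only device.)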
>
>>
>> >
>> >>
>> I tried to build the array with the --readonly option, but mdadm still
>> doesn't understand what I want. (Yes, I know, RTFM...)
>> > This will start the array in readonly mode - you've not created an
>> > array
>> > yet
>> > because you haven't written any superblocks...
>>
>> Yes, I only want to build, not create.
>>
>> >
>> >
>> It's OK, but what about building a read-only RAID 5 array for recovery
>> use only? :-)
>> > That's fine. If they are md raid disks. Yours aren't yet since you
>> > haven't
>> > written the superblocks.
>>
>> I only want to help some people get their data back.
>> I only need to build, not create.
>
> And this you can do ... but not with mdadm at the moment
> unfortunately.
>
> Watch carefully :-)
> --------------------------------------------------------------
> /tmp# cd /sys/block/md0/md
> /sys/block/md0/md# echo 65536 > chunk_size
> /sys/block/md0/md# echo 2 > layout
> /sys/block/md0/md# echo raid5 > level
> /sys/block/md0/md# echo none > metadata_version
> /sys/block/md0/md# echo 5 > raid_disks
> /sys/block/md0/md# ls -l /dev/sdb
> brw-rw---- 1 root disk 8, 16 2008-05-16 11:13 /dev/sdb
> /sys/block/md0/md# ls -l /dev/sdc
> brw-rw---- 1 root disk 8, 32 2008-05-16 11:13 /dev/sdc
> /sys/block/md0/md# echo 8:16 > new_dev
> /sys/block/md0/md# echo 8:32 > new_dev
> /sys/block/md0/md# echo 8:48 > new_dev
> /sys/block/md0/md# echo 8:64 > new_dev
> /sys/block/md0/md# echo 8:80 > new_dev
> /sys/block/md0/md# echo 0 > dev-sdb/slot
> /sys/block/md0/md# echo 1 > dev-sdc/slot
> /sys/block/md0/md# echo 2 > dev-sdd/slot
> /sys/block/md0/md# echo 3 > dev-sde/slot
> /sys/block/md0/md# echo 4 > dev-sdf/slot
> /sys/block/md0/md# echo 156250000 > dev-sdb/size
> /sys/block/md0/md# echo 156250000 > dev-sdc/size
> /sys/block/md0/md# echo 156250000 > dev-sdd/size
> /sys/block/md0/md# echo 156250000 > dev-sde/size
> /sys/block/md0/md# echo 156250000 > dev-sdf/size
> /sys/block/md0/md# echo readonly > array_state
> /sys/block/md0/md# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> [multipath] [faulty]
> md0 : active (read-only) raid5 sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
> 624999936 blocks super non-persistent level 5, 64k chunk, algorithm 2
> [5/5] [UUUUU]
>
> unused devices: <none>
> ----------------------------------------------------------
>
> Did you catch all of that?
sysfs! :-)
Wow! :-)
This really helps me, thanks! :-)
But what about other people?
Will mdadm learn how to do this too?
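Until mdadm can do it, I will probably wrap your steps above in a small script
like this one (untested; the device list, the major:minor numbers and the size
are just the placeholders from your example):

#!/bin/sh
# Assemble a non-persistent, read-only RAID5 through sysfs, following Neil's steps.
cd /sys/block/md0/md || exit 1

echo 65536  > chunk_size        # 64k chunk, given in bytes
echo 2      > layout            # algorithm 2 = left-symmetric (the default)
echo raid5  > level
echo none   > metadata_version  # no superblock is read from or written to the disks
echo 5      > raid_disks

# add the component devices by major:minor, as shown by 'ls -l /dev/sd?'
for dev in 8:16 8:32 8:48 8:64 8:80; do
    echo $dev > new_dev
done

# assign slots and sizes (the per-device size is in K, straight from /proc/partitions)
slot=0
for d in sdb sdc sdd sde sdf; do
    echo $slot     > dev-$d/slot
    echo 156250000 > dev-$d/size
    slot=$((slot + 1))
done

echo readonly > array_state
cat /proc/mdstat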
>
> The per-device 'size' is in K - I took it straight from
> /proc/partitions.
> The chunk_size is in bytes.
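(A quick check against the mdstat output above, if I read the units right:
156250000 K rounded down to a multiple of the 64 K chunk is 156249984 K per disk,
and the 5-disk RAID5 has 4 data disks, so 4 x 156249984 = 624999936 blocks,
exactly what /proc/mdstat reports.)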
>
> Have fun.
Thank you!
I will try it next time. :-)
Janos
>
> NeilBrown