* (unknown),
@ 2008-05-14 12:53 Henry, Andrew
2008-05-14 21:13 ` David Greaves
0 siblings, 1 reply; 12+ messages in thread
From: Henry, Andrew @ 2008-05-14 12:53 UTC (permalink / raw)
To: linux-raid@vger.kernel.org
I'm new to software RAID and this list. I read a few months of archives to see if I found answers but only partly...
I set up a raid1 set using 2xWD Mybook eSATA discs on a Sil CardBus controller. I was not aware of automount rules and it didn't work, and I want to wipe it all and start again but cannot. I read the thread listed in my subject and it helped me quite a lot but not fully. Perhaps someone would be kind enough to help me the rest of the way. This is what I have done:
1. badblocks -c 10240 -s -w -t random -v /dev/sd[ab]
2. parted /dev/sdX mklabel msdos ##on both drives
3a. parted /dev/sdX mkpart primary 0 500.1GB ##on both drives
3b. parted /dev/sdX set 1 raid on ##on both drives
4. mdadm --create --verbose /dev/md0 --metadata=1.0 --raid-devices=2 --level=raid1 --name=backupArray /dev/sd[ab]1
5. mdadm --examine --scan | tee /etc/mdadm.conf, then set 'DEVICE partitions' so that I don't hard-code any device names that may change on reboot.
6. mdadm --assemble --name=mdBackup /dev/md0 ## not needed, it seems; the array is already assembled by --create.
7. cryptsetup --verbose --verify-passphrase luksFormat /dev/md0
8. cryptsetup luksOpen /dev/md0 raid500
9. pvcreate /dev/mapper/raid500
10. vgcreate vgbackup /dev/mapper/raid500
11. lvcreate --name lvbackup --size 450G vgbackup ## check PEs first with vgdisplay
12. mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/vgbackup/lvbackup
13. mkdir /mnt/raid500; mount /dev/vgbackup/lvbackup /mnt/raid500"
This worked perfectly. I did not test failover, but everything looked fine and I could use the mount. Thought: let's see if everything comes up at boot (yes, I had edited fstab to mount /dev/vgbackup/lvbackup and set crypttab to start LUKS on raid500).
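For reference, the boot-time wiring I mean would look roughly like this (a sketch; the names raid500, vgbackup and lvbackup are the ones from the steps above, and the fields follow the usual crypttab/fstab layout):

```
# /etc/crypttab: open the LUKS container on the array before mounts run
raid500  /dev/md0  none  luks

# /etc/fstab: mount the LV that lives inside the opened container
/dev/vgbackup/lvbackup  /mnt/raid500  ext3  defaults  0  2
```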
Reboot failed. Fsck could not check raid device and would not boot. Kernel had not autodetected md0. I now know this is because superblock format 1.0 puts metadata at end of device and therefore kernel cannot autodetect.
I started a LiveCD, mounted my root lvm, removed entries from fstab/crypttab and rebooted. Reboot was now OK.
Now I tried to wipe the array so I can re-create with 0.9 metadata superblock.
I ran dd on sd[ab] for a few hundred megs, which wiped partitions. I removed /etc/mdadm.conf. I then repartitioned and rebooted. I then tried to recreate the array with:
mdadm --create --verbose /dev/md0 --raid-devices=2 --level=raid1 /dev/sd[ab]1
but it reports that the devices are already part of an array and asks do I want to continue? I say yes and it then immediately says "out of sync, resyncing existing array" (not exact words, but I suppose you get the idea).
I reboot to kill the sync and then dd again, repartition, etc., then reboot.
Now when server comes up, fdisk reports (it's the two 500GB discs that are in the array):
[root@k2 ~]# fdisk -l
Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 19 152586 83 Linux
/dev/hda2 20 9729 77995575 8e Linux LVM
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 60801 488384001 fd Linux raid autodetect
Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 38913 312568641 83 Linux
Disk /dev/md0: 500.1 GB, 500105150464 bytes
2 heads, 4 sectors/track, 122095984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
Where previously, I had /dev/sdc that was the same as /dev/sda above (ignore the 320GB, that is separate and on boot, they sometimes come up in different order).
Now, I cannot write to sda above (the 500GB disc) with commands such as dd, mdadm --zero-superblock, etc. I can write to md0 with dd, but what the heck happened to sdc? Why did it become /dev/md0?
Then I read the forum thread and ran dd with /dev/zero over the beginning and end of sda and md0 (using seek to skip the first 490GB), deleted /dev/md0, and rebooted; now I see sda but there is no sdc or md0.
I cannot see any copy of mdadm.conf in /boot and update-initramfs does not work on CentOS, but I am more used to Debian and do not know the CentOS equivalent. I do know that I have now completely dd'ed the first 10MB and last 2MB of sda and md0 and have deleted (with rm -f) /dev/md0, and now *only* /dev/sda (plus the internal hda and the extra 320GB sdb) shows up in fdisk -l: there is no md0 or sdc.
So after all that rambling, my question is:
Why did /dev/md0 appear in fdisk -l when it had previously been sda/sdb even after successfully creating my array before reboot?
How do I remove the array? Have I now done everything to remove it?
I suppose (hope) that if I go to the server and power cycle it and the eSATA discs, my sdc will probably appear again (I have not done this yet; no chance today), but why does it not appear after a soft reboot, after having dd'd /dev/md0?
andrew henry
Oracle DBA
infra solutions|ao/bas|dba
Logica
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re:
2008-05-14 12:53 (unknown), Henry, Andrew
@ 2008-05-14 21:13 ` David Greaves
[not found] ` <3ECBDC05781B3D48ABD520A01ABF2F9B12C5435703@SE-EX008.groupinfra.com>
0 siblings, 1 reply; 12+ messages in thread
From: David Greaves @ 2008-05-14 21:13 UTC (permalink / raw)
To: Henry, Andrew; +Cc: linux-raid@vger.kernel.org
Henry, Andrew wrote:
> I'm new to software RAID and this list. I read a few months of archives to see if I found answers but only partly...
OK - good idea to start with a simple setup then... oh, wait...
> 1. badblocks -c 10240 -s -w -t random -v /dev/sd[ab]
fine
> 2. parted /dev/sdX mklabel msdos ##on both drives
> 3a. parted /dev/sdX mkpart primary 0 500.1GB ##on both drives
> 3b. parted /dev/sdX set 1 raid on ##on both drives
no point setting raid type since autodetect is not needed
> 4. mdadm --create --verbose /dev/md0 --metadata=1.0 --raid-devices=2 --level=raid1 --name=backupArray /dev/sd[ab]1
a mirror - so the same data/partitions should go to /dev/sda1 /dev/sdb1
> 5. mdadm --examine --scan | tee /etc/mdadm.conf, then set 'DEVICE partitions' so that I don't hard-code any device names that may change on reboot.
hmm - on my Debian box I'd get /dev/md/backupArray as the device name I think -
I override this though
> 6. mdadm --assemble --name=mdBackup /dev/md0 ## not needed, it seems; the array is already assembled by --create.
> 7. cryptsetup --verbose --verify-passphrase luksFormat /dev/md0
> 8. cryptsetup luksOpen /dev/md0 raid500
good luck with that
> 9. pvcreate /dev/mapper/raid500
> 10. vgcreate vgbackup /dev/mapper/raid500
> 11. lvcreate --name lvbackup --size 450G vgbackup ## check PEs first with vgdisplay
and that...
Seriously, they should work fine - but not a lot of people do this kind of thing
and there may be issues layering this many device layers (eg ISTR a suggestion
that 4K stacks may not be good). Be prepared to submit bug reports and have good
backups.
> 12. mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/vgbackup/lvbackup
Well, I suppose you could have partitioned the lvm volume and used XFS and a
separate journal for maximum complexity <grin>
> 13. mkdir /mnt/raid500; mount /dev/vgbackup/lvbackup /mnt/raid500"
> This worked perfectly. I did not test failover, but everything looked fine and I could use the mount. Thought: let's see if everything comes up at boot (yes, I had edited fstab to mount /dev/vgbackup/lvbackup and set crypttab to start LUKS on raid500).
> Reboot failed.
I suspect you mean that the filesystem wasn't mounted.
Do you really mean that the machine wouldn't boot - that's bad - you may have
blatted some bootsector somewhere.
Raid admin does not need you to use dd or hack at disk partitions any more than
mkfs does.
> Fsck could not check raid device and would not boot. Kernel had not
> autodetected md0. I now know this is because superblock format 1.0 puts
> metadata at end of device and therefore kernel cannot autodetect.
Technically it's not the sb location that prevents the kernel autodetecting -
it's a design decision that only supports autodetect for v0.9
You don't need autodetect - if you wanted an encrypted lvm root fs then you'd
need an initrd anyhow.
Just make sure you're using a distro that 'does the right thing' and assembles
arrays according to your mdadm.conf at rc?.d time
(nb what distro/kernel are you using)
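A minimal /etc/mdadm.conf along those lines might look like the following (a sketch; the UUID here is made up, use whatever mdadm --examine --scan actually prints):

```
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=0a1b2c3d:4e5f6071:8293a4b5:c6d7e8f9
```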
> I started a LiveCD, mounted my root lvm, removed entries from fstab/crypttab and rebooted. Reboot was now OK.
> Now I tried to wipe the array so I can re-create with 0.9 metadata superblock.
mdadm --zero-superblock
> I ran dd on sd[ab] for a few hundred megs, which wiped partitions. I removed /etc/mdadm.conf. I then repartitioned and rebooted. I then tried to recreate the array with:
which failed since the sb is at the end of the device
http://linux-raid.osdl.org/index.php/Superblock
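(A side note on why the dd runs kept missing the metadata: the v0.90 superblock sits in the last 64KiB-aligned 64KiB block of the device, and v1.0 metadata also lives near the end. The kernel's v0.90 offset formula can be sketched in shell arithmetic; the sector count below is an assumed example:)

```shell
# v0.90 md superblock offset, in 512-byte sectors: round the device size
# down to a 64 KiB (128-sector) boundary, then step back one 64 KiB block.
dev_sectors=976773168                        # example: roughly a 500 GB disk
sb_offset=$(( (dev_sectors & ~127) - 128 ))
echo "$sb_offset"
```

So wiping only the first few hundred MB of a disc can never touch it.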
> mdadm --create --verbose /dev/md0 --raid-devices=2 --level=raid1 /dev/sd[ab]1
>
> but it reports that the devices are already part of an array and asks do I want to continue? I say yes and it then immediately says "out of sync, resyncing existing array" (not exact words, but I suppose you get the idea).
> I reboot to kill the sync and then dd again, repartition, etc., then reboot.
> Now when server comes up, fdisk reports (it's the two 500GB discs that are in the array):
This is all probably down to randomly dd'ing the disks/partitions...
>
> [root@k2 ~]# fdisk -l
>
> Disk /dev/hda: 80.0 GB, 80026361856 bytes
> 255 heads, 63 sectors/track, 9729 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/hda1 * 1 19 152586 83 Linux
> /dev/hda2 20 9729 77995575 8e Linux LVM
>
> Disk /dev/sda: 500.1 GB, 500107862016 bytes
> 255 heads, 63 sectors/track, 60801 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sda1 1 60801 488384001 fd Linux raid autodetect
>
> Disk /dev/sdb: 320.0 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 1 38913 312568641 83 Linux
Err, this ^^^ is a 320GB drive. You said 2 500Gb drives...
Mirroring them will work but it will (silently-ish) only use the first 320Gb
>
> Disk /dev/md0: 500.1 GB, 500105150464 bytes
> 2 heads, 4 sectors/track, 122095984 cylinders
> Units = cylinders of 8 * 512 = 4096 bytes
and somehow md0 is sized at 500Gb
what does /proc/mdstat say?
> Disk /dev/md0 doesn't contain a valid partition table
>
> Where previously, I had /dev/sdc that was the same as /dev/sda above (ignore the 320GB, that is separate and on boot, they sometimes come up in different order).
So what kernel/distro did you use for the liveCD/main OS?
> Now, I cannot write to sda above (the 500GB disc) with commands such as dd, mdadm --zero-superblock, etc. I can write to md0 with dd, but what the heck happened to sdc? Why did it become /dev/md0?
> Then I read the forum thread and ran dd with /dev/zero over the beginning and end of sda and md0 (using seek to skip the first 490GB), deleted /dev/md0, and rebooted; now I see sda but there is no sdc or md0.
What's /dev/sdc?
> I cannot see any copy of mdadm.conf in /boot and update-initramfs does not work on CentOS, but I am more used to Debian and do not know the CentOS equivalent. I do know that I have now completely dd'ed the first 10MB and last 2MB of sda and md0 and have deleted (with rm -f) /dev/md0, and now *only* /dev/sda (plus the internal hda and the extra 320GB sdb) shows up in fdisk -l: there is no md0 or sdc.
>
> So after all that rambling, my question is:
>
> Why did /dev/md0 appear in fdisk -l when it had previously been sda/sdb even after successfully creating my array before reboot?
fdisk -l looks at all the devices for partitions.
sdc isn't there (hardware failure?)
> How do I remove the array? Have I now done everything to remove it?
mdadm --stop
> I suppose (hope) that if I go to the server and power cycle it and the eSATA discs, my sdc will probably appear again (I have not done this yet; no chance today), but why does it not appear after a soft reboot, after having dd'd /dev/md0?
Got to admit - I'm confused....
Go and try to make a simple ext3 on a mirror of your 2 500Gb drives. No 'dd'
required.
Once you have that working try playing with mdadm.
Then encrypt it and layer ext3 on that.
I have no idea what you're trying to achieve with lvm - do you need it?
Have a good look here too: http://linux-raid.osdl.org/
David
* Re: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
[not found] ` <3ECBDC05781B3D48ABD520A01ABF2F9B12C5435703@SE-EX008.groupinfra.com>
@ 2008-05-15 14:01 ` David Greaves
2008-05-15 15:33 ` Henry, Andrew
0 siblings, 1 reply; 12+ messages in thread
From: David Greaves @ 2008-05-15 14:01 UTC (permalink / raw)
To: Henry, Andrew, LinuxRaid
Let's keep it 'on list' for the benefit of others :)
Henry, Andrew wrote:
> Well, I want RAID1 for failover, and encryption for security and lvm to be able to add devices at a later stage.
Yes, makes sense. Just a 'warning' (but that's too strong) to be aware that this
layering may help uncover some bugs :)
> Sorry, didn't mean that it will not boot at all. It boots but hangs on mounting the device I have given in fstab.
OK
>> (nb what distro/kernel are you using)
>
> Im using CentOS 5.1 x86_64 with 2.6.18-53 as OS and the LiveCD I used was Ubuntu 8.04 x86_64.
OK.
This kernel is very old wrt mainline, although I suspect the distro will have
backported many bugfixes and improvements; I have no idea which :)
>>> So after all that rambling, my question is:
>>>
>>> Why did /dev/md0 appear in fdisk -l when it had previously been sda/sdb
>> even after successfully creating my array before reboot?
>> fdisk -l looks at all the devices for partitions.
>> sdc isn't there (hardware failure?)
>
>
> Yes, it was hardware failure. The Sil controller had completely locked up on one port, probably due to all the dd'ing going on. I had to completely turn everything off and unplug cables. When I rebooted, I could then see my 2 500GB discs and my 320GB disc. Just to clarify: The 500GB discs are replacements for the single 320GB disc I have at the moment. The reason why I want to raid/dmcrypt/lvm is that I want extra security of RAID1 and I will lvm it because I plan to buy a second 320GB at a later stage and then RAID1 the two 320GBs in the same manner as above and add them to the same logical volume as the 2 500GB discs.
OK. That's not good though.
>>> How do I remove the array? Have I now done everything to remove it?
>> mdadm --stop
>
> Do I not need to do -f /dev/sda -r /dev/sda to remove them properly??
starting and stopping an array is normal operation.
Adding/removing disks is usually a recovery activity.
To 'destroy' an array you should stop it and zero the superblocks on the
component devices.
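Spelled out, that destroy sequence would be something like the following sketch (device names assumed from earlier in the thread; run against the component partitions, not the whole disks, and obviously destructive):

```shell
mdadm --stop /dev/md0                # deactivate the array
mdadm --zero-superblock /dev/sda1    # erase md metadata on each component
mdadm --zero-superblock /dev/sdb1
```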
> Ok, after power cycling it all, my 2 500GB discs came back according to fdisk -l.
good.
> Then I booted a LiveCD and dd'd sda and sdb from there, both the beginning of the device at 10MB and the last 256KB of the devices.
OK - however, random incantations of other commands are not recommended or needed
for md on its own.
> I then rebooted into CentOS and they showed up as unpartitioned devices and /proc/mdstat was empty.
OK.
> I then proceeded to create a new array with mdadm --create and it says the same thing as before: that they are already part of an array!
> I thought if I wiped the device and removed the config file it would wipe it but apparently there is something else I need to do?
Well, fixing your email client to wrap lines helps!
There is. Use --zero-superblock. Not aware of any bugs but you're on old systems
here.
> Anyway, I answered yes to the question of "do you want to continue" and it then says "out of sync, syncing discs" and *that* is when the /dev/md0 device appears when running fdisk -l, but now I can still see both sda and sdb.
OK. all as expected.
> Does /dev/md0 get registered with fdisk -l when there is an active array running?
fdisk scans the system for block devices, and when an array is running it shows
up - usually via udev nowadays, but mdadm will also create device nodes I think.
> At least I can still see the discs now. So now it's been syncing all night and it's 50% complete.
That's slow - my RAID5 takes 3hrs to do 320Gb - mirrors should be a *lot* faster.
> I start to get the feeling that I need to use mdadm to stop, set fail and remove the devices to do this properly and to not dd them!
Err, yes.
> If I let the syncing continue, so that mdadm thinks the array is OK, can I then stop and remove them properly with mdadm? How?
This is what distros are for...
You are doing a lot of things that are not needed.
> I want to wipe it all now and start again because I definitely want to autodetect on boot.
Many people are confused by this - your distro will detect and mount the array
on boot. It will then run lvm and dmcrypt over the top.
You *do not* need (and should not use) kernel autodetect. You should assemble
the array in the init scripts.
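A manual equivalent of what such init scripts do, assuming a populated /etc/mdadm.conf and the names used earlier in the thread, is roughly:

```shell
mdadm --assemble --scan                       # assemble arrays from mdadm.conf
cryptsetup luksOpen /dev/md0 raid500          # unlock the LUKS layer
vgchange -ay vgbackup                         # activate the volume group
mount /dev/vgbackup/lvbackup /mnt/raid500     # mount the filesystem
```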
David
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-15 14:01 ` (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'" David Greaves
@ 2008-05-15 15:33 ` Henry, Andrew
2008-05-15 16:04 ` Twigathy
2008-05-16 9:02 ` Henry, Andrew
0 siblings, 2 replies; 12+ messages in thread
From: Henry, Andrew @ 2008-05-15 15:33 UTC (permalink / raw)
To: David Greaves, LinuxRaid
> > At least I can still see the discs now. So now it's been syncing all
> night and it's 50% complete.
> That's slow - my RAID5 takes 3hrs to do 320Gb - mirrors should be a *lot*
> faster.
Hmmm. It's syncing at 6056k/s. dd ran at 320MB/s. worrying. Wonder if the controller is not that good. Sorry for line breaks, Outlook.
> You *do not* need (and should not use) kernel autodetect. You should
> assemble
> the array in the init scripts.
How can I stop the kernel from autodetecting? You just made me realize that this would solve my other problem: I cannot reboot my server remotely because it asks for the dmcrypt password on boot when I put a line in crypttab. Mounting everything with scripts *after* boot would let me reboot remotely. :)
Thanks a lot for the help. I'll try mdadm --stop as soon as syncing has finished (95% complete now!!!)
>
> David
* Re: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-15 15:33 ` Henry, Andrew
@ 2008-05-15 16:04 ` Twigathy
2008-05-16 7:35 ` Henry, Andrew
2008-05-16 9:02 ` Henry, Andrew
1 sibling, 1 reply; 12+ messages in thread
From: Twigathy @ 2008-05-15 16:04 UTC (permalink / raw)
To: linux-raid
Hi,
A couple of threads up the mailing list I posted about a few dodgy PCI sil
cards. I had a couple of faulty sil 3512 based cards. You might want
to invest in something a bit better, or at least swap out cables and
see if that helps.
I experienced exactly the same freezing of ports. Upgraded to a new
motherboard with lots of SATA ports onboard and all was well. About
half the cables in that machine changed too, so...yeah. Good luck!
Just my £0.02. Or $0.02. :-)
T
2008/5/15 Henry, Andrew <andrew.henry@logica.com>:
>> > At least I can still see the discs now. So now it's been syncing all
>> night and it's 50% complete.
>> That's slow - my RAID5 takes 3hrs to do 320Gb - mirrors should be a *lot*
>> faster.
>
> Hmmm. It's syncing at 6056k/s. dd ran at 320MB/s. worrying. Wonder if the controller is not that good. Sorry for line breaks, Outlook.
>
>> You *do not* need (and should not use) kernel autodetect. You should
>> assemble
>> the array in the init scripts.
>
> How can I stop the kernel from autodetecting? You just made me realize that this would solve my other problem: I cannot reboot my server remotely because it asks for the dmcrypt password on boot when I put a line in crypttab. Mounting everything with scripts *after* boot would let me reboot remotely. :)
>
> Thanks a lot for the help. I'll try mdadm --stop as soon as syncing has finished (95% complete now!!!)
>>
>> David
>
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-15 16:04 ` Twigathy
@ 2008-05-16 7:35 ` Henry, Andrew
0 siblings, 0 replies; 12+ messages in thread
From: Henry, Andrew @ 2008-05-16 7:35 UTC (permalink / raw)
To: Twigathy, linux-raid@vger.kernel.org
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Twigathy
> Sent: 15 May 2008 18:04
> To: linux-raid@vger.kernel.org
> Subject: Re: (no subject): should have read--"Regarding thread '"Deleting
> mdadm RAID arrays'".
>
> Hi,
>
> A couple of threads up the mailing list I posted about a few dodgy PCI sil
> cards. I had a couple of faulty sil 3512 based cards. You might want
> to invest in something a bit better, or at least swap out cables and
> see if that helps.
>
Cack. I usually research hardware purchases thoroughly, but in this case I needed a CardBus controller, there aren't that many to choose from, and the sil 3512 actually claimed support for Linux! First time I have ever seen a product say that, but then again it's a while since I purchased hardware.
I wonder if it is just transfers between devices on the two ports that are slow, as dd to one disk gives 320MB/s, which is good.
Anyone else know of working CardBus eSATA adapters for Linux?
--andrew
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-15 15:33 ` Henry, Andrew
2008-05-15 16:04 ` Twigathy
@ 2008-05-16 9:02 ` Henry, Andrew
2008-05-19 6:10 ` Neil Brown
1 sibling, 1 reply; 12+ messages in thread
From: Henry, Andrew @ 2008-05-16 9:02 UTC (permalink / raw)
To: David Greaves, LinuxRaid
> -----Original Message-----
>
> Thanks a lot for the help. I'll try mdadm --stop as soon as syncing has
> finished (95% complete now!!!)
>
Well, sync finished successfully.
mdadm --stop /dev/md0 # OK
mdadm --zero-superblock --force /dev/sda1 # OK
mdadm --zero-superblock --force /dev/sdb1 # OK
These return to the prompt without any messages. If I then run them a second time, they complain that the device is not part of an array. All well and good.
mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sd[ab]1 # NOT OK. Complains "already an array" and starts a new resync.
mdadm --stop /dev/md0 # stops resync :)
What else is needed? Am I unable to recreate the array on md0? Must I choose a new device such as md1? Or is there another step to erasing an array?
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-16 9:02 ` Henry, Andrew
@ 2008-05-19 6:10 ` Neil Brown
2008-05-19 14:21 ` Henry, Andrew
0 siblings, 1 reply; 12+ messages in thread
From: Neil Brown @ 2008-05-19 6:10 UTC (permalink / raw)
To: Henry, Andrew; +Cc: David Greaves, LinuxRaid
On Friday May 16, andrew.henry@logica.com wrote:
> > -----Original Message-----
> >
> > Thanks a lot for the help. I'll try mdadm --stop as soon as syncing has
> > finished (95% complete now!!!)
> >
>
> Well, sync finished successfully.
>
> mdadm --stop /dev/md0 # OK
> mdadm --zero-superblock --force /dev/sda1 # OK
> mdadm --zero-superblock --force /dev/sdb1 # OK
>
> These return to the prompt without any messages. If I then run them a second time, they complain that the device is not part of an array. All well and good.
>
> mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sd[ab]1 # NOT OK. Complains "already an array" and starts a new resync.
I wasn't paying close attention to this thread, so maybe I missed
something significant, but what exactly is the "complaint" you get
here?
>
> mdadm --stop /dev/md0 # stops resync :)
>
> What else is needed? Am I unable to recreate the array on md0? Must I choose a new device such as md1? Or is there another step to erasing an array?
Why do you feel a need to erase an array?
NeilBrown
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-19 6:10 ` Neil Brown
@ 2008-05-19 14:21 ` Henry, Andrew
2008-05-19 18:08 ` David Greaves
0 siblings, 1 reply; 12+ messages in thread
From: Henry, Andrew @ 2008-05-19 14:21 UTC (permalink / raw)
To: Neil Brown; +Cc: David Greaves, LinuxRaid
-----Original Message-----
From: Neil Brown [mailto:neilb@suse.de]
Sent: 19 May 2008 08:10
To: Henry, Andrew
Cc: David Greaves; LinuxRaid
Subject: RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
> These return to the prompt without any messages. If I then run them a second time, they complain that the device is not part of an array. All well and good.
>
> mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sd[ab]1 # NOT OK. Complains "already an array" and starts a new resync.
I wasn't paying close attention to this thread, so maybe I missed
something significant, but what exactly is the "complaint" you get
here?
mdadm was saying that the devices were already part of an array, but I have fixed that now by running dd if=/dev/zero over each whole disc to wipe it.
Now when I run:
mdadm --create --verbose /dev/md0 --raid-devices=2 --level=raid1 /dev/sd[ab]1
It replies:
mdadm: size set to 488383936K
mdadm: array /dev/md0 started.
However, when I look at mdstat I see the following:
[root@k2 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
488383936 blocks [2/2] [UU]
[>....................] resync = 0.0% (187520/488383936) finish=1301.1min speed=6250K/sec
unused devices: <none>
[root@k2 ~]#
Why does it "resync" upon creating a new array?
>
> mdadm --stop /dev/md0 # stops resync :)
>
> What else is needed? Am I unable to recreate the array on md0? Must I choose a new device such as md1? Or is there another step to erasing an array?
Why do you feel a need to erase an array?
Because I created it with version 1.0 superblock and it wasn't getting autodetected by the kernel 2.6.18-53. I want to re-create it with version 0.9 superblock.
NeilBrown
* Re: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-19 14:21 ` Henry, Andrew
@ 2008-05-19 18:08 ` David Greaves
2008-05-20 6:40 ` Henry, Andrew
0 siblings, 1 reply; 12+ messages in thread
From: David Greaves @ 2008-05-19 18:08 UTC (permalink / raw)
To: Henry, Andrew; +Cc: Neil Brown, LinuxRaid
Henry, Andrew wrote:
> Why does it "resync" upon creating a new array?
Do you remember in your first post I pointed here: http://linux-raid.osdl.org/
Well:
http://linux-raid.osdl.org/index.php/Initial_Array_Creation
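(In short, that page says the initial resync after --create is normal and the array is usable while it runs. For a brand-new mirror with no data on it, mdadm also accepts --assume-clean to skip the resync; a sketch, at your own risk:)

```shell
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 --assume-clean /dev/sd[ab]1
```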
David
* RE: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-19 18:08 ` David Greaves
@ 2008-05-20 6:40 ` Henry, Andrew
2008-05-20 7:34 ` David Greaves
0 siblings, 1 reply; 12+ messages in thread
From: Henry, Andrew @ 2008-05-20 6:40 UTC (permalink / raw)
To: David Greaves; +Cc: Neil Brown, LinuxRaid
Hi David,
Yes, I did read the howto, but maybe I read through it too fast, because the second link you posted below was not part of the main link structure, as far as I could tell, but the info in it was quite interesting. Thanks for the info.
--andrew
andrew henry
+46 (0)40-251144
-----Original Message-----
From: David Greaves [mailto:david@dgreaves.com]
Sent: 19 May 2008 20:09
To: Henry, Andrew
Cc: Neil Brown; LinuxRaid
Subject: Re: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
Henry, Andrew wrote:
> Why does it "resync" upon creating a new array?
Do you remember in your first post I pointed here: http://linux-raid.osdl.org/
Well:
http://linux-raid.osdl.org/index.php/Initial_Array_Creation
David
* Re: (no subject): should have read--"Regarding thread '"Deleting mdadm RAID arrays'".
2008-05-20 6:40 ` Henry, Andrew
@ 2008-05-20 7:34 ` David Greaves
0 siblings, 0 replies; 12+ messages in thread
From: David Greaves @ 2008-05-20 7:34 UTC (permalink / raw)
To: Henry, Andrew; +Cc: LinuxRaid
Henry, Andrew wrote:
> Hi David,
>
> Yes, I did read the howto, but maybe I read through it too fast, because the second link you posted below was not part of the main link structure, as far as I could tell, but the info in it was quite interesting. Thanks for the info.
Glad it helped.
Fair point :)
David