* RAID 5 Whole Devices - Partition
From: Michael Theodoulou @ 2006-05-30 17:08 UTC
To: linux-raid
Hello,
I am trying to create a RAID5 array out of 3 160GB SATA drives. After
I create the array, I want to partition the device into two partitions.
The system resides on a SCSI disk, and the two partitions will be used
for data storage.
The SATA host is an HPT374 device with drivers compiled in the kernel.
These are the steps I followed:
mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
/dev/hde /dev/hdi /dev/hdk
Running this command warns me that there is an ext2 filesystem on one
of the drives, even though I fdisked them beforehand and removed all
partitions. Why is this happening?
In any case, I continue with the array creation.
After initialization, 5 new devices are created in /dev:
/dev/md_d0
/dev/md_d0_p1
/dev/md_d0_p2
/dev/md_d0_p3
/dev/md_d0_p4
The problems arise when I reboot.
A device /dev/md0 seems to keep the 3 disks busy, and as a result, when
the time comes to assemble the array, I get an error that the disks are
busy. When the system boots, I cat /proc/mdstat and see that /dev/md0
is a RAID5 array made of two of the disks, and it comes up as degraded.
I can then stop the array using mdadm -S /dev/md0 and restart it using
mdadm -As, which uses the correct /dev/md_d0. Examining that shows it
is clean and OK.
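The exact sequence, for reference:

mdadm -S /dev/md0      # stop the stray, degraded array
mdadm -As              # assemble everything listed in mdadm.conf
mdadm -D /dev/md_d0    # examine the result, which reports: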
/dev/md_d0:
Version : 00.90.01
Creation Time : Tue May 30 17:03:31 2006
Raid Level : raid5
Array Size : 312581632 (298.10 GiB 320.08 GB)
Device Size : 156290816 (149.05 GiB 160.04 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue May 30 19:48:03 2006
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
0 33 0 0 active sync /dev/hde
1 56 0 1 active sync /dev/hdi
2 57 0 2 active sync /dev/hdk
UUID : 9f520781:7f3c2052:1cb5078e:c3f3b95c
Events : 0.2
Is this the expected behavior? Why doesn't the kernel ignore /dev/md0
instead of trying to use it? I tried using raid=noautodetect, but it
didn't help. I am using kernel 2.6.9.
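For completeness, this is roughly how I passed it (GRUB kernel-line
sketch; the root= device below is just a stand-in for my SCSI system
disk):

# /boot/grub/grub.conf, kernel line
kernel /vmlinuz-2.6.9 ro root=/dev/sda1 raid=noautodetect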
This is my mdadm.conf:
DEVICE /dev/hde /dev/hdi /dev/hdk
ARRAY /dev/md_d0 level=raid5 num-devices=3
UUID=9f520781:7f3c2052:1cb5078e:c3f3b95c
devices=/dev/hde,/dev/hdi,/dev/hdk auto=partition
MAILADDR myemail@mydomain.tld
Furthermore, when I fdisk the drives after all of this, I can see the
partitions on /dev/hde and /dev/hdi, but /dev/hdk shows that no
partition exists. Is this a sign of data corruption or drive failure?
Shouldn't all 3 drives show the same partition information?
fdisk /dev/hde
/dev/hde1 1 19457 156288352 fd Linux raid autodetect
fdisk /dev/hdi
/dev/hdi1 1 19457 156288321 fd Linux raid autodetect
And for fdisk /dev/hdk I get:
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
So what am I doing wrong? How can I get the expected behavior, i.e. at
boot time the RAID5 array is assembled and available as /dev/md_d0?
Thank you for your time
Michael Theodoulou
* Re: RAID 5 Whole Devices - Partition
From: Luca Berra @ 2006-05-30 17:49 UTC
To: linux-raid
On Tue, May 30, 2006 at 08:08:03PM +0300, Michael Theodoulou wrote:
>Hello,
>
>I am trying to create a RAID5 array out of 3 160GB SATA drives. After
>I create the array, I want to partition the device into two partitions.
>
>The system resides on a SCSI disk, and the two partitions will be used
>for data storage.
>The SATA host is an HPT374 device with drivers compiled in the kernel.
>
>These are the steps I followed:
>
>mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
>/dev/hde /dev/hdi /dev/hdk
>
>Running this command warns me that there is an ext2 filesystem on one
>of the drives, even though I fdisked them beforehand and removed all
>partitions.
....
>Furthermore, when I fdisk the drives after all of this, I can see the
>partitions on /dev/hde and /dev/hdi, but /dev/hdk shows that no
>partition exists. Is this a sign of data corruption or drive failure?
Are you sure you removed all partitions before creating the md array?
>Shouldn't all 3 drives show the same partition information?
The drives should not contain any partition information.
(Well, actually the first one will show an invalid partition table,
since the partition table of the mdp array is written exactly at the
beginning of the first raid disk.)
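You can check this yourself (a sketch, using the device names from
your mail):

fdisk -l /dev/md_d0   # the partition table you created on the array
fdisk -l /dev/hde     # first component: same first sector, so same table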
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: RAID 5 Whole Devices - Partition
From: Michael Theodoulou @ 2006-05-30 19:25 UTC
To: linux-raid
On 5/30/06, Luca Berra <bluca@comedia.it> wrote:
> On Tue, May 30, 2006 at 08:08:03PM +0300, Michael Theodoulou wrote:
> >Hello,
> >
> >I am trying to create a RAID5 array out of 3 160GB SATA drives. After
> >I create the array, I want to partition the device into two partitions.
> >
> >The system resides on a SCSI disk, and the two partitions will be used
> >for data storage.
> >The SATA host is an HPT374 device with drivers compiled in the kernel.
> >
> >These are the steps I followed:
> >
> >mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
> >/dev/hde /dev/hdi /dev/hdk
> >
> >Running this command warns me that there is an ext2 filesystem on one
> >of the drives, even though I fdisked them beforehand and removed all
> >partitions.
> ....
> >Furthermore, when I fdisk the drives after all of this, I can see the
> >partitions on /dev/hde and /dev/hdi, but /dev/hdk shows that no
> >partition exists. Is this a sign of data corruption or drive failure?
> Are you sure you removed all partitions before creating the md array?
I ran fdisk on each disk and deleted all partitions, wrote the
partition table to disk, removed /etc/mdadm.conf, disabled mdmonitor,
and rebooted.
>
> >Shouldn't all 3 drives show the same partition information?
> The drives should not contain any partition information.
> (Well, actually the first one will show an invalid partition table,
> since the partition table of the mdp array is written exactly at the
> beginning of the first raid disk.)
I haven't partitioned the disks; all the partitions were created after
running mdadm to create the array.
Michael
* Re: RAID 5 Whole Devices - Partition
From: Neil Brown @ 2006-05-30 23:48 UTC
To: Michael Theodoulou; +Cc: linux-raid
On Tuesday May 30, michael.theodoulou@gmail.com wrote:
> Hello,
>
> I am trying to create a RAID5 array out of 3 160GB SATA drives. After
> I create the array, I want to partition the device into two partitions.
>
> The system resides on a SCSI disk, and the two partitions will be used
> for data storage.
> The SATA host is an HPT374 device with drivers compiled in the kernel.
>
> These are the steps I followed:
>
> mdadm -Cv --auto=part /dev/md_d0 --chunk=64 -l 5 --raid-devices=3
> /dev/hde /dev/hdi /dev/hdk
>
> Running this command warns me that there is an ext2 filesystem on one
> of the drives, even though I fdisked them beforehand and removed all
> partitions. Why is this happening?
The ext2 superblock is in the second 1K of the device. The only place
fdisk writes is the first 512 bytes, so fdisk is never going to remove
the signature of an ext2 filesystem.
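If you want to be certain stale signatures are gone, zero the start of
each disk before creating the array (a sketch; 4MB is arbitrary, it
just needs to cover the old partition table and filesystem
superblocks):

for d in /dev/hde /dev/hdi /dev/hdk; do
    dd if=/dev/zero of=$d bs=1M count=4   # destructive: wipes the start of $d
done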
>
> In any case, I continue with the array creation.
This is the right thing to do.
>
> After initialization, 5 new devices are created in /dev:
>
> /dev/md_d0
> /dev/md_d0_p1
> /dev/md_d0_p2
> /dev/md_d0_p3
> /dev/md_d0_p4
>
> The problems arise when I reboot.
> A device /dev/md0 seems to keep the 3 disks busy, and as a result, when
You need to find out where that is coming from. Complete kernel logs
might help. Maybe you have an initrd which is trying to be helpful?
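One way to look (a sketch: the image name is a guess, and this assumes
a cpio-format initramfs -- an older ext2-image initrd needs
'mount -o loop' instead):

mkdir /tmp/ird && cd /tmp/ird
gunzip -c /boot/initrd-2.6.9.img | cpio -idm     # unpack the initrd
grep -r -e raidstart -e mdadm . 2>/dev/null      # anything starting arrays?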
> the time comes to assemble the array, I get an error that the disks
> are busy. When the system boots, I cat /proc/mdstat and see that
> /dev/md0 is a RAID5 array made of two of the disks, and it comes up
> as degraded.
>
> I can then stop the array using mdadm -S /dev/md0 and restart it using
> mdadm -As, which uses the correct /dev/md_d0. Examining that shows it
> is clean and OK.
>
> /dev/md_d0:
> Version : 00.90.01
> Creation Time : Tue May 30 17:03:31 2006
> Raid Level : raid5
> Array Size : 312581632 (298.10 GiB 320.08 GB)
> Device Size : 156290816 (149.05 GiB 160.04 GB)
> Raid Devices : 3
> Total Devices : 3
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Tue May 30 19:48:03 2006
> State : clean
> Active Devices : 3
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> 0 33 0 0 active sync /dev/hde
> 1 56 0 1 active sync /dev/hdi
> 2 57 0 2 active sync /dev/hdk
> UUID : 9f520781:7f3c2052:1cb5078e:c3f3b95c
> Events : 0.2
>
> Is this the expected behavior? Why doesn't the kernel ignore /dev/md0
> instead of trying to use it? I tried using raid=noautodetect, but it
> didn't help. I am using kernel 2.6.9.
Must be something else trying to start the array. Maybe a stray
'raidstart'. Maybe something in an initrd.
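Worth grepping the boot scripts too (paths are distro-dependent):

grep -rn -e raidstart -e 'mdadm -A' /etc/rc.d /etc/init.d 2>/dev/null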
>
> This is my mdadm.conf:
> DEVICE /dev/hde /dev/hdi /dev/hdk
> ARRAY /dev/md_d0 level=raid5 num-devices=3
> UUID=9f520781:7f3c2052:1cb5078e:c3f3b95c
> devices=/dev/hde,/dev/hdi,/dev/hdk auto=partition
> MAILADDR myemail@mydomain.tld
This should work provided the device names of the IDE drives never
change -- which is fairly safe. It wouldn't be safe for SCSI drives.
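If you ever need to be robust against renamed devices, drop the
devices= clause and match on UUID alone (a sketch, using the UUID from
your mail):

DEVICE partitions    # scan everything the kernel lists in /proc/partitions
ARRAY /dev/md_d0 auto=partition UUID=9f520781:7f3c2052:1cb5078e:c3f3b95c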
>
> Furthermore, when I fdisk the drives after all of this, I can see the
> partitions on /dev/hde and /dev/hdi, but /dev/hdk shows that no
> partition exists. Is this a sign of data corruption or drive failure?
> Shouldn't all 3 drives show the same partition information?
No. The drives shouldn't really have partition information at all.
The raid array has the partition information.
However, the first block of /dev/hde is also the first block of
/dev/md_d0, so it will appear to have the same partition table.
And the first block of /dev/hdk is an 'xor' of the first blocks of hdi
and hde. So if the first block of hdi is all zeros, then the first
block of /dev/hdk will have the same partition table.
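You can verify that directly (a sketch; it only holds while the first
block of hdi really is all zeros):

dd if=/dev/hde of=/tmp/hde0 bs=512 count=1
dd if=/dev/hdk of=/tmp/hdk0 bs=512 count=1
cmp /tmp/hde0 /tmp/hdk0 && echo "first sectors identical"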
> fdisk /dev/hde
> /dev/hde1 1 19457 156288352 fd Linux raid autodetect
>
> fdisk /dev/hdi
> /dev/hdi1 1 19457 156288321 fd Linux raid autodetect
When you created the partitions in /dev/md_d0, you must have set the
partition type to 'Linux raid autodetect'. You don't want to do that.
Change it to 'Linux' or whatever.
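Something like this (fdisk keystrokes shown as comments):

fdisk /dev/md_d0
#  t  - change a partition's type
#  1  - select partition 1
#  83 - hex code for 'Linux'
#  then repeat t/2/83 for the second partition, and w to write and quit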
NeilBrown