* Questions about software RAID
@ 2005-04-18 19:50 tmp
2005-04-18 20:12 ` David Greaves
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: tmp @ 2005-04-18 19:50 UTC (permalink / raw)
To: linux-raid
I read the software RAID-HOWTO, but the six questions below are still
unclear to me. I have asked around on IRC channels and it seems that I am
not the only one who is confused. Maybe the HOWTO could be updated to
clarify the items below?
1) I have a RAID-1 setup with one spare disk. A disk crashes and the
spare disk takes over. Now, when the crashed disk is replaced with a new
one, what happens to the role of the spare disk? Does it revert to its
old role as a spare disk?
If it does NOT revert to its old role, then the raidtab file will
suddenly be out of sync with reality. Is that correct?
Does the answer given here differ in, e.g., RAID-5 setups?
2) The new disk has to be manually partitioned before being used in the
array. What happens if the new partitions are larger than the other
partitions used in the array? What happens if they are smaller?
3) Must all partition types be 0xFD? What happens if they are not?
4) I guess the partitions themselves don't have to be formatted, as the
filesystem lives at the RAID level. Is that correct?
5) Removing a disk requires that I do an "mdadm -r" on all the partitions
that are involved in a RAID array. I intend to buy a hot-swap capable
controller, so what happens if I just pull out the disk without this
manual removal command?
Isn't there a more hotswap-friendly setup?
6) I know that the kernel does striping automatically if multiple
partitions are listed as swap partitions in /etc/fstab. But can it also
cope if one disk crashes? I.e., do I have to make my swap disk a
RAID setup too if I want it to survive a disk crash?
Thanks!
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: Questions about software RAID
2005-04-18 19:50 Questions about software RAID tmp
@ 2005-04-18 20:12 ` David Greaves
2005-04-18 23:12 ` tmp
2005-04-18 20:15 ` Questions about software RAID Peter T. Breuer
2005-04-18 20:50 ` Frank Wittig
2 siblings, 1 reply; 10+ messages in thread
From: David Greaves @ 2005-04-18 20:12 UTC (permalink / raw)
To: tmp; +Cc: linux-raid
tmp wrote:
>I read the software RAID-HOWTO, but the six questions below are still
>unclear to me. I have asked around on IRC channels and it seems that I am
>not the only one who is confused. Maybe the HOWTO could be updated to
>clarify the items below?
>
>
>1) I have a RAID-1 setup with one spare disk. A disk crashes and the
>spare disk takes over. Now, when the crashed disk is replaced with a new
>one, what happens to the role of the spare disk?
>
the new disk is spare; the array doesn't revert to its original state.
> Does it revert
>to its old role as a spare disk?
>
>
so no, it doesn't.
>If it does NOT revert to its old role, then the raidtab file will
>suddenly be out of sync with reality. Is that correct?
>
>
yes
raidtab is deprecated - man mdadm
>Does the answer given here differ in e.g. RAID-5 setups?
>
>
no
>
>2) The new disk has to be manually partitioned before being used in the
>array.
>
no it doesn't. You could use the whole disk (/dev/hdb).
In general, AFAIK, partitions are better as they allow automatic
assembly at boot.
> What happens if the new partitions are larger than other
>partitions used in the array?
>
nothing special - eventually, if you replace all the partitions with
bigger ones you can 'grow' the array
> What happens if they are smaller?
>
>
it won't work (doh!)
>
>3) Must all partition types be 0xFD? What happens if they are not?
>
>
no
They won't be autodetected by the _kernel_
>
>4) I guess the partitions themselves don't have to be formatted, as the
>filesystem lives at the RAID level. Is that correct?
>
>
compulsory!
>
>5) Removing a disk requires that I do an "mdadm -r" on all the partitions
>that are involved in a RAID array. I intend to buy a hot-swap capable
>controller, so what happens if I just pull out the disk without this
>manual removal command?
>
>
as far as md is concerned the disk disappeared.
I _think_ this is just like mdadm -r.
>Isn't there a more hotswap-friendly setup?
>
>
What's unfriendly?
>
>6) I know that the kernel does striping automatically if multiple
>partitions are listed as swap partitions in /etc/fstab. But can it also
>cope if one disk crashes?
>
no - striping <> mirroring
The kernel will fail to read data on the crashed disk - game over.
> I.e., do I have to make my swap disk a
>RAID setup too if I want it to survive a disk crash?
>
>
yes - a mirror, not a stripe.
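As a sketch of that mirrored-swap setup (the device names here are examples, and the commands need root on a real system):

```sh
# Build a RAID-1 mirror from the two swap partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Put the swap signature on the md device, not on the components
mkswap /dev/md1
swapon /dev/md1
```

In /etc/fstab, /dev/md1 then replaces the two individual swap partitions, so a single disk failure just degrades the mirror instead of killing swapped-out pages.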
David
* Re: Questions about software RAID
2005-04-18 19:50 Questions about software RAID tmp
2005-04-18 20:12 ` David Greaves
@ 2005-04-18 20:15 ` Peter T. Breuer
2005-04-18 20:50 ` Frank Wittig
2 siblings, 0 replies; 10+ messages in thread
From: Peter T. Breuer @ 2005-04-18 20:15 UTC (permalink / raw)
To: linux-raid
tmp <skrald@amossen.dk> wrote:
> 1) I have a RAID-1 setup with one spare disk. A disk crashes and the
> spare disk takes over. Now, when the crashed disk is replaced with a new
> one, what happens to the role of the spare disk? Does it revert to its
> old role as a spare disk?
Try it and see. Run raidsetfaulty on one disk. That will bring the
spare in. Run raidhotremove on the original. Then "replace" it
with raidhotadd.
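In mdadm terms, the same experiment looks roughly like this (the array and partition names are examples; run as root on a real array):

```sh
# Fault one active member; the spare takes over and resync starts
mdadm /dev/md0 --fail /dev/sdb1
# Remove the faulted member from the array
mdadm /dev/md0 --remove /dev/sdb1
# "Replace" the disk: the re-added partition becomes the new spare
mdadm /dev/md0 --add /dev/sdb1
# Watch the member roles and resync progress
cat /proc/mdstat
```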
> If it does NOT revert to its old role, then the raidtab file will
> suddenly be out of sync with reality. Is that correct?
Shrug. It was "out of sync" as you call it the moment the spare disk
started to be used not as a spare but as part of the array.
> Does the answer given here differ in e.g. RAID-5 setups?
No.
> 2) The new disk has to be manually partitioned before being used in the
> array. What happens if the new partitions are larger than the other
> partitions used in the array?
Bigger is fine, obviously!
> What happens if they are smaller?
They can't be used.
> 3) Must all partition types be 0xFD? What happens if they are not?
They can be anything you like. If they aren't, then the kernel
can't set them up at boot.
> 4) I guess the partitions itself doesn't have to be formated as the
> filesystem is on the RAID-level. Is that correct?
?? Sentence does not compute, I am afraid.
> 5) Removing a disk requires that I do an "mdadm -r" on all the partitions
> that are involved in a RAID array.
Does it? Well, I see that you mean "removing a disk intentionally".
> I intend to buy a hot-swap capable
> controller, so what happens if I just pull out the disk without this
> manual removal command?
The disk will error at the next access and will be faulted out of the
array.
> Isn't there a more hotswap-friendly setup?
?? Not sure what you mean. You mean, can you program the hotplug system
to do a setfaulty and remove from the array? Yes. Look at your hotplug
scripts in /etc/hotplug. But it's always going to be late whatever it
does, given that pulling the disk is the trigger!
> 6) I know that the kernel does striping automatically if multiple
> partitions are listed as swap partitions in /etc/fstab. But can it also
> cope if one disk crashes? I.e., do I have to make my swap disk a
> RAID setup too if I want it to survive a disk crash?
People have recently pointed out that raiding your swap makes sense
exactly in order to cope robustly with this eventuality. You'd have had
to raid everything ELSE on the dead disk as well, of course, so I'm not
quite as sure as everyone else that it's a truly staggeringly wonderful idea.
Peter
* Re: Questions about software RAID
2005-04-18 19:50 Questions about software RAID tmp
2005-04-18 20:12 ` David Greaves
2005-04-18 20:15 ` Questions about software RAID Peter T. Breuer
@ 2005-04-18 20:50 ` Frank Wittig
2 siblings, 0 replies; 10+ messages in thread
From: Frank Wittig @ 2005-04-18 20:50 UTC (permalink / raw)
To: tmp; +Cc: linux-raid
tmp wrote:
>2) The new disk has to be manually partitioned before being used in the
>array. What happens if the new partitions are larger than the other
>partitions used in the array? What happens if they are smaller?
>
>
there's no problem creating partitions which have exactly the same size
as the old ones.
your disks can be from a different manufacturer, have different sizes, a
different number of physical heads or anything else.
if you set up your disks to have the same geometry (heads, cylinders,
sectors) you can have partitions of exactly the same size. (the extra size
of larger disks won't be lost.)
read "man fdisk" and have a look at the parameters -C, -H and -S...
greetings,
frank
* Re: Questions about software RAID
2005-04-18 20:12 ` David Greaves
@ 2005-04-18 23:12 ` tmp
2005-04-19 6:36 ` Peter T. Breuer
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: tmp @ 2005-04-18 23:12 UTC (permalink / raw)
To: David Greaves; +Cc: linux-raid
Thanks for your answers! They led to a couple of new questions,
however. :-)
I've read "man mdadm" and "man mdadm.conf" but I certainly doesn't have
an overview of software RAID.
> yes
> raidtab is deprecated - man mdadm
OK. The HOWTO mostly describes a raidtools context, however. Is the
following correct, then?
mdadm.conf may be considered the replacement for raidtab. When mdadm
starts, it consults this file and starts the RAID arrays accordingly.
This leads to the following:
a) If mdadm starts the arrays, how can I then boot from a RAID device
(mdadm isn't started at boot)?
I don't quite get which parts of the RAID system are controlled by the
kernel and which parts are controlled by mdadm.
b) Whenever I replace disks, the runtime configuration changes. I assume
that I should manually edit mdadm.conf in order to make it correspond to
reality?
> >2) The new disk has to be manually partitioned before being used in the
> >array.
> no it doesn't. You could use the whole disk (/dev/hdb).
> In general, AFAIK, partitions are better as they allow automatic
> assembly at boot.
Is it correct that I can use whole disks (/dev/hdb) only if I make a
partitionable array and thus create the partitions UPON the raid
mechanism?
As far as I can see, partitionable arrays make disk replacements easier,
as you can just replace the disk and let the RAID software take care of
syncing the new disk with the existing partitioning. Is that correct?
You say I can't boot from such a partitionable raid array. Is that
correctly understood?
Can I "grow" a partitionable raid array if I replace the existing disks
with larger ones later?
Would you prefer manually partitioned disks, even though disk replacements
are a bit more difficult?
I guess that mdadm automatically writes persistent superblocks to all
disks?
> >3) Must all partition types be 0xFD? What happens if they are not?
> no
> They won't be autodetected by the _kernel_
OK, so it is generally a good idea to always set the partition types to
0xFD, I guess.
> >4) I guess the partitions themselves don't have to be formatted, as the
> >filesystem lives at the RAID level. Is that correct?
> compulsory!
I meant that /dev/mdX has to be formatted, not the individual
partitions. Still right?
> >5) Removing a disk requires that I do an "mdadm -r" on all the partitions
> >that are involved in a RAID array. I intend to buy a hot-swap capable
> >controller, so what happens if I just pull out the disk without this
> >manual removal command?
> as far as md is concerned the disk disappeared.
> I _think_ this is just like mdadm -r.
So I could actually just pull out the disk, insert a new one and do a
"mdadm -a /dev/mdX /dev/sdY"?
The RAID system won't detect the newly inserted disk itself?
> > I.e., do I have to make my swap disk a
> >RAID setup too if I want it to survive a disk crash?
> yes - a mirror, not a stripe.
OK. Depending on your recommendations above, I could either make it a
swap partition on a partitionable array or create an array for the swap
in the conventional way (from existing partitions).
Thanks again for your help!
Is there a HOWTO out there that is up to date and based on RAID
usage with mdadm and kernel 2.6 instead of raidtools and kernel 2.2/2.4?
I can't possibly be the only one with these newbie questions. :-)
* Re: Questions about software RAID
2005-04-18 23:12 ` tmp
@ 2005-04-19 6:36 ` Peter T. Breuer
2005-04-19 7:15 ` Luca Berra
2005-04-19 12:08 ` Don't use whole disks for raid arrays [was: Questions about software RAID] Michael Tokarev
2 siblings, 0 replies; 10+ messages in thread
From: Peter T. Breuer @ 2005-04-19 6:36 UTC (permalink / raw)
To: linux-raid
tmp <skrald@amossen.dk> wrote:
> I've read "man mdadm" and "man mdadm.conf" but I certainly don't have
> an overview of software RAID.
Then try using it instead of, or as well as, reading about it, and you
will obtain a more comprehensive understanding.
> OK. The HOWTO mostly describes a raidtools context, however. Is the
> following correct, then?
> mdadm.conf may be considered the replacement for raidtab. When mdadm
No. Mdadm (generally speaking) does NOT use a configuration file, and
that is perhaps its major difference with respect to raidtools. It's
command-line driven. You can see for yourself what the man page itself
summarises as the differences (the one about not using a configuration
file is #2 of 3):
mdadm is a program that can be used to create, manage, and monitor
MD devices. As such it provides a similar set of functionality to
the raidtools packages. The key differences between mdadm and
raidtools are:
mdadm is a single program and not a collection of programs.
mdadm can perform (almost) all of its functions without having
a configuration file and does not use one by default. Also mdadm
helps with management of the configuration file.
mdadm can provide information about your arrays (through Query,
Detail, and Examine) that raidtools cannot.
> starts it consults this file and starts the raid arrays correspondingly.
No. As far as I am aware, the config file contains such details of
existing raid arrays as may conveniently be discovered during a
physical scan, and as such contains only redundant information that at
most may save the cost of a physical scan during such operations as may
require it.
Feel free to correct me!
> This leads to the following:
Then I'll ignore it :-).
> Is it correct that I can use whole disks (/dev/hdb) only if I make a
> partitionable array and thus create the partitions UPON the raid
> mechanism?
Incomprehensible, I am afraid. You can use either partitions or whole
disks in a raid array.
> As far as I can see, partitionable arrays makes disk replacements easier
Oh - you mean that the partitions can be recognized at bootup by the
kernel.
> You say I can't boot from such a partitionable raid array. Is that
> correctly understood?
Partitionable? Or partitioned? I'm not sure what you mean.
You would be able to boot via lilo from a partitioned RAID1 array, since
all lilo requires is a block map of where to read the kernel image from,
and either component of the RAID1 would do, and I'm sure that lilo has
been altered to allow the use of both/either component's blockmap during
its startup routines.
I don't know if grub can boot from a RAID1 array but it strikes me as
likely since it would be able to ignore the raid1-ness and boot
successfully just as though it were a (pre-raid-aware) lilo.
> Can I "grow" a partitionable raid array if I replace the existing disks
> with larger ones later?
Partitionable? Or partitioned? If you grew the array you would be
extending it beyond the last partition. The partition table itself is in
sector zero, so it is not affected. You would presumably next change
the partitions to take advantage of the increased size available.
> Would you prefer manually partitioned disks, even though disk replacements
> are a bit more difficult?
I don't understand.
> I guess that mdadm automatically writes persistent superblocks to all
> disks?
By default, yes.
> I meant that /dev/mdX has to be formatted, not the individual
> partitions. Still right?
I'm not sure what you mean. You mean "/dev/mdXy" by "individual
partitions"?
> So I could actually just pull out the disk, insert a new one and do a
> "mdadm -a /dev/mdX /dev/sdY"?
You might want to check that the old has been removed as well as faulted
first. I would imagine it is "only" faulted. But it doesn't matter.
> The RAID system won't detect the newly inserted disk itself?
It obeys commands. You can program the hotplug system to add it in
automatically.
> Is there a HOWTO out there that is up to date and based on RAID
> usage with mdadm and kernel 2.6 instead of raidtools and kernel 2.2/2.4?
What there is seems fine to me if you can use the mdadm equivalents
instead of raidhotadd, raidsetfaulty, raidhotremove and mkraid.
The config file is not needed.
Peter
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Questions about software RAID
2005-04-18 23:12 ` tmp
2005-04-19 6:36 ` Peter T. Breuer
@ 2005-04-19 7:15 ` Luca Berra
2005-04-19 8:08 ` David Greaves
2005-04-19 12:08 ` Don't use whole disks for raid arrays [was: Questions about software RAID] Michael Tokarev
2 siblings, 1 reply; 10+ messages in thread
From: Luca Berra @ 2005-04-19 7:15 UTC (permalink / raw)
To: linux-raid
On Tue, Apr 19, 2005 at 01:12:16AM +0200, tmp wrote:
>mdadm.conf may be considered as the replacement for raidtab. When mdadm
>starts it consults this file and starts the raid arrays correspondingly.
>This leads to the following:
yes, and no
mdadm does not need a configuration file, but having one helps.
check
http://cvs.mandrakesoft.com/cgi-bin/cvsweb.cgi/SPECS/mdadm/raidtabtomdadm.sh
for a script to convert from an existing raidtab to mdadm.conf
>a) If mdadm starts the arrays, how can I then boot from a RAID device
>(mdadm isn't started upon boot)?
>I don't quite get which parts of the RAID system are controled by the
>kernel and which parts are controled by mdadm.
the best choice is having an initrd containing mdassemble (part of
mdadm) and the configuration file.
the second-best choice is using the kernel command line to assemble
the raid array.
the last choice is using 0xFD partitions and in-kernel autodetection.
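For the second choice, a sketch of the legacy md= boot parameter (device names are examples; see the kernel's md documentation for the exact syntax your kernel supports):

```
# appended to the kernel command line, e.g. in lilo.conf or grub's menu.lst:
# assemble md0 from two components and use it as the root device
root=/dev/md0 md=0,/dev/sda1,/dev/sdb1
```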
>b) Whenever I replace disks, the runtime configuration changes. I assume
>that I should manually edit mdadm.conf in order to make it correspond to
>reality?
no, the mdadm configuration file only contains information on how to
identify the raid components, not their status. if you only use the UUID
to identify the array you will be able to find it whatever you do to it.
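A minimal mdadm.conf along those lines might look like this (the UUID is a made-up example; take the real one from `mdadm --detail /dev/md0` or `mdadm --examine --scan`):

```
# /etc/mdadm.conf
# consider every partition as a potential array component
DEVICE partitions
# identify the array by UUID only - survives disk replacement and renaming
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
# where "mdadm --monitor" sends failure mail
MAILADDR root
```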
>> >2) The new disk has to be manually partitioned before being used in the
>> >array.
>> no it doesn't. You could use the whole disk (/dev/hdb).
>> In general, AFAIK, partitions are better as they allow automatic
>> assembly at boot.
>
>Is it correct that I can use whole disks (/dev/hdb) only if I make a
>partitionable array and thus create the partitions UPON the raid
>mechanism?
no, you can use a whole disk as a whole disk, there is no law that you
have to partition it. usually you do because it is easier to manage, but
you could use LVM instead of partitions.
>As far as I can see, partitionable arrays make disk replacements easier,
>as you can just replace the disk and let the RAID software take care of
>syncing the new disk with the existing partitioning. Is that correct?
layering the partitions above the raid array is easier to manage.
>You say I can't boot from such a partitionable raid array. Is that
>correctly understood?
why not?
>Can I "grow" a partitionable raid array if I replace the existing disks
>with larger ones later?
yes, you will have free (non partitioned) space at the end.
>Would you prefer manual partitioned disks, even though disk replacements
>are a bit more difficult?
YMMV
>I guess that mdadm automatically writes persistent superblocks to all
>disks?
unless you tell it not to, when creating an array with mdadm it writes a
persistent superblock.
>> >3) Must all partition types be 0xFD? What happens if they are not?
>> no
>> They won't be autodetected by the _kernel_
>
>OK, so it is generally a good idea to always set the partition types to
>0xFD, I guess.
many people find it easier to understand if raid partitions are set to
0xFD. kernel autodetection is broken and should not be relied upon.
>> >4) I guess the partitions themselves don't have to be formatted, as the
>> >filesystem lives at the RAID level. Is that correct?
>> compulsory!
>
>I meant, the /dev/mdX has to be formatted, not the individual
>partitions. Still right?
compulsory! if you do anything on the individual components you'll damage data.
>> >5) Removing a disk requires that I do an "mdadm -r" on all the partitions
>> >that are involved in a RAID array. I intend to buy a hot-swap capable
>> >controller, so what happens if I just pull out the disk without this
>> >manual removal command?
>> as far as md is concerned the disk disappeared.
>> I _think_ this is just like mdadm -r.
i think it will be marked faulty, not removed.
>So I could actually just pull out the disk, insert a new one and do a
>"mdadm -a /dev/mdX /dev/sdY"?
>The RAID system won't detect the newly inserted disk itself?
no, think of it as flexibility. if you want you can build something
using the "hotplug" subsystem.
...
>Is there a HOWTO out there that is up to date and based on RAID
>usage with mdadm and kernel 2.6 instead of raidtools and kernel 2.2/2.4?
>I can't possibly be the only one with these newbie questions. :-)
one last word:
never trust howtos (they should be called "howidids"); they have a
tendency to apply to the author's configuration, not yours.
general documentation is far more accurate.
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: Questions about software RAID
2005-04-19 7:15 ` Luca Berra
@ 2005-04-19 8:08 ` David Greaves
2005-04-19 12:18 ` Michael Tokarev
0 siblings, 1 reply; 10+ messages in thread
From: David Greaves @ 2005-04-19 8:08 UTC (permalink / raw)
To: linux-raid
Luca Berra wrote:
> many people find it easier to understand if raid partitions are set to
> 0xFD. kernel autodetection is broken and should not be relied upon.
Could you clarify what is broken?
I understood that it was simplistic (i.e. if you have a raid0 built over
a raid5, or something exotic, then it may have problems) but essentially
worked.
Could it be :
* broken for complex raid on raid
* broken for root devices
* fine for 'simple', non-root devices
>
>
>>> >4) I guess the partitions themselves don't have to be formatted, as the
>>> >filesystem lives at the RAID level. Is that correct?
>>> compulsory!
>>
>>
>> I meant, the /dev/mdX has to be formatted, not the individual
>> partitions. Still right?
>
> compulsory! if you do anything on the individual components you'll
> damage data.
>
>>> >5) Removing a disk requires that I do an "mdadm -r" on all the
>>> partitions
>>> >that are involved in a RAID array. I intend to buy a hot-swap capable
>>> >controller, so what happens if I just pull out the disk without this
>>> >manual removal command?
>>> as far as md is concerned the disk disappeared.
>>> I _think_ this is just like mdadm -r.
>>
> i think it will be marked faulty, not removed.
yep - you're right, I remember now.
You have to mdadm -r remove it and re-add it once you restore the disk.
>
>> So I could actually just pull out the disk, insert a new one and do a
>> "mdadm -a /dev/mdX /dev/sdY"?
>> The RAID system won't detect the newly inserted disk itself?
>
> no, think of it as flexibility. if you want you can build something
> using the "hotplug" subsystem.
or:
no, it would be mighty strange if the raid subsystem just grabbed every
new disk it saw...
Think of what would happen when I insert my camera's compact flash card
and it suddenly gets used as a hot spare <grin>
I'll leave Luca's last word - although it's also worth re-reading Peter's
first words!!
David
> one last word:
> never trust howtos (they should be called howidid), they have the
> tendency to apply to the author configuration, not yours.
> general documentation is far more accurate.
* Don't use whole disks for raid arrays [was: Questions about software RAID]
2005-04-18 23:12 ` tmp
2005-04-19 6:36 ` Peter T. Breuer
2005-04-19 7:15 ` Luca Berra
@ 2005-04-19 12:08 ` Michael Tokarev
2 siblings, 0 replies; 10+ messages in thread
From: Michael Tokarev @ 2005-04-19 12:08 UTC (permalink / raw)
To: tmp; +Cc: David Greaves, linux-raid
A followup about one single question.
tmp wrote:
[]
> Is it correct that I can use whole disks (/dev/hdb) only if I make a
> partitionable array and thus create the partitions UPON the raid
> mechanism?
Just don't use whole disks for md arrays, *especially* if you want
to create partitions inside the array. Instead, create a single
partition (/dev/hdb1) - you will waste the first sector on the disk,
but will be much safer. The reason is trivial:
The Linux raid subsystem is designed to leave almost the whole underlying
device, from its very beginning to almost the end, for the data; it
stores its superblock (metadata) at the *end* of the
device (this way, you can mount e.g. a single component of your
raid1 array without the md layer at all, for recovery purposes).
Whether you use the whole disk, /dev/hdb, for the raid array
or not, the kernel will still look at the partition table on the disk.
This table is at the very beginning of it. If the md array is on the
whole disk, the very beginning of the disk is the same as the very
beginning of the array. So the kernel may recognize something written
to the start of the array as a partition table, and "activate"
all the /dev/hdbN devices.
This is especially the case when you create partitions *inside* the
array (md1p1 etc.) -- the same partition table (now a valid one) will
be seen in /dev/hdb itself *and* in /dev/md1.
Now, when the kernel has recognized and activated partitions this way,
the partitions will physically reside somewhere inside the array.
For one, it is unsafe to access the partitions, obviously, and
the kernel will not warn about or deny your accesses.
But it is worse. Suppose you're assembling your arrays by searching
all devices for superblocks. The device you want is /dev/hdb,
but the kernel recognized partitions on it, and now the superblock is
at the end of both /dev/hdb and the last partition on it, say,
/dev/hdb4 -- you're lucky if your raid assembly tools pick
up the right one... (Ok ok, the same applies to normal partitions
as well: if your last partition is part of a raid array, it is always
an ambiguous choice: the last partition or the whole disk?)
Also suppose you later want to boot from this drive, e.g.
because your real boot drive failed - you will have to actually
move your data off by a single sector to free the room for a real
partition table...
To summarize: don't leave the kernel with more than one choice.
It's trivial to avoid the whole issue, along with other possible
bad sides as yet unknown to me, by just creating a single partition on
the drive and being done with it, once and forever.
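A sketch of that recipe (disk names are examples; sfdisk's "start,size,type" input with empty fields takes the defaults, i.e. one partition spanning the whole disk):

```sh
# one full-disk partition of type 0xFD instead of using the raw device
echo ',,fd' | sfdisk /dev/hdb
# then build the array from partitions, never from whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
```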
/mjt
* Re: Questions about software RAID
2005-04-19 8:08 ` David Greaves
@ 2005-04-19 12:18 ` Michael Tokarev
0 siblings, 0 replies; 10+ messages in thread
From: Michael Tokarev @ 2005-04-19 12:18 UTC (permalink / raw)
To: David Greaves; +Cc: linux-raid
David Greaves wrote:
> Luca Berra wrote:
>
>> many people find it easier to understand if raid partitions are set to
>> 0xFD. kernel autodetection is broken and should not be relied upon.
>
> Could you clarify what is broken?
> I understood that it was simplistic (i.e. if you have a raid0 built over
> a raid5, or something exotic, then it may have problems) but essentially
> worked.
> Could it be :
> * broken for complex raid on raid
> * broken for root devices
> * fine for 'simple', non-root devices
It works when everything works. If something does not work (your disk
died, you moved disks, or especially you added another disk from another
machine which was also part of (another) raid array), every bad
thing can happen: from mere inability to assemble the array at all,
to using the wrong disks/partitions, to assembling the wrong
array (the one from the other machine). If it's your root device
you're trying to assemble, recovery involves booting from a rescue
CD and cleaning things up, which can be problematic at times.
/mjt