linux-raid.vger.kernel.org archive mirror
* Best Practice for Raid1 Root
@ 2004-01-14 23:43 Terrence Martin
  2004-01-15  0:06 ` Christian Kivalo
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Terrence Martin @ 2004-01-14 23:43 UTC (permalink / raw)
  To: linux-raid

Hi,

I have been meaning to post this question for a while.

On several systems I have configured a root software raid setup with two 
IDE hard drives. The systems are always some version of Red Hat. Each 
disk has its own controller and is partitioned similarly to the following, 
maybe with more partitions, but this is the minimum.

hda1 fd   100M
hda2 swap 1024M
hda3 fd   10G

hdc1 fd   100M
hdc2 swap 1024M
hdc3 fd   10G

The Raid devices would be

/dev/md0 mounted under /boot made of /dev/hda1 and /dev/hdc1
/dev/md1 mounted under / made of /dev/hda3 and /dev/hdc3
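
(For reference, arrays like these can be created with mdadm roughly as
follows; raidtools users would describe the same thing in /etc/raidtab
instead:)

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3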

The boot loader is grub and I want both /boot and / raided.

In the event of a failure of hda I would like the system to switch to 
hdc. This works fine. However, what I have had problems with is when the 
system reboots. If /dev/hda is unavailable, I no longer have a disk with 
a correctly set up boot sector. Unless I have a floppy or CDROM with a 
boot loader, the system will not come up.

So my main question is: what is the best practice to get a workable boot 
sector on /dev/hdc? How are other people making sure that their system 
remains bootable after a failure of the boot disk? Is it even 
possible with software raid and a PC BIOS? Also, when you replace /dev/hda, 
how are you getting a valid boot sector onto that disk?

I have often found grub to be problematic, so that even when I move the 
good drive to be hda, grub does not like to install itself correctly.

I am sure I am approaching this incorrectly in some way; I am just not 
sure what the right way is.

Terrence Martin
UCSD Physics


^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: Best Practice for Raid1 Root
  2004-01-14 23:43 Best Practice for Raid1 Root Terrence Martin
@ 2004-01-15  0:06 ` Christian Kivalo
  2004-01-15  0:32   ` Michael Tokarev
  2004-01-15  0:26 ` Michael Tokarev
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Christian Kivalo @ 2004-01-15  0:06 UTC (permalink / raw)
  To: linux-raid

> Hi,
>
[snip]
> The boot loader is grub and I want both /boot and / raided.
>
> In the event of a failure of hda I would like the system to switch to
> hdc. This works fine. However what I have had problems with is if the
> system reboots. If /dev/hda is unavailable I no longer have a disk with
> a boot sector set up correctly. Unless I have a floppy or CDROM with a
> boot loader the system will not come up.
>
> [snip]

hi!

one possible solution would be to use lilo. lilo is capable of booting
from md raid1 arrays. look at the lilo.conf(5) manpage for 'raid-extra-boot'
in 'GLOBAL OPTIONS'. i use it on most of my machines. AFAIK grub is not
able to boot from /dev/mdX.

hth
christian



* Re: Best Practice for Raid1 Root
  2004-01-14 23:43 Best Practice for Raid1 Root Terrence Martin
  2004-01-15  0:06 ` Christian Kivalo
@ 2004-01-15  0:26 ` Michael Tokarev
  2004-01-15  0:59   ` Terrence Martin
  2004-01-15  8:42 ` Gordon Henderson
  2004-01-18 21:58 ` Frank van Maarseveen
  3 siblings, 1 reply; 9+ messages in thread
From: Michael Tokarev @ 2004-01-15  0:26 UTC (permalink / raw)
  To: Terrence Martin; +Cc: linux-raid

Terrence Martin wrote:
> Hi,
> 
> I wanted to post this question for a while.
> 
> On several systems I have configured a root software raid setup with two 
> IDE hard drives. The systems are always some version of redhat. Each 
> disk has its own controller and is partitioned similar to the following, 
> maybe with more partitions, but this is the minimum.
> 
> hda1 fd   100M
> hda2 swap 1024M
> hda3 fd   10G
> 
> hdc1 fd   100M
> hdc2 swap 1024M
> hdc3 fd   10G
> 
> The Raid devices would be
> 
> /dev/md0 mounted under /boot made of /dev/hda1 and /dev/hdc1
> /dev/md1 mounted under / made of /dev/hda3 and /dev/hdc3

You aren't using raid1 for swap, yes?
Using two (or more) swap partitions as the equivalent of a raid0 array
(listing them all in fstab with the same priority) seems to be a
rather common setup, and indeed it works well (you get
stripe speed this way)... until one disk crashes.  In case
of a disk failure, your running system goes completely haywire,
including possible filesystem corruption and very probable data
corruption due to bad ("missing") parts of virtual memory.
It happened to us recently - we were using 2-disk systems,
mirroring everything but swap... it was not a nice lesson... ;)
 From now on, I use raid1 for swap too.  Yes, it is much
slower than several plain swap partitions, and less
efficient too, but it is much safer.

> The boot loader is grub and I want both /boot and / raided.
> 
> In the event of a failure of hda I would like the system to switch to 
> hdc. This works fine. However what I have had problems with is if the 
> system reboots. If /dev/hda is unavailable I no longer have a disk with 
> a boot sector set up correctly. Unless I have a floppy or CDROM with a 
> boot loader the system will not come up.
> 
> So my main question is what is the best practice to get a workable boot 
> sector on /dev/hdc? How are other people making sure that their system 
> remains bootable after a disk failure of the boot disk? Is it even 
> possible with software raid and PC BIOS? Also when you replace /dev/hda 
> how are you getting a valid boot sector on that disk?

The answer really depends.  There's no boot program set out there (where
a boot program set is everything from the BIOS to the OS boot loader) that
is able to deal with every kind of first (boot) disk failure.  There are 2
scenarios of disk failure.  The first is when your failed /dev/hda is dead
completely, just as if it were unplugged, so the BIOS and OS boot loader do
not even see/recognize it (from my experience this is the most common
scenario, YMMV).  The second is when your boot disk is alive but has some
bad/unreadable/whatever sectors that belong to data used during the boot
sequence, so the disk is recognized but boot fails due to read errors.

It's easy to deal with the first case (first disk dead completely).  I wasn't
able to use grub in that case, but lilo works just fine.  For that, I
use a standard MBR on both /dev/hda and /dev/hdc (your case), and install
lilo into /dev/md0 (boot=/dev/md0 in lilo.conf), making the corresponding
/dev/hd[ac]1 partitions bootable ("active").  This way, the boot sector gets
"mirrored" manually when installing the MBR, and the lilo maps are mirrored
by the raid code.  Lilo uses BIOS disk number 0x80 in the boot map for all
the disks that form /dev/md0 (regardless of their actual number) - it
treats the /dev/md0 array like a single disk.  This way, you may remove/fail
the first (or second, or 3rd in a multidisk config) disk and your system will
boot from the first disk available, provided your BIOS skips missing
disks and assigns number 0x80 to the first disk actually present.  There's one
limitation to this method: the disk layout should be exactly the same on all
disks (at least the /dev/hd[ac]1 partition placement), or else the lilo map
will be invalid on some disks and valid on others.
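
Spelled out, the recipe is roughly this (an untested outline; the kernel
path and the MBR installer vary by distribution):

  # /etc/lilo.conf fragment:
  boot=/dev/md0          # write lilo's boot sector into the raid1 array
  root=/dev/md1
  image=/boot/vmlinuz
      label=linux

  # Then mark hda1 and hdc1 active (fdisk's 'a' command on each disk),
  # put a standard MBR on both disks (e.g. Debian's install-mbr, or
  # whatever your distribution provides), and run lilo.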

But there's no good way to deal with the second scenario.  Especially since
the problem (a failed read) may happen while the BIOS reads the partition
table or MBR - a piece of code you usually can't modify/control.  Provided
the MBR is read correctly by the BIOS, loaded into memory, and the first
stage of lilo/whatever is executing, the next steps depend on the OS boot
loader (lilo, grub, ...).  It *may* know about the raid1 array it is booting
from, and try the other disks if a read from the first disk fails.  But no
currently existing linux boot loader does that, as far as I know.

So to summarize: using lilo, installing it into the raid array instead
of the MBR, and using a standard MBR to boot the machine lets you deal
with at least one disk failure scenario, while the other scenario is
problematic in all cases....

/mjt



* Re: Best Practice for Raid1 Root
  2004-01-15  0:06 ` Christian Kivalo
@ 2004-01-15  0:32   ` Michael Tokarev
  2004-01-15 12:48     ` Luca Berra
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Tokarev @ 2004-01-15  0:32 UTC (permalink / raw)
  To: Christian Kivalo; +Cc: linux-raid

Christian Kivalo wrote:
>>Hi,
>>
> 
> [snip]
> 
>>The boot loader is grub and I want both /boot and / raided.
>>
>>In the event of a failure of hda I would like the system to switch to
>>hdc. This works fine. However what I have had problems with is if the
>>system reboots. If /dev/hda is unavailable I no longer have a disk with
>>a boot sector set up correctly. Unless I have a floppy or CDROM with a
>>boot loader the system will not come up.
[]
> one possible solution would be to use lilo. lilo is capable of booting
> from md raid1 arrays. look at lilo.conf(5) manpage for 'raid-extra-boot'
> in 'GLOBAL OPTIONS'. i use it on most of my maschines. AFAIK grub is not
> able to boot from /dev/mdX.

When using raid-extra-boot, lilo treats the second disk as BIOS disk 0x81,
the 3rd disk as 0x82 and so on.  That is to say: if your first disk
fails and the BIOS substitutes the second disk in place of the first,
making it 0x80 instead of 0x81, lilo will not boot either - for exactly
the same reason that grub installed onto /dev/hdb (e.g. a disk temporarily
plugged into another machine just to make it bootable) does not want to
boot once it becomes /dev/hda again (when returned to its own machine).

Instead of using raid-extra-boot, IMHO, it is safer to install
lilo into the raid array instead of the MBR, and use a standard MBR
to boot from the active partition.  But granted, lilo's raid-extra-boot
will work with differing disk layouts, while lilo-on-raid with a
standard MBR will not.

/mjt



* Re: Best Practice for Raid1 Root
  2004-01-15  0:26 ` Michael Tokarev
@ 2004-01-15  0:59   ` Terrence Martin
  2004-01-15  1:22     ` Terrence Martin
  0 siblings, 1 reply; 9+ messages in thread
From: Terrence Martin @ 2004-01-15  0:59 UTC (permalink / raw)
  Cc: linux-raid

Thank you for the detailed post. My primary concern is the complete 
failure case, since even if there are block problems that cause a partial 
boot (and subsequent failure), a quick unplug of the disk will simulate 
the complete failure state. It is also fairly easy to document that. :)

I had not considered that grub might not be the better solution in this 
case and that the older lilo would be preferred.

While I have managed to grok some of the details of grub, it is fairly 
complex. Your technique for lilo gives me a hint, though, on what I may 
have to do to get grub to work. Of course I have lilo to fall back on.

I do have a concern that, moving forward, lilo may disappear as an option 
from RH, but it is in RHAS 3.0 so I guess I am good for a while.

Also, thank you for the tip about swap. I had not considered placing swap 
on an md device to ensure reliability. I will do that as well.

Thanks again,

Terrence





Michael Tokarev wrote:
> [snip]



* Re: Best Practice for Raid1 Root
  2004-01-15  0:59   ` Terrence Martin
@ 2004-01-15  1:22     ` Terrence Martin
  0 siblings, 0 replies; 9+ messages in thread
From: Terrence Martin @ 2004-01-15  1:22 UTC (permalink / raw)
  To: Terrence Martin; +Cc: linux-raid

Using the grub and /dev/md0 info, I did a Google search and, after reading 
a long-running Red Hat thread, found the following page.

http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html

The author details the steps to get grub properly installed on both MBRs 
in a raid set, and how to restore a drive in a raid array.

It seems to work and allows me to keep grub.
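
For the record, the trick on that page boils down to something like this
in the grub shell (my paraphrase, untested here; device names as in this
thread):

  grub> device (hd0) /dev/hdc   # tell grub to treat hdc as the first BIOS disk
  grub> root (hd0,0)            # the partition holding /boot (here hdc1)
  grub> setup (hd0)             # install grub's stage1 into hdc's MBR
  grub> quit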

Thanks again,

Terrence

Terrence Martin wrote:
> [snip]




* Re: Best Practice for Raid1 Root
  2004-01-14 23:43 Best Practice for Raid1 Root Terrence Martin
  2004-01-15  0:06 ` Christian Kivalo
  2004-01-15  0:26 ` Michael Tokarev
@ 2004-01-15  8:42 ` Gordon Henderson
  2004-01-18 21:58 ` Frank van Maarseveen
  3 siblings, 0 replies; 9+ messages in thread
From: Gordon Henderson @ 2004-01-15  8:42 UTC (permalink / raw)
  To: Terrence Martin; +Cc: linux-raid

On Wed, 14 Jan 2004, Terrence Martin wrote:

> The boot loader is grub and I want both /boot and / raided.
>
> In the event of a failure of hda I would like the system to switch to
> hdc. This works fine. However what I have had problems with is if the
> system reboots. If /dev/hda is unavailable I no longer have a disk with
> a boot sector set up correctly. Unless I have a floppy or CDROM with a
> boot loader the system will not come up.
>
> So my main question is what is the best practice to get a workable boot
> sector on /dev/hdc?

Use LILO.

Works just fine.

My standard layout on 2-disk systems these days is usually something like

  /		256MB
  swap		1GB (depends on system memory though)
  /usr		2GB
  /var		just under half of rest of disk
  /archive	rest of disk

I don't bother with a separate /boot partition. It's not needed in these
enlightened days of BIOSes that can boot from modern disks, and in
any case a small / partition is inside the 1024 cylinder limit anyway.
Disks are cheap and I can keep an N-day archive on the same system (see
the rsync documentation) which is normally re-mounted read-only and then
can be dumped to tape, etc.

I've deployed a lot of little servers and routers like this and I'm
pleased with their usability.

The runes in a lilo.conf file are:

  boot=/dev/md0
  root=/dev/md0
  raid-extra-boot=/dev/hda,/dev/hdc

I've no idea how to integrate lilo into a modern RH distribution though. I
use Debian for all my stuff.

Gordon


* Re: Best Practice for Raid1 Root
  2004-01-15  0:32   ` Michael Tokarev
@ 2004-01-15 12:48     ` Luca Berra
  0 siblings, 0 replies; 9+ messages in thread
From: Luca Berra @ 2004-01-15 12:48 UTC (permalink / raw)
  To: linux-raid

On Thu, Jan 15, 2004 at 03:32:12AM +0300, Michael Tokarev wrote:
>When using raid-extra-boot, lilo treats second disk as bios #0x81,
>3rd disk as bios #0x82 and so on.  That to say - if your first disk
>fails and BIOS will substitute second disk in place of first, making
>it 0x80 instead of 0x81, lilo will not boot too, exactly in a way
>why grub installed into /dev/hdb (that was e.g. temporarily plugged
>into another machine just to make it bootable etc) does not want to
>boot from it when it will be /dev/hda again (when returned back into
>it's own machine).
this is not true

L.

-- 
Luca Berra -- bluca@comedia.it
        Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \


* Re: Best Practice for Raid1 Root
  2004-01-14 23:43 Best Practice for Raid1 Root Terrence Martin
                   ` (2 preceding siblings ...)
  2004-01-15  8:42 ` Gordon Henderson
@ 2004-01-18 21:58 ` Frank van Maarseveen
  3 siblings, 0 replies; 9+ messages in thread
From: Frank van Maarseveen @ 2004-01-18 21:58 UTC (permalink / raw)
  To: linux-raid; +Cc: Terrence Martin

As has been pointed out, it can be troublesome to handle half-broken
disks, and it's difficult to test. I'd just boot from a USB memory
stick (or a CF card + USB reader, which is what I use). That way the
possible failure of moving parts in your system is separated from the
boot process itself. And it's easier to test.

-- 
Frank


end of thread, other threads:[~2004-01-18 21:58 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-01-14 23:43 Best Practice for Raid1 Root Terrence Martin
2004-01-15  0:06 ` Christian Kivalo
2004-01-15  0:32   ` Michael Tokarev
2004-01-15 12:48     ` Luca Berra
2004-01-15  0:26 ` Michael Tokarev
2004-01-15  0:59   ` Terrence Martin
2004-01-15  1:22     ` Terrence Martin
2004-01-15  8:42 ` Gordon Henderson
2004-01-18 21:58 ` Frank van Maarseveen
