* combining two raid systems
From: maarten @ 2005-01-13 19:17 UTC (permalink / raw)
To: linux-raid
Hi,
I'm currently combining two servers into one, and I'm trying to figure out the
safest way to do that.
System one had two md arrays: one raid-1 with the OS and a second one with
data (raid-5). It is bootable through lilo.
System two had 9 arrays: one with the OS (raid-1), two raid-1's for swap, and 6
md devices that belong in an LVM volume. This system has grub.
All md arrays are self-booting 0xFD partitions.
I want to boot off system one. I verified that it boots fine if I
disconnect all the [system-2] drives, so that's working okay.
Now when I boot I get a lilo prompt, so I know the right disk is booted by the
BIOS. But once logged in, I see only the md devices from system two, and thus
the current md0 "/" drive is from system two. Now what options do I have?
If I zero the superblock(s) (or even the whole partitions) of md0 from system
2, it obviously will not boot off of that, but which array will then become md0?
It could just as easily be the second array from system 2 as the first array
from system one, right?
I could experiment with finding the right array by using different kernel
root= command lines, but only grub gives me that possibility; lilo has no
boot-time shell (well, it has a command line...)
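To illustrate what I mean (the kernel paths and the md device number here are
just made-up examples, not my real layout):
    grub> kernel /boot/vmlinuz root=/dev/md2 ro
    grub> initrd /boot/initrd.img
    grub> boot
or, at the lilo prompt, appending the root device to an image name:
    LILO: linux root=/dev/md2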
Another thing that strikes me is that running 'mdadm --detail --scan' also
only finds the arrays from system 2. Is that expected, since it just reads
its /etc/mdadm.conf file, or should it disregard that and show all arrays?
At first glance, 'fdisk -l' does show all devices fine (there are 10 of them).
I think (er, hope, actually) that with mdadm.conf one could probably force the
machine to recognize the right drives as md0, as opposed to them being
numbered mdX arbitrarily, but is that a correct assumption? At the time the
kernel md code reads / assembles the various 0xFD partitions, the root
partition is not mounted (obviously), so reading /etc/mdadm.conf will not be
possible.
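Something like this is what I have in mind for mdadm.conf (the UUIDs below are
only placeholders, not my real ones):
    DEVICE partitions
    ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
    ARRAY /dev/md1 UUID=11111111:22222222:33333333:44444444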
I'll start to try out some things, but I _really_ want to avoid ending up with
an unbootable system: for one, this system has no CD-ROM nor floppy, and even
more importantly, I don't think my rescue media have all the necessary drivers
for the ATA & SATA cards.
Anyone have some good advice for me?
Maarten
* Re: combining two raid systems
From: maarten @ 2005-01-13 19:57 UTC (permalink / raw)
To: linux-raid
On Thursday 13 January 2005 20:17, maarten wrote:
> Hi,
>
Some more info: 'mdadm --detail --scan' still cannot find the other md
devices, so I examined the boot.log. It starts out assembling the right md
device (the one I want to be first) but then rejects it.
Sorry, I have to type this in from the console, since my booted kernel is not
the same version as the /lib/modules version on disk (it being the wrong disk
and all), so I can't load any modules, and thus have no network connectivity
there...
It sees the partitions, but it says a couple of times:
"invalid raid superblock magic on ..."
and then it continues with something like:
created md1
bind <hda3>
bind <hdc3>
bind <hde3>
bind <hdg3>
bind <hdq3>
bind <hds3>
running: <hds3><hdq3><hdg3><hde3><hdc3><hda3>
personality 4 is not loaded !
And then it stops md1 again.
The same, or an analogous, thing happens thereafter with md0, but now it says
personality 3 is not loaded !
Now I have to wonder... the personality isn't the raid level, is it? Because I
would never ever use or need raid-3 or raid-4. What's happening here?
And these two arrays, which are at least attempted at boot, won't show up later
in mdadm --detail --scan. If I run --examine on one of those partitions, it
clearly sees it as being part of an array, but --detail does not.
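(By --examine I mean something along these lines, with the device name only as
an example:
    mdadm --examine /dev/hda3
which prints the md superblock found on that partition, if any.)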
I think I may now want to remove the 0xFD flags on all the arrays of system 2
so as to disable them, but that still might not fix the missing-personalities
problem...
Well, first I'll examine the boot.log of system 1 to see whether, when booted
standalone, it says things about personalities 3 & 4 as well...
Any help greatly appreciated!
Maarten
* Re: combining two raid systems
From: maarten @ 2005-01-13 20:27 UTC (permalink / raw)
To: Bob Hillegas; +Cc: linux-raid
On Thursday 13 January 2005 20:40, Bob Hillegas wrote:
> Set the sequence of hard drives to boot from in BIOS. Once you combine
> drives on single server, the drive designations will probably change.
> Need to figure out which sdx to put at the top of list.
No, that's handled perfectly by the autodetection. My drive IDs are reassigned
all over the place (of course), but that goes well.
> Depending how you assemble array, you may at this point also need to
> tweak config file before you get the right drives assembled.
Yes, but as I said, the config is only read after all the assembling is done,
so that won't help much (chicken-and-egg problem).
> Introducing new SCSI devices into the chain is always interesting.
Indeed.
But I think I'm on the right track now anyway; I ran --zero-superblock on the
unwanted md0 and md1 members, and rebooted. At that point something
interesting happened: md1 now indeed was the right array (from system 1).
md0, however, was still from system 2. From that I gathered that the superblock
also 'knows' which md device number it has, so I had two md0's that clashed.
Knowing that, I took the risk of fdisk'ing those partitions from 0xFD to 0x83,
and that helped: I'm now booted into my system 1 OS drive.
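For the record, roughly the commands involved; the device names below are only
examples, not my actual layout:
    mdadm --zero-superblock /dev/hda3   # wipe the md superblock of an unwanted member
    fdisk /dev/hda                      # then 't', the partition number, and type 83 instead of fd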
I'll check whether everything works now... I still have to do some LVM work at
least, and tweak mdadm.conf.
Maarten
* Re: combining two raid systems
From: Derek Piper @ 2005-01-13 21:22 UTC (permalink / raw)
To: linux-raid
Maarten, I'm curious how you get on with LVM. I've been looking
around and have seen that running LVM on the root FS seems to be a Bad Idea,
and devfs seems to be scary also, so I wasn't going to even
attempt any of that. I know Robin Bowes mentioned he uses LVM in that
way, to 'carve up' a large RAID array. I'm curious whether other people do
that too and whether they've had any problems. I'm not even sure of the
maturity of the LVM stuff; anyone got any words of wisdom?
Derek
/still yet to get his feet wet in the whole RAID thing
--
Derek Piper - derek.piper@gmail.com
http://doofer.org/
* Re: combining two raid systems
From: maarten @ 2005-01-13 22:30 UTC (permalink / raw)
To: linux-raid
On Thursday 13 January 2005 22:22, Derek Piper wrote:
> Maarten, I'm curious as to how you get on with LVM. I've been looking
Tis working fine.
> around and have seen that LVM seems to be a Bad Idea to run it on the
> root FS, and devfs seems to be scary also, so I wasn't going to even
Possibly, but who said I run LVM on my root fs? I don't.
Not that I would advise against using it, but as it happens my OS is not on
LVM. Only my data is.
> attempt any of that. I know Robin Bowes mentioned he uses LVM in that
> way, to 'carve up' a large RAID array. I'm curious if other people do
> that too and if they've had any problems. I'm not even sure of the
> maturity of the LVM stuff, anyone got any words of wisdom?
LVM is still new to me, so no words of wisdom from me...
But it does work as expected so far.
> Derek
Maarten
* RE: combining two raid systems
From: Guy @ 2005-01-13 23:04 UTC (permalink / raw)
To: 'Derek Piper', linux-raid
My OS is on 2 disks.
I have /boot as a RAID1 partition (md0).
The rest of each disk is in a second RAID1 (md1).
LVM is on md1, with 2 logical volumes: "/" and swap.
I have never had any problems with the 2 boot disks and the OS on LVM.
My big array (md2) is not on LVM.
I never changed anything related to LVM, so it never really helped me in any
way.
I installed using RedHat 9.0. It was very easy to install/configure a RAID1 +
LVM setup.
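Roughly the sort of commands involved if you were to set this up by hand rather
than through the installer; the device names and sizes here are only
illustrative:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # rest of the disks
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 10G -n root vg0
    lvcreate -L 1G -n swap vg0
    mkswap /dev/vg0/swap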
Guy
* Re: combining two raid systems
From: berk walker @ 2005-01-13 23:46 UTC (permalink / raw)
To: maarten; +Cc: linux-raid
Maarten, why not just leave them as they are, and mount the secondary FS
on the primary FS?
Less drag on the power supply, and fewer eggs in one basket. If you
can't see the added array, it might be that, even in this great day of
standardisation, some boxes write to the disk differently, especially in the
first cylinder.
Just a thought -
b-
* Re: combining two raid systems
From: maarten @ 2005-01-14 1:01 UTC (permalink / raw)
To: linux-raid
On Friday 14 January 2005 00:46, berk walker wrote:
> Maarten, why not just leave them as they are, and mount the secondary FS
> on the primary FS?
Can't. That secondary server runs an app which doesn't play well with a newer
kernel, and I need that newer kernel for SATA support, so I'm stuck.
Therefore I decided to move all the drives to my current fileserver. That is no
problem at all for the PSU or physical space; I went out and bought a huge
tower and a quality 480 Watt PSU.
Besides, I want them together because that actually _saves_ electricity.
My AC power consumption meter says the machine now uses 160 VA.
That's a far cry from 480 watts, but I'm very glad it is :-)
(it is normal for a PSU to be rather over-dimensioned)
> Less drag on the power supply, and fewer eggs in the basket. If you
> can't see the added array, it might be that, even in this great day of
> standardisation, some boxes write to the disk differently, esp. in the
> 1st cyl.
Nope, I fixed that already. There were two arrays that both claimed to be md0,
and that -presumably- clashed. When I disabled the 'other' md0 [drives], the
first one magically appeared. So that's sorted out already.
No, I'm two or three stages further now; I had transmit timeouts on my NIC, a
Gbit Intel (e1000), so I upgraded the kernel _and_ ran lilo. However, lilo
being a biatch, that yielded an unbootable system (L 99 99 99 ...) :-((
I should have known better, but NOT re-running lilo is worse still. ;)
So I booted from the rescue CD and installed grub, but that did not work out
either. The problem is that my BIOS has a different idea of what the boot
drive is than my OS does. The drive on the primary onboard channel gets
listed, according to the kernel, way at the end, after all the PCI (S)ATA
controllers, and there are 3 of them. So what would've been hda gets to be
hdq (!!). Other drives also get swapped around, because the grub shell from
the rescue media believes (hd2,0) is my first accessible drive, whereas once
booted it is (hd1,0).
It's a huge mess, but I guess it goes with the (ATA) territory (?)
During all this my root fs on md0 got degraded, so I want to hot-add the two
missing devices (to make sure the grub config on all 3 is at least
consistent). But alas, my BIG array got degraded too(*), so it hot-added the
spare drive. That will take 120 minutes! Grrr. I tried to find a way to
stop that resync but found no solution; stopping the md device, at least, was
impossible. Failing the spare would've been possible, but at this point I
DON'T want to fail an active drive by mistake, so I left that alone.
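By hot-adding I mean something along these lines; the device names are made up:
    mdadm /dev/md0 --add /dev/hdc1   # re-add a missing mirror half
    cat /proc/mdstat                 # shows the resync progress and estimated time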
So I'll have to wait two hours before my md0 root drive gets synced, and only
then can I re-try whether grub finds its files. (And I'm not even sure of
that.)
Brrr. Life can be hard on poor sysadmins like us... ;-|
(*) there was no other way, as I needed one ATA channel for the rescue CD-ROM.
I forgot / omitted to remove more drives (to make the array too degraded to
start). Not that it would have been helpful; I'm trying to fix my bootloader,
so removing drives isn't too smart then: I might well remove a boot drive, or
mess with the boot order so that grub gets even more confused later on.
But people, getting a system to boot (again) when it has 10 (ten) ATA drives is
something that you _really_ want to avoid at all cost...!
Take my word for it :-((
Maarten
* Re: combining two raid systems
From: Neil Brown @ 2005-01-14 2:18 UTC (permalink / raw)
To: maarten; +Cc: linux-raid
On Thursday January 13, maarten@ultratux.net wrote:
>
> Hi,
>
> I'm currently combing two servers into one, and I'm trying to figure out the
> safest way to do that.
....
>
> All md arrays are self-booting 0xFD partitions.
...
>
> Now when I boot I get a lilo prompt, so I know the right disk is booted by the
> BIOS. When logged in, I see only the md devices from system two, and thus
> the current md0 "/" drive is from system two. Now what options do I have ?
This is exactly why I despise auto-detect (0xFD partitions). When it
works, it works well, but when it doesn't it causes real problems.
Judging from your subsequent emails you seem to have solved a lot of your
problems already, but the simplest approach would be to remove the 0xFD
setting on all partitions except those for the root filesystem.
Once you have done that, you should be able to boot fine, though of course
only the root array will be assembled.
Then something like
mdadm -E -s -c partitions
will cause mdadm to find all your arrays and report information about
them.
(mdadm -Ds, as you were trying, only reports assembled arrays. You
want '-E' to find arrays made of devices that aren't currently
assembled.)
Put this information into mdadm.conf and edit out the bits you don't
want, leaving something like:
DEVICE partitions
ARRAY /dev/md1 UUID=....
ARRAY /dev/md2 UUID=.....
etc.
i.e. each ARRAY entry lists only a device name and a UUID.
The output of "mdadm -Es.." will list some device names
twice. Naturally you will need to change this.
Then
mdadm -As
will assemble the rest of your arrays.
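As a rough end-to-end sequence (the temporary file name is only an
illustration):
mdadm -E -s -c partitions > /tmp/arrays.txt
(edit /tmp/arrays.txt down to one ARRAY line per array, sorting out the
duplicated device names)
cat /tmp/arrays.txt >> /etc/mdadm.conf
mdadm -As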
You also wondered about "personality" numbers.
A "personality" corresponds to a module that handles 1 or more raid
levels.
Personality 3 handles raid1
Personality 4 handles raid5 and raid4
see include/linux/raid/md_k.h
NeilBrown
* Re: combining two raid systems
2005-01-14 2:18 ` Neil Brown
@ 2005-01-14 3:09 ` maarten
0 siblings, 0 replies; 11+ messages in thread
From: maarten @ 2005-01-14 3:09 UTC (permalink / raw)
To: linux-raid
On Friday 14 January 2005 03:18, Neil Brown wrote:
> On Thursday January 13, maarten@ultratux.net wrote:
> > All md arrays are self-booting 0xFD partitions.
> > Now when I boot I get a lilo prompt, so I know the right disk is booted
> > by the BIOS. When logged in, I see only the md devices from system two,
> > and thus the current md0 "/" drive is from system two. Now what options
> > do I have ?
>
> This is exactly why I despise auto-detect (0xFD partitions). When it
> works, it works well, but when it doesn't it causes real problems.
Ah. I really liked it, but maybe that only stems from pre-mdadm times.
I think the howto might not reflect this, but then again it's been a while
since I looked at it. In this case it had me stumped for a while, indeed.
> You seem to have solved a lot of your problems based on subsequent
Yep, in fact, all of them. The 2-hour resync time has elapsed and grub (oh
praise) worked right off the bat this time around :-))
(Except for the Intel e1000 problem, but I postponed that eventually; I'm now
using the onboard 8139too again for a while.)
> emails, but the simplest approach would be to remove the 0xFD setting
> on all partitions except those for the root filesystem.
> Once you have done that, you should be able to boot fine, though only
> the root array will be assembled of course.
Obviously, but at that point that is not a drawback, maybe even the opposite.
It depends on whether you have /usr or /var on raid, but I have only data
arrays.
> Then something like
> mdadm -E -s -c partitions
Eh, that translates to 'mdadm --examine --scan --config=partitions', right?
(Oh wait, NOW I understand what the manpage says there...!!
I had interpreted 'partitions' as something else... Hum.)
> will cause mdadm to find all your arrays and report information about
> them.
> (mdadm -Ds, as you were trying, only reports assembled arrays. You
> want '-E' to find arrays make of devices that aren't currently
> assembled).
>
> Put this information into mdadm.conf and edit out the bits you don't
> want leaving something like:
>
> DEVICES partitions
> ARRAY /dev/md1 UUID=....
> ARRAY /dev/md2 UUID=.....
> etc
...And leave out the physical devices? In a way that makes sense, since [my]
drives get shuffled around all the time anyway, but...
So if I understand correctly, when 0xFD is no longer set, mdadm needs this
config file, but if you use 0xFD partitions the file is more or less unused?
I noticed that a lot of modern Linux distributions don't even create it...
> Then
> mdadm -As
>
> will assemble the rest of your arrays.
Oh, nice. Didn't know that.
> You also wondered about "personality" numbers.
> A "personality" corresponds to a module that handles 1 or more raid
> levels.
> Personality 3 handles raid1
> Personality 4 handles raid5 and raid4
Yes, I later realized that the md scanning happens twice. First the kernel
scans the devices, groups them, prepares to assemble and start them, and
_only_then_ realizes it has no raid-X support available. The boot process then
continues, goes on to load the initrd (which contains those raid
personalities), and the scanning starts anew, this time successfully.
Why the devices found earlier on are not started first at that point is
strange, but that might just be a race condition between the two "md0" arrays,
so it is not very important.
> NeilBrown
Thanks Neil ! It is much clearer now. :)
Maarten
* Re: combining two raid systems
From: Robin Bowes @ 2005-01-14 10:03 UTC (permalink / raw)
To: linux-raid
Guy wrote:
> I installed using RedHat 9.0. It was very easy to install/configure a RAID1
> LVM setup.
I believe that Fedora Core (from version 3 on) actually uses LVM by default.
R.
--
http://robinbowes.com