* Problem migrating to bootable RAID1 on Debian
@ 2006-01-02 4:08 John Stoffel
2006-01-03 3:43 ` SATA performance Paul Aviles
From: John Stoffel @ 2006-01-02 4:08 UTC (permalink / raw)
To: linux-raid; +Cc: john
Hi all,
I've been working on getting my heavily upgraded Debian distro to have
mirrored /, /boot, /usr, /var and swap partitions. My /home and
/local are already built on LVM2 volumes on top of a pair of mirrored
120 GB disks (md0). I just fixed the RAID autodetection for that MD
volume by changing the partition from LVM to Raid Autodetect. Duh...
Now I don't need anything in /etc/mdadm/mdadm.conf that I can see.
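Even if autodetect makes mdadm.conf unnecessary here, capturing the array
definitions is cheap insurance; a sketch of the usual incantation (the
config path matches Debian's mdadm packaging; run as root):

```shell
# Record the running arrays' UUIDs so mdadm can still assemble them if
# partition-type autodetect is ever lost. A sketch; adjust paths to taste.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat   # sanity-check which arrays are actually running
```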
In any case, I'm following the instructions given on:
http://www.debian-administration.org/articles/238
which is pretty decent, but does gloss over some details, such as
debugging boot problems and testing. :] And of course, when you try
to convert a single disk into a mirrored bootable disk, it's sometimes
painful. No lost data yet... but I'd hate to have to rebuild.
Details:
I've got a Dell Precision 610, dual 550 MHz PIII Xeon processors, 768 MB
of RAM. Built-in SCSI controllers, with a pair of 18 GB SCSI disks as
/dev/sda and /dev/sdb. Currently /dev/sda# is what I'm using. I've
also got an HPT302 controller with the pair of 120 GB disks for data,
using GRUB to manage booting. No other OSes installed on here. Now
running kernel 2.6.15-rc7 on it. Ran 2.6.15-rc1 for about a month
without problems. Software versions:
grub 0.97
mdadm v1.12.0
LVM version: 2.02.01 (2005-11-23)
One thing I did was to leave the /etc/fstab on my original /dev/sda
disk alone, so that it referred to the single-disk partitions. I
figured this way I could recover if the bootable RAID1 failed. I know
that the BIOS will use the SCSI disk sda for booting, which is ok, I
just want it to point to sdb when booting, etc.
But now whenever I boot up the system using EITHER of my grub menu.lst
definitions for the RAID1 setup, the system hangs, because it can't
start md2 (my / filesystem, on /dev/sdb2 only right now) for some
reason. Here's what I get on the console screen (and yes, I'll get a
serial console setup sometime soon as well...) copied down by hand,
though skipping the boring stuff. Note, it does find and start md2,
but it can't use it for some reason...
.
.
.
md: considering sdb2 ...
md: adding sdb2 ...
md: sdb1 has different UUID to sdb2
md: hdg1 has different UUID to sdb2
md: hde1 has different UUID to sdb2
md: created md2
md: bind<sdb2>
md: running: <sdb2>
raid1: raid set md2 active with 1 out of 2 mirrors
.
.
.
md: ... autorun DONE.
md: Loading md2: /dev/sdb2
md: couldn't update array info. -22
md: could not bd_claim sdb2.
md: md_import_device returned -16
md: starting md2 failed
EXT3-fs: mounted filesystem with ordered data mode.
VFS: mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 240k freed.
kjournald starting. Commit interval 5 seconds.
Warning: unable to open initial console
input: ImPS/2 Generic Wheel Mouse as /class/input/input1
usb 1-3: new high speed USB device using ehci_hcd and address 3
< hangs here >
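One reading of that console output (a guess from the errno values, not
something confirmed later in the thread): -22 is -EINVAL and -16 is
-EBUSY. The autodetect pass ("autorun DONE") has already assembled md2
from the type-0xFD partition and claimed sdb2, so when the md=2,...
kernel parameter is processed afterwards it tries to start the same
array again and cannot claim the disk. If that's right, a stanza that
leans on autodetect alone and drops the md= argument would avoid the
double start; a hypothetical sketch:

```shell
# Hypothetical menu.lst stanza (not from the thread): rely purely on
# 0xFD partition autodetect so the kernel assembles md2 once, and pass
# no md= parameter that would try to start it a second time.
title  2.6.15-rc7-raid1-autodetect
root   (hd0,0)
kernel /vmlinuz-2.6.15-rc7 root=/dev/md2 ro console=tty0
boot
```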
My grub /boot/grub/menu.lst for sda just has:
title 2.6.15-rc7-raid1-sda
root (hd0,0)
kernel /vmlinuz-2.6.15-rc7 root=/dev/md2 md=2,/dev/sda2,/dev/sdb2 ro console=tty0
boot
title 2.6.15-rc7-raid1-backup-sda
root (hd1,0)
kernel /vmlinuz-2.6.15-rc7 root=/dev/md2 md=2,/dev/sdb2 ro console=tty0
boot
title 2.6.15-rc7
root (hd0,0)
kernel /vmlinuz-2.6.15-rc7 root=/dev/sda2 ro console=tty0
boot
And I can boot off the third one without any problems. But when I try
to boot off the first two, it hangs as shown above. The strange thing
is that I think the second stanza there should let me boot off the
second drive, even though grub is reading its MBR off the first
drive. And then it should use a root on /dev/md2 (/boot is /dev/md1)
with the following fstab entries, which just get their comments
swapped so that all the md partitions are mounted instead.
/dev/sda2 / ext3 errors=remount-ro 0 1
#/dev/md2 / ext3 errors=remount-ro 0 1
/dev/sda5 /var ext3 defaults 0 2
#/dev/md5 /var ext3 defaults 0 2
/dev/sda1 /boot ext3 defaults 0 2
#/dev/md1 /boot ext3 defaults 0 2
/dev/sda6 /usr ext3 defaults 0 2
#/dev/md6 /usr ext3 defaults 0 2
/dev/sda3 none swap sw 0 0
#/dev/md3 none swap sw,pri=1 0 0
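Before flipping those commented lines over, it's worth confirming each
mirror really has the state you expect; a sketch (device names as in
the fstab above; the --add line assumes sda's partitions have been
retyped and freed up for the arrays):

```shell
# Verify array health before trusting the md-based fstab lines.
cat /proc/mdstat                # each mdN should show [UU], not [_U]
mdadm --detail /dev/md2         # member disks, sync state, UUID
mdadm /dev/md2 --add /dev/sda2  # attach the first half once it's free
```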
The one time I had /dev/sda setup to use the /dev/md# partitions for
my /, /boot, ... filesystems, the system did boot up, and did seem to
be mounting them properly, but I was freaked out that I couldn't get
the second grub stanza to work properly. What use is a redundant
setup if you can't make it work off either side of the mirror?
Some more details:
Kernel .config:
#
# Multi-device support (RAID and LVM)
#
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
# CONFIG_MD_RAID10 is not set
CONFIG_MD_RAID5=y
# CONFIG_MD_RAID6 is not set
# CONFIG_MD_MULTIPATH is not set
# CONFIG_MD_FAULTY is not set
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=m
# CONFIG_DM_SNAPSHOT is not set
# CONFIG_DM_MIRROR is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
Thanks,
John
* SATA performance
2006-01-02 4:08 Problem migrating to bootable RAID1 on Debian John Stoffel
@ 2006-01-03 3:43 ` Paul Aviles
2006-01-03 4:47 ` Ross Vandegrift
2006-01-03 8:36 ` Andargor The Wise
From: Paul Aviles @ 2006-01-03 3:43 UTC (permalink / raw)
To: linux-raid
Are SATA drives similar in performance to IDE drives? I have tested
Barracudas 7200.0 (500 GB) and WD too on the same type of servers (more
than 1 unit) and what I am getting is painfully slow in terms of
read/writes. Anyone out there with similar experience, or are these
just my isolated results?
Thanks
Paul Aviles
* Re: SATA performance
2006-01-03 3:43 ` SATA performance Paul Aviles
@ 2006-01-03 4:47 ` Ross Vandegrift
2006-01-03 5:04 ` Konstantin Olchanski
2006-01-03 5:35 ` debian
2006-01-03 8:36 ` Andargor The Wise
From: Ross Vandegrift @ 2006-01-03 4:47 UTC (permalink / raw)
To: Paul Aviles; +Cc: linux-raid
On Mon, Jan 02, 2006 at 10:43:47PM -0500, Paul Aviles wrote:
> Are SATA drives similar in performance to IDE drives? I have tested
> Barracudas 7200.0 (500 GB) and WD too on the same type of servers
> (more than 1 unit) and what I am getting is painfully slow in terms
> of read/writes. Anyone out there with similar experience, or are
> these just my isolated results?
Probably depends on your controller and configuration. If it's an
integrated controller, boot into your BIOS and make sure that any kind
of "Native/Legacy Mode" option is set to "Native mode".
I don't know what the difference is, but I've seen Legacy mode boxes
crawl...
--
Ross Vandegrift
ross@lug.udel.edu
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
* Re: SATA performance
2006-01-03 4:47 ` Ross Vandegrift
@ 2006-01-03 5:04 ` Konstantin Olchanski
2006-01-03 5:35 ` debian
From: Konstantin Olchanski @ 2006-01-03 5:04 UTC (permalink / raw)
To: Ross Vandegrift; +Cc: Paul Aviles, linux-raid
On Mon, Jan 02, 2006 at 11:47:59PM -0500, Ross Vandegrift wrote:
> On Mon, Jan 02, 2006 at 10:43:47PM -0500, Paul Aviles wrote:
> > Are SATA drives similar in performance to IDE drives? I have
> > tested Barracudas 7200.0 (500 GB) and WD too on the same type of
> > servers (more than 1 unit) and what I am getting is painfully slow
> > in terms of read/writes.
In modern systems, disk performance is dominated by the physical disk
characteristics: rotation speed and seek times (ignoring speedups from
Linux-level, controller-level and disk-level i/o reordering and caching).
So you should (and we do) see almost identical performance between
similar PATA and SATA disks. For bulk data streaming, you should see
30-60 Mbytes/sec for a single disk.
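Those streaming figures are easy to verify directly; a rough sketch
(/dev/sda is a placeholder -- point it at an otherwise idle disk, run
as root):

```shell
# Raw sequential read rate; ~30-60 MB/s would match a healthy 7200 rpm
# disk of this era, while ~3 MB/s suggests a non-DMA/PIO fallback.
hdparm -t /dev/sda
dd if=/dev/sda of=/dev/null bs=1M count=1024
```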
The only performance-degrading problems I have seen are a) PATA
disks running in non-DMA modes (bulk data streaming rate = 3 Mbytes/sec)
and b) barely-readable sectors on many new high-density disks (disk-level
read retries, takes seconds to read one sector, kills performance).
Problem (b) is quite evil and hard to diagnose: it seems to be temperature
dependent, it is not reported by SMART, and it is not reported
by Linux (unless it is so bad that you get read timeouts). For RAID sets,
it causes erratic performance.
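Problem (b) can sometimes be smoked out by timing a full surface read
alongside the drive's own self-test; a sketch (smartctl from
smartmontools, assumed installed; /dev/sda is a placeholder):

```shell
# A marginal drive will stall for seconds mid-read even while SMART
# still reports an overall PASSED status.
smartctl -t long /dev/sda              # start the drive's surface scan
smartctl -l selftest /dev/sda          # later: inspect the self-test log
time dd if=/dev/sda of=/dev/null bs=1M # watch for multi-second stalls
```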
--
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
* Re: SATA performance
2006-01-03 4:47 ` Ross Vandegrift
2006-01-03 5:04 ` Konstantin Olchanski
@ 2006-01-03 5:35 ` debian
From: debian @ 2006-01-03 5:35 UTC (permalink / raw)
To: Paul Aviles; +Cc: linux-raid
On Mon, Jan 02, 2006 at 11:47:59PM -0500, Ross Vandegrift wrote:
> On Mon, Jan 02, 2006 at 10:43:47PM -0500, Paul Aviles wrote:
> > Are SATA drives similar in performance to IDE drives? I have tested
> > Barracudas 7200.0 (500 GB) and WD too on the same type of servers
> > (more than 1 unit) and what I am getting is painfully slow in terms
> > of read/writes. Anyone out there with similar experience, or are
> > these just my isolated results?
>
> Probably depends on your controller and configuration. If it's an
> integrated controller, boot into your BIOS and make sure that any kind
> of "Native/Legacy Mode" option is set to "Native mode".
>
> I don't know what the difference is, but I've seen Legacy mode boxes
> crawl...
I believe it toggles AHCI, which should give a performance boost if
the reads are random and your disks (and chipset?) support NCQ.
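Whether NCQ is actually active can be read out of sysfs on libata
systems; a sketch (the sda path is an assumption -- substitute your
SATA disk):

```shell
# Queue depth 1 means no NCQ; 31 is typical when AHCI + NCQ are in use.
cat /sys/block/sda/device/queue_depth
dmesg | grep -i ncq   # libata logs the negotiated NCQ depth at probe time
```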
* Re: SATA performance
2006-01-03 3:43 ` SATA performance Paul Aviles
2006-01-03 4:47 ` Ross Vandegrift
@ 2006-01-03 8:36 ` Andargor The Wise
2006-01-03 18:50 ` Dan Stromberg
From: Andargor The Wise @ 2006-01-03 8:36 UTC (permalink / raw)
To: Paul Aviles, linux-raid
--- Paul Aviles <paul.aviles@palei.com> wrote:
> Are SATA drives similar in performance to IDE drives? I have tested
> Barracudas 7200.0 (500 GB) and WD too on the same type of servers
> (more than 1 unit) and what I am getting is painfully slow in terms
> of read/writes. Anyone out there with similar experience, or are
> these just my isolated results?
>
> Thanks
>
> Paul Aviles
Don't know about IDE, but I did some SATA tests on RAID-5, if it can
give you an indication:

http://www.andargor.com/raid5.html

BTW, EVMS will cut your performance by about 50% (the tests were run
with straight MD, no EVMS). I don't know about LVM, but that might add
some overhead as well...
Andargor
* Re: SATA performance
2006-01-03 8:36 ` Andargor The Wise
@ 2006-01-03 18:50 ` Dan Stromberg
2006-01-04 4:04 ` Andargor The Wise
From: Dan Stromberg @ 2006-01-03 18:50 UTC (permalink / raw)
To: Andargor The Wise; +Cc: Paul Aviles, linux-raid, strombrg
Supposedly current versions of EVMS and LVM sit on top of the same
"device mapper" kernel component... Given this, I have to wonder if
the situation hasn't changed since it was last evaluated.
On Tue, 2006-01-03 at 00:36 -0800, Andargor The Wise wrote:
> --- Paul Aviles <paul.aviles@palei.com> wrote:
>
> > Are SATA drives similar in performance to IDE drives? I have tested
> > Barracudas 7200.0 (500 GB) and WD too on the same type of servers
> > (more than 1 unit) and what I am getting is painfully slow in terms
> > of read/writes. Anyone out there with similar experience, or are
> > these just my isolated results?
> >
> > Thanks
> >
> > Paul Aviles
>
> Don't know about IDE, but I did some SATA tests on RAID-5, if it can
> give you an indication:
>
> http://www.andargor.com/raid5.html
>
> BTW, EVMS will cut your performance by about 50% (the tests were run
> with straight MD, no EVMS). I don't know about LVM, but that might
> add some overhead as well...
>
> Andargor
* Re: SATA performance
2006-01-03 18:50 ` Dan Stromberg
@ 2006-01-04 4:04 ` Andargor The Wise
From: Andargor The Wise @ 2006-01-04 4:04 UTC (permalink / raw)
Cc: Paul Aviles, linux-raid, strombrg
Hmm. I just installed EVMS 2.5.4 two weeks ago. Unless
I did something wrong?
Andargor
--- Dan Stromberg <strombrg@dcs.nac.uci.edu> wrote:
>
> Supposedly current versions of EVMS and LVM sit on top of the same
> "device mapper" kernel component... Given this, I have to wonder if
> the situation hasn't changed since it was last evaluated.
>
> On Tue, 2006-01-03 at 00:36 -0800, Andargor The Wise wrote:
> > --- Paul Aviles <paul.aviles@palei.com> wrote:
> >
> > > Are SATA drives similar in performance to IDE drives? I have
> > > tested Barracudas 7200.0 (500 GB) and WD too on the same type of
> > > servers (more than 1 unit) and what I am getting is painfully
> > > slow in terms of read/writes. Anyone out there with similar
> > > experience, or are these just my isolated results?
> > >
> > > Thanks
> > >
> > > Paul Aviles
> >
> > Don't know about IDE, but I did some SATA tests on RAID-5, if it
> > can give you an indication:
> >
> > http://www.andargor.com/raid5.html
> >
> > BTW, EVMS will cut your performance by about 50% (the tests were
> > run with straight MD, no EVMS). I don't know about LVM, but that
> > might add some overhead as well...
> >
> > Andargor