* Raid 1 install revealed poor HD performance (I think)
@ 2005-05-17 15:11 Jay Strauss
2005-05-17 15:50 ` Gordon Henderson
` (2 more replies)
0 siblings, 3 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-17 15:11 UTC (permalink / raw)
To: linux-raid
Hi,
I just installed a simple raid-1. I have a single HD on each
controller, i.e. 2 controllers, 2 HDs. That's it, no CD or DVD
drives...
I did a fresh install of Sarge. After reboot I unplugged the CD drive,
and let the array sync overnight. It took 10 hours to sync 160GB, which
seemed pretty long, so I executed the "dd if=/dev/hda of=/dev/null
bs=64k count=10000" command I've seen posted on this list.
It took about a minute and change to complete, which is long. So out of
curiosity I plugged in a CD player and booted Knoppix. When I ran the
same command it took only 30 seconds (which is still not as quick as
some of the numbers on this list, but it's an old celeron 466MHz).
I'm very unfamiliar with options for speeding up hard drive access.
What might Knoppix be setting (and how would I look for it), that it's
getting 100% better drive performance than a plain jane Sarge install?
Thanks
Jay
******** Knoppix Run ***********
root@ttyp0[~]# for i in 1 2 3; do
> time dd if=/dev/hda of=/dev/null bs=64k count=10000
> done
10000+0 records in
10000+0 records out
655360000 bytes transferred in 46.301872 seconds (14154071 bytes/sec)
real 0m46.754s
user 0m0.480s
sys 0m29.710s
10000+0 records in
10000+0 records out
655360000 bytes transferred in 49.852186 seconds (13146063 bytes/sec)
real 0m51.075s
user 0m0.430s
sys 0m26.610s
10000+0 records in
10000+0 records out
655360000 bytes transferred in 46.237684 seconds (14173720 bytes/sec)
real 0m47.928s
user 0m0.480s
sys 0m31.770s
************** Sarge Run *************
sandy:~# for i in 1 2 3; do
> time dd if=/dev/hda of=/dev/null bs=64k count=10000
> done
10000+0 records in
10000+0 records out
655360000 bytes transferred in 66.004525 seconds (9929016 bytes/sec)
real 1m6.052s
user 0m0.637s
sys 1m2.888s
10000+0 records in
10000+0 records out
655360000 bytes transferred in 63.797226 seconds (10272547 bytes/sec)
real 1m3.886s
user 0m0.678s
sys 1m0.732s
10000+0 records in
10000+0 records out
655360000 bytes transferred in 64.729628 seconds (10124575 bytes/sec)
real 1m4.822s
user 0m0.616s
sys 1m1.668s
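For reference, the throughput implied by the dd figures above can be
checked with a quick awk sketch (times taken from the first run under
each system):

```shell
# Sanity check on the dd runs above: 655360000 bytes over the measured
# wall-clock times. Knoppix comes out roughly 43% faster than Sarge.
awk 'BEGIN {
    knoppix = 655360000 / 46.301872    # first Knoppix run
    sarge   = 655360000 / 66.004525    # first Sarge run
    printf "knoppix %.1f MB/s, sarge %.1f MB/s, ratio %.2f\n",
           knoppix / 1e6, sarge / 1e6, knoppix / sarge
}'
```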
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 15:11 Raid 1 install revealed poor HD performance (I think) Jay Strauss
@ 2005-05-17 15:50 ` Gordon Henderson
2005-05-17 16:17 ` Jay Strauss
2005-05-17 23:42 ` Henrik Holst
2005-05-17 23:44 ` Henrik Holst
2 siblings, 1 reply; 20+ messages in thread
From: Gordon Henderson @ 2005-05-17 15:50 UTC (permalink / raw)
To: Jay Strauss; +Cc: linux-raid
On Tue, 17 May 2005, Jay Strauss wrote:
> I'm very unfamiliar with options for speeding up hard drive access.
> What might Knoppix be setting (and how would I look for it), that it's
> getting 100% better drive performance than a plain jane Sarge install?
The usual culprit is DMA. (Or lack of it in your case)
What does
hdparm /dev/hda
give under each OS?
You can use hdparm to enable DMA, but this doesn't always work - the real
solution is to work out what type of IDE controller you have and make sure
it's compiled into your kernel.
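Concretely, the check and the (sometimes-working) fix look like this;
the device name is the one from the thread, and both commands need root
on real IDE hardware:

```shell
# Query the current drive settings: look at the using_dma line in
# particular. If it reads 0 (off), try switching DMA on with -d1; this
# can fail if the kernel lacks a driver for the IDE chipset.
hdparm /dev/hda
hdparm -d1 /dev/hda
```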
Gordon
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 15:50 ` Gordon Henderson
@ 2005-05-17 16:17 ` Jay Strauss
2005-05-17 16:23 ` Gordon Henderson
0 siblings, 1 reply; 20+ messages in thread
From: Jay Strauss @ 2005-05-17 16:17 UTC (permalink / raw)
To: Gordon Henderson; +Cc: linux-raid
Gordon Henderson wrote:
> On Tue, 17 May 2005, Jay Strauss wrote:
>
>
>>I'm very unfamiliar with options for speeding up hard drive access.
>>What might Knoppix be setting (and how would I look for it), that it's
>>getting 100% better drive performance than a plain jane Sarge install?
>
>
> The usual culprit is DMA. (Or lack of it in your case)
>
> What does
>
> hdparm /dev/hda
>
> give under each OS?
>
> You can use hdparm to enable DMA, but this doesn't always work - the real
> solution is to work out what type of IDE controller you have and make sure
> it's compiled into your kernel.
>
> Gordon
>
>
>
Thanks Gordon, below are the outputs from both trials. How would I go
about finding out which IDE controller I have?
Sarge:~# hdparm /dev/hda
/dev/hda:
multcount = 0 (off)
IO_support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 19457/255/63, sectors = 312581808, start = 0
knoppix
/dev/hda:
multcount = 16 (on)
IO_support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 19457/255/63, sectors = 312581808, start = 0
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 16:17 ` Jay Strauss
@ 2005-05-17 16:23 ` Gordon Henderson
2005-05-17 16:52 ` Mike Hardy
0 siblings, 1 reply; 20+ messages in thread
From: Gordon Henderson @ 2005-05-17 16:23 UTC (permalink / raw)
To: Jay Strauss; +Cc: linux-raid
On Tue, 17 May 2005, Jay Strauss wrote:
> Gordon Henderson wrote:
> > On Tue, 17 May 2005, Jay Strauss wrote:
> >
> >
> >>I'm very unfamiliar with options for speeding up hard drive access.
> >>What might Knoppix be setting (and how would I look for it), that it's
> >>getting 100% better drive performance than a plain jane Sarge install?
> >
> >
> > The usual culprit is DMA. (Or lack of it in your case)
> >
> > What does
> >
> > hdparm /dev/hda
> >
> > give under each OS?
> >
> > You can use hdparm to enable DMA, but this doesn't always work - the real
> > solution is to work out what type of IDE controller you have and make sure
> > it's compiled into your kernel.
> thanks Gordon, below are the outputs from both trials. How would I go
> about finding out the IDE controller I have?
From the outputs below, it looks like it's not DMA. (damn)
> Sarge:~# hdparm /dev/hda
>
> /dev/hda:
> multcount = 0 (off)
> IO_support = 0 (default 16-bit)
> unmaskirq = 0 (off)
> using_dma = 1 (on)
It's on here.
> keepsettings = 0 (off)
> readonly = 0 (off)
> readahead = 256 (on)
> geometry = 19457/255/63, sectors = 312581808, start = 0
>
> knoppix
>
> /dev/hda:
> multcount = 16 (on)
> IO_support = 0 (default 16-bit)
> unmaskirq = 0 (off)
> using_dma = 1 (on)
And here.
However, I notice that Sarge isn't using multcount. You can turn that on
with "hdparm -m 16 /dev/hda /dev/hdc".
To work out your IDE controller, try lspci, and have a look at the output
of /var/log/dmesg (or just type dmesg | less - Knoppix might not create a
/var/log/dmesg). The output of lspci will tell you what hardware you have;
then you'll need to examine the dmesg output to see if you have support
for it, or need to compile it in.
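Spelled out as commands (a sketch; run as root, and note that Knoppix
may not create /var/log/dmesg):

```shell
hdparm -m 16 /dev/hda /dev/hdc    # enable 16-sector multcount on both disks
lspci | grep -i ide               # identify the IDE controller chip
dmesg | grep -i -e ide -e dma     # see which driver claimed it, and
                                  # whether DMA was negotiated
```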
Gordon
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 16:23 ` Gordon Henderson
@ 2005-05-17 16:52 ` Mike Hardy
2005-05-17 20:51 ` Jay Strauss
0 siblings, 1 reply; 20+ messages in thread
From: Mike Hardy @ 2005-05-17 16:52 UTC (permalink / raw)
To: Gordon Henderson; +Cc: Jay Strauss, linux-raid
Gordon Henderson wrote:
> However, I notice that Sarge isn't using multcount. You can turn that on
> with "hdparm -m 16 /dev/hda /dev/hdc".
>
> To work out your IDE controller, try lspci, and have a look at the output
> of /var/log/dmesg (or just type dmesg | less - Knoppix might not create a
> /var/log/dmesg). The output of lspci will tell you what hardware you have;
> then you'll need to examine the dmesg output to see if you have support
> for it, or need to compile it in.
>
> Gordon
I was sure it would be DMA too :-). Wonder if multcount makes that much
of a difference?
Either way, one of the first things I typically do on a server is add an
hdparm line with whatever the disks need for that particular machine.
Usually it goes in rc.sysinit or rc.local or similar.
In particular, drives now like to enable write-caching out of the box,
which opens a window for data corruption that has bitten more than a few
high-profile sites: it causes the disk to break the fsync contract, which
in turn bubbles up and breaks the 'D' (durability) part of the ACID
contract for databases (or journalling filesystems).
So, even if you can't get your distribution to tune your IDE chipset and
IDE hard drives perfectly, it's effective (IMHO) to just force your
settings, such as multcount or write caching, on boot.
-Mike
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 16:52 ` Mike Hardy
@ 2005-05-17 20:51 ` Jay Strauss
2005-05-17 20:54 ` Mike Hardy
2005-05-17 22:45 ` Peter T. Breuer
0 siblings, 2 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-17 20:51 UTC (permalink / raw)
To: Mike Hardy; +Cc: Gordon Henderson, linux-raid
> I was sure it would be DMA too :-). Wonder if multcount makes that much
> of a difference?
It doesn't, I tried just before leaving this morning. No diff.
>
> Either way, one of the first things I typically do on a server is add an
> hdparm line with whatever the disks need for that particular machine.
I don't understand the above "whatever the disks need" portion of this
statement. Can you give me a little more detail?
> So, even if you can't get your distribution to tune your IDE chipset and
> IDE hard drives perfectly, it's effective (IMHO) to just force your
> settings, such as multcount or write caching, on boot.
What settings am I to force? And to what values? I guess I'm asking:
are you saying to just put the hdparm settings in rc.local? If so, I'm
still stumped on what those settings need to be.
Thanks
Jay
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 20:51 ` Jay Strauss
@ 2005-05-17 20:54 ` Mike Hardy
2005-05-17 22:16 ` Tobias Hofmann
2005-05-17 22:45 ` Peter T. Breuer
1 sibling, 1 reply; 20+ messages in thread
From: Mike Hardy @ 2005-05-17 20:54 UTC (permalink / raw)
To: Jay Strauss; +Cc: Gordon Henderson, linux-raid
Jay Strauss wrote:
> I don't understand the above "whatever the disks need" portion of this
> statement. Can you give me a little more detail?
I did mean the hdparm settings; sorry I wasn't clearer.
> What settings am I to force? And to what values? I guess I'm asking,
> are you saying just put the hdparm setting in a rc.local? If so, I'm
> still stumped on what those settings need to be
Well, the "typical" problem with regard to speed is DMA detection on
your IDE chipset, which Gordon guessed (and which I would have). That
needs to be set sometimes.
I usually also set write-caching to off, although this does have a
performance impact. It's also possible to twiddle some of the other
hdparm settings, like multcount and IRQ unmasking, though these may or
may not help.
You can try any of them, then run bonnie++ or similar and see if they
have an effect, and put them in rc.local (or similar) to make sure they
stay in force across reboots.
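A hypothetical rc.local fragment along those lines (device names and
values are examples from this thread, not recommendations; benchmark
before keeping any of them):

```shell
#!/bin/sh
# Force hdparm settings at boot so they survive reboots:
# -d1 = DMA on, -m16 = 16-sector multcount, -W0 = write cache off.
/sbin/hdparm -d1 -m16 -W0 /dev/hda /dev/hdc
```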
That's fine-grained tuning mostly though, and not really on topic for
you as it appears you have something entirely different going on. What
exactly is going on, I must admit, I don't know. :-/
-Mike
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 20:54 ` Mike Hardy
@ 2005-05-17 22:16 ` Tobias Hofmann
2005-05-17 23:00 ` Jay Strauss
2005-05-18 3:28 ` Jay Strauss
0 siblings, 2 replies; 20+ messages in thread
From: Tobias Hofmann @ 2005-05-17 22:16 UTC (permalink / raw)
To: Mike Hardy; +Cc: Jay Strauss, Gordon Henderson, linux-raid
On 17.05.2005 22:54, Mike Hardy wrote:
[...]
> That's fine-grained tuning mostly though, and not really on topic for
> you as it appears you have something entirely different going on. What
> exactly is going on, I must admit, I don't know. :-/
<delurk>
Well, let's find out then, no? ;)
Jay, you said it yourself - Knoppix seems to be setting something
different than Sarge. I am not very familiar with either, but doing an
lspci on both (just in case they "see" something different) and
comparing, as Gordon suggested, the output of dmesg | less (or less
/var/log/dmesg) on both systems might give you an idea of where things
start to go differently.
I would look for differences in the discovery of (the)
ide-controller(s), and how the HDs are discovered/treated.
For completeness' sake, the kernel version of each try could give the
experts an idea of differences I am not aware of.
I am sure to have forgotten something here, but for starters, it might work.
Just my 2c, good luck, tobi... ;)
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 20:51 ` Jay Strauss
2005-05-17 20:54 ` Mike Hardy
@ 2005-05-17 22:45 ` Peter T. Breuer
1 sibling, 0 replies; 20+ messages in thread
From: Peter T. Breuer @ 2005-05-17 22:45 UTC (permalink / raw)
To: linux-raid
Jay Strauss <me@heyjay.com> wrote:
> > Either way, one of the first things I typically do on a server is add an
> > hdparm line with whatever the disks need for that particular machine.
> I don't understand the above "whatever the disks need" portion of this
> statement. Can you give me a little more detail?
He means that you set them up the way you want to. Use hdparm. -i or
-I or -v should show you what modes they are in now. Pay particular
attention to the dma mode.
> What settings am I to force? And to what values? I guess I'm asking,
Whatever you like, and whatever turns out to be best.
Peter
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 22:16 ` Tobias Hofmann
@ 2005-05-17 23:00 ` Jay Strauss
2005-05-18 3:28 ` Jay Strauss
1 sibling, 0 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-17 23:00 UTC (permalink / raw)
To: Tobias Hofmann; +Cc: Mike Hardy, Gordon Henderson, linux-raid
Tobias Hofmann wrote:
> On 17.05.2005 22:54, Mike Hardy wrote:
>
> [...]
>
>> That's fine-grained tuning mostly though, and not really on topic for
>> you as it appears you have something entirely different going on. What
>> exactly is going on, I must admit, I don't know. :-/
>
>
> <delurk>
> Well, let's find out then, no? ;)
>
> Jay, you said it yourself - Knoppix seems to be setting something
> different than Sarge. I am not very familiar with either, but doing an
> lspci on both (just in case they "see" something different) and
> comparing, as Gordon suggested, the output of dmesg | less (or less
> /var/log/dmesg) on both systems might give you an idea of where things
> start to go differently.
>
> I would look for differences in the discovery of (the)
> ide-controller(s), and how the HDs are discovered/treated.
>
> For completeness' sake, the kernel version of each try could give the
> experts an idea of differences I am not aware of.
>
> I am sure to have forgotten something here, but for starters, it might
> work.
>
> Just my 2c, good luck, tobi... ;)
Ok :), when I get home I'll be doing everything everyone has asked.
It's a drag booting Knoppix on this machine; it takes forever.
Jay
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 15:11 Raid 1 install revealed poor HD performance (I think) Jay Strauss
2005-05-17 15:50 ` Gordon Henderson
@ 2005-05-17 23:42 ` Henrik Holst
2005-05-18 3:30 ` Jay Strauss
2005-05-17 23:44 ` Henrik Holst
2 siblings, 1 reply; 20+ messages in thread
From: Henrik Holst @ 2005-05-17 23:42 UTC (permalink / raw)
To: Jay Strauss; +Cc: linux-raid
You might also want to try and force UltraDMA mode5 (or the highest
UltraDMA mode supported by your hardware) with "hdparm -X69 /dev/hda"
combined with the "Multiple sector mode" (set with "hdparm -m 16
/dev/hda") that was suggested earlier.
/ Henrik Holst
Jay Strauss wrote:
> Hi,
>
> I just installed a simple raid-1. I have a single HD on each
> controller. ie. 2 controllers 2 HDs. That's it, no CD players or DVD
> drives...
>
> I did a fresh install of Sarge. After reboot I unplugged the CD drive,
> and let the array sync overnight. It took 10hrs to sync 160GiB, which
> seemed pretty long, so I executed the "dd if=/dev/hda of=/dev/null
> bs=64k count=10000" command I've seen posted on this list.
>
> It took about a minute and change to complete, which is long. So out of
> curiosity I plugged in a CD player and booted Knoppix. When I ran the
> same command it took only 30 seconds (which is still not as quick as
> some of the numbers on this list, but it's an old celeron 466MHz).
>
> I'm very unfamiliar with options for speeding up hard drive access. What
> might Knoppix be setting (and how would I look for it), that it's
> getting 100% better drive performance than a plain jane Sarge install?
>
> Thanks
> Jay
>
> ******** Knoppix Run ***********
>
> root@ttyp0[~]# for i in 1 2 3; do
> > time dd if=/dev/hda of=/dev/null bs=64k count=10000
> > done
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 46.301872 seconds (14154071 bytes/sec)
>
> real 0m46.754s
> user 0m0.480s
> sys 0m29.710s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 49.852186 seconds (13146063 bytes/sec)
>
> real 0m51.075s
> user 0m0.430s
> sys 0m26.610s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 46.237684 seconds (14173720 bytes/sec)
>
> real 0m47.928s
> user 0m0.480s
> sys 0m31.770s
>
> ************** Sarge Run *************
>
> sandy:~# for i in 1 2 3; do
> > time dd if=/dev/hda of=/dev/null bs=64k count=10000
> > done
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 66.004525 seconds (9929016 bytes/sec)
>
> real 1m6.052s
> user 0m0.637s
> sys 1m2.888s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 63.797226 seconds (10272547 bytes/sec)
>
> real 1m3.886s
> user 0m0.678s
> sys 1m0.732s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 64.729628 seconds (10124575 bytes/sec)
>
> real 1m4.822s
> user 0m0.616s
> sys 1m1.668s
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 15:11 Raid 1 install revealed poor HD performance (I think) Jay Strauss
2005-05-17 15:50 ` Gordon Henderson
2005-05-17 23:42 ` Henrik Holst
@ 2005-05-17 23:44 ` Henrik Holst
2005-05-19 15:32 ` Jay Strauss
2 siblings, 1 reply; 20+ messages in thread
From: Henrik Holst @ 2005-05-17 23:44 UTC (permalink / raw)
To: Jay Strauss; +Cc: linux-raid
You might also want to try and force UltraDMA mode5 (or the highest
UltraDMA mode supported by your hardware) with "hdparm -X69 /dev/hda"
combined with the "Multiple sector mode" (set with "hdparm -m 16
/dev/hda") that was suggested earlier.
/ Henrik Holst
Jay Strauss wrote:
> Hi,
>
> I just installed a simple raid-1. I have a single HD on each
> controller. ie. 2 controllers 2 HDs. That's it, no CD players or DVD
> drives...
>
> I did a fresh install of Sarge. After reboot I unplugged the CD drive,
> and let the array sync overnight. It took 10hrs to sync 160GiB, which
> seemed pretty long, so I executed the "dd if=/dev/hda of=/dev/null
> bs=64k count=10000" command I've seen posted on this list.
>
> It took about a minute and change to complete, which is long. So out of
> curiosity I plugged in a CD player and booted Knoppix. When I ran the
> same command it took only 30 seconds (which is still not as quick as
> some of the numbers on this list, but it's an old celeron 466MHz).
>
> I'm very unfamiliar with options for speeding up hard drive access. What
> might Knoppix be setting (and how would I look for it), that it's
> getting 100% better drive performance than a plain jane Sarge install?
>
> Thanks
> Jay
>
> ******** Knoppix Run ***********
>
> root@ttyp0[~]# for i in 1 2 3; do
> > time dd if=/dev/hda of=/dev/null bs=64k count=10000
> > done
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 46.301872 seconds (14154071 bytes/sec)
>
> real 0m46.754s
> user 0m0.480s
> sys 0m29.710s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 49.852186 seconds (13146063 bytes/sec)
>
> real 0m51.075s
> user 0m0.430s
> sys 0m26.610s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 46.237684 seconds (14173720 bytes/sec)
>
> real 0m47.928s
> user 0m0.480s
> sys 0m31.770s
>
> ************** Sarge Run *************
>
> sandy:~# for i in 1 2 3; do
> > time dd if=/dev/hda of=/dev/null bs=64k count=10000
> > done
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 66.004525 seconds (9929016 bytes/sec)
>
> real 1m6.052s
> user 0m0.637s
> sys 1m2.888s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 63.797226 seconds (10272547 bytes/sec)
>
> real 1m3.886s
> user 0m0.678s
> sys 1m0.732s
>
> 10000+0 records in
> 10000+0 records out
> 655360000 bytes transferred in 64.729628 seconds (10124575 bytes/sec)
>
> real 1m4.822s
> user 0m0.616s
> sys 1m1.668s
>
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 22:16 ` Tobias Hofmann
2005-05-17 23:00 ` Jay Strauss
@ 2005-05-18 3:28 ` Jay Strauss
1 sibling, 0 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-18 3:28 UTC (permalink / raw)
Cc: linux-raid
Tobias Hofmann wrote:
> On 17.05.2005 22:54, Mike Hardy wrote:
>
> [...]
>
>> That's fine-grained tuning mostly though, and not really on topic for
>> you as it appears you have something entirely different going on. What
>> exactly is going on, I must admit, I don't know. :-/
>
>
> <delurk>
> Well, let's find out then, no? ;)
>
> Jay, you said it yourself - Knoppix seems to be setting something
> different than Sarge. I am not very familiar with both, but doing a
> lspci on both (just in case they "see" something different) and
> comparing, like Gordon suggested, the output of dmesg | less (or a less
> /var/log/dmesg) on both systems might give you an idea where things
> start to go differently.
>
> I would look for differences in the discovery of (the)
> ide-controller(s), and how the HDs are discovered/treated.
>
> For completeness' sake, the kernel version of each try could give the
> experts an idea of differences I am not aware of.
>
> I am sure to have forgotten something here, but for starters, it might
> work.
>
> Just my 2c, good luck, tobi... ;)
Unfortunately, nothing is sticking out at me. I can't say I know much
about this stuff (obviously). I'm new on the list; has this thread
drifted too much off the charter of this list, such that I should take
it somewhere else?
For the sake of completeness, below are the tests I was asked to run:
jstrauss@hydrogen:~$ cat knoppix.uname
Linux Knoppix 2.4.27 #2 SMP Mo Aug 9 00:39:37 CEST 2004 i686 GNU/Linux
jstrauss@hydrogen:~$ cat sarge.uname
Linux sandy 2.6.8-2-686 #1 Mon Jan 24 03:58:38 EST 2005 i686 GNU/Linux
and diff sarge.dmesg knoppix.dmesg:
1c1
< Linux version 2.6.8-2-686 (dilinger@toaster.hq.voxel.net) (gcc version
3.3.5 (Debian 1:3.3.5-6)) #1 Mon Jan 24 03:58:38 EST 2005
---
> Linux version 2.4.27 (root@Koffer) (gcc version 2.95.4 20011002
(Debian prerelease)) #2 SMP Mo Aug 9 00:39:37 CEST 2004
12,15c12,14
< DMA zone: 4096 pages, LIFO batch:1
< Normal zone: 10240 pages, LIFO batch:2
< HighMem zone: 0 pages, LIFO batch:1
< DMI 2.2 present.
---
> zone(0): 4096 pages.
> zone(1): 10240 pages.
> zone(2): 0 pages.
18,19c17
< Built 1 zonelists
< Kernel command line: root=/dev/hda1 ro
---
> Kernel command line: ramdisk_size=100000 init=/etc/init lang=us
apm=power-off vga=791 initrd=minirt24.gz nomce quiet BOOT_IMAGE=knoppix
BOOT_IMAGE=linux24 desktop=twm
23,35c21,29
< PID hash table entries: 256 (order 8: 2048 bytes)
< Detected 467.789 MHz processor.
< Using tsc for high-res timesource
< Console: colour VGA+ 80x25
< Dentry cache hash table entries: 8192 (order: 3, 32768 bytes)
< Inode-cache hash table entries: 4096 (order: 2, 16384 bytes)
< Memory: 49220k/57344k available (1550k kernel code, 7668k reserved,
688k data, 148k init, 0k highmem)
< Checking if this processor honours the WP bit even in supervisor
mode... Ok.
< Calibrating delay loop... 919.55 BogoMIPS
< Security Scaffold v1.0.0 initialized
< Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
< CPU: After generic identify, caps: 0183fbff 00000000 00000000 00000000
< CPU: After vendor identify, caps: 0183fbff 00000000 00000000 00000000
---
> Detected 467.722 MHz processor.
> Console: colour dummy device 80x25
> Calibrating delay loop... 930.61 BogoMIPS
> Memory: 52904k/57344k available (1406k kernel code, 4052k reserved,
574k data, 144k init, 0k highmem)
> Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
> Inode cache hash table entries: 4096 (order: 3, 32768 bytes)
> Mount cache hash table entries: 512 (order: 0, 4096 bytes)
> Buffer cache hash table entries: 1024 (order: 0, 4096 bytes)
> Page-cache hash table entries: 16384 (order: 4, 65536 bytes)
38,41c32,33
< CPU: After all inits, caps: 0183fbff 00000000 00000000 00000040
< Intel machine check architecture supported.
< Intel machine check reporting enabled on CPU#0.
< CPU: Intel Celeron (Mendocino) stepping 05
---
> CPU: After generic, caps: 0183fbff 00000000 00000000 00000000
> CPU: Common caps: 0183fbff 00000000 00000000 00000000
43a36,46
> Checking for popad bug... OK.
> POSIX conformance testing by UNIFIX
> mtrr: v1.40 (20010327) Richard Gooch (rgooch@atnf.csiro.au)
> mtrr: detected mtrr type: Intel
> CPU: L1 I cache: 16K, L1 D cache: 16K
> CPU: L2 cache: 128K
> CPU: After generic, caps: 0183fbff 00000000 00000000 00000000
> CPU: Common caps: 0183fbff 00000000 00000000 00000000
> CPU0: Intel Celeron (Mendocino) stepping 05
> per-CPU timeslice cutoff: 365.67 usecs.
> SMP motherboard not detected.
49,56c52,57
< ..... CPU clock speed is 467.0630 MHz.
< ..... host bus clock speed is 66.0804 MHz.
< checking if image is initramfs...it isn't (ungzip failed); looks like
an initrd
< Freeing initrd memory: 4544k freed
< NET: Registered protocol family 16
< PCI: PCI BIOS revision 2.10 entry at 0xfdb81, last bus=1
< PCI: Using configuration type 1
< mtrr: v2.0 (20020519)
---
> ..... CPU clock speed is 467.7528 MHz.
> ..... host bus clock speed is 66.8216 MHz.
> cpu: 0, clocks: 668216, slice: 334108
> CPU0<T0:668208,T1:334096,D:4,S:334108,C:668216>
> Waiting on wait_init_idle (map = 0x0)
> All processors have done init_idle
59,63c60,61
< Linux Plug and Play Support v0.97 (c) Adam Belay
< PnPBIOS: Scanning system for PnP BIOS support...
< PnPBIOS: Found PnP BIOS installation structure at 0xc00f7030
< PnPBIOS: PnP BIOS version 1.0, entry 0xf0000:0x6584, dseg 0xf0000
< PnPBIOS: 13 nodes reported by PnP BIOS; 13 recorded by driver
---
> PCI: PCI BIOS revision 2.10 entry at 0xfdb81, last bus=1
> PCI: Using configuration type 1
66,71c64
< PCI: Using IRQ router SIS [1039/0008] at 0000:00:01.0
< VFS: Disk quotas dquot_6.5.1
< Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
< devfs: 2004-01-31 Richard Gooch (rgooch@atnf.csiro.au)
< devfs: boot_options: 0x0
< Initializing Cryptographic API
---
> PCI: Using IRQ router SIS5595 [1039/0008] at 00:01.0
74,91c67,83
< Serial: 8250/16550 driver $Revision: 1.90 $ 48 ports, IRQ sharing enabled
< ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
< RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize
< serio: i8042 AUX port at 0x60,0x64 irq 12
< serio: i8042 KBD port at 0x60,0x64 irq 1
< input: AT Translated Set 2 keyboard on isa0060/serio0
< NET: Registered protocol family 2
< IP: routing cache hash table of 512 buckets, 4Kbytes
< TCP: Hash tables configured (established 4096 bind 8192)
< NET: Registered protocol family 8
< NET: Registered protocol family 20
< RAMDISK: cramfs filesystem found at block 0
< RAMDISK: Loading 4544 blocks [1 disk] into ram disk... done.
< VFS: Mounted root (cramfs filesystem) readonly.
< Freeing unused kernel memory: 148k freed
< vesafb: probe of vesafb0 failed with error -6
< NET: Registered protocol family 1
< Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
---
> Linux NET4.0 for Linux 2.4
> Based upon Swansea University Computer Society NET3.039
> Initializing RT netlink socket
> Starting kswapd
> VFS: Disk quotas vdquot_6.5.1
> vesafb: framebuffer at 0xe7000000, mapped to 0xc400d000, size 3072k
> vesafb: mode is 1024x768x16, linelength=2048, pages=4
> vesafb: protected mode interface info at c7c3:0000
> vesafb: scrolling: redraw
> vesafb: directcolor: size=0:5:6:5, shift=0:11:5:0
> Console: switching to colour frame buffer device 128x48
> fb0: VESA VGA frame buffer device
> pty: 256 Unix98 ptys configured
> Floppy drive(s): fd0 is 1.44M
> FDC 0 is a post-1991 82077
> RAMDISK driver initialized: 16 RAM disks of 100000K size 1024 blocksize
> Uniform Multi-Platform E-IDE driver Revision: 7.00beta4-2.4
93c85
< SIS5513: IDE controller at PCI slot 0000:00:00.1
---
> SIS5513: IDE controller at PCI slot 00:00.1
100,104d91
< Using anticipatory io scheduler
< ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
< hda: max request size: 1024KiB
< hda: 312581808 sectors (160041 MB) w/2048KiB Cache, CHS=19457/255/63,
UDMA(66)
< /dev/ide/host0/bus0/target0/lun0: p1 p2
105a93
> ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
107,118c95,168
< kjournald starting. Commit interval 5 seconds
< EXT3-fs: mounted filesystem with ordered data mode.
< Adding 979956k swap on /dev/hda2. Priority:-1 extents:1
< EXT3 FS on hda1, internal journal
< Generic RTC Driver v1.07
< hdc: ATAPI 40X CD-ROM drive, 128kB Cache, UDMA(33)
< Uniform CD-ROM driver Revision: 3.20
< input: ImPS/2 Generic Wheel Mouse on isa0060/serio1
< Capability LSM initialized
< mice: PS/2 mouse device common for all mice
< ts: Compaq touchscreen protocol output
< Linux agpgart interface v0.100 (c) Dave Jones
---
> hda: attached ide-disk driver.
> hda: host protected area => 1
> hda: 312581808 sectors (160042 MB) w/2048KiB Cache, CHS=19457/255/63
> Partition check:
> hda: hda1 hda2
> ide: late registration of driver.
> SCSI subsystem driver Revision: 1.00
> libata version 1.02 loaded.
> kmod: failed to exec /sbin/modprobe -s -k scsi_hostadapter, errno = 2
> kmod: failed to exec /sbin/modprobe -s -k scsi_hostadapter, errno = 2
> Initializing Cryptographic API
> NET4: Linux TCP/IP 1.0 for NET4.0
> IP Protocols: ICMP, UDP, TCP, IGMP
> IP: routing cache hash table of 512 buckets, 4Kbytes
> TCP: Hash tables configured (established 4096 bind 4096)
> NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
> RAMDISK: Compressed image found at block 0
> Freeing initrd memory: 781k freed
> EXT2-fs warning: checktime reached, running e2fsck is recommended
> VFS: Mounted root (ext2 filesystem).
> hdc: attached ide-scsi driver.
> scsi0 : SCSI host adapter emulation for IDE ATAPI devices
> Vendor: BCD Model: E520C Rev: 0.40
> Type: CD-ROM ANSI SCSI revision: 02
> Attached scsi CD-ROM sr0 at scsi0, channel 0, id 0, lun 0
> sr0: scsi3-mmc drive: 40x/40x cd/rw xa/form2 cdda tray
> Uniform CD-ROM driver Revision: 3.12
> NCR53c406a: no available ports found
> aec671x_detect:
> scsi: <fdomain> Detection failed (no card)
> GDT: Storage RAID Controller Driver. Version: 2.05
> GDT: Found 0 PCI Storage RAID Controllers
> megaraid: v1.18k (Release Date: Thu Aug 28 10:05:11 EDT 2003)
> megaraid: no BIOS enabled.
> DC390: 0 adapters found
> Failed initialization of WD-7000 SCSI card!
> Guestimating sector 312560384 for superblock
> driver for Silicon Image(tm) Medley(tm) hardware version 0.0.1: No raid array found
> Medley RAID: No usable RAID sets found
> Promise Fasttrak(tm) Softwareraid driver 0.03beta: No raid array found
> usb.c: registered new driver usbdevfs
> usb.c: registered new driver hub
> usb-uhci.c: $Revision: 1.275 $ time 00:50:50 Aug 9 2004
> usb-uhci.c: High bandwidth mode enabled
> usb-uhci.c: v1.275:USB Universal Host Controller Interface driver
> usb-ohci.c: USB OHCI at membase 0xc432d000, IRQ 3
> usb-ohci.c: usb-00:01.2, Silicon Integrated Systems [SiS] USB 1.0 Controller
> usb.c: new USB bus registered, assigned bus number 1
> hub.c: USB hub found
> hub.c: 2 ports detected
> Initializing USB Mass Storage driver...
> usb.c: registered new driver usb-storage
> USB Mass Storage support registered.
> blk: queue c03661a0, I/O limit 4095Mb (mask 0xffffffff)
> ISO 9660 Extensions: Microsoft Joliet Level 3
> ISO 9660 Extensions: RRIP_1991A
> cloop: Initializing cloop v2.01
> cloop: loaded (max 8 devices)
> cloop: /cdrom/KNOPPIX/KNOPPIX: 30369 blocks, 65536 bytes/block, largest block is 65552 bytes.
> ISO 9660 Extensions: RRIP_1991A
> Freeing unused kernel memory: 144k freed
> Real Time Clock Driver v1.10f
> apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16)
> Linux Kernel Card Services 3.1.22
> options: [pci] [cardbus] [pm]
> Intel ISA PCIC probe: not found.
> Databook TCIC-2 PCMCIA probe: not found.
> usb-uhci.c: $Revision: 1.275 $ time 00:50:50 Aug 9 2004
> usb-uhci.c: High bandwidth mode enabled
> usb-uhci.c: v1.275:USB Universal Host Controller Interface driver
> usbdevfs: remount parameter error
> Serial driver version 5.05c (2001-07-08) with MANY_PORTS SHARE_IRQ SERIAL_PCI ISAPNP enabled
> ttyS00 at 0x03f8 (irq = 4) is a 16550A
> parport0: PC-style at 0x378 [PCSPP,TRISTATE]
120,124c170,173
< PCI: Found IRQ 10 for device 0000:00:0b.0
< eth0: Davicom DM9102 at pci0000:00:0b.0, 00:d0:09:1e:b2:08, irq 10.
< cmpci: version $Revision: 6.82 $ time 04:15:48 Jan 24 2005
< PCI: Found IRQ 9 for device 0000:00:0f.0
< cmpci:
---
> PCI: Found IRQ 10 for device 00:0b.0
> eth0: Davicom DM9102 at pci00:0b.0, 00:d0:09:1e:b2:08, irq 10.
> cmpci: version $Revision: 6.16 $ time 00:49:29 Aug 9 2004
> PCI: Found IRQ 9 for device 00:0f.0
127,136c176
< usbcore: registered new driver usbfs
< usbcore: registered new driver hub
< ohci_hcd: 2004 Feb 02 USB 1.1 'Open' Host Controller (OHCI) Driver (PCI)
< ohci_hcd: block sizes: ed 64 td 64
< ohci_hcd 0000:00:01.2: Silicon Integrated Systems [SiS] USB 1.0 Controller
< ohci_hcd 0000:00:01.2: irq 3, pci mem c4032000
< ohci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
< hub 1-0:1.0: USB hub found
< hub 1-0:1.0: 2 ports detected
< agpgart: Detected SiS 620 chipset
---
> Linux agpgart interface v0.99 (c) Jeff Hartmann
137a178
> agpgart: Detected SiS 620 chipset
139,154c180
< cpci_hotplug: CompactPCI Hot Plug Core version: 0.2
< pci_hotplug: PCI Hot Plug PCI Core version: 0.5
< pciehp: acpi_pciehprm:get_device PCI ROOT HID fail=0x1001
< shpchp: acpi_shpchprm:get_device PCI ROOT HID fail=0x1001
< Linux Tulip driver version 1.1.13 (May 11, 2002)
< input: PC Speaker
< parport: PnPBIOS parport detected.
< parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
< inserting floppy driver for 2.6.8-2-686
< Floppy drive(s): fd0 is 1.44M
< FDC 0 is a post-1991 82077
< NET: Registered protocol family 17
< NET: Registered protocol family 10
< Disabled Privacy Extensions on device c02ff020(lo)
< IPv6 over IPv4 tunneling driver
< eth0: no IPv6 routers present
---
> Adding Swap: 979956k swap-space (priority -1)
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 23:42 ` Henrik Holst
@ 2005-05-18 3:30 ` Jay Strauss
0 siblings, 0 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-18 3:30 UTC (permalink / raw)
To: Henrik Holst; +Cc: linux-raid
Henrik Holst wrote:
> You might also want to try and force UltraDMA mode5 (or the highest
> UltraDMA mode supported by your hardware) with "hdparm -X69 /dev/hda"
> combined with the "Multiple sector mode" (set with "hdparm -m 16
> /dev/hda") that was suggested earlier.
>
No luck; thanks for the attempt, though.
Jay
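For reference, hdparm's -X argument encodes the transfer mode numerically: UltraDMA modes are 64 plus the mode number (so -X69 requests UDMA mode 5), multiword DMA is 32 plus the mode, and PIO is 8 plus the mode. A small shell sketch of the decoding:

```shell
# Decode an hdparm -X transfer-mode argument. Per the hdparm manual,
# UDMA modes are 64 + mode, multiword DMA modes are 32 + mode, and
# PIO modes are 8 + mode; so -X69 asks for UDMA mode 5.
xval=69
if [ "$xval" -ge 64 ]; then
    mode="UDMA mode $((xval - 64))"
elif [ "$xval" -ge 32 ]; then
    mode="MWDMA mode $((xval - 32))"
else
    mode="PIO mode $((xval - 8))"
fi
echo "-X$xval selects $mode"   # -X69 selects UDMA mode 5
```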
* Re: Raid 1 install revealed poor HD performance (I think)
[not found] <Pine.LNX.4.44.0505180206580.1777-100000@coffee.psychology.mcmaster.ca>
@ 2005-05-18 15:08 ` Mike Hardy
0 siblings, 0 replies; 20+ messages in thread
From: Mike Hardy @ 2005-05-18 15:08 UTC (permalink / raw)
To: Mark Hahn, linux-raid
All colo facilities are required to have a complete power cut-off switch
for fire fighter safety. It's behind the UPS stacks, specifically so that
they know the racks aren't energized before they go in.
Apparently, the big red button that activates this cutoff is an easy
target for people to accidentally lean on or similar, so all power goes
down immediately.
http://www.google.com/search?q=data+corruption+colocation+power+cut+off+write+caching
As an example, the first result there is for Wikipedia, which you've
probably heard of ;-)
LiveJournal (another huge site) was also bitten by this.
Bottom line is, if you are in a colo, there must be a switch between all
power service and your machines, so you are susceptible.
Posted back to linux-raid for general interest. Maybe everyone else
isn't interested, but I'm fascinated by gotchas. I'm sure at least a few
others are too...
-Mike
Mark Hahn wrote:
>>In particular, drives like to enable write-caching out of the box now,
>>which opens a window for data corruption that has bitten more than a few
>>high-profile sites as it causes the disk to break the fsync contract
>
>
> I'm curious for details on which sites and how they broke.
> I can't think of a scenario (except perhaps operating a server
> without a UPS) where this would be a problem. Maybe servers
> that are otherwise in such bad shape that they crash?
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-17 23:44 ` Henrik Holst
@ 2005-05-19 15:32 ` Jay Strauss
2005-05-19 16:14 ` Mike Hardy
0 siblings, 1 reply; 20+ messages in thread
From: Jay Strauss @ 2005-05-19 15:32 UTC (permalink / raw)
To: Henrik Holst; +Cc: linux-raid
Hi (again):
I diffed the contents of /proc/ide on Knoppix and Sarge, and there were a
couple of (possibly) interesting differences. I'm not sure how to go
about changing the settings. The net/net is: on my controller, under
Knoppix it says UDMA Enabled, but on Sarge it's disabled. Also, the
write cache is enabled under Knoppix (I did a "hdparm -W 1 /dev/hda", but
my tests were identical).
Below are the diffs. I'd like to make the Sarge install's settings
identical and see if that changes anything.
If anyone knows how to alter these settings, I'd appreciate it.
thanks
Jay
jstrauss@hydrogen:~$ diff -r sarge.ide/sis knoppix.ide/sis
10,11c10,11
< UDMA Enabled UDMA Enabled
< UDMA Cycle Time 2 CLK UDMA Cycle Time 4 CLK
---
> UDMA Disabled UDMA Disabled
> UDMA Cycle Time Reserved UDMA Cycle Time Reserved
and
jstrauss@hydrogen:~$ diff sarge.ide/hda/settings knoppix.ide/hda/settings
7a8
> breada_readahead 8 0 255 rw
9c10
< current_speed 68 0 70 rw
---
> current_speed 12 0 70 rw
11c12,13
< init_speed 68 0 70 rw
---
> file_readahead 124 0 16384 rw
> init_speed 12 0 70 rw
16c18,19
< multcount 0 0 16 rw
---
> max_kb_per_request 128 1 255 rw
> multcount 16 0 16 rw
20a24
> slow 0 0 1 rw
23c27
< wcache 1 0 1 rw
---
> wcache 0 0 1 rw
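For anyone comparing these tables by hand, a small sketch of pulling out the throughput-relevant rows follows; the here-doc uses sample values from the diff above rather than a live /proc/ide, and the snapshot path is made up:

```shell
# Extract the settings most relevant to throughput (current_speed,
# multcount, wcache) from a saved /proc/ide/hda/settings snapshot.
# The here-doc below stands in for a real snapshot file; on a live
# box you would read /proc/ide/hda/settings directly.
cat > /tmp/hda.settings <<'EOF'
current_speed       12      0       70      rw
multcount           16      0       16      rw
wcache              0       0       1       rw
EOF
summary=$(awk '$1=="current_speed" || $1=="multcount" || $1=="wcache" { printf "%s=%s ", $1, $2 }' /tmp/hda.settings)
echo "$summary"
```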
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-19 15:32 ` Jay Strauss
@ 2005-05-19 16:14 ` Mike Hardy
2005-05-19 18:40 ` Jay Strauss
0 siblings, 1 reply; 20+ messages in thread
From: Mike Hardy @ 2005-05-19 16:14 UTC (permalink / raw)
To: Jay Strauss, linux-raid
That's more like what we were expecting!
I can't tell you why it's like that or how, but this *is* the typical
problem with IDE. Most likely the chipset is being detected incorrectly,
or it's being detected correctly but there is some errata that's
disabling DMA, or maybe DMA is experimental for the card.
Places to look for information are the dmesg output for the SiS chipset's
detection diagnostics, then on the internet with a Google search for your
SiS chipset number plus "linux dma", maybe. You may end up with some IDE
options on your kernel command line (via grub or lilo).
I'll reiterate something I said previously, though: other than
write-caching (which is potentially dangerous), none of the other hdparm
or IDE settings are really going to give you that much of a boost unless
your workload is specifically tunable. They're all just little minor
twiddles you can do - nothing even close to the difference between
having DMA or not. So I'd just focus on enabling DMA. The other stuff is
all uninteresting compared to it.
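To put a number on that difference, the dd test from earlier in the thread can be wrapped in a small throughput script. A sketch that reads an ordinary file so it is safe to run anywhere; substitute the raw device (e.g. /dev/hda) for src to reproduce the real measurement:

```shell
# Time a sequential read with dd and report throughput in MB/s.
# Reads a scratch 16 MiB file; point src at the block device for
# the real benchmark (the file will be served from page cache here,
# so this run only demonstrates the arithmetic).
src=/tmp/ddtest.bin
dd if=/dev/zero of="$src" bs=64k count=256 2>/dev/null
start=$(date +%s.%N)
dd if="$src" of=/dev/null bs=64k 2>/dev/null
end=$(date +%s.%N)
rate=$(awk -v b=$((64 * 1024 * 256)) -v s="$start" -v e="$end" \
    'BEGIN { printf "%.1f", b / (e - s) / 1000000 }')
echo "$rate MB/s"
rm -f "$src"
```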
-Mike
Jay Strauss wrote:
> Hi (again):
>
> I diffed the contents of /proc/ide on Knoppix and Sarge, and there were a
> couple of (possibly) interesting differences. I'm not sure how to go
> about changing the settings. The net/net is: on my controller, under
> Knoppix it says UDMA Enabled, but on Sarge it's disabled. Also, the
> write cache is enabled under Knoppix (I did a "hdparm -W 1 /dev/hda", but
> my tests were identical).
>
> Below are the diffs. I'd like to make the Sarge install's settings
> identical and see if that changes anything.
>
> If anyone knows how to alter these settings, I'd appreciate it.
>
> thanks
> Jay
>
> jstrauss@hydrogen:~$ diff -r sarge.ide/sis knoppix.ide/sis
> 10,11c10,11
> < UDMA Enabled UDMA Enabled
> < UDMA Cycle Time 2 CLK UDMA Cycle Time 4 CLK
> ---
>> UDMA Disabled UDMA Disabled
>> UDMA Cycle Time Reserved UDMA Cycle Time Reserved
>
> and
> jstrauss@hydrogen:~$ diff sarge.ide/hda/settings knoppix.ide/hda/settings
> 7a8
>> breada_readahead 8 0 255 rw
> 9c10
> < current_speed 68 0
> 70 rw
> ---
>> current_speed 12 0 70 rw
> 11c12,13
> < init_speed 68 0
> 70 rw
> ---
>> file_readahead 124 0 16384 rw
>> init_speed 12 0 70 rw
> 16c18,19
> < multcount 0 0
> 16 rw
> ---
>> max_kb_per_request 128 1 255 rw
>> multcount 16 0 16 rw
> 20a24
>> slow 0 0 1 rw
> 23c27
> < wcache 1 0
> 1 rw
> ---
>> wcache 0 0 1 rw
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-19 16:14 ` Mike Hardy
@ 2005-05-19 18:40 ` Jay Strauss
2005-05-19 20:43 ` Mike Hardy
0 siblings, 1 reply; 20+ messages in thread
From: Jay Strauss @ 2005-05-19 18:40 UTC (permalink / raw)
To: Mike Hardy; +Cc: linux-raid
Mike Hardy wrote:
> That's more like what we were expecting!
>
> I can't tell you why it's like that or how, but this *is* the typical
> problem with IDE. Most likely the chipset is being detected incorrectly,
> or it's being detected correctly but there is some errata that's
> disabling DMA, or maybe DMA is experimental for the card.
>
> Places to look for information are the dmesg output for the SiS chipset's
> detection diagnostics, then on the internet with a Google search for your
> SiS chipset number plus "linux dma", maybe. You may end up with some IDE
> options on your kernel command line (via grub or lilo).
>
> I'll reiterate something I said previously, though: other than
> write-caching (which is potentially dangerous), none of the other hdparm
> or IDE settings are really going to give you that much of a boost unless
> your workload is specifically tunable. They're all just little minor
> twiddles you can do - nothing even close to the difference between
> having DMA or not. So I'd just focus on enabling DMA. The other stuff is
> all uninteresting compared to it.
>
> -Mike
>
Thanks for the reply, Mike. After some Googling and looking at my kernel
config file, it seems that SiS support is a module, like so:
CONFIG_BLK_DEV_SIS5513=m
I found this link but can't find any of the files that it references:
http://kerneltrap.org/comment/reply/4925/129324
I'm not sure if having CONFIG_BLK_DEV_SIS5513 as a module has anything
to do with this.
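A sketch of how one might check the build options; the here-doc stands in for the real /boot/config-<version> file, and the second option shown (CONFIG_BLK_DEV_IDEDMA_PCI) is just another setting worth a look:

```shell
# Check how the SiS IDE driver was configured in the kernel build.
# The here-doc stands in for /boot/config-$(uname -r): =m means the
# driver is a module (it must be loaded, e.g. from the initrd, before
# DMA tuning applies), =y means built in, "is not set" means absent.
cat > /tmp/kconfig.sample <<'EOF'
CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_BLK_DEV_SIS5513=m
EOF
sis=$(grep '^CONFIG_BLK_DEV_SIS5513=' /tmp/kconfig.sample)
echo "$sis"
```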
I can't get to my box right now because, out of curiosity, I did "hdparm
-D 0 /dev/hda" and ran the dd test; I think it crashed the box. The box
is in my basement at home, so I'll have to wait until tonight.
Jay
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-19 18:40 ` Jay Strauss
@ 2005-05-19 20:43 ` Mike Hardy
2005-05-20 1:43 ` Jay Strauss
0 siblings, 1 reply; 20+ messages in thread
From: Mike Hardy @ 2005-05-19 20:43 UTC (permalink / raw)
To: Jay Strauss; +Cc: linux-raid
Jay Strauss wrote:
> Thanks for the reply Mike. After some Googling and looking at my kernel
> config file, it seems that SIS support is a module, like so:
>
> CONFIG_BLK_DEV_SIS5513=m
>
> I found this link but can't find any of the files that it references
>
> http://kerneltrap.org/comment/reply/4925/129324
That message looks like the trick - the module doesn't have the right
PCI id for some reason, assuming you have the same chip.
You may not have the kernel source if you can't find the file with
locate. You'll need to fetch the kernel source first, then build a
custom kernel with that line changed.
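One way to test that theory is to compare the controller's PCI vendor:device ID against what the driver knows about. A sketch; the lspci output line below is a fabricated example (1039:5513 being the usual SiS 5513 ID), and the driver source path will vary by kernel tree:

```shell
# Pull the vendor:device pair out of an "lspci -n" line for the IDE
# controller. The sample line is hypothetical; on a real box run:
#   lspci -n    (and look for the IDE class, 0101)
sample='00:00.1 Class 0101: 1039:5513 (rev d0)'
pciid=$(echo "$sample" | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{4\}')
echo "controller PCI ID: $pciid"
# Then look for that device ID in the driver source, e.g.:
#   grep -i 5513 drivers/ide/pci/sis5513.c   (path varies by kernel tree)
```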
This issue came up with my laptop when I first got it - a Dell Latitude
D800. It has an Intel 5705M chipset that was supportable but had a new
PCI id, so it didn't work. The custom kernel was a bit of a pain, but it
did work, and the patch made its way upstream really quickly. You can
most likely help that along by posting it to linux-kernel, even
(assuming it works).
If not, I will say that this is probably something that should be on
linux-ide at this point, as it really has nothing to do with RAID
specifically, other than this being a decent place to find folks trying
to help. The linux-ide folks will be closer to the code you're working
with, so they might be more capable.
-Mike
* Re: Raid 1 install revealed poor HD performance (I think)
2005-05-19 20:43 ` Mike Hardy
@ 2005-05-20 1:43 ` Jay Strauss
0 siblings, 0 replies; 20+ messages in thread
From: Jay Strauss @ 2005-05-20 1:43 UTC (permalink / raw)
To: Mike Hardy; +Cc: linux-raid
Mike Hardy wrote:
>
> Jay Strauss wrote:
>
>
>>Thanks for the reply Mike. After some Googling and looking at my kernel
>>config file, it seems that SIS support is a module, like so:
>>
>>CONFIG_BLK_DEV_SIS5513=m
>>
>>I found this link but can't find any of the files that it references
>>
>>http://kerneltrap.org/comment/reply/4925/129324
>
>
> That message looks like the trick - the module doesn't have the right
> PCI id for some reason, assuming you have the same chip.
>
> You may not have the kernel source if you can't find the file with
> locate. You'll need to fetch the kernel source first, then build a
> custom kernel with that line changed.
>
> This issue came up with my laptop when I first got it - a Dell Latitude
> D800. It has an Intel 5705M chipset that was supportable but had a new
> PCI id, so it didn't work. The custom kernel was a bit of a pain, but it
> did work, and the patch made its way upstream really quickly. You can
> most likely help that along by posting it to linux-kernel, even
> (assuming it works).
>
> If not, I will say that this is probably something that should be on
> linux-ide at this point, as it really has nothing to do with RAID
> specifically, other than this being a decent place to find folks trying
> to help. The linux-ide folks will be closer to the code you're working
> with, so they might be more capable.
>
> -Mike
Thanks, everyone, for contributing. I very much appreciate the help.
I'll bring this over to linux-ide as Mike suggests.
Again, thanks
Jay
end of thread, other threads:[~2005-05-20 1:43 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-05-17 15:11 Raid 1 install revealed poor HD performance (I think) Jay Strauss
2005-05-17 15:50 ` Gordon Henderson
2005-05-17 16:17 ` Jay Strauss
2005-05-17 16:23 ` Gordon Henderson
2005-05-17 16:52 ` Mike Hardy
2005-05-17 20:51 ` Jay Strauss
2005-05-17 20:54 ` Mike Hardy
2005-05-17 22:16 ` Tobias Hofmann
2005-05-17 23:00 ` Jay Strauss
2005-05-18 3:28 ` Jay Strauss
2005-05-17 22:45 ` Peter T. Breuer
2005-05-17 23:42 ` Henrik Holst
2005-05-18 3:30 ` Jay Strauss
2005-05-17 23:44 ` Henrik Holst
2005-05-19 15:32 ` Jay Strauss
2005-05-19 16:14 ` Mike Hardy
2005-05-19 18:40 ` Jay Strauss
2005-05-19 20:43 ` Mike Hardy
2005-05-20 1:43 ` Jay Strauss
[not found] <Pine.LNX.4.44.0505180206580.1777-100000@coffee.psychology.mcmaster.ca>
2005-05-18 15:08 ` Mike Hardy