From: Walt H <waltabbyh@mindspring.com>
To: Mark Hahn <hahn@physics.mcmaster.ca>
Cc: linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Raid0 slowdown from 2.4.19-rc1
Date: Mon, 05 Aug 2002 06:54:15 -0700 [thread overview]
Message-ID: <3D4E8387.3000704@mindspring.com> (raw)
In-Reply-To: Pine.LNX.4.33.0208050040120.15213-100000@coffee.psychology.mcmaster.ca

Sorry, I should have said more about the raid arrays. The drives are
partitioned as follows:
hda1, hdc1 = 4GB
hda2, hdc2 = Extended part - remainder of drive
hda5, hdc5 = 2GB = raid1, md0 /boot
hda6, hdc6 = ~15GB = raid0, md1 /usr
hda7, hdc7 = ~15GB = raid0, md2 /home
hda8, hdc8 = 1.5GB = raid0, /
hda9, hdc9 = remainder = swap
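For context, the md1 array described above would correspond to an /etc/raidtab entry roughly like the following (a sketch for the 2.4-era raidtools; the chunk size shown is an assumption, not something stated in this thread):

```
raiddev /dev/md1
        raid-level              0
        nr-raid-disks           2
        chunk-size              32      # assumed; not stated in the thread
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdc6
        raid-disk               1
```

With raid0 across the two channels, sequential reads interleave between hda and hdc, which is why the array can outrun either single drive.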
I agree that it seems plausible preempt could lower performance, but
again, I haven't seen that here either. In fact, the 2.4.18 kernel I was
using before was also compiled with preempt and showed ~68 MB/sec on md1 and md2.
As for changes I may have made to .config: nothing new. 2.4.19-rc1
compiled with XFS and preempt also worked well. I looked for
differences in the RAID drivers, but there were none in the raid0 driver.
ide-pdc202xx.c contained many changes, but I'm not a kernel hacker and
couldn't spot anything that might have affected this. Odd that the
slowdown shows up even under hdparm. Interestingly, when testing with
bonnie++, the overall sequential output was similar to that of the
higher-performing older kernels. However, creates, deletes, and rewrites
were all down significantly.
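As a quick sanity check on the hdparm figures quoted further down, note that the MB/sec value hdparm prints is just the transfer size divided by the elapsed time; a one-line awk recomputation (the sample line below uses the md1 numbers from this thread) confirms the arithmetic:

```shell
# Recompute MB/sec from an hdparm -t result line.
# The sample line reuses the md1 figures quoted in this thread.
line="Timing buffered disk reads: 64 MB in 1.44 seconds = 44.44 MB/sec"
# Field 5 is the MB count, field 8 is the elapsed seconds.
echo "$line" | awk '{ printf "%.2f MB/sec\n", $5 / $8 }'   # prints "44.44 MB/sec"
```

The same division applied to the single-drive lines (64 MB / 1.66 s ≈ 38.5 MB/sec) shows the reported numbers are internally consistent, so the regression is real rather than a reporting artifact.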
-Walt
Mark Hahn wrote:
>>Final 2.4.19 was patched with XFS and preempt and compiled using
>
>
> it's easy to imagine cases where preempt would produce lower performance,
> though I haven't seen any hard evidence of that.
>
>
>>cutting out preempt patches. First md1 array consists of two partitions
>>from hda & hdc. hdparm for both drives looks fine by themselves:
>
>
> are they the first two partitions in hda/c?
>
>
>>/dev/hda:
>> Timing buffered disk reads: 64 MB in 1.66 seconds = 38.55 MB/sec
>>/dev/hdc:
>> Timing buffered disk reads: 64 MB in 1.65 seconds = 38.79 MB/sec
>
>
> such a disk will normally degrade to around half that performance
> in the tail of the disk.
>
>
>>/dev/md1:
>> Timing buffered disk reads: 64 MB in 1.44 seconds = 44.44 MB/sec
>>
>>In 2.4.18 and up through 2.4.19-rc1 I saw 66-70MB/sec from this array.
>>Starting in rc2 it dropped to the mid 40's. I've also run bonnie++ and
>
>
> nothing else changed?
>