public inbox for linux-kernel@vger.kernel.org
* RE: Degraded I/O performance, since 2.5.41
@ 2002-10-11 13:49 Jeffery, David
  2002-10-12 17:47 ` Mike Anderson
  0 siblings, 1 reply; 5+ messages in thread
From: Jeffery, David @ 2002-10-11 13:49 UTC (permalink / raw)
  To: 'Dave Hansen'; +Cc: linux-scsi, lkml




> From: Dave Hansen 
> James Bottomley wrote:
> > OK, this patch should fix it.  Do your performance numbers
> > for ips improve again with this?
> 
> Yes, they are better, but still about 10% below what I was seeing 
> before.  Thank you for getting this out so quickly.  I can do 
> reasonable work with this.
> 
> Are the ServeRAID people aware of this situation?  Do they know that
> their performance could be in the toilet if they don't implement
> queue resizing in the driver?
> -- 
> Dave Hansen
> haveblue@us.ibm.com
> 

Dave,

Thank you for adding me to the CC list.  I didn't realize how the
queueing work would affect SCSI drivers.

I'm using an older 2.5 kernel, so I hadn't seen the performance drop.
I'll update my kernel and get to work on it next week when I have
time.

David Jeffery

* Degraded I/O performance, since 2.5.41
@ 2002-10-10  1:09 Dave Hansen
  0 siblings, 0 replies; 5+ messages in thread
From: Dave Hansen @ 2002-10-10  1:09 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, Rik van Riel

When I run a certain large webserver benchmark, I prefer to warm the 
pagecache up with the file set first, to cheat a little :)  I grep 
through 20 different 500MB file sets in parallel to do this.  It is a 
_lot_ slower in the BK snapshot than in plain 2.5.41.
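
(For the curious, the warm-up is roughly the loop below; the
/data/setNN paths are stand-ins for my actual file sets:)

  # kick off one recursive grep per 500MB file set, all in parallel,
  # then wait for every one of them to finish
  for d in /data/set*; do
      grep -r some-pattern "$d" > /dev/null &
  done
  wait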

And, no, these numbers aren't inflated; I have a lot of fast disks.  I
_can_ do 50MB/sec :)

A little snippet from vmstat (I cut out the boring columns):
good kernel: 2.5.41: vmstat 4
Cached        bi    bo   in    cs  us sy id
389280     53284     7 1625  3235 12 88  0
600580     53489    19 1599  3264 11 89  0
813428     53891     0 1587  3256 12 88  0
1027260    54093     0 1609  3239 12 88  0
1241448    54183     0 1611  3251 11 89  0
1454036    53790     0 1618  3267 12 88  0
doing the entire 10GB grep takes 192 seconds (~52MB/sec, which matches
the bi column).
a dd produces: ~48000 bi/sec
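
(The dd number comes from a plain sequential read, something like the
line below, with vmstat running alongside; /dev/sdb here is a stand-in
for one of the disks:)

  # big sequential read; watch the bi column in vmstat while it runs
  dd if=/dev/sdb of=/dev/null bs=64k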

exact same grep operation on kernel: 2.5.41+yesterday's bk: vmstat 4
Cached       bi    bo   in    cs us sy id
4855948    9697     1 1408   846 20 80  0
4890464    8745     0 1398   800 18 82  0
4922392    8077    55 1364   676 21 79  0
4959164    9296     1 1399   798 18 82  0
4995936    9315     0 1407   830 19 81  0
5027208    7931     0 1351   638 22 78  0
5066256    9855     9 1416   856 19 81  0
I was too impatient to wait on the greps to complete.
a dd produces: ~37800 bi/sec

So, bi goes from ~54,000 blocks/sec in 2.5.41 to ~8,700 in yesterday's
snapshot; at 1KB per block, that is roughly 50MB/sec down to about
8MB/sec.

Although vmstat shows 0% idle time, the profilers show lots of idle
time, 98%!  I tried oprofile and readprofile.  Is the vmstat from
procps 2.0.9 still broken?  I'm using idle=poll if that makes any
difference.
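
(The readprofile run was along these lines; you need to boot with
profile=2 for it to collect anything, and the System.map has to match
the running kernel:)

  # dump the top profile hits, biggest tick counts first
  readprofile -m /boot/System.map | sort -rn | head -20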

-- 
Dave Hansen
haveblue@us.ibm.com



Thread overview: 5+ messages
     [not found] <Pine.LNX.4.44.0210092015170.9790-100000@home.transmeta.com>
     [not found] ` <3DA61041.9080808@us.ibm.com>
     [not found]   ` <20021011004227.GA27073@redhat.com>
2002-10-11  0:53     ` Degraded I/O performance, since 2.5.41 Dave Hansen
2002-10-11  1:45       ` Dave Hansen
2002-10-11 13:49 Jeffery, David
2002-10-12 17:47 ` Mike Anderson
  -- strict thread matches above, loose matches on Subject: below --
2002-10-10  1:09 Dave Hansen
