public inbox for linux-kernel@vger.kernel.org
From: Steven Pratt <slpratt@austin.ibm.com>
To: Andrew Morton <akpm@osdl.org>
Cc: hannal@us.ibm.com, lse-tech@lists.sourceforge.net,
	linux-kernel@vger.kernel.org
Subject: Re: [Lse-tech] Re: Minutes from 10/1 LSE Call
Date: Fri, 03 Oct 2003 14:33:01 -0500	[thread overview]
Message-ID: <3F7DCEED.9080801@austin.ibm.com> (raw)
In-Reply-To: <20031002123618.7947d232.akpm@osdl.org>



Andrew Morton wrote:

>Steven Pratt <slpratt@austin.ibm.com> wrote:
>
>> Sure, but why do I only see this in the mm tree, and not the mainline
>> tree?
>
>Please send a full description of how to reproduce it and I'll take a look.
>
Get the latest rawread from 
http://www-124.ibm.com/developerworks/opensource/linuxperf/rawread/rawread.html

mkfs the devices and mount them on /mnt/mntN, where N is an increasing 
index.  Create a file 'foo' of size 1GB (for this example) in each 
filesystem.  Unmount and remount the partitions/devices to flush the 
cache.  Filesystems are also unmounted and remounted between each test run.
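Spelled out for two devices, the setup is roughly the following (the device
names and filesystem type are placeholders; substitute your own):

```shell
# Placeholder devices /dev/sdb1 and /dev/sdb2; pick your own.
mkfs -t ext2 /dev/sdb1 && mkdir -p /mnt/mnt1 && mount /dev/sdb1 /mnt/mnt1
mkfs -t ext2 /dev/sdb2 && mkdir -p /mnt/mnt2 && mount /dev/sdb2 /mnt/mnt2
# Create the 1GB test file 'foo' in each filesystem:
dd if=/dev/zero of=/mnt/mnt1/foo bs=1M count=1024
dd if=/dev/zero of=/mnt/mnt2/foo bs=1M count=1024
# Unmount/remount to drop cached pages before (and between) runs:
umount /mnt/mnt1 && mount /dev/sdb1 /mnt/mnt1
umount /mnt/mnt2 && mount /dev/sdb2 /mnt/mnt2
```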

The following rawread commands will run the tests for block sizes 
ranging from 1k to 512k.  The "-d 1" parameter assumes that you mounted 
starting at /mnt/mnt1, and "-m 2 -p 16" says to run 8 threads on each 
of 2 devices, /mnt/mnt1 and /mnt/mnt2.

rawread -m 2 -p 16 -d 6 -n 20480 -f -c -t 0 -s 1024
rawread -m 2 -p 16 -d 6 -n 10240 -f -c -t 0 -s 2048
rawread -m 2 -p 16 -d 6 -n 5120 -f -c -t 0 -s 4096
rawread -m 2 -p 16 -d 6 -n 2560 -f -c -t 0 -s 8192
rawread -m 2 -p 16 -d 1 -n 1280 -f -c -t 0 -s 16384
rawread -m 2 -p 16 -d 1 -n 640 -f -c -t 0 -s 32768
rawread -m 2 -p 16 -d 1 -n 320 -f -c -t 0 -s 65536
rawread -m 2 -p 16 -d 1 -n 160 -f -c -t 0 -s 131072
rawread -m 2 -p 16 -d 1 -n 80 -f -c -t 0 -s 262144
rawread -m 2 -p 16 -d 1 -n 40 -f -c -t 0 -s 524288
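The pattern in the run list is that the block count (-n) is halved each time
the block size (-s) doubles, so n * s stays constant across runs.  A sketch
that generates the list (holding -d at 1, though the actual runs above vary it):

```shell
# Generate the rawread invocations: -n halves as -s doubles,
# keeping the total amount read per run constant (20480 * 1024).
n=20480
for s in 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288; do
    echo "rawread -m 2 -p 16 -d 1 -n $n -f -c -t 0 -s $s"
    n=$((n / 2))
done
```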


Two devices is the smallest number with which I have been able to 
reproduce this problem; with only 1 device I did not see it.  My 
original tests were done with 20 devices.  One thing of interest is 
that with only 2 devices the point at which CPU starts to increase 
again is at 128k instead of at 32k, which I saw with 20 devices.  This 
would support your theory that this is caused by cache misses with 
more/larger buffers.  I'm still not sure this accounts for all of the 
extra CPU usage, but I am less worried about it.

But as long as I have your attention, there is one other thing about 
these runs which bothers me: the mm tree is doing horribly on 1k and 2k 
block sizes.  It looks like readahead is not functioning properly for 
these request sizes.

Here is a comparison for 2 devices between test6 and test6mm1.  You can 
see that the mm1 tree does great at larger block sizes, but poorly at 
small ones.


Results:seqread-_vs_.seqread-

                                          tolerance = 0.00 + 3.00% of A
                test6         test6-mm1
 Blocksize      KBs/sec      KBs/sec    %diff         diff    tolerance
---------- ------------ ------------ -------- ------------ ------------
      1024        44083        22641   -48.64    -21442.00      1322.49  *
      2048        45276        26371   -41.76    -18905.00      1358.28  *
      4096        44024        45260     2.81      1236.00      1320.72
      8192        44519        50073    12.48      5554.00      1335.57  *
     16384        46869        51528     9.94      4659.00      1406.07  *
     32768        47900        52231     9.04      4331.00      1437.00  *
     65536        42803        52183    21.91      9380.00      1284.09  *
    131072        36525        49724    36.14     13199.00      1095.75  *
    262144        34628        46192    33.39     11564.00      1038.84  *
    524288        28997        48005    65.55     19008.00       869.91  *


Results:seqread-_vs_.seqread-
                                          tolerance = 0.50 + 3.00% of A
               test6         test6-mm1
 Blocksize         %CPU         %CPU    %diff         diff    tolerance
---------- ------------ ------------ -------- ------------ ------------
      1024        27.87        11.72   -57.95       -16.15         1.34  *
      2048        13.77         8.84   -35.80        -4.93         0.91  *
      4096         9.00         9.99    11.00         0.99         0.77  *
      8192         8.07         8.31     2.97         0.24         0.74
     16384         5.70         6.63    16.32         0.93         0.67  *
     32768         4.93         5.59    13.39         0.66         0.65  *
     65536         3.76         4.70    25.00         0.94         0.61  *
    131072         3.25         4.53    39.38         1.28         0.60  *
    262144         3.23         6.15    90.40         2.92         0.60  *
    524288         2.97         8.19   175.76         5.22         0.59  *
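For reference, the %diff, diff, and tolerance columns in these tables follow
directly from the two measured values: %diff = (B - A) / A * 100 and
tolerance = 3% of A (plus the stated constant).  Recomputing the 1024-byte
row of the first table:

```shell
# Recompute the 1024-byte KB/sec row: A = test6, B = test6-mm1.
# Prints: %diff, diff, tolerance (= 0.00 + 3.00% of A).
awk 'BEGIN {
    A = 44083; B = 22641
    printf "%.2f %.2f %.2f\n", (B - A) / A * 100, B - A, 0.03 * A
}'
```

This reproduces the -48.64 / -21442.00 / 1322.49 figures above.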

Steve


