public inbox for linux-kernel@vger.kernel.org
* New readahead - ups and downs
From: Helge Hafting @ 2006-06-27 13:07 UTC (permalink / raw)
  To: Linux Kernel Mailing List

Many have noticed positive sides of the new readahead system.
I see it too: bootup is quicker, and starting a big app like Firefox
is also noticeably faster.

I made my own little io-intensive test that shows a case where
performance drops.

I boot the machine and start "debsums", a Debian utility that
checksums every file managed by the Debian package manager.
As soon as the machine starts swapping, I also start
a process that applies an mm-patch to the kernel tree, and
time it.

This patching took 1m28s with cold cache, without debsums running.
With the 2.6.15 kernel (old readahead), and debsums running, this
took 2m20s to complete, with 360kB in swap at the worst.

With the new readahead in 2.6.17-mm3 I get 6m22s for patching,
and 22MB in swap at the most.  Runs with mm1 and mm2 were
similar, 5-6 minutes patching and 22MB swap.

My patching clearly takes more time this way.  I don't know
if debsums improved, though; it could be as simple as a fairness
issue.  Memory pressure definitely went up.


Helge Hafting


* Re: New readahead - ups and downs
From: Fengguang Wu @ 2006-06-27 16:06 UTC (permalink / raw)
  To: Helge Hafting; +Cc: Linux Kernel Mailing List

Hi Helge,

Thanks for testing it out.
I'll check it when I have time (I'll be out for a week...).

Wu

On Tue, Jun 27, 2006 at 03:07:16PM +0200, Helge Hafting wrote:
> [...]


* Re: New readahead - ups and downs
From: Fengguang Wu @ 2006-07-02 23:55 UTC (permalink / raw)
  To: Helge Hafting; +Cc: Linux Kernel Mailing List

Hi Helge,

On Tue, Jun 27, 2006 at 03:07:16PM +0200, Helge Hafting wrote:
> [...]
> 
> This patching took 1m28s with cold cache, without debsums running.
> With the 2.6.15 kernel (old readahead), and debsums running, this
> took 2m20s to complete, with 360kB in swap at the worst.
> 
> With the new readahead in 2.6.17-mm3 I get 6m22s for patching,
> and 22MB in swap at the most.  Runs with mm1 and mm2 were
> similar, 5-6 minutes patching and 22MB swap.
> 
> My patching clearly takes more time this way.  I don't know
> if debsums improved, though; it could be as simple as a fairness
> issue.  Memory pressure definitely went up.

There are a lot of changes between 2.6.15 and 2.6.17-mmX. Would you
benchmark on the single 2.6.17-mm5 kernel instead? It's easy to switch:

        - select old readahead:
                echo 1 > /proc/sys/vm/readahead_ratio

        - select new readahead:
                echo 50 > /proc/sys/vm/readahead_ratio
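E.g., to compare both settings back to back on the same kernel --
a rough sketch, where "run_test" is a stand-in for whatever
benchmark you time, and drop_caches needs 2.6.16 or later:

        #!/bin/sh
        # benchmark once under each readahead implementation
        for ratio in 1 50; do
                echo $ratio > /proc/sys/vm/readahead_ratio
                echo 3 > /proc/sys/vm/drop_caches  # start with a cold cache
                time run_test
        done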


Thanks,
Wu


* Re: New readahead - ups and downs new test
From: Helge Hafting @ 2006-07-03 13:50 UTC (permalink / raw)
  To: Fengguang Wu, Helge Hafting, Linux Kernel Mailing List

On Mon, Jul 03, 2006 at 07:55:16AM +0800, Fengguang Wu wrote:
> Hi Helge,
> 
> On Tue, Jun 27, 2006 at 03:07:16PM +0200, Helge Hafting wrote:
> > [...]
> 
> There are a lot of changes between 2.6.15 and 2.6.17-mmX. Would you
> benchmark on the single 2.6.17-mm5 kernel instead? It's easy to switch:
> 
>         - select old readahead:
>                 echo 1 > /proc/sys/vm/readahead_ratio
> 
>         - select new readahead:
>                 echo 50 > /proc/sys/vm/readahead_ratio
> 
>
I just tried this with 2.6.17-mm5.  I did it on a faster
machine (Opteron CPU, but still 512MB), so don't compare with
my previous test, which ran on a Pentium IV.
Single CPU in both cases.

Test procedure:
1. Reboot, log in through xdm
2. run vmstat 10 for swap monitoring
3. time debsums -s
4. As soon as the machine touches swap, launch
   time bzcat 2.6.15-mm5.bz2 | patch -p1

In either case, testing starts with 320MB of free memory after boot,
which debsums' caching eats in about a minute, and then swapping starts.
Then I start the patching, which finishes before debsums.
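For anyone repeating this, steps 2-4 can be scripted; a rough
sketch (the kernel tree path is a placeholder for whatever you use,
and it assumes a shell whose "time" builtin works in a subshell):

        #!/bin/sh
        # step 2: log swap usage in the background
        vmstat 10 > vmstat.log &
        # step 3: checksum every packaged file, timed
        ( time debsums -s ) > debsums.log 2>&1 &
        # step 4: wait until swap is first touched, then patch
        while [ "$(awk '/^SwapFree/ {print $2}' /proc/meminfo)" -eq \
                "$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)" ]; do
                sleep 1
        done
        cd linux-2.6.17 && time bzcat ../2.6.15-mm5.bz2 | patch -p1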

Old readahead:
Max swap was 700kB, but it dropped back to 244kB after 10s
and stayed there.  
Patch timing:
real    0m37.662s
user    0m5.002s
sys     0m2.023s
debsums timing:
real    5m50.333s
user    0m21.127s
sys     0m14.506s

New readahead:
Max swap: 244kB.  (On another try it jumped to 816kB and then fell back
to 244kB).
patch timing:
real    0m40.951s
user    0m5.043s
sys     0m2.061s
debsums timing:
real    5m46.555s
user    0m21.195s
sys     0m13.918s

Timing and memory load seem to be almost identical this time;
perhaps this is a load where the type of readahead doesn't
matter.

Helge Hafting

* Re: New readahead - ups and downs new test
From: Fengguang Wu @ 2006-07-03 15:39 UTC (permalink / raw)
  To: Helge Hafting; +Cc: Helge Hafting, Linux Kernel Mailing List

On Mon, Jul 03, 2006 at 03:50:27PM +0200, Helge Hafting wrote:
> [...]
> Timing and memory load seem to be almost identical this time;
> perhaps this is a load where the type of readahead doesn't
> matter.

Thanks. You are right, the readahead logic won't affect the swap cache.
Nor will the readahead size, I guess. But to be sure, you can do one
more test on it with the following command, using the same 2.6.17-mm5:

        blockdev --setra 256 /dev/hda1

Please replace /dev/hda1 with the root device on your system, thanks.
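E.g., saving and restoring the old value -- a sketch:

        OLD=$(blockdev --getra /dev/hda1)  # current window, in 512-byte sectors
        blockdev --setra 256 /dev/hda1     # 256 sectors = 128kB for the test
        # ... re-run the benchmark ...
        blockdev --setra $OLD /dev/hda1    # put the old window back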

Wu


* Re: New readahead - ups and downs new test
From: Helge Hafting @ 2006-07-03 20:36 UTC (permalink / raw)
  To: Fengguang Wu, Helge Hafting, Linux Kernel Mailing List

On Mon, Jul 03, 2006 at 11:39:30PM +0800, Fengguang Wu wrote:
> [...]
> 
> Thanks. You are right, the readahead logic won't affect the swap cache.
> Nor will the readahead size, I guess. But to be sure, you can do one
> more test on it with the following command, using the same 2.6.17-mm5:

Well, I did not expect readahead to directly affect swap, but there was
this very noticeable difference on the Pentium IV machine.
Different io patterns & disk head movement patterns may alter timing
and make the memory pressure situation seem different (and more/less
data coming in as readahead might affect memory pressure too).
360k vs 22M swap is a lot.

I have found an important difference between the two machines:
the one with the big differences with/without new readahead
has /usr and /usr/src on the same physical disk, although on
separate partitions.  That makes for _lots_ of head movement
when bzcat & patch are operating on /usr/src and debsums
is reading /usr.

That machine is not available for testing right now, but I'll
re-do my test with/without new readahead with a kernel source
tree on the same device as /usr.

> 
>         blockdev --setra 256 /dev/hda1
> 
Using blockdev --getra on the two disks that hold /usr and
/usr/src gives me 2048.  So now we get a test with 1/8
of the normal readahead?
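(Readahead here is counted in 512-byte sectors: 2048 x 512 bytes
= 1MB per disk, against 256 x 512 bytes = 128kB -- so yes, 1/8.)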

Results: Swap went up to 500k and was down at the usual 244k
10s later.

patch timing:
real    0m38.265s
user    0m5.010s
sys     0m2.097s

debsums timing:
real    5m48.367s
user    0m21.015s
sys     0m13.950s

Seems --setra made no difference.

I'll copy the kernel tree to /usr, to see if anything interesting
happens when the two processes actually compete for the
same device.  That's what got so different last time, although with
differing kernel versions.

Helge Hafting

* Re: New readahead - ups and downs new test. Vm oddities.
From: Helge Hafting @ 2006-07-03 21:42 UTC (permalink / raw)
  To: Fengguang Wu, Helge Hafting, Linux Kernel Mailing List

I have now re-run my tests (parallel debsums and
bzcat+patch), this time with everything on the same device
so as to get competition for io.

New and old readahead didn't make much difference this time
either, so it seems that my idea about readahead
problems was wrong.  Which is good, as the new readahead
improves so many other things.

Results with new readahead using one disk device:
Swap went up to 32M, dropped to 244k when testing ended.
patch timing:
real    6m8.451s
user    0m5.183s
sys     0m2.897s
debsums timing:
real    7m42.851s
user    0m21.172s
sys     0m13.642s

Results with old readahead, one disk device:
Swap went to 32M, dropped to 244k when testing ended.
timings:
patch:
real    6m18.191s
user    0m5.226s
sys     0m2.724s
debsums:
real    7m49.860s
user    0m21.243s
sys     0m14.268s
A tiny bit slower, but very little.


No surprise that everything is slower when using a single
disk instead of two.

The swap difference from using two disks is striking, though.
Nothing to do with readahead, but
why 32M swap when using one disk and 244k swap when using two?

The amount of data processed is the same either way;
is the VM very timing-sensitive?

Helge Hafting

* Re: New readahead - ups and downs new test. Vm oddities.
From: Fengguang Wu @ 2006-07-04  1:26 UTC (permalink / raw)
  To: Helge Hafting; +Cc: Helge Hafting, Linux Kernel Mailing List

Hi Helge,

On Mon, Jul 03, 2006 at 11:42:17PM +0200, Helge Hafting wrote:
> [...]

Thanks for all the efforts!

> The swap difference from using two disks is striking, though.
> Nothing to do with readahead, but
> why 32M swap when using one disk and 244k swap when using two?
> 
> The amount of data processed is the same either way;
> is the VM very timing-sensitive?

Because reads and writes go through the same elevator queue, I guess.

When there are concurrent reads and writes, the writes will be held
back, giving priority to reads.  So there will be more dirtied pages
taking up your memory during the test.
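One way to confirm that would be to watch the dirty/writeback
counters during the single-disk run -- a sketch:

        # sample the dirty and writeback page counters every 10s
        while sleep 10; do
                grep -E '^(Dirty|Writeback):' /proc/meminfo
        done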

Thanks,
Wu
