From: "Adam Kropelin" <akropel1@rochester.rr.com>
To: "Andrea Arcangeli" <andrea@suse.de>
Cc: "Ken Brownfield" <brownfld@irridia.com>,
"Rik van Riel" <riel@conectiva.com.br>,
"Dieter Nützel" <Dieter.Nuetzel@hamburg.de>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH *] rmap VM 11c (RMAP IS A WINNER!)
Date: Sat, 19 Jan 2002 17:15:09 -0500
Message-ID: <048201c1a136$c292c8b0$02c8a8c0@kroptech.com>
In-Reply-To: <Pine.LNX.4.33L.0201171721230.32617-100000@imladris.surriel.com> <012d01c19fb7$ba1cb680$02c8a8c0@kroptech.com> <20020118182837.D31076@asooo.flowerfire.com> <02f801c1a0a7$5643a1a0$02c8a8c0@kroptech.com> <20020119185016.F21279@athlon.random> <03c501c1a118$9d0dd120$02c8a8c0@kroptech.com> <20020119212134.G21279@athlon.random>
(Andrea, the previous version of this mail wasn't supposed to go out yet. I
fat-fingered and sent it before I was done. This is the full version.)
Andrea Arcangeli:
> On Sat, Jan 19, 2002 at 01:39:22PM -0500, Adam Kropelin wrote:
> > Andrea Arcangeli:
> > > With -aa something sane along the above lines is:
> > >
> > > /bin/echo "10 2000 0 0 500 3000 30 5 0" > /proc/sys/vm/bdflush
> >
> > Unfortunately, those adjustments on top of 2.4.18-pre2aa2 set a new
> > record for worst performance: 7:19.
>
> Then please try decreasing the nfract variable again; the above set it
> to 2000, and if you have a slow hard disk that may be too much, so you
> can try setting it to 500 again.
Yes, the hard disk is definitely slow: it's a hardware RAID5 partition with
older drives, so writes are expensive.
I tried various nfract settings:
/bin/echo "10 300 0 0 500 3000 30 5 0" > /proc/sys/vm/bdflush
7:33
/bin/echo "10 500 0 0 500 3000 30 5 0" > /proc/sys/vm/bdflush
6:00
/bin/echo "10 800 0 0 500 3000 30 5 0" > /proc/sys/vm/bdflush
7:17
nfract=500 seems to be the best setting and gets much closer to the performance
of rmap and -ac. Writeout is still very bursty compared to the other kernels,
but that may not matter in practice; I don't know.
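For what it's worth, the sweep above is easy to script. A minimal sketch (the
actual write to /proc/sys/vm/bdflush is left commented out since it needs root
on a 2.4 kernel, and `your_benchmark_here` is a placeholder, not a real
command):

```shell
#!/bin/sh
# Sweep the nfract value (second bdflush field) over the settings
# tried above, keeping the other eight fields fixed.
for nfract in 300 500 800; do
    settings="10 $nfract 0 0 500 3000 30 5 0"
    echo "testing: $settings"
    # On a real run (as root):
    #   echo "$settings" > /proc/sys/vm/bdflush
    #   time your_benchmark_here
done
```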
> I'd also give a try with the below settings:
>
> /bin/echo "10 500 0 0 500 3000 80 8 0" > /proc/sys/vm/bdflush
7:08
<snip>
> Also just in case, I'd suggest to try to repeat each benchmark three
> times, so we know we are not bitten by random variations in the numbers.
I've been doing a variation on that theme already. The numbers I've been
reporting are best of 2 runs. I have never seen the 2 runs differ by more than
+/- 10 seconds.
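A small sketch of that repeat-and-compare loop (`sleep 1` stands in for the
real workload; this is only an illustration, not the harness actually used):

```shell
#!/bin/sh
# Run the workload three times and print each elapsed wall-clock time,
# so run-to-run variation is easy to eyeball.
for run in 1 2 3; do
    start=$(date +%s)
    sleep 1                       # replace with the real benchmark
    end=$(date +%s)
    echo "run $run: $((end - start))s"
done
```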
--Adam
Thread overview: 11+ messages
2002-01-17 19:22 [PATCH *] rmap VM 11c Rik van Riel
2002-01-17 23:59 ` Bill Davidsen
2002-01-18 0:05 ` Rik van Riel
2002-01-18 0:33 ` Adam Kropelin
2002-01-18 0:56 ` Rik van Riel
2002-01-18 10:06 ` Roy Sigurd Karlsbakk
[not found] ` <20020118182837.D31076@asooo.flowerfire.com>
2002-01-19 5:08 ` [PATCH *] rmap VM 11c (RMAP IS A WINNER!) Adam Kropelin
2002-01-19 17:50 ` Andrea Arcangeli
2002-01-19 18:39 ` Adam Kropelin
2002-01-19 20:21 ` Andrea Arcangeli
2002-01-19 22:15 ` Adam Kropelin [this message]