public inbox for linux-kernel@vger.kernel.org
From: Andrea Arcangeli <andrea@suse.de>
To: rwhron@earthlink.net
Cc: linux-kernel@vger.kernel.org, ltp-list@lists.sourceforge.net,
	Linus Torvalds <torvalds@transmeta.com>
Subject: Re: VM test on 2.4.14pre3aa2 (compared to 2.4.14pre3aa1)
Date: Mon, 29 Oct 2001 04:29:38 +0100	[thread overview]
Message-ID: <20011029042938.M1396@athlon.random> (raw)
References: <20011028120721.A286@earthlink.net> <20011029014715.J1396@athlon.random> <20011029034546.L1396@athlon.random>
In-Reply-To: <20011029034546.L1396@athlon.random>; from andrea@suse.de on Mon, Oct 29, 2001 at 03:45:46AM +0100

On Mon, Oct 29, 2001 at 03:45:46AM +0100, Andrea Arcangeli wrote:
> andrea@dev4-000:~> time ./mtest01 -b $[1024*1024*512] -w     
> PASS ... 536870912 bytes allocated.
> 
> real    1m8.655s
> user    0m9.370s
> sys     0m2.250s
> andrea@dev4-000:~> 

new exciting result on exactly the same test (4G anon mem + 512M swap,
then started the bench):

andrea@dev4-000:~> time ./mtest01 -b $[1024*1024*512] -w
PASS ... 536870912 bytes allocated.

real    0m40.473s
user    0m9.290s
sys     0m3.860s
andrea@dev4-000:~> 

(mainline takes 1m40s, a full minute more for the same thing)

I guess I cheated this time though :), here is the _only_ change I made
to speed up from 68/69 seconds to exactly 40 seconds:

--- 2.4.14pre3aa2/mm/page_io.c.~1~	Tue May  1 19:35:33 2001
+++ 2.4.14pre3aa2/mm/page_io.c	Mon Oct 29 03:58:23 2001
@@ -43,10 +43,12 @@
 	struct inode *swapf = 0;
 	int wait = 0;
 
+#if 0
 	/* Don't allow too many pending pages in flight.. */
 	if ((rw == WRITE) && atomic_read(&nr_async_pages) >
 			pager_daemon.swap_cluster * (1 << page_cluster))
 		wait = 1;
+#endif
 
 	if (rw == READ) {
 		ClearPageUptodate(page);
@@ -75,10 +77,12 @@
 	} else {
 		return 0;
 	}
+#if 0
  	if (!wait) {
  		SetPageDecrAfter(page);
  		atomic_inc(&nr_async_pages);
  	}
+#endif
 
  	/* block_size == PAGE_SIZE/zones_used */
  	brw_page(rw, page, dev, zones, block_size);

I found we were being hurt by not being able to use the full I/O
pipeline for swapout the way we do for regular file writes.

Now it swaps out constantly and steadily at 12.8 Mbyte/sec (still
smooth); write throttling happens at the PG_launder layer, just as for
MAP_SHARED.

hdparm -t on the swap partition reports 27 Mbyte/sec, but that's
unrealistic, at least for writes: a cp flood tops out at 17 Mbyte/sec
on the same SCSI disk we swap out to.  Without the above change it
swaps out at 7.5 Mbyte/sec instead of 12.8 Mbyte/sec.  12.8 Mbyte/sec
seems acceptable, also considering that the pagetable walking etc. is
more costly than a straight generic_file_write + balance_dirty.

I'm aware of the implications of the above: we may empty the
PF_MEMALLOC pool, but that should mostly cause some sched_yields, and
it still runs stable during this test at least. I'd prefer to fix the
affected places to sched_yield rather than run at 7.5 Mbyte/sec.

But my strongest non-cheat argument (and a backwards-compatibility one,
so not too risky for 2.4) is that we already don't use nr_async_pages
during pageout of MAP_SHARED segments to disk, so why should we use it
for pageout of anonymous memory, which doesn't even need to pass
through the fs (in most setups, with a proper swap partition)? If
skipping it is correct for MAP_SHARED, page_io.c shouldn't need it
either. Comments?

Andrea


Thread overview: 7+ messages
2001-10-28 17:07 VM test on 2.4.14pre3aa2 (compared to 2.4.14pre3aa1) rwhron
2001-10-29  0:47 ` Andrea Arcangeli
2001-10-29  2:45   ` Andrea Arcangeli
2001-10-29  3:29     ` Andrea Arcangeli [this message]
2001-10-29  3:57       ` 2.4.14pre3aa3 [was Re: VM test on 2.4.14pre3aa2 (compared to 2.4.14pre3aa1)] Andrea Arcangeli
2001-10-30  0:10         ` rwhron
2001-10-29  4:24       ` VM test on 2.4.14pre3aa2 (compared to 2.4.14pre3aa1) Linus Torvalds
