public inbox for linux-kernel@vger.kernel.org
From: Andrew Tridgell <tridge@valinux.com>
To: marcelo@conectiva.com.br
Cc: linux-kernel@vger.kernel.org
Subject: Re: 2.4.8preX VM problems
Date: Tue, 31 Jul 2001 23:09:42 -0700 (PDT)
Message-ID: <20010801060942.ABC16440B@lists.samba.org>
In-Reply-To: <Pine.LNX.4.21.0107312326080.8866-100000@freak.distro.conectiva> (message from Marcelo Tosatti on Tue, 31 Jul 2001 23:26:59 -0300 (BRT))

Marcelo,

I've narrowed it down some more. If I apply the whole zone patch
except for this bit:

+		/* 
+		 * If we are doing zone-specific laundering, 
+		 * avoid touching pages from zones which do 
+		 * not have a free shortage.
+		 */
+		if (zone && !zone_free_shortage(page->zone)) {
+			list_del(page_lru);
+			list_add(page_lru, &inactive_dirty_list);
+			continue;
+		}
+

then the behaviour is much better:

[root@fraud trd]# ~/readfiles /dev/ddisk 
202 MB    202.125 MB/sec
394 MB    192.525 MB/sec
580 MB    185.487 MB/sec
755 MB    175.319 MB/sec
804 MB    41.3387 MB/sec
986 MB    182.5 MB/sec
1115 MB    114.862 MB/sec
1297 MB    182.276 MB/sec
1426 MB    128.983 MB/sec
1603 MB    164.939 MB/sec
1686 MB    82.9556 MB/sec
1866 MB    179.861 MB/sec
1930 MB    63.959 MB/sec

Even given that, the performance isn't exactly stunning. The
"dummy_disk" driver doesn't even do a memset or memcpy so it should
really run at the full memory bandwidth of the machine. We are only
getting a fraction of that (it is a dual PIII/800 server). If I get
time I'll try some profiling.

I also notice that the buffer cache tops out at just under 750M. The
system has 1.2G of completely unused memory, which I really expected
to be consumed by something that is just reading from a never-ending
block device.

For example:

CPU0 states:  0.0% user, 67.1% system,  0.0% nice, 32.3% idle
CPU1 states:  0.0% user, 65.3% system,  0.0% nice, 34.1% idle
Mem:  2059660K av,  842712K used, 1216948K free,       0K shrd,  740816K buff
Swap: 1052216K av,       0K used, 1052216K free                    9496K cached

  PID USER     PRI  NI  SIZE  RSS SHARE LC STAT %CPU %MEM   TIME COMMAND
  615 root      14   0   452  452   328  1 R    99.9  0.0   3:52 readfiles
    5 root       9   0     0    0     0  1 SW   31.3  0.0   1:03 kswapd
    6 root       9   0     0    0     0  0 SW    0.5  0.0   0:04 kreclaimd

I know this is a *long* way from a real world benchmark, but I think
it is perhaps indicative of our buffer cache system getting a bit too
complex again :)

Cheers, Tridge


Thread overview: 20+ messages
2001-08-01  3:05 2.4.8preX VM problems Andrew Tridgell
2001-08-01  2:26 ` Marcelo Tosatti
2001-08-01  4:37   ` Andrew Tridgell
2001-08-01  3:32     ` Marcelo Tosatti
2001-08-01  5:43       ` Andrew Tridgell
2001-08-01  6:09   ` Andrew Tridgell [this message]
2001-08-01  6:10     ` Marcelo Tosatti
2001-08-01  8:13       ` Andrew Tridgell
2001-08-01  8:13         ` Marcelo Tosatti
2001-08-01 10:54           ` Andrew Tridgell
2001-08-01 11:51             ` Mike Black
2001-08-01 18:39               ` Daniel Phillips
2001-08-11 12:06                 ` Pavel Machek
2001-08-16 21:57                   ` Daniel Phillips
2001-08-04  6:50           ` Anton Blanchard
2001-08-04  5:55             ` Marcelo Tosatti
2001-08-04 17:17               ` Anton Blanchard
2001-08-06 22:58                 ` Marcelo Tosatti
2001-08-07 17:18                   ` Anton Blanchard
2001-08-07 21:02                     ` Kernel 2.4.6 & 2.4.7 networking performance: seeing serious delays in TCP layer depending upon packet length Ron Flory
