public inbox for linux-kernel@vger.kernel.org
From: Andrea Arcangeli <andrea@suse.de>
To: "Martin J. Bligh" <mbligh@aracnet.com>
Cc: Andrew Morton <akpm@osdl.org>,
	hugh@veritas.com, linux-kernel@vger.kernel.org
Subject: Re: Benchmarking objrmap under memory pressure
Date: Wed, 14 Apr 2004 19:11:26 +0200	[thread overview]
Message-ID: <20040414171126.GT2150@dualathlon.random> (raw)
In-Reply-To: <25670000.1081960943@[10.10.2.4]>

On Wed, Apr 14, 2004 at 09:42:24AM -0700, Martin J. Bligh wrote:
> > As expected the 6 second difference was nothing compared to the noise,
> > though I'd be curious to see an average number.
> 
> Yeah, I don't think either is worse or better - I really want a more stable
> test though, if I can find one.

a test involving fewer tasks, one that cannot take any advantage of the
cache and the aging information, should be more stable, though I don't
have obvious suggestions.

> Yeah, that's odd.

I just wonder if the VM needs a bit of fixing besides the rmap removal,
or if it was just pure coincidence. If it happens again in the -aa pass
too then it may not be a coincidence.

> Because it's frigging hard to make a 16GB machine swap ;-) 'twas just my
> desktop.

mem= should fix the problem for the benchmarking ;)
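For the record, a minimal sketch of how mem= would be passed for such a
benchmark (the bootloader stanza, device names and the 512M figure are
all illustrative, not taken from Martin's setup):

```shell
# Append mem= to the kernel command line so the kernel only uses the
# first N megs of RAM, making a big box swap under a modest workload.
# Hypothetical lilo.conf stanza:
#
#   image=/boot/vmlinuz
#       label=swaptest
#       root=/dev/sda1
#       append="mem=512M"
```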

swapping in general is important for a 16GB 32-way too (that's the kind
of machine 2.4 mainline cannot swap efficiently, and it's why I had to
add objrmap in 2.4-aa too).

> Yeah, it's hard to do mem= on NUMA, but I have a patch from someone 
> somewhere. Those machines don't tend to swap heavily anyway, but I suppose
> page reclaim in general will happen.

I see what you mean about mem= being troublesome, I forgot you're numa=y.
You can either disable NUMA temporarily, or use the more complex syntax
that you should find in arch/i386/kernel/setup.c. That should work w/o
kernel changes and w/o patches, since it simply trims the e820 map and
everything else in NUMA is built on top of that map; you just have to
leave a hundred megs from the start of every node, and hopefully it'll
boot ;).
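Assuming a hypothetical 4-node box with node memory starting at 0, 4G,
8G and 12G (the real ranges come from the BIOS, so these addresses are
only an example), the e820-trimming command line would look roughly like:

```shell
# mem=exactmap discards the BIOS-provided e820 map; each following
# mem=SIZE@START entry re-adds one usable RAM region (K/M/G suffixes
# are accepted).  Keep the low 640K, plus ~100M at the start of each
# node so every node still has some memory and the kernel can boot:
#
#   mem=exactmap mem=640K@0 mem=100M@1M mem=100M@4G mem=100M@8G mem=100M@12G
```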

Thread overview: 19+ messages
2004-04-13  7:39 Benchmarking objrmap under memory pressure Martin J. Bligh
2004-04-13  7:51 ` Andrew Morton
2004-04-13  7:55   ` Martin J. Bligh
2004-04-13 21:59   ` Andrea Arcangeli
2004-04-14  0:38   ` Martin J. Bligh
2004-04-14 16:27     ` Andrea Arcangeli
2004-04-14 16:42       ` Martin J. Bligh
2004-04-14 17:11         ` Andrea Arcangeli [this message]
2004-04-14 17:48       ` Hugh Dickins
2004-04-14 23:39         ` Andrea Arcangeli
2004-04-15 10:21           ` Hugh Dickins
2004-04-15 13:22             ` Andrea Arcangeli
2004-04-15 13:45               ` Hugh Dickins
2004-04-15 14:08                 ` Andrea Arcangeli
2004-04-15 16:23             ` Bill Davidsen
2004-04-15 16:48               ` Hugh Dickins
2004-04-22 19:54                 ` Bill Davidsen
2004-04-22 21:26                   ` Hugh Dickins
2004-04-14 18:11   ` Bill Davidsen
