From: Anton Blanchard <anton@samba.org>
To: mel@csn.ul.ie, benh@kernel.crashing.org, cl@linux-foundation.org
Cc: linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH] powerpc: Set a smaller value for RECLAIM_DISTANCE to enable zone reclaim
Date: Fri, 19 Feb 2010 11:07:30 +1100
Message-ID: <20100219000730.GD31681@kryten>
In-Reply-To: <20100218222923.GC31681@kryten>


Hi,

> The patch below sets a smaller value for RECLAIM_DISTANCE and thus enables
> zone reclaim.
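
For reference, the mechanism this toggles is roughly the following (heavily
simplified from build_zonelists() in mm/page_alloc.c, so treat it as a
sketch rather than the exact code):

  /*
   * At boot, zone reclaim is enabled if any remote node is further
   * away than RECLAIM_DISTANCE; lowering the powerpc override below
   * our reported inter-node distance is what switches it on here.
   */
  int distance = node_distance(local_node, node);
  if (distance > RECLAIM_DISTANCE)
          zone_reclaim_mode = 1;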

FYI even with this enabled I could trip it up pretty easily with a
multi-threaded application. I tried running stream across all threads in
node 0. The machine looks like:

node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 free: 30254 MB
node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 1 free: 31832 MB
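
The per-node cpu and free lines above (and below) are in the format that
numactl --hardware prints; something like this is enough to watch them,
and only the relevant lines are quoted in this mail:

# numactl --hardware | grep -E 'cpus|free'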

Now create some clean pagecache on node 0:

# taskset -c 0 dd if=/dev/zero of=/tmp/bigfile bs=1G count=16
# sync

node 0 free: 12880 MB
node 1 free: 31830 MB

I built stream to use about 25GB of memory. I then ran stream across all
threads in node 0:

# OMP_NUM_THREADS=16 taskset -c 0-15 ./stream
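
(For scale: stream keeps three static arrays of doubles, so ~25GB works out
to roughly a billion elements per array: 3 arrays * 8 bytes * ~1.04e9
elements ~= 25GB. With a stock stream.c that is a build along the lines of
the following, where the array-size macro name and value are only
indicative and depend on the stream.c version:

# gcc -O3 -fopenmp -DN=1040000000 stream.c -o stream
)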

We exhaust all memory on node 0, and start using memory on node 1:

node 0 free: 0 MB
node 1 free: 20795 MB

i.e. about 10GB of node 1 (free dropped from 31830 MB to 20795 MB). Now if
we run the same test with one thread:

# OMP_NUM_THREADS=1 taskset -c 0 ./stream

things are much better:

node 0 free: 11 MB
node 1 free: 31552 MB

Interestingly enough it takes two goes to get completely onto node 0, even
with one thread. The second run looks like:

node 0 free: 14 MB
node 1 free: 31811 MB

I had a quick look at the page allocation logic and I think I understand why
we would have issues with multiple threads all trying to allocate at once.

- The ZONE_RECLAIM_LOCKED flag allows only one thread into zone reclaim at
  a time, and whatever thread is in zone reclaim probably only frees a small
  amount of memory. Certainly not enough to satisfy all 16 threads.

- We seem to end up racing between zone_watermark_ok, zone_reclaim and
  buffered_rmqueue. Since every thread is in this path at once, the memory
  one thread reclaims may be stolen by another thread (rough sketch of the
  path below).
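
To make that second point concrete, the per-zone fast path looks roughly
like this (heavily simplified from get_page_from_freelist() in
mm/page_alloc.c; labels and error handling are elided, so read it as a
sketch rather than the real code):

  if (!zone_watermark_ok(zone, order, mark, classzone_idx, alloc_flags)) {
          /*
           * Only one thread gets past the ZONE_RECLAIM_LOCKED test
           * inside zone_reclaim(); everyone else returns without
           * scanning and just re-checks the watermark.
           */
          ret = zone_reclaim(zone, gfp_mask, order);
          if (!zone_watermark_ok(zone, order, mark, classzone_idx,
                                 alloc_flags))
                  goto next_zone;    /* spill to the next (remote) zone */
  }
  /*
   * Nothing reserves the pages zone_reclaim() just freed for the
   * thread that freed them: whichever thread reaches
   * buffered_rmqueue() first grabs them, so the reclaiming thread can
   * lose the race and fall through to node 1 anyway.
   */
  page = buffered_rmqueue(preferred_zone, zone, order, gfp_mask,
                          migratetype);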

I'm not sure if there is an easy way to fix this without penalising other
workloads though.

Anton

