From: Rik van Riel <riel@redhat.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>,
Mel Gorman <mel@csn.ul.ie>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Nick Piggin <npiggin@suse.de>,
Chris Mason <chris.mason@oracle.com>,
Jens Axboe <jens.axboe@oracle.com>,
linux-kernel@vger.kernel.org, gregkh@novell.com,
Corrado Zoccolo <czoccolo@gmail.com>
Subject: Re: [RFC PATCH 0/3] Avoid the use of congestion_wait under zone pressure
Date: Tue, 20 Apr 2010 10:40:04 -0400 [thread overview]
Message-ID: <4BCDBCC4.60401@redhat.com> (raw)
In-Reply-To: <20100419214412.GB5336@cmpxchg.org>
On 04/19/2010 05:44 PM, Johannes Weiner wrote:
> What do people think?

It has potential advantages and disadvantages.

On smaller desktop systems, it is entirely possible that
the working set is close to half of the page cache. Your
patch reduces the amount of memory that is protected on
the active file list, so it may cause part of the working
set to get evicted.

On the other hand, having a smaller active list frees up
more memory for sequential (streaming, use-once) disk IO.
This can be useful on systems with large IO subsystems
and small memory (like Christian's s390 virtual machine,
with 256MB RAM and 4 disks!).

I wonder if we could not find some automatic way to
balance between these two situations, for example by
excluding currently-in-flight pages from the calculations.

In Christian's case, he could have 160MB of cache (buffer
+ page cache), of which 70MB is in flight to disk at a
time. It may be worthwhile to exclude that 70MB from the
total and aim for 45MB active file and 45MB inactive file
pages on his system. That way IO does not get starved.

On a desktop system, which needs the working set protected
and does less IO, we will automatically protect more of
the working set - since there is no IO to starve.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .