From: Johannes Weiner <hannes@cmpxchg.org>
To: Mel Gorman <mgorman@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Rik van Riel <riel@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kernel-team@fb.com
Subject: Re: [PATCH] mm: scale kswapd watermarks in proportion to memory
Date: Fri, 19 Feb 2016 15:20:00 -0500
Message-ID: <20160219202000.GB17342@cmpxchg.org>
In-Reply-To: <20160219112543.GJ4763@suse.de>

On Fri, Feb 19, 2016 at 11:25:43AM +0000, Mel Gorman wrote:
> On Thu, Feb 18, 2016 at 11:41:59AM -0500, Johannes Weiner wrote:
> > In machines with 140G of memory and enterprise flash storage, we have
> > seen read and write bursts routinely exceed the kswapd watermarks and
> > cause thundering herds in direct reclaim. Unfortunately, the only way
> > to tune kswapd aggressiveness is through adjusting min_free_kbytes -
> > the system's emergency reserves - which is entirely unrelated to the
> > system's latency requirements. In order to get kswapd to maintain a
> > 250M buffer of free memory, the emergency reserves need to be set to
> > 1G. That is a lot of memory wasted for no good reason.
> >
> > On the other hand, it's reasonable to assume that allocation bursts
> > and overall allocation concurrency scale with memory capacity, so it
> > makes sense to make kswapd aggressiveness a function of that as well.
> >
> > Change the kswapd watermark scale factor from the currently fixed 25%
> > of the tunable emergency reserve to a tunable 0.1% of memory.
> >
> > On a 140G machine, this raises the default watermark steps - the
> > distance between min and low, and low and high - from 16M to 143M.
> >
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Intuitively, the patch makes sense although Rik's comments should be
> addressed.
>
> The caveat will be that there will be workloads that used to fit into
> memory without reclaim that now have kswapd activity. It might manifest
> as continual reclaim with some thrashing, but it should only apply to
> workloads that are exactly sized to fit in memory, which in my experience
> are relatively rare. It should be "obvious" when it occurs, at least.

This is a problem only in theory, I think, because I doubt anybody is
able to keep a working set reliably at a margin of less than 0.1% of
memory. I'd expect few users to even get within single-digit margins
without eventually thrashing anyway.
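
For concreteness, here is a quick userspace sketch (illustrative only,
not the kernel code; it just redoes the changelog arithmetic for the
140G machine):

#include <stdio.h>

/*
 * Illustrative only -- not the kernel implementation. The old step is
 * 25% of min_free_kbytes; the new step is scale_factor / 10000 of
 * memory, with a default factor of 10, i.e. 0.1%.
 */
int main(void)
{
        unsigned long long mem_kb = 140ULL << 20;        /* 140G machine */

        /* Old scheme: a ~250M kswapd buffer needs ~1G of reserves. */
        unsigned long long min_free_kbytes = 1ULL << 20; /* 1G */
        unsigned long long old_step_kb = min_free_kbytes / 4;

        /* New scheme: the default factor of 10 means 0.1% of memory. */
        unsigned long long scale_factor = 10;
        unsigned long long new_step_kb = mem_kb * scale_factor / 10000;

        printf("old step: %lluM\n", old_step_kb >> 10);
        printf("new step: %lluM\n", new_step_kb >> 10);
        return 0;
}

It prints a ~256M step for the old scheme (bought with 1G of reserves)
and a ~143M step for the new default (with no extra reserves at all);
raising the scale factor grows that buffer linearly.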

It certainly becomes a real issue when users tune the scale factor,
but then it will be a deliberate act with known consequences. That's
what I choose to believe, anyway.

> Acked-by: Mel Gorman <mgorman@suse.de>

Thanks!

Thread overview (5 messages):
2016-02-18 16:41 [PATCH] mm: scale kswapd watermarks in proportion to memory Johannes Weiner
2016-02-18 20:15 ` Rik van Riel
2016-02-19 19:41 ` Johannes Weiner
2016-02-19 11:25 ` Mel Gorman
2016-02-19 20:20 ` Johannes Weiner [this message]