From: Michal Hocko <mhocko@kernel.org>
To: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: memcontrol: reclaim when shrinking memory.high below usage
Date: Fri, 11 Mar 2016 09:42:39 +0100
Message-ID: <20160311084238.GE27701@dhcp22.suse.cz>
In-Reply-To: <20160311083440.GI1946@esperanza>

On Fri 11-03-16 11:34:40, Vladimir Davydov wrote:
> On Thu, Mar 10, 2016 at 03:50:13PM -0500, Johannes Weiner wrote:
> > When setting memory.high below usage, nothing happens until the next
> > charge comes along, and then it will only reclaim its own charge and
> > not the now potentially huge excess over the new memory.high. This can
> > cause groups to stay in excess of their memory.high indefinitely.
> >
> > To fix that, when shrinking memory.high, kick off a reclaim cycle that
> > goes after the delta.
>
> I agree that we should reclaim the high excess, but I don't think it's a
> good idea to do it synchronously. Currently, memory.low and memory.high
> knobs can be easily used by a single-threaded load manager implemented
> in userspace, because it doesn't need to care about potential stalls
> caused by writes to these files. After this change it might happen that
> a write to memory.high could take a long time, perhaps seconds, so in order to
> react quickly to changes in other cgroups, a load manager would have to
> spawn a thread for each write to memory.high, which would complicate its
> implementation significantly.
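
For reference, the behaviour described in the quoted changelog boils down to
the cgroup2 memory.high write handler (memory_high_write() in mm/memcontrol.c)
reclaiming the excess in the writer's context. A minimal sketch of that shape,
not the exact patch, assuming the existing page_counter and
try_to_free_mem_cgroup_pages() helpers of that era:

static ssize_t memory_high_write(struct kernfs_open_file *of,
                                 char *buf, size_t nbytes, loff_t off)
{
        struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
        unsigned long nr_pages;
        unsigned long high;
        int err;

        buf = strstrip(buf);
        err = page_counter_memparse(buf, "max", &high);
        if (err)
                return err;

        /* Publish the new limit first. */
        memcg->high = high;

        /*
         * If usage is already above the new limit, reclaim the delta
         * right here, synchronously, instead of waiting for the next
         * charge to notice the excess.
         */
        nr_pages = page_counter_read(&memcg->memory);
        if (nr_pages > high)
                try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
                                             GFP_KERNEL, true);

        return nbytes;
}

Because the reclaim runs in the task writing to the file, the write blocks
for as long as that reclaim takes, which is exactly the stall concern raised
above.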
Is the complication on the managing side really an issue, though? Such a
manager already has to spawn a process/thread to change memory.max.
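
To make the comparison concrete: a single-threaded manager that must not
block can push the potentially slow write into a helper thread, just as it
already would for memory.max. A minimal userspace sketch, assuming a cgroup2
hierarchy mounted at /sys/fs/cgroup; the helper names and the group path are
made up for illustration:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct limit_write {
        char path[256];      /* e.g. /sys/fs/cgroup/<group>/memory.high */
        unsigned long bytes; /* new limit in bytes */
};

/* Runs detached so a write that triggers reclaim cannot stall the
 * manager's main control loop. */
static void *write_limit(void *arg)
{
        struct limit_write *lw = arg;
        FILE *f = fopen(lw->path, "w");

        if (f) {
                fprintf(f, "%lu", lw->bytes); /* may block while the kernel reclaims */
                fclose(f);
        }
        free(lw);
        return NULL;
}

static int set_limit_async(const char *path, unsigned long bytes)
{
        struct limit_write *lw = malloc(sizeof(*lw));
        pthread_t tid;

        if (!lw)
                return -1;
        snprintf(lw->path, sizeof(lw->path), "%s", path);
        lw->bytes = bytes;
        if (pthread_create(&tid, NULL, write_limit, lw)) {
                free(lw);
                return -1;
        }
        return pthread_detach(tid);
}

A caller would then do something like
set_limit_async("/sys/fs/cgroup/job1/memory.high", 512UL << 20), where "job1"
is a hypothetical group. Whether requiring this pattern of every manager is
an acceptable cost is the trade-off under discussion.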
--
Michal Hocko
SUSE Labs