From: Vladimir Davydov <vdavydov@virtuozzo.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Rik van Riel <riel@redhat.com>,
lsf-pc@lists.linuxfoundation.org,
Linux Memory Management List <linux-mm@kvack.org>,
Linux kernel Mailing List <linux-kernel@vger.kernel.org>,
KVM list <kvm@vger.kernel.org>
Subject: Re: [LSF/MM TOPIC] VM containers
Date: Wed, 27 Jan 2016 18:48:31 +0300 [thread overview]
Message-ID: <20160127154831.GF9623@esperanza> (raw)
In-Reply-To: <20160122171121.GA18062@cmpxchg.org>
On Fri, Jan 22, 2016 at 12:11:21PM -0500, Johannes Weiner wrote:
> Hi,
>
> On Fri, Jan 22, 2016 at 10:56:15AM -0500, Rik van Riel wrote:
> > I am trying to gauge interest in discussing VM containers at the LSF/MM
> > summit this year. Projects like ClearLinux, Qubes, and others are all
> > trying to use virtual machines as better isolated containers.
> >
> > That changes some of the goals the memory management subsystem has,
> > from "use all the resources effectively" to "use as few resources as
> > necessary, in case the host needs the memory for something else".
>
> I would be very interested in discussing this topic, because I think
> the issue is more generic than these VM applications. We are facing
> the same issues with regular containers, where aggressive caching is
> counteracting the desire to cut down workloads to their bare minimum
> in order to pack them as tightly as possible.
>
> With per-cgroup LRUs and thrash detection, we have infrastructure in
By thrash detection, do you mean vmpressure?
> place that could allow us to accomplish this. Right now we only enter
> reclaim once memory runs out, but we could add an allocation mode that
> would prefer to always reclaim from the local LRU before increasing
> the memory footprint, and only expand once we detect thrashing in the
> page cache. That would keep the workloads neatly trimmed at all times.
I don't get it. Do you mean a sort of special GFP flag that would force
the caller to reclaim before actually charging/allocating? Or is it
supposed to be automatic, based on how the memcg is behaving? If the
latter, I suppose it could already be done by a userspace daemon
adjusting memory.high as needed, although it's unclear how to do that
optimally.
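For illustration, here is a toy sketch of the policy such a daemon might apply. The cgroup v2 files memory.high and memory.current are real interfaces, but the shrink/grow heuristic, thresholds, and function name below are all made up for the example; the pressure signal is assumed to be a vmpressure-style "percent of time stalled" number, and reading/writing the cgroup files is left out.

```python
# Hypothetical policy for a userspace daemon that trims memory.high.
# The cgroup v2 knobs (memory.high, memory.current) are real; the
# policy itself is purely illustrative, not how any real daemon works.

def next_memory_high(current_usage, current_high, pressure_pct,
                     pressure_threshold=5.0,
                     shrink_step=0.95, grow_step=1.10):
    """Return a new memory.high value in bytes.

    pressure_pct is a vmpressure-style reclaim-pressure signal
    (percent). Below the threshold the workload is comfortable, so
    squeeze the limit toward actual usage; above it the workload is
    starting to thrash, so back the limit off again.
    """
    if pressure_pct < pressure_threshold:
        # Comfortable: tighten the limit toward current usage.
        return int(min(current_high, current_usage) * shrink_step)
    # Thrashing: give memory back.
    return int(current_high * grow_step)
```

The hard part, as noted above, is choosing the thresholds and step sizes so the daemon converges on the working set instead of oscillating.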
>
> For virtualized environments, the thrashing information would be
> communicated slightly differently to the page allocator and/or the
> host, but otherwise the fundamental principles should be the same.
>
> We'd have to figure out how to balance the aggressiveness there and
> how to describe this to the user, as I can imagine that users would
> want to tune this based on a tolerance for the degree of thrashing: if
> pages are used every M ms, keep them cached; if pages are used every N
> ms, freeing up the memory and refetching them from disk is better etc.
Sounds reasonable. What about adding a parameter to memcg that would
define the working set (ws) access time? It would act just like
memory.low, but in terms of lruvec age instead of lruvec size: we keep
track of lruvec ages and scan those lruvecs whose age exceeds the ws
access time before the others. That would protect workloads that access
their working set regularly, though not very frequently, from streaming
workloads, which can generate a lot of useless pressure.
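In toy form, the scan ordering this would imply might look as follows. The struct fields and function name are invented for the example (nothing like this exists in the kernel); the point is only that lruvecs older than their ws access time become reclaim candidates first, oldest first, while younger ones stay protected.

```python
# Toy model of age-based lruvec scan ordering; names are hypothetical.
from dataclasses import dataclass

@dataclass
class Lruvec:
    name: str
    age_ms: int             # time since this lruvec was last accessed
    ws_access_time_ms: int  # per-memcg tunable, analogous to memory.low

def scan_order(lruvecs):
    """Scan lruvecs whose age exceeds their ws access time first
    (oldest first); lruvecs still within their ws access time are
    protected and only scanned last."""
    expired = [v for v in lruvecs if v.age_ms > v.ws_access_time_ms]
    protected = [v for v in lruvecs if v.age_ms <= v.ws_access_time_ms]
    expired.sort(key=lambda v: v.age_ms, reverse=True)
    protected.sort(key=lambda v: v.age_ms, reverse=True)
    return expired + protected
```

A streaming workload's lruvec ages out quickly and gets reclaimed first, while a workload that touches its working set within the configured interval keeps its pages.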
Thanks,
Vladimir
>
> And we don't have thrash detection in secondary slab caches (yet).
>
> > Are people interested in discussing this at LSF/MM, or is it better
> > saved for a different forum?
>
> If more people are interested, I think that could be a great topic.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 10+ messages
2016-01-22 15:56 [LSF/MM TOPIC] VM containers Rik van Riel
2016-01-22 16:05 ` [Lsf-pc] " James Bottomley
2016-01-22 17:11 ` Johannes Weiner
2016-01-27 15:48 ` Vladimir Davydov [this message]
2016-01-27 18:36 ` Johannes Weiner
2016-01-28 17:12 ` Vladimir Davydov
2016-01-23 23:41 ` Nakajima, Jun
2016-01-24 17:06 ` One Thousand Gnomes
2016-01-25 17:25 ` Rik van Riel
2016-01-28 15:18 ` Aneesh Kumar K.V