linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Fam Zheng <zhengfeiran@bytedance.com>
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, tj@kernel.org,
	hannes@cmpxchg.org, lizefan@huawei.com, vdavydov.dev@gmail.com,
	duanxiongchun@bytedance.com, 张永肃 <zhangyongsu@bytedance.com>
Subject: Re: memory cgroup pagecache and inode problem
Date: Fri, 4 Jan 2019 10:04:55 +0100	[thread overview]
Message-ID: <20190104090441.GI31793@dhcp22.suse.cz> (raw)
In-Reply-To: <15614FDC-198E-449B-BFAF-B00D6EF61155@bytedance.com>

On Fri 04-01-19 12:43:40, Fam Zheng wrote:
> Hi,
> 
> On our server, which frequently spawns containers, we find that if a
> process used pagecache in a memory cgroup, then after the process
> exits and the memory cgroup is offlined, the pagecache is still
> charged to this memory cgroup, so the cgroup will not be destroyed
> until the pagecache is dropped. This brings huge memory stress over
> time. We have seen over one hundred thousand such offlined memory
> cgroups in a system holding too much memory (~100G). This memory
> cannot be released immediately even after all associated pagecache is
> released, because those memory cgroups are destroyed asynchronously
> by a kworker. In some cases this can cause an OOM, since a
> synchronous memory allocation fails.

You are right that an offline memcg keeps memory behind and expects
kswapd or direct reclaim to prune that memory on demand. Do you have
any examples of when this would cause extreme memory stress, though?
For example, high direct reclaim activity that would be a result of
these offline memcgs? You are mentioning OOM, which is even more
unexpected. I haven't seen such disruptive behavior.
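
For what it's worth, the backlog itself is easy to observe from
userspace via the nr_dying_descendants counter in cgroup.stat. A
minimal sketch, assuming a cgroup v2 hierarchy mounted at
/sys/fs/cgroup (cgroup v1 has no equivalent counter):

/*
 * Minimal sketch, assuming a cgroup v2 hierarchy mounted at
 * /sys/fs/cgroup: print the number of dying (offlined but not yet
 * freed) descendant cgroups below the root.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/cgroup.stat", "r");
	char line[128];

	if (!f) {
		perror("fopen /sys/fs/cgroup/cgroup.stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* the file contains "nr_descendants N" and "nr_dying_descendants N" */
		if (!strncmp(line, "nr_dying_descendants", 20))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

If that counter climbs into the hundred-thousand range you describe,
the accumulation is visible without any kernel instrumentation.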

> We think a fix is to create a kworker that scans all pagecaches,
> dentry caches, etc. in the background; if the memory cgroup they are
> charged to is offline, it tries to drop the cache or move it to the
> parent cgroup. This kworker could wake up periodically, or upon a
> memory cgroup offline event (or both).

We do that from the kswapd context already. I do not think we need
another kworker.

Another option might be to enforce the reclaim on the offline path.
We are discussing a similar issue with Yang Shi
http://lkml.kernel.org/r/1546459533-36247-1-git-send-email-yang.shi@linux.alibaba.com
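
For completeness, there is also a blunt userspace workaround, quite
apart from any kernel-side fix: dropping clean pagecache and
reclaimable slab (dentries, inodes) releases the charges that pin the
dead memcgs, so the asynchronous release can make progress. A sketch,
assuming root; note that /proc/sys/vm/drop_caches is system-wide and
will hurt performance:

/*
 * Blunt workaround sketch, not the kernel-side fix discussed above:
 * drop clean pagecache plus reclaimable slab so the charges pinning
 * dead memcgs go away. Needs root and affects the whole machine.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd;

	sync();			/* only clean pages can be dropped */

	fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
	if (fd < 0) {
		perror("open /proc/sys/vm/drop_caches");
		return 1;
	}
	/* "3" == free pagecache + reclaimable slab objects */
	if (write(fd, "3", 1) != 1) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}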

> There is a similar problem with inodes. After digging in the ext4
> code, we find that the inode cache is created with SLAB_ACCOUNT. In
> this case, the inode is allocated from a slab that is charged to the
> current memory cgroup. After this memory cgroup goes offline, the
> inode may still be held by a dentry cache. If another process uses
> the same file, the inode will be held by that process, preventing the
> previous memory cgroup from being destroyed until that process closes
> the file and the dentry cache is dropped.

This is a natural side effect of shared memory, I am afraid. Isolated
memory cgroups should limit any shared resources to a bare minimum.
Otherwise you get "who touches first gets charged" behavior, and that
is not really deterministic.
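
To make the SLAB_ACCOUNT point concrete, here is a condensed,
hypothetical sketch of how a filesystem-style inode cache is
registered so that its objects are charged to the allocating task's
memcg. It is modeled loosely on ext4's cache setup; the names are made
up:

/*
 * Hypothetical sketch (names made up, loosely modeled on ext4's inode
 * cache setup): SLAB_ACCOUNT makes every object allocated from this
 * cache charged to the memory cgroup of the allocating task, which is
 * what lets a long-lived inode pin an offlined memcg.
 */
#include <linux/module.h>
#include <linux/slab.h>

struct demo_inode_info {
	unsigned long	i_flags;
	char		pad[512];	/* stand-in for real per-inode state */
};

static struct kmem_cache *demo_inode_cachep;

static int __init demo_init(void)
{
	demo_inode_cachep = kmem_cache_create("demo_inode_cache",
					      sizeof(struct demo_inode_info),
					      0,
					      SLAB_RECLAIM_ACCOUNT | SLAB_ACCOUNT,
					      NULL);
	return demo_inode_cachep ? 0 : -ENOMEM;
}

static void __exit demo_exit(void)
{
	kmem_cache_destroy(demo_inode_cachep);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Without SLAB_ACCOUNT the same objects would only be accounted
globally, and an offline memcg could not be pinned by them.
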
-- 
Michal Hocko
SUSE Labs


Thread overview: 28+ messages
     [not found] <15614FDC-198E-449B-BFAF-B00D6EF61155@bytedance.com>
2019-01-04  4:44 ` memory cgroup pagecache and inode problem Fam Zheng
2019-01-04  5:00   ` Yang Shi
2019-01-04  5:12     ` Fam Zheng
2019-01-04 19:36       ` Yang Shi
2019-01-07  5:10         ` Fam Zheng
2019-01-07  8:53           ` Michal Hocko
2019-01-07  9:01             ` Fam Zheng
2019-01-07  9:13               ` Michal Hocko
2019-01-09  4:33               ` Fam Zheng
2019-01-10  5:36           ` Yang Shi
2019-01-10  8:30             ` Fam Zheng
2019-01-10  8:41               ` Michal Hocko
2019-01-16  0:50               ` Yang Shi
2019-01-16  3:52                 ` Fam Zheng
2019-01-16  7:06                   ` Michal Hocko
2019-01-16 21:08                     ` Yang Shi
2019-01-16 21:06                   ` Yang Shi
2019-01-17  2:41                     ` Fam Zheng
2019-01-17  5:06                       ` Yang Shi
2019-01-19  3:17                         ` 段熊春
2019-01-20 23:15                         ` Shakeel Butt
2019-01-20 23:15                           ` Shakeel Butt
2019-01-20 23:20                           ` Shakeel Butt
2019-01-21 10:27                           ` Michal Hocko
2019-01-04  9:04 ` Michal Hocko [this message]
2019-01-04 10:02   ` Fam Zheng
2019-01-04 10:12     ` Michal Hocko
2019-01-04 10:35       ` Fam Zheng
