From: Mike Rapoport <rppt@kernel.org>
To: Waiman Long <longman@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Petr Mladek <pmladek@suse.com>,
Steven Rostedt <rostedt@goodmis.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <guro@fb.com>, Rafael Aquini <aquini@redhat.com>
Subject: Re: [PATCH v4 3/4] mm/page_owner: Print memcg information
Date: Thu, 3 Feb 2022 08:53:10 +0200 [thread overview]
Message-ID: <Yft71q+OO7lg90sl@kernel.org> (raw)
In-Reply-To: <20220202203036.744010-4-longman@redhat.com>
On Wed, Feb 02, 2022 at 03:30:35PM -0500, Waiman Long wrote:
> It was found that a number of offline memcgs were not freed because
> they were pinned by some charged pages that were present. Even "echo 1 >
> /proc/sys/vm/drop_caches" wasn't able to free those pages. These offline
> but not freed memcgs tend to increase in number over time with the side
> effect that percpu memory consumption as shown in /proc/meminfo also
> increases over time.
>
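For reference, the growth described above can be watched with `grep Percpu /proc/meminfo` and, on cgroup v2, `grep nr_dying_descendants /sys/fs/cgroup/cgroup.stat`. A small sketch of the extraction step, run against a captured sample line (the 81920 kB figure is invented for illustration) so it is reproducible anywhere:

```shell
# Live-system commands (need a Linux host; shown as comments):
#   grep Percpu /proc/meminfo
#   grep nr_dying_descendants /sys/fs/cgroup/cgroup.stat   # cgroup v2 only
# Below, the same field extraction against a captured sample line,
# so the parsing is deterministic; the value is made up.
sample='Percpu:            81920 kB'
percpu_kb=$(printf '%s\n' "$sample" | awk '/^Percpu:/ {print $2}')
echo "Percpu now at ${percpu_kb} kB"
```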
> In order to find out more information about those pages that pin
> offline memcgs, the page_owner feature is extended to print memory
> cgroup information especially whether the cgroup is offline or not.
> RCU read lock is taken when memcg is being accessed to make sure
> that it won't be freed.
>
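With the patch applied, the new "Charged ... memcg <name>" lines make it straightforward to tally which memcgs are pinning pages. A minimal post-processing sketch; the sample records and memcg names below are invented (a real dump comes from /sys/kernel/debug/page_owner, and the line format follows the scnprintf() in the patch):

```python
# Count page_owner records charged to offline memcgs.
# The sample imitates the output format this patch adds; on a real
# system, read /sys/kernel/debug/page_owner instead.
import re
from collections import Counter

sample = """\
Page allocated via order 0, mask 0x1112cca(...)
Charged to offline memcg dead-service

Page allocated via order 0, mask 0x1112cca(...)
Charged (via objcg) to memcg system.slice

Page allocated via order 1, mask 0x3423(...)
Charged to offline memcg dead-service
"""

offline = Counter()
for m in re.finditer(r"Charged (?:\(via objcg\) )?to offline memcg (\S+)", sample):
    offline[m.group(1)] += 1

for name, pages in offline.most_common():
    print(f"{name}: {pages} page(s)")
```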
> Signed-off-by: Waiman Long <longman@redhat.com>
> Acked-by: David Rientjes <rientjes@google.com>
> Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
And my acks for the first two patches are missing somehow in v4...
> ---
> mm/page_owner.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 42 insertions(+)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 28dac73e0542..f7820357e4d4 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -10,6 +10,7 @@
> #include <linux/migrate.h>
> #include <linux/stackdepot.h>
> #include <linux/seq_file.h>
> +#include <linux/memcontrol.h>
> #include <linux/sched/clock.h>
>
> #include "internal.h"
> @@ -325,6 +326,45 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
> seq_putc(m, '\n');
> }
>
> +/*
> + * Look up memcg information and print it out
> + */
> +static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
> + struct page *page)
> +{
> +#ifdef CONFIG_MEMCG
> + unsigned long memcg_data;
> + struct mem_cgroup *memcg;
> + bool online;
> + char name[80];
> +
> + rcu_read_lock();
> + memcg_data = READ_ONCE(page->memcg_data);
> + if (!memcg_data)
> + goto out_unlock;
> +
> + if (memcg_data & MEMCG_DATA_OBJCGS)
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Slab cache page\n");
> +
> + memcg = page_memcg_check(page);
> + if (!memcg)
> + goto out_unlock;
> +
> + online = (memcg->css.flags & CSS_ONLINE);
> + cgroup_name(memcg->css.cgroup, name, sizeof(name));
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Charged %sto %smemcg %s\n",
> + PageMemcgKmem(page) ? "(via objcg) " : "",
> + online ? "" : "offline ",
> + name);
> +out_unlock:
> + rcu_read_unlock();
> +#endif /* CONFIG_MEMCG */
> +
> + return ret;
> +}
> +
> static ssize_t
> print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> struct page *page, struct page_owner *page_owner,
> @@ -365,6 +405,8 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> migrate_reason_names[page_owner->last_migrate_reason]);
> }
>
> + ret = print_page_owner_memcg(kbuf, count, ret, page);
> +
> ret += snprintf(kbuf + ret, count - ret, "\n");
> if (ret >= count)
> goto err;
> --
> 2.27.0
>
>
--
Sincerely yours,
Mike.