From: Mike Rapoport <rppt@kernel.org>
To: Waiman Long <longman@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Petr Mladek <pmladek@suse.com>,
Steven Rostedt <rostedt@goodmis.org>,
Sergey Senozhatsky <senozhatsky@chromium.org>,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>,
Rafael Aquini <aquini@redhat.com>
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information
Date: Sun, 30 Jan 2022 08:33:09 +0200 [thread overview]
Message-ID: <YfYxJR7ugv83ywAb@kernel.org> (raw)
In-Reply-To: <20220129205315.478628-4-longman@redhat.com>
On Sat, Jan 29, 2022 at 03:53:15PM -0500, Waiman Long wrote:
> It was found that a number of offlined memcgs were not freed because
> they were pinned by some charged pages that were present. Even "echo
> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> offlined but not freed memcgs tend to increase in number over time with
> the side effect that percpu memory consumption as shown in /proc/meminfo
> also increases over time.
>
> In order to find out more information about those pages that pin
> offlined memcgs, the page_owner feature is extended to dump memory
> cgroup information especially whether the cgroup is offlined or not.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 28dac73e0542..8dc5cd0fa227 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -10,6 +10,7 @@
> #include <linux/migrate.h>
> #include <linux/stackdepot.h>
> #include <linux/seq_file.h>
> +#include <linux/memcontrol.h>
> #include <linux/sched/clock.h>
>
> #include "internal.h"
> @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> depot_stack_handle_t handle)
> {
> int ret, pageblock_mt, page_mt;
> + unsigned long __maybe_unused memcg_data;
> char *kbuf;
>
> count = min_t(size_t, count, PAGE_SIZE);
> @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> migrate_reason_names[page_owner->last_migrate_reason]);
> }
>
> +#ifdef CONFIG_MEMCG
Can we put all this, along with the declaration of memcg_data, into a helper
function, please?
> + /*
> + * Look for memcg information and print it out
> + */
> + memcg_data = READ_ONCE(page->memcg_data);
> + if (memcg_data) {
> + struct mem_cgroup *memcg = page_memcg_check(page);
> + bool onlined;
> + char name[80];
> +
> + if (memcg_data & MEMCG_DATA_OBJCGS)
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Slab cache page\n");
> +
> + if (!memcg)
> + goto copy_out;
> +
> + onlined = (memcg->css.flags & CSS_ONLINE);
> + cgroup_name(memcg->css.cgroup, name, sizeof(name));
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Charged %sto %smemcg %s\n",
> + PageMemcgKmem(page) ? "(via objcg) " : "",
> + onlined ? "" : "offlined ",
> + name);
> + }
> +
> +copy_out:
> +#endif
> +
> ret += snprintf(kbuf + ret, count - ret, "\n");
> if (ret >= count)
> goto err;
> --
> 2.27.0
>
>
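For illustration, something along these lines (completely untested, just to
show the shape I have in mind; the name print_page_owner_memcg() is only a
suggestion):

```c
/*
 * Untested sketch: pull the CONFIG_MEMCG block and the memcg_data
 * declaration out of print_page_owner() into one helper, so the
 * caller just does
 *
 *	ret = print_page_owner_memcg(kbuf, count, ret, page);
 *
 * With !CONFIG_MEMCG this compiles down to returning ret unchanged.
 */
static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
					 struct page *page)
{
#ifdef CONFIG_MEMCG
	unsigned long memcg_data = READ_ONCE(page->memcg_data);
	struct mem_cgroup *memcg;
	bool onlined;
	char name[80];

	if (!memcg_data)
		return ret;

	if (memcg_data & MEMCG_DATA_OBJCGS)
		ret += scnprintf(kbuf + ret, count - ret,
				 "Slab cache page\n");

	memcg = page_memcg_check(page);
	if (!memcg)
		return ret;

	onlined = (memcg->css.flags & CSS_ONLINE);
	cgroup_name(memcg->css.cgroup, name, sizeof(name));
	ret += scnprintf(kbuf + ret, count - ret,
			 "Charged %sto %smemcg %s\n",
			 PageMemcgKmem(page) ? "(via objcg) " : "",
			 onlined ? "" : "offlined ",
			 name);
#endif
	return ret;
}
```

That also gets rid of the copy_out label and the __maybe_unused annotation.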
--
Sincerely yours,
Mike.