From: Waiman Long <longman@redhat.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Petr Mladek <pmladek@suse.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>,
	Mike Rapoport <rppt@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Roman Gushchin <guro@fb.com>, Rafael Aquini <aquini@redhat.com>,
	Mike Rapoport <rppt@linux.ibm.com>
Subject: Re: [PATCH v5 3/4] mm/page_owner: Print memcg information
Date: Tue, 8 Feb 2022 13:40:57 -0500	[thread overview]
Message-ID: <e897adca-168e-13db-8001-4afbef3aa648@redhat.com> (raw)
In-Reply-To: <YgJeWth50eP9L0PK@dhcp22.suse.cz>

On 2/8/22 07:13, Michal Hocko wrote:
> On Mon 07-02-22 19:05:31, Waiman Long wrote:
>> It was found that a number of dying memcgs were not freed because
>> they were pinned by some charged pages that were present. Even "echo 1 >
>> /proc/sys/vm/drop_caches" wasn't able to free those pages. These dying
>> but not freed memcgs tend to increase in number over time with the side
>> effect that percpu memory consumption as shown in /proc/meminfo also
>> increases over time.
> I still believe that this is very suboptimal way to debug offline memcgs
> but memcg information can be useful in other contexts and it doesn't
> cost us anything except for an additional output so I am fine with this.
I am planning a follow-up patch that adds a new debugfs file for 
printing page information associated only with dying memcgs. It 
will be based on the existing page_owner code, though, so I need to 
get this patch in first.
>   
>> In order to find out more information about those pages that pin
>> dying memcgs, the page_owner feature is extended to print memory
>> cgroup information especially whether the cgroup is dying or not.
>> RCU read lock is taken when memcg is being accessed to make sure
>> that it won't be freed.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> Acked-by: David Rientjes <rientjes@google.com>
>> Acked-by: Roman Gushchin <guro@fb.com>
>> Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> With few comments/questions below.
>
>> ---
>>   mm/page_owner.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/mm/page_owner.c b/mm/page_owner.c
>> index 28dac73e0542..d4c311455753 100644
>> --- a/mm/page_owner.c
>> +++ b/mm/page_owner.c
>> @@ -10,6 +10,7 @@
>>   #include <linux/migrate.h>
>>   #include <linux/stackdepot.h>
>>   #include <linux/seq_file.h>
>> +#include <linux/memcontrol.h>
>>   #include <linux/sched/clock.h>
>>   
>>   #include "internal.h"
>> @@ -325,6 +326,47 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>>   	seq_putc(m, '\n');
>>   }
>>   
>> +/*
>> + * Looking for memcg information and print it out
>> + */
> I am not sure this is particularly useful comment.
Right, I can remove that.
>
>> +static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
>> +					 struct page *page)
>> +{
>> +#ifdef CONFIG_MEMCG
>> +	unsigned long memcg_data;
>> +	struct mem_cgroup *memcg;
>> +	bool dying;
>> +
>> +	rcu_read_lock();
>> +	memcg_data = READ_ONCE(page->memcg_data);
>> +	if (!memcg_data)
>> +		goto out_unlock;
>> +
>> +	if (memcg_data & MEMCG_DATA_OBJCGS)
>> +		ret += scnprintf(kbuf + ret, count - ret,
>> +				"Slab cache page\n");
>> +
>> +	memcg = page_memcg_check(page);
>> +	if (!memcg)
>> +		goto out_unlock;
>> +
>> +	dying = (memcg->css.flags & CSS_DYING);
> Is there any specific reason why you haven't used mem_cgroup_online?
Not really. However, I think checking for CSS_DYING makes more sense now 
that I am using the term "dying".
>
>> +	ret += scnprintf(kbuf + ret, count - ret,
>> +			"Charged %sto %smemcg ",
>> +			PageMemcgKmem(page) ? "(via objcg) " : "",
>> +			dying ? "dying " : "");
>> +
>> +	/* Write cgroup name directly into kbuf */
>> +	cgroup_name(memcg->css.cgroup, kbuf + ret, count - ret);
>> +	ret += strlen(kbuf + ret);
> cgroup_name should return the length of the path added to the buffer.
I realized that after I sent out the patch. I will remove the redundant 
strlen() in a future update.
>
>> +	ret += scnprintf(kbuf + ret, count - ret, "\n");
> I do not see any overflow prevention here. I believe you really need to
> check ret >= count after each scnprintf/cgroup_name.

As you have realized, the beauty of using scnprintf() is not needing 
an overflow check after each invocation.

Cheers,
Longman


Thread overview: 9+ messages
2022-02-08  0:05 [PATCH v5 0/4] mm/page_owner: Extend page_owner to show memcg information Waiman Long
2022-02-08  0:05 ` [PATCH v5 1/4] lib/vsprintf: Avoid redundant work with 0 size Waiman Long
2022-02-08  0:05   ` [PATCH v5 2/4] mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check Waiman Long
2022-02-08  0:05   ` [PATCH v5 3/4] mm/page_owner: Print memcg information Waiman Long
2022-02-08 12:13       ` Michal Hocko
2022-02-08 15:15           ` Michal Hocko
2022-02-08 18:40         ` Waiman Long [this message]
2022-02-08 19:11             ` Michal Hocko
2022-02-08  0:05   ` [PATCH v5 4/4] mm/page_owner: Record task command name Waiman Long
