cgroups.vger.kernel.org archive mirror
* About cgroup memory limits
@ 2012-05-10 21:38 Andre Nathan
  2012-05-15  9:20 ` Johannes Weiner
  0 siblings, 1 reply; 4+ messages in thread
From: Andre Nathan @ 2012-05-10 21:38 UTC
  To: cgroups-u79uwXL29TY76Z2rM5mHXA
  Cc: andre-K36Kqf6HJK439yzSjRtAkw, michel-K36Kqf6HJK439yzSjRtAkw

Hello

I'm running some tests to see how LXC interacts with the memory
cgroup limits, more specifically the memory.limit_in_bytes control file.

Am I correct in my understanding of the memory cgroup documentation[1]
that the limit set in memory.limit_in_bytes is applied to the sum of the
fields 'cache', 'rss' and 'mapped_file' in the memory.stat file?

I am also trying to understand the values reported in memory.stat when
compared to the statistics in /proc/$PID/statm.

Below is the sum of each field in /proc/$PID/statm for every process
running inside a test container, converted to bytes:

       size  resident     share     text  lib       data  dt
  897208320  28741632  20500480  1171456    0  170676224   0
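
For reference, this is roughly how I computed those sums (a sketch;
the cgroup tasks path is from my test setup, so adjust it for yours):

    import os

    # tasks file of my test container's memory cgroup (my setup)
    TASKS = "/sys/fs/cgroup/memory/lxc/test/tasks"
    PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

    names = ("size", "resident", "share", "text", "lib", "data", "dt")
    totals = [0] * len(names)

    with open(TASKS) as f:
        pids = f.read().split()

    for pid in pids:
        try:
            with open("/proc/%s/statm" % pid) as f:
                fields = [int(n) for n in f.read().split()]
        except IOError:  # task exited between the two reads
            continue
        for i, n in enumerate(fields):
            totals[i] += n * PAGE_SIZE  # statm counts pages, not bytes

    print(" ".join("%s=%d" % pair for pair in zip(names, totals)))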

Compare this with the usage reports from memory.stat (fields total_*,
hierarchical_* and pg* omitted):

cache                     16834560
rss                       8192000
mapped_file               3743744
swap                      0
inactive_anon             0
active_anon               8192000
inactive_file             13996032
active_file               2838528
unevictable               0

Is there a way to reconcile these numbers? I understand that the
fields in the two files represent different things. What I'm trying to
do is to combine, for example, the fields from memory.stat so that
they approximately match what statm displays.

Thank you in advance,
Andre

[1] http://www.kernel.org/doc/Documentation/cgroups/memory.txt




* Re: About cgroup memory limits
  2012-05-10 21:38 About cgroup memory limits Andre Nathan
@ 2012-05-15  9:20 ` Johannes Weiner
       [not found]   ` <20120515092040.GF1406-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
  0 siblings, 1 reply; 4+ messages in thread
From: Johannes Weiner @ 2012-05-15  9:20 UTC
  To: Andre Nathan
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, michel-K36Kqf6HJK439yzSjRtAkw

On Thu, May 10, 2012 at 06:38:43PM -0300, Andre Nathan wrote:
> Hello
> 
> I'm running some tests to see how LXC interacts with the memory
> cgroup limits, more specifically the memory.limit_in_bytes control file.
> 
> Am I correct in my understanding of the memory cgroup documentation[1]
> that the limit set in memory.limit_in_bytes is applied to the sum of the
> fields 'cache', 'rss' and 'mapped_file' in the memory.stat file?

mapped_file is the subset of cache that is mapped into virtual memory.

cache (= inactive_file + active_file) + rss (= inactive_anon +
active_anon) is what the limit applies to.
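
Expressed as a quick script, that relationship is something like this
(a sketch; the cgroup path is an assumption for an LXC setup):

    # sketch: compute the quantity that memory.limit_in_bytes is
    # enforced against (the path below is an assumption)
    CGROUP = "/sys/fs/cgroup/memory/lxc/test"

    stat = {}
    with open(CGROUP + "/memory.stat") as f:
        for line in f:
            key, value = line.split()
            stat[key] = int(value)

    cache = stat["inactive_file"] + stat["active_file"]  # == stat["cache"]
    rss = stat["inactive_anon"] + stat["active_anon"]    # == stat["rss"]

    # cache + rss is what the limit applies to
    print("cache + rss = %d" % (cache + rss))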

> I am also trying to understand the values reported in memory.stat when
> compared to the statistics in /proc/$PID/statm.
> 
> Below is the sum of each field in /proc/$PID/statm for every process
> running inside a test container, converted to bytes:
> 
>        size  resident     share     text  lib       data  dt
>   897208320  28741632  20500480  1171456    0  170676224   0

statm accounts virtual memory, not physical memory the way memcg
does.  If you have the same page mapped into two tasks, both their
"share" counters will show a page, while the memcg will only account
the single physical page in mapped_file.

> Compare this with the usage reports from memory.stat (fields total_*,
> hierarchical_* and pg* omitted):
> 
> cache                     16834560
> rss                       8192000
> mapped_file               3743744
> swap                      0
> inactive_anon             0
> active_anon               8192000
> inactive_file             13996032
> active_file               2838528
> unevictable               0
> 
> Is there a way to reconcile these numbers? I understand that the
> fields in the two files represent different things. What I'm trying to
> do is to combine, for example, the fields from memory.stat so that
> they approximately match what statm displays.

Excluding the double counting of memory shared between tasks in the
same group from the "share" counter gets you to "mapped_file", etc.


* Re: About cgroup memory limits
       [not found]   ` <20120515092040.GF1406-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
@ 2012-05-16 13:15     ` Michel Machado
  2012-05-23  8:07       ` Johannes Weiner
  0 siblings, 1 reply; 4+ messages in thread
From: Michel Machado @ 2012-05-16 13:15 UTC
  To: Johannes Weiner; +Cc: Andre Nathan, cgroups-u79uwXL29TY76Z2rM5mHXA

Hi Johannes,

   Thank you very much for your reply; it helps us understand the
numbers we have at hand.

   Could you clarify the following statement further:

> > Below is the sum of each field in /proc/$PID/statm for every process
> > running inside a test container, converted to bytes:
> > 
> >        size  resident     share     text  lib       data  dt
> >   897208320  28741632  20500480  1171456    0  170676224   0
> 
> statm accounts virtual memory, not physical memory the way memcg
> does.  If you have the same page mapped into two tasks, both their
> "share" counters will show a page, while the memcg will only account
> the single physical page in mapped_file.

   You mean when those two tasks are in the same cgroup, don't you? Is
there a case in which a page is shared by two tasks in different
cgroups but accounted to only one of them? If so, how is this case
triggered?

-- 
[ ]'s
Michel Machado


* Re: About cgroup memory limits
  2012-05-16 13:15     ` Michel Machado
@ 2012-05-23  8:07       ` Johannes Weiner
  0 siblings, 0 replies; 4+ messages in thread
From: Johannes Weiner @ 2012-05-23  8:07 UTC
  To: Michel Machado; +Cc: Andre Nathan, cgroups-u79uwXL29TY76Z2rM5mHXA

On Wed, May 16, 2012 at 09:15:28AM -0400, Michel Machado wrote:
> Hi Johannes,
> 
>    Thank you very much for your reply; it helps us understand the
> numbers we have at hand.
> 
>    Could you clarify the following statement further:
> 
> > > Below is the sum of each field in /proc/$PID/statm for every process
> > > running inside a test container, converted to bytes:
> > > 
> > >        size  resident     share     text  lib       data  dt
> > >   897208320  28741632  20500480  1171456    0  170676224   0
> > 
> > statm accounts virtual memory, not physical memory the way memcg
> > does.  If you have the same page mapped into two tasks, both their
> > "share" counters will show a page, while the memcg will only account
> > the single physical page in mapped_file.
> 
>    You mean when those two tasks are in the same cgroup, don't you? Is
> there a case in which a page is shared by two tasks in different
> cgroups but accounted to only one of them? If so, how is this case
> triggered?

It doesn't matter whether it's the same cgroup or separate cgroups.

When two tasks are in separate memory cgroups, the shared page will be
accounted to the group of the task that is responsible for bringing it
into memory, the one touching it for the first time.
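
One way to see this is to run something like the sketch below from
two different memory cgroups against the same file; the group of
whichever instance runs first is charged for the page cache (the
file path is a placeholder):

    import mmap, os, sys

    # sketch: fault in every page of a shared file; the memcg of the
    # task that touches a page first is the one charged for it
    fd = os.open("/tmp/first-touch-demo", os.O_RDONLY)  # placeholder
    size = os.fstat(fd).st_size
    buf = mmap.mmap(fd, size, prot=mmap.PROT_READ)

    page_size = os.sysconf("SC_PAGE_SIZE")
    for off in range(0, size, page_size):
        buf[off]  # read one byte per page to fault it in

    print("pages touched; compare memory.stat of the two groups")
    sys.stdin.readline()  # keep the mapping alive while you look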

