public inbox for linux-kernel@vger.kernel.org
From: Glauber Costa <glommer@parallels.com>
To: Paul Turner <pjt@google.com>
Cc: <linux-kernel@vger.kernel.org>, <cgroups@vger.kernel.org>,
	<devel@openvz.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Tejun Heo <tj@kernel.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	<handai.szj@gmail.com>, <Andrew.Phillips@lmax.com>,
	Serge Hallyn <serge.hallyn@canonical.com>
Subject: Re: [PATCH v3 3/6] expose fine-grained per-cpu data for cpuacct stats
Date: Wed, 30 May 2012 16:20:15 +0400	[thread overview]
Message-ID: <4FC6107F.9020802@parallels.com> (raw)
In-Reply-To: <CAPM31RJanAvDB+pZ+h5J3W6KXvAwPgbbeXgw6C_56tx_Mc+cgA@mail.gmail.com>

On 05/30/2012 03:24 PM, Paul Turner wrote:
>> +static int cpuacct_stats_percpu_show(struct cgroup *cgrp, struct cftype *cft,
>> +                                    struct cgroup_map_cb *cb)
>> +{
>> +       struct cpuacct *ca = cgroup_ca(cgrp);
>> +       int cpu;
>> +
>> +       for_each_online_cpu(cpu) {
>> +               do_fill_cb(cb, ca, "user", cpu, CPUTIME_USER);
>> +               do_fill_cb(cb, ca, "nice", cpu, CPUTIME_NICE);
>> +               do_fill_cb(cb, ca, "system", cpu, CPUTIME_SYSTEM);
>> +               do_fill_cb(cb, ca, "irq", cpu, CPUTIME_IRQ);
>> +               do_fill_cb(cb, ca, "softirq", cpu, CPUTIME_SOFTIRQ);
>> +               do_fill_cb(cb, ca, "guest", cpu, CPUTIME_GUEST);
>> +               do_fill_cb(cb, ca, "guest_nice", cpu, CPUTIME_GUEST_NICE);
>> +       }
>> +
> I don't know if there's much that can be trivially done about it, but I
> suspect these are a bit of a memory allocation time-bomb on a many-CPU
> machine.  The cgroup:seq_file mating (via read_map) treats everything
> as /one/ record.  This means that seq_printf is going to end up
> eventually allocating a buffer that can fit _everything_ (as well as
> every power-of-2 on the way there).  Adding insult to injury is that
> the backing buffer is kmalloc() not vmalloc().
>
> 200+ bytes per-cpu above really is not unreasonable (46 bytes just for
> the text, plus a byte per base-10 digit we end up reporting), but that
> then leaves us looking at order-12/13 allocations just to print this
> thing when there are O(many) cpus.
>

And how is /proc/stat different?
It will suffer from the very same problems, since it also has this very
same information (actually more, since I am skipping some), per-cpu.

Now, if you guys are okay with a file per-cpu, I can do it as well.
It pollutes the filesystem, but at least protects us from the fact that
the buffer is kmalloc-backed.



Thread overview: 36+ messages
2012-05-30  9:48 [PATCH v3 0/6] per cgroup /proc/stat statistics Glauber Costa
2012-05-30  9:48 ` [PATCH v3 1/6] measure exec_clock for rt sched entities Glauber Costa
2012-05-30 10:29   ` Peter Zijlstra
2012-05-30 10:32     ` Glauber Costa
2012-05-30 10:42       ` Peter Zijlstra
2012-05-30 10:42         ` Glauber Costa
2012-05-30 11:00           ` Paul Turner
2012-05-30 12:09             ` Glauber Costa
2012-05-30  9:48 ` [PATCH v3 2/6] account guest time per-cgroup as well Glauber Costa
2012-05-30 10:32   ` Peter Zijlstra
2012-05-30 10:36     ` Glauber Costa
2012-05-30 10:46       ` Paul Turner
2012-05-30  9:48 ` [PATCH v3 3/6] expose fine-grained per-cpu data for cpuacct stats Glauber Costa
2012-05-30 10:34   ` Peter Zijlstra
2012-05-30 10:34     ` Glauber Costa
2012-05-30 10:43       ` Peter Zijlstra
2012-05-30 10:44         ` Glauber Costa
2012-05-30 11:24           ` Peter Zijlstra
2012-05-30 11:24   ` Paul Turner
2012-05-30 12:20     ` Glauber Costa [this message]
2012-05-30 12:48       ` Paul Turner
2012-05-30 12:52         ` Glauber Costa
2012-05-30 13:26         ` Glauber Costa
2012-05-30 13:26         ` Glauber Costa
2012-05-30  9:48 ` [PATCH v3 4/6] add a new scheduler hook for context switch Glauber Costa
2012-05-30 11:20   ` Peter Zijlstra
2012-05-30 11:40     ` Peter Zijlstra
2012-05-30 12:08       ` Glauber Costa
2012-05-30 12:07     ` Glauber Costa
2012-05-30  9:48 ` [PATCH v3 5/6] Also record sleep start for a task group Glauber Costa
2012-05-30 11:35   ` Paul Turner
2012-05-30 12:24     ` Glauber Costa
2012-05-30 12:44       ` Peter Zijlstra
2012-05-30 12:44         ` Glauber Costa
2012-05-30  9:48 ` [PATCH v3 6/6] expose per-taskgroup schedstats in cgroup Glauber Costa
2012-05-30 11:22   ` Peter Zijlstra
