From: Ian Kent <raven@themaw.net>
To: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Andrea Righi <andrea@betterlinux.com>,
Eric Dumazet <eric.dumazet@gmail.com>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Hendrik Brueckner <brueckner@linux.vnet.ibm.com>,
Thorsten Diehl <thorsten.diehl@de.ibm.com>,
"Elliott, Robert (Server Storage)" <Elliott@hp.com>
Subject: Re: [PATCH 1/2] fs: proc/stat: use num_online_cpus() for buffer size
Date: Wed, 28 May 2014 19:14:39 +0800 [thread overview]
Message-ID: <1401275679.2497.8.camel@perseus.fritz.box> (raw)
In-Reply-To: <1401275180.2497.5.camel@perseus.fritz.box>
On Wed, 2014-05-28 at 19:06 +0800, Ian Kent wrote:
> On Wed, 2014-05-28 at 10:59 +0200, Heiko Carstens wrote:
> > The number of bytes contained 'within' /proc/stat depends on the number
> > of online cpus, not on the number of possible cpus.
> >
> > This reduces the number of bytes requested for the initial buffer allocation
> > within stat_open(), which is usually way too high; with num_possible_cpus()
> > == 256 it results in an order 4 allocation.
> >
> > Order 4 allocations however may fail if memory is fragmented and we would
> > end up with an unreadable /proc/stat file:
> >
> > [62129.701569] sadc: page allocation failure: order:4, mode:0x1040d0
> > [62129.701573] CPU: 1 PID: 192063 Comm: sadc Not tainted 3.10.0-123.el7.s390x #1
> > [...]
> > [62129.701586] Call Trace:
> > [62129.701588] ([<0000000000111fbe>] show_trace+0xe6/0x130)
> > [62129.701591] [<0000000000112074>] show_stack+0x6c/0xe8
> > [62129.701593] [<000000000020d356>] warn_alloc_failed+0xd6/0x138
> > [62129.701596] [<00000000002114d2>] __alloc_pages_nodemask+0x9da/0xb68
> > [62129.701598] [<000000000021168e>] __get_free_pages+0x2e/0x58
> > [62129.701599] [<000000000025a05c>] kmalloc_order_trace+0x44/0xc0
> > [62129.701602] [<00000000002f3ffa>] stat_open+0x5a/0xd8
> > [62129.701604] [<00000000002e9aaa>] proc_reg_open+0x8a/0x140
> > [62129.701606] [<0000000000273b64>] do_dentry_open+0x1bc/0x2c8
> > [62129.701608] [<000000000027411e>] finish_open+0x46/0x60
> > [62129.701610] [<000000000028675a>] do_last+0x382/0x10d0
> > [62129.701612] [<0000000000287570>] path_openat+0xc8/0x4f8
> > [62129.701614] [<0000000000288bde>] do_filp_open+0x46/0xa8
> > [62129.701616] [<000000000027541c>] do_sys_open+0x114/0x1f0
> > [62129.701618] [<00000000005b1c1c>] sysc_tracego+0x14/0x1a
> >
> > Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
> > ---
> > fs/proc/stat.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/proc/stat.c b/fs/proc/stat.c
> > index 9d231e9e5f0e..3898ca5f1e92 100644
> > --- a/fs/proc/stat.c
> > +++ b/fs/proc/stat.c
> > @@ -184,7 +184,7 @@ static int show_stat(struct seq_file *p, void *v)
> >
> > static int stat_open(struct inode *inode, struct file *file)
> > {
> > - size_t size = 1024 + 128 * num_possible_cpus();
> > + size_t size = 1024 + 128 * num_online_cpus();
>
> Yes, I thought of this too when I was looking at the problem but was
> concerned about the number of online cpus changing during the read.
>
> If a system can hotplug cpus then I guess we don't care much about the
> number of cpus increasing during the read; we'll just see incorrect data
> once. But what would happen if some cpus were removed? Do we even care
> about that case?
Oh hang on, that's not right, it's the opposite: if the number of cpus
increases between the call to stat_open() and show_stat() there might not
be enough space.
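
To make that window concrete, here is a minimal sketch (not the actual
fs/proc/stat.c code; the kmalloc() call and error handling are assumed for
illustration) of where the online-CPU count is sampled versus where the
output is produced:

    /*
     * Sketch only: the point is the gap between sizing the buffer at
     * open() time and filling it at read() time.
     */
    static int stat_open(struct inode *inode, struct file *file)
    {
            /* sized now, from the CPUs online at open() time ... */
            size_t size = 1024 + 128 * num_online_cpus();
            char *buf = kmalloc(size, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;
            /*
             * ... but show_stat() only runs later, when the file is read.
             * Every CPU that comes online in between needs roughly another
             * 128 bytes that this buffer does not have.
             */
            ...
    }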
>
> > char *buf;
> > struct seq_file *m;
> > int res;
>
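
For reference, the order-4 figure in the quoted description works out as
follows, assuming 4 KiB pages and kmalloc() rounding large requests up to
the next power of two (the online-CPU count of 2 is just an example):

    size = 1024 + 128 * 256;   /* num_possible_cpus() == 256 -> 33792 bytes,  */
                               /* which exceeds 32768 (order 3, 8 pages), so  */
                               /* the request becomes a 64 KiB order-4 alloc. */

    size = 1024 + 128 * 2;     /* num_online_cpus() == 2 -> 1280 bytes, well  */
                               /* below a single 4 KiB page.                  */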
Thread overview: 21+ messages
2014-05-21 12:25 /proc/stat vs. failed order-4 allocation Heiko Carstens
2014-05-21 14:32 ` Christoph Hellwig
2014-05-22 3:05 ` Elliott, Robert (Server Storage)
2014-05-28 8:58 ` Heiko Carstens
2014-05-28 8:59 ` [PATCH 1/2] fs: proc/stat: use num_online_cpus() for buffer size Heiko Carstens
2014-05-28 11:06 ` Ian Kent
2014-05-28 11:14 ` Ian Kent [this message]
2014-05-28 9:01 ` [PATCH 2/2] fs: proc/stat: use usual seq_file ops rather than single_open Heiko Carstens
2014-05-28 22:37 ` Andrew Morton
2014-05-30 8:38 ` Heiko Carstens
2014-05-30 11:36 ` [PATCH] fs: proc/stat: use seq_file iterator interface Heiko Carstens
2014-06-09 8:11 ` [PATCH 2/2] fs: proc/stat: use usual seq_file ops rather than single_open Ian Kent
2014-06-11 12:43 ` Heiko Carstens
2014-06-11 22:29 ` David Rientjes
2014-06-12 6:24 ` Ian Kent
2014-06-12 6:52 ` David Rientjes
2014-06-12 7:27 ` Heiko Carstens
2014-06-12 8:18 ` Heiko Carstens
2014-06-12 20:59 ` David Rientjes
2014-06-12 11:09 ` Ian Kent
2014-05-22 11:29 ` /proc/stat vs. failed order-4 allocation Ian Kent