linux-mm.kvack.org archive mirror
From: Ingo Molnar <mingo@kernel.org>
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: George Spelvin <linux@horizon.com>,
	dave@sr71.net, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	peterz@infradead.org, riel@redhat.com, rientjes@google.com,
	torvalds@linux-foundation.org
Subject: Re: [PATCH 3/3 v3] mm/vmalloc: Cache the vmalloc memory info
Date: Mon, 24 Aug 2015 08:58:09 +0200	[thread overview]
Message-ID: <20150824065809.GA13082@gmail.com> (raw)
In-Reply-To: <87lhd1wwtz.fsf@rasmusvillemoes.dk>


* Rasmus Villemoes <linux@rasmusvillemoes.dk> wrote:

> On Sun, Aug 23 2015, Ingo Molnar <mingo@kernel.org> wrote:
> 
> > Ok, fair enough - so how about the attached approach instead, which
> > uses a 64-bit generation counter to track changes to the vmalloc
> > state.
> 
> How does this invalidation approach compare to the jiffies approach? In
> other words, how often does the vmalloc info actually change (or rather,
> in this approximation, how often is vmap_area_lock taken)? In
> particular, does it also solve the problem with git's test suite and
> similar situations with lots of short-lived processes?

The two approaches are pretty similar, and on a typical distro with a typical 
workload vmalloc() is mostly a boot-time affair.

But vmalloc() can be used more often in certain corner cases - neither patch 
makes those cases any slower; the optimization just won't trigger as often.

Since vmalloc() use is suboptimal for several reasons anyway (it does not use 
large pages for kernel-space allocations, etc.), this is all pretty OK IMHO.

> > ==============================>
> > From f9fd770e75e2edb4143f32ced0b53d7a77969c94 Mon Sep 17 00:00:00 2001
> > From: Ingo Molnar <mingo@kernel.org>
> > Date: Sat, 22 Aug 2015 12:28:01 +0200
> > Subject: [PATCH] mm/vmalloc: Cache the vmalloc memory info
> >
> > Linus reported that glibc (rather stupidly) reads /proc/meminfo
> > for every sysinfo() call,
> 
> Not quite: it is done by the two functions get_{av,}phys_pages;
> and get_phys_pages is called (once per process) by glibc's
> qsort implementation. In fact, sysinfo() is (at least part of) the cure,
> not the disease. Whether qsort should care about the total amount of
> memory is another discussion.
> 
> <http://thread.gmane.org/gmane.comp.lib.glibc.alpha/54342/focus=54558>

Thanks, is the fixed up changelog below better?

	Ingo

===============>

mm/vmalloc: Cache the vmalloc memory info

Linus reported that for scripting-intensive workloads such as the
Git build, glibc's qsort will read /proc/meminfo for every process
created (by way of get_phys_pages()), which causes the Git build
to generate a surprising amount of kernel overhead.

A fair chunk of the overhead is due to get_vmalloc_info() - which 
walks a potentially long list to do its statistics.

Modify Linus's jiffies-based patch to use a generation counter
to cache the vmalloc info: vmap_unlock() increments the generation
counter, and get_vmalloc_info() reads it and compares it
against a cached generation counter.

Also use a seqlock to make sure we always print a consistent
set of vmalloc statistics.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


Thread overview: 25+ messages
2015-08-23  4:48 [PATCH 0/3] mm/vmalloc: Cache the /proc/meminfo vmalloc statistics George Spelvin
2015-08-23  6:04 ` Ingo Molnar
2015-08-23  6:46   ` George Spelvin
2015-08-23  8:17     ` [PATCH 3/3 v3] mm/vmalloc: Cache the vmalloc memory info Ingo Molnar
2015-08-23 20:53       ` Rasmus Villemoes
2015-08-24  6:58         ` Ingo Molnar [this message]
2015-08-24  8:39           ` Rasmus Villemoes
2015-08-23 21:56       ` Rasmus Villemoes
2015-08-24  7:00         ` Ingo Molnar
2015-08-25 16:39         ` Linus Torvalds
2015-08-25 17:03           ` Linus Torvalds
2015-08-24  1:04       ` George Spelvin
2015-08-24  7:34         ` [PATCH 3/3 v4] " Ingo Molnar
2015-08-24  7:47           ` Ingo Molnar
2015-08-24  7:50             ` [PATCH 3/3 v5] " Ingo Molnar
2015-08-24 12:54               ` George Spelvin
2015-08-25  9:56                 ` [PATCH 3/3 v6] " Ingo Molnar
2015-08-25 10:36                   ` George Spelvin
2015-08-25 12:59                   ` Peter Zijlstra
2015-08-25 14:19                   ` Rasmus Villemoes
2015-08-25 15:11                     ` George Spelvin
2015-08-24 13:11           ` [PATCH 3/3 v4] " John Stoffel
2015-08-24 15:11             ` George Spelvin
2015-08-24 15:55               ` John Stoffel
2015-08-25 12:46       ` [PATCH 3/3 v3] " Peter Zijlstra
