From: Michel Lespinasse <walken@google.com>
To: Davidlohr Bueso <davidlohr@hp.com>
Cc: Ingo Molnar <mingo@kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>, Guan Xuetao <gxt@mprc.pku.edu.cn>,
	"Chandramouleeswaran, Aswin" <aswin@hp.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cache largest vma
Date: Sun, 10 Nov 2013 23:43:43 -0800	[thread overview]
Message-ID: <CANN689Eauq+DHQrn8Wr=VU-PFGDOELz6HTabGDGERdDfeOK_UQ@mail.gmail.com> (raw)
In-Reply-To: <1384143129.6940.32.camel@buesod1.americas.hpqcorp.net>

On Sun, Nov 10, 2013 at 8:12 PM, Davidlohr Bueso <davidlohr@hp.com> wrote:
> 2) Oracle Data mining (4K pages)
> +------------------------+----------+------------------+---------+
> |    mmap_cache type     | hit-rate | cycles (billion) | stddev  |
> +------------------------+----------+------------------+---------+
> | no mmap_cache          | -        | 63.35            | 0.20207 |
> | current mmap_cache     | 65.66%   | 19.55            | 0.35019 |
> | mmap_cache+largest VMA | 71.53%   | 15.84            | 0.26764 |
> | 4 element hash table   | 70.75%   | 15.90            | 0.25586 |
> | per-thread mmap_cache  | 86.42%   | 11.57            | 0.29462 |
> +------------------------+----------+------------------+---------+
>
> This workload sure makes the point of how much we can benefit from
> caching the vma: without it, find_vma() can cost more than 220% extra
> cycles. We clearly win here by having a per-thread cache instead of a
> per-address-space one. I also tried the same workload with 2MB hugepages
> and the results are much closer to the kernel build, but with the
> per-thread vma cache still winning over the rest of the alternatives.
>
> All in all I think that we should probably have a per-thread vma cache.
> Please let me know if there is some other workload you'd like me to try
> out. If folks agree then I can clean up the patch and send it out.
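
(For reference, the per-mm mmap_cache being measured above is just the
single cached pointer checked at the top of find_vma() before we fall
back to the rbtree walk - roughly the following, simplified from what
mm/mmap.c does today:)

struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma;
	struct rb_node *rb_node;

	/* Check the single per-mm cache slot first. */
	vma = ACCESS_ONCE(mm->mmap_cache);
	if (vma && vma->vm_end > addr && vma->vm_start <= addr)
		return vma;

	/* Miss: walk the rbtree for the first vma with vm_end > addr. */
	vma = NULL;
	rb_node = mm->mm_rb.rb_node;
	while (rb_node) {
		struct vm_area_struct *tmp =
			rb_entry(rb_node, struct vm_area_struct, vm_rb);

		if (tmp->vm_end > addr) {
			vma = tmp;
			if (tmp->vm_start <= addr)
				break;
			rb_node = rb_node->rb_left;
		} else
			rb_node = rb_node->rb_right;
	}
	if (vma)
		mm->mmap_cache = vma;	/* refill the shared slot */
	return vma;
}

The hit-rate column in the table is how often the cache check (or its
replacement in the other rows) succeeds before we have to take the
rbtree walk.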

A per-thread cache sounds interesting - with per-mm caches there is a
real risk that some modern threaded apps pay the cost of cache updates
without seeing much of the benefit. However, how do you cheaply handle
invalidations for the per-thread cache?

If you have a nice simple scheme for invalidations, I could see a
per-thread LRU cache working well.
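
One scheme that might be simple enough (a rough sketch only, completely
untested; the vmacache_* names, the mm->vmacache_seqnum field and the
4-entry size are all made up here for illustration): keep a small
per-task array of vma pointers plus a per-mm sequence number that gets
bumped on every unmap/remap, so a thread can lazily validate its whole
cache with a single compare instead of anyone having to walk all the
threads at invalidation time:

#define VMACACHE_SIZE	4

/* Hypothetical per-task cache; would hang off struct task_struct. */
struct vmacache {
	unsigned long seqnum;		/* snapshot of mm->vmacache_seqnum */
	struct vm_area_struct *vmas[VMACACHE_SIZE];
};

/*
 * Called with mmap_sem held for write whenever vmas are removed or
 * resized; the bare increment lazily invalidates every thread's cache.
 */
static inline void vmacache_invalidate(struct mm_struct *mm)
{
	mm->vmacache_seqnum++;
}

static bool vmacache_valid(struct task_struct *tsk, struct mm_struct *mm)
{
	if (tsk->vmacache.seqnum != mm->vmacache_seqnum) {
		/* Stale: resync and flush locally, no cross-thread walk. */
		tsk->vmacache.seqnum = mm->vmacache_seqnum;
		memset(tsk->vmacache.vmas, 0, sizeof(tsk->vmacache.vmas));
		return false;
	}
	return true;
}

static struct vm_area_struct *vmacache_find(struct task_struct *tsk,
					    struct mm_struct *mm,
					    unsigned long addr)
{
	int i;

	if (!vmacache_valid(tsk, mm))
		return NULL;

	for (i = 0; i < VMACACHE_SIZE; i++) {
		struct vm_area_struct *vma = tsk->vmacache.vmas[i];

		if (vma && vma->vm_start <= addr && vma->vm_end > addr)
			return vma;
	}
	return NULL;
}

/*
 * Refill after a successful rbtree lookup; the caller holds mmap_sem,
 * so the seqnum cannot change between vmacache_find() and here.
 */
static void vmacache_update(struct task_struct *tsk, unsigned long addr,
			    struct vm_area_struct *vma)
{
	/* Direct-mapped by address; a small LRU would work just as well. */
	tsk->vmacache.vmas[(addr >> PAGE_SHIFT) & (VMACACHE_SIZE - 1)] = vma;
}

Updates then stay local to the current thread, and the only cross-thread
cost is the shared counter bump on invalidation - whether that extra
cacheline bounce on mm->vmacache_seqnum hurts would itself need measuring.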

That said, the difficulty with this kind of measurement (instrumenting
code to fish out the cost of a particular function) is that it is easy
to lose cycles somewhere else - for example in keeping the cache up to
date - and miss that in the instrumented measurement.

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.


Thread overview: 38+ messages
2013-11-01 20:17 [PATCH] mm: cache largest vma Davidlohr Bueso
2013-11-01 20:38 ` KOSAKI Motohiro
2013-11-01 21:11   ` Davidlohr Bueso
2013-11-03  9:46     ` Ingo Molnar
2013-11-03 23:57     ` KOSAKI Motohiro
2013-11-04  4:22       ` Davidlohr Bueso
2013-11-01 21:23 ` Rik van Riel
2013-11-03 10:12 ` Ingo Molnar
2013-11-04  4:20   ` Davidlohr Bueso
2013-11-04  4:48     ` converting unicore32 to gate_vma as done for arm (was Re: [PATCH] mm: cache largest vma) Al Viro
2013-11-05  2:49       ` 管雪涛
2013-11-11  7:25         ` converting unicore32 to gate_vma as done for arm (was " Al Viro
2013-11-04  7:00     ` [PATCH] mm: cache largest vma Ingo Molnar
2013-11-04  7:05     ` Ingo Molnar
2013-11-04 14:20       ` Frederic Weisbecker
2013-11-04 17:52         ` Ingo Molnar
2013-11-04 18:10           ` Frederic Weisbecker
2013-11-05  8:24             ` Ingo Molnar
2013-11-05 14:27               ` Jiri Olsa
2013-11-06  6:01                 ` Ingo Molnar
2013-11-06 14:03                   ` Konstantin Khlebnikov
2013-11-03 18:51 ` Linus Torvalds
2013-11-04  4:04   ` Davidlohr Bueso
2013-11-04  7:36     ` Ingo Molnar
2013-11-04 14:56       ` Michel Lespinasse
2013-11-11  4:12       ` Davidlohr Bueso
2013-11-11  7:43         ` Michel Lespinasse [this message]
2013-11-11 12:04           ` Ingo Molnar
2013-11-11 20:47             ` Davidlohr Bueso
2013-11-13 17:08               ` Davidlohr Bueso
2013-11-13 17:59                 ` Ingo Molnar
2013-11-13 18:16               ` Peter Zijlstra
2013-11-11 12:01         ` Ingo Molnar
2013-11-11 18:24           ` Davidlohr Bueso
2013-11-11 20:47             ` Ingo Molnar
2013-11-11 20:59               ` Davidlohr Bueso
2013-11-11 21:09                 ` Ingo Molnar
2013-11-04  7:03   ` Christoph Hellwig
