From: Ingo Molnar <mingo@kernel.org>
To: Davidlohr Bueso <davidlohr@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>,
Michel Lespinasse <walken@google.com>,
Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
Guan Xuetao <gxt@mprc.pku.edu.cn>,
"Chandramouleeswaran, Aswin" <aswin@hp.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cache largest vma
Date: Mon, 4 Nov 2013 08:36:40 +0100
Message-ID: <20131104073640.GF13030@gmail.com>
In-Reply-To: <1383537862.2373.14.camel@buesod1.americas.hpqcorp.net>

* Davidlohr Bueso <davidlohr@hp.com> wrote:
> I will look into doing the vma cache per thread instead of per mm (I
> hadn't really looked at the problem like this) as well as Ingo's
> suggestion on the weighted LRU approach. However, having seen that we
> can cheaply and easily reach around a ~70% hit rate in a lot of
> workloads makes me wonder: how good is good enough?
So I think it all really depends on the hit/miss cost difference. It
makes little sense to add a more complex scheme if its own overhead
washes out most of the benefits!
Also note the historic context: the _original_ mmap_cache, which I
implemented 16 years ago, was a front-line cache for a linear list walk
over all vmas (!).
This is the relevant 2.1.37pre1 code in include/linux/mm.h:
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
static inline struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr)
{
        struct vm_area_struct *vma = NULL;

        if (mm) {
                /* Check the cache first. */
                vma = mm->mmap_cache;
                if(!vma || (vma->vm_end <= addr) || (vma->vm_start > addr)) {
                        vma = mm->mmap;
                        while(vma && vma->vm_end <= addr)
                                vma = vma->vm_next;
                        mm->mmap_cache = vma;
                }
        }
        return vma;
}
See that vma->vm_next iteration? It was awful - but back then most of us
had at most a couple of megs of RAM with just a few vmas. No RAM, no SMP,
no worries - the mm was really simple back then.
Today we have the vma rbtree, which is self-balancing and a lot faster
than a linear list walk ;-)
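For reference, here is roughly what the rbtree side of the lookup looks
like today - a simplified sketch of find_vma() in mm/mmap.c, with the
mmap_cache check left out:

        /* Simplified sketch of the rbtree lookup - details omitted. */
        struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
        {
                struct rb_node *rb_node = mm->mm_rb.rb_node;
                struct vm_area_struct *vma = NULL;

                while (rb_node) {
                        struct vm_area_struct *tmp;

                        tmp = rb_entry(rb_node, struct vm_area_struct, vm_rb);

                        if (tmp->vm_end > addr) {
                                /* Lowest vma ending above addr so far: */
                                vma = tmp;
                                if (tmp->vm_start <= addr)
                                        break;  /* addr falls inside it */
                                rb_node = rb_node->rb_left;
                        } else
                                rb_node = rb_node->rb_right;
                }
                /* First vma satisfying addr < vm_end, or NULL: */
                return vma;
        }

That walk is O(log n) in the number of vmas, so the baseline a
front-line cache has to beat is already pretty cheap.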
So I'd _really_ suggest first examining the assumptions behind the
cache: it being named 'cache' and having a hit rate does not in itself
guarantee that it gives us any worthwhile cost savings when put in
front of an rbtree ...
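One way to examine that assumption would be a quick hack along these
lines - purely hypothetical instrumentation, the counters and the
rbtree_walk() helper are made-up names that exist in no tree:

        /*
         * Hypothetical debug hack: count both paths of the cache check
         * in find_vma() and measure what each one really costs.
         */
        static unsigned long mmap_cache_hits, mmap_cache_misses;
        static unsigned long long hit_cycles, miss_cycles;

                /* ... inside find_vma(), around the cache check: */
                cycles_t t0 = get_cycles();

                vma = mm->mmap_cache;
                if (vma && vma->vm_end > addr && vma->vm_start <= addr) {
                        mmap_cache_hits++;
                        hit_cycles += get_cycles() - t0;
                } else {
                        /* rbtree_walk(): stand-in for the lookup above */
                        vma = rbtree_walk(mm, addr);
                        mm->mmap_cache = vma;
                        mmap_cache_misses++;
                        miss_cycles += get_cycles() - t0;
                }

If the average miss turns out to be barely more expensive than a hit,
the cache is pure overhead regardless of what the hit rate says.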
Thanks,
Ingo