From: Minchan Kim <minchan.kim@gmail.com>
To: Steven Whitehouse <swhiteho@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Nick Piggin <npiggin@suse.de>
Subject: Re: vmalloc performance
Date: Thu, 15 Apr 2010 01:35:48 +0900
Message-ID: <1271262948.2233.14.camel@barrios-desktop>
In-Reply-To: <m2g28c262361004140813j5d70a80fy1882d01436d136a6@mail.gmail.com>
On Thu, 2010-04-15 at 00:13 +0900, Minchan Kim wrote:
> Cced Nick.
> He's Mr. Vmalloc.
>
> On Wed, Apr 14, 2010 at 9:49 PM, Steven Whitehouse <swhiteho@redhat.com> wrote:
> >
> > Since this didn't attract much interest the first time around, and at
> > the risk of appearing to be talking to myself, here is the patch from
> > the bugzilla to better illustrate the issue:
> >
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index ae00746..63c8178 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -605,8 +605,7 @@ static void free_unmap_vmap_area_noflush(struct vmap_area *va)
> >  {
> >  	va->flags |= VM_LAZY_FREE;
> >  	atomic_add((va->va_end - va->va_start) >> PAGE_SHIFT, &vmap_lazy_nr);
> > -	if (unlikely(atomic_read(&vmap_lazy_nr) > lazy_max_pages()))
> > -		try_purge_vmap_area_lazy();
> > +	try_purge_vmap_area_lazy();
> >  }
> >
> > /*
> >
> >
> > Steve.
> >
> > On Mon, 2010-04-12 at 17:27 +0100, Steven Whitehouse wrote:
> >> Hi,
> >>
> >> I've noticed that vmalloc seems to be rather slow. I wrote a test kernel
> >> module to track down what was going wrong. The kernel module does one
> >> million vmalloc/touch mem/vfree in a loop and prints out how long it
> >> takes.
> >>
> >> The source of the test kernel module can be found as an attachment to
> >> this bz: https://bugzilla.redhat.com/show_bug.cgi?id=581459
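
(A loop of that shape boils down to roughly the module sketch below. The
real test module is the bugzilla attachment; the names here,
vmalloc_test_init, NR_LOOPS and ALLOC_SIZE, and the per-iteration size
are mine, not taken from it.)

/*
 * Minimal sketch, not the bugzilla module: one million
 * vmalloc/touch/vfree iterations, timed with do_gettimeofday().
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/time.h>

#define NR_LOOPS	1000000UL
#define ALLOC_SIZE	PAGE_SIZE	/* size per iteration is a guess */

static int __init vmalloc_test_init(void)
{
	struct timeval start, end;
	unsigned long i;
	long us;

	do_gettimeofday(&start);
	for (i = 0; i < NR_LOOPS; i++) {
		char *p = vmalloc(ALLOC_SIZE);

		if (!p)
			return -ENOMEM;
		p[0] = 1;	/* touch the memory */
		vfree(p);
	}
	do_gettimeofday(&end);

	us = (end.tv_sec - start.tv_sec) * 1000000L +
	     (end.tv_usec - start.tv_usec);
	printk(KERN_INFO "vmalloc took %ld us\n", us);

	return 0;
}

static void __exit vmalloc_test_exit(void)
{
}

module_init(vmalloc_test_init);
module_exit(vmalloc_test_exit);
MODULE_LICENSE("GPL");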
> >>
> >> When this module is run on my x86_64, 8-core, 12 GB machine on an
> >> otherwise idle system, I get the following results:
> >>
> >> vmalloc took 148798983 us
> >> vmalloc took 151664529 us
> >> vmalloc took 152416398 us
> >> vmalloc took 151837733 us
> >>
> >> After applying the two-line patch (see the same bz), which disables the
> >> delayed removal of the structures (apparently intended to improve SMP
> >> performance by reducing TLB flushes across CPUs), I get the following
> >> results:
> >>
> >> vmalloc took 15363634 us
> >> vmalloc took 15358026 us
> >> vmalloc took 15240955 us
> >> vmalloc took 15402302 us
> >>
> >> So that's a speed-up of around 10x, which isn't too bad. The question is
> >> whether there is a compromise that retains the benefits of the delayed
> >> TLB flushing code while reducing the overhead for other users. My
> >> two-line patch basically disables the delay by forcing a removal on
> >> each and every vfree.
> >>
> >> What is the correct way to fix this I wonder?
> >>
> >> Steve.
> >>
In my case (a 2-core, 2GB system), it's 50300661 us vs 11569357 us, about
a 4x improvement.
The bigger win on your machine would come from its larger lazy_max_pages()
value: it lets many lazily-freed vmap_areas linger, so alloc_vmap_area()
takes a long time to find a new vmap_area (i.e. the rbtree lookup gets
slow).
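
For context, lazy_max_pages() in mm/vmalloc.c of that era looks roughly
like this (quoted from memory, so treat it as a sketch rather than the
exact source):

static unsigned long lazy_max_pages(void)
{
	unsigned int log;

	/* Scale the lazy-purge threshold with the number of online CPUs. */
	log = fls(num_online_cpus());

	return log * (32UL * 1024 * 1024 / PAGE_SIZE);
}

With 8 CPUs, fls(8) = 4, so roughly 4 * 32MB = 128MB worth of lazily-freed
vmap_areas can pile up before a purge is forced; on my 2-core box it is
about 64MB, which would fit the smaller gap (4x vs 10x) I see here.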
How about calling purge_vmap_area_lazy() from the middle of the search
loop in alloc_vmap_area() when the rbtree lookup is taking too long?
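
Purely to illustrate that idea (not a tested patch; the helper name and
the limit are invented, and dropping/retaking vmap_area_lock around the
purge is ignored here):

/*
 * Sketch only: a helper the free-space search in alloc_vmap_area()
 * could call for every rbtree node it visits.  Once the walk has gone
 * past an arbitrary limit, flush the lazily-freed areas and tell the
 * caller to restart its lookup against a smaller tree.
 */
#define VMAP_PURGE_SEARCH_LIMIT	128

static bool vmap_search_too_long(unsigned int *nr_visited)
{
	if (++(*nr_visited) < VMAP_PURGE_SEARCH_LIMIT)
		return false;

	purge_vmap_area_lazy();		/* frees stale vmap_areas, flushes TLBs */
	*nr_visited = 0;
	return true;			/* caller should restart the search */
}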
BTW, Steve, is this a real issue or just a synthetic test?
I doubt such a vmalloc-bomb workload is realistic.
--
Kind regards,
Minchan Kim
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 30+ messages
2010-04-12 16:27 vmalloc performance Steven Whitehouse
2010-04-14 12:49 ` Steven Whitehouse
2010-04-14 14:24 ` Steven Whitehouse
2010-04-14 15:12 ` Minchan Kim
2010-04-14 15:13 ` Minchan Kim
2010-04-14 16:35 ` Minchan Kim [this message]
2010-04-15 8:33 ` Steven Whitehouse
2010-04-15 16:51 ` Minchan Kim
2010-04-16 14:10 ` Steven Whitehouse
2010-04-18 15:14 ` Minchan Kim
2010-04-19 12:58 ` Steven Whitehouse
2010-04-19 14:12 ` Minchan Kim
2010-04-29 13:43 ` Steven Whitehouse
2010-05-02 17:29 ` [PATCH] cache last free vmap_area to avoid restarting beginning Minchan Kim
2010-05-05 12:48 ` Steven Whitehouse
2010-05-05 16:16 ` Nick Piggin
2010-05-17 12:42 ` Steven Whitehouse
2010-05-18 13:44 ` Steven Whitehouse
2010-05-19 13:54 ` Steven Whitehouse
2010-05-19 13:56 ` Nick Piggin
2010-05-25 8:43 ` Nick Piggin
2010-05-25 15:00 ` Minchan Kim
2010-05-25 15:48 ` Steven Whitehouse
2010-05-22 9:53 ` Minchan Kim
2010-05-24 6:23 ` Nick Piggin
2010-04-19 13:38 ` vmalloc performance Nick Piggin
2010-04-19 14:09 ` Minchan Kim
2010-04-16 6:12 ` Nick Piggin
2010-04-16 7:20 ` Minchan Kim
2010-04-16 8:50 ` Steven Whitehouse