linux-mm.kvack.org archive mirror
From: Tao Ma <tm@tao.ma>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	David Rientjes <rientjes@google.com>,
	Minchan Kim <minchan.kim@gmail.com>, Mel Gorman <mel@csn.ul.ie>,
	Johannes Weiner <jweiner@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
Date: Fri, 30 Dec 2011 17:45:22 +0800	[thread overview]
Message-ID: <4EFD8832.6010905@tao.ma> (raw)
In-Reply-To: <CAHGf_=pODc6fLGJAEZWzQtUd6fj6v=fV9n6UTwysqRR1SwY++A@mail.gmail.com>

On 12/30/2011 05:31 PM, KOSAKI Motohiro wrote:
> 2011/12/30 Tao Ma <tm@tao.ma>:
>> On 12/30/2011 04:11 PM, KOSAKI Motohiro wrote:
>>> 2011/12/30 Tao Ma <tm@tao.ma>:
>>>> In our testing of mlock, we have found a severe performance regression.
>>>> Further investigation shows that mlock is blocked heavily by
>>>> lru_add_drain_all, which calls schedule_on_each_cpu and flushes the work
>>>> queue; this is much slower when we have several cpus.
>>>>
>>>> So we have tried 2 ways to solve it:
>>>> 1. Add a per-cpu counter for all the pagevecs so that we don't schedule
>>>>   and flush the lru_drain work if the cpu doesn't have any pagevecs (I
>>>>   have finished this code already).
>>>> 2. Remove the lru_add_drain_all call.
>>>>
>>>> The first one has a problem: in our production system all the cpus
>>>> are busy, so I guess there is very little chance for a cpu to have 0
>>>> pagevecs unless you run several consecutive mlocks.
>>>>
>>>> From the commit log which added this function (8891d6da), it seems that we
>>>> don't have to call it. So the 2nd option is both easy and workable, and
>>>> hence this patch.
>>>
>>> Could you please show us your system environment and benchmark programs?
>>> Usually lru_drain_* is much faster than the mlock() body, because the
>>> mlock body does plenty of memset(page).
>> The system environment is: 16 core Xeon E5620. 24G memory.
>>
>> I have attached the program. It is very simple and just uses mlock/munlock.
> 
> Because your test program is too artificial: 20sec/100000 times =
> 200usec, and your program repeats mlock and munlock on the exact same
> address. So, yes, if lru_add_drain_all() is removed, mlock becomes
> nearly a no-op, but it's a worthless comparison: no practical program
> uses mlock in such a strange way.
Yes, I should say it is artificial. But mlock did cause the problem in
our production system, and perf shows that mlock uses much more system
time than anything else. That's why we created this program: to test
whether mlock really is slow. And we compared the result with
rhel5 (2.6.18), which runs much, much faster.

And from the commit log you described, we can safely remove
lru_add_drain_all here, so why is it needed? At least removing it makes
mlock much faster compared to the vanilla kernel.

> 
> But 200usec is much more than I measured before. I'll dig into it a bit more.
Thanks for the help.

Tao



Thread overview: 28+ messages
2011-12-30  6:36 [PATCH] mm: do not drain pagevecs for mlock Tao Ma
2011-12-30  8:11 ` KOSAKI Motohiro
2011-12-30  8:48   ` Tao Ma
2011-12-30  9:31     ` KOSAKI Motohiro
2011-12-30  9:45       ` Tao Ma [this message]
2011-12-30 10:07         ` KOSAKI Motohiro
2012-01-01  7:30           ` [PATCH 1/2] mm,mlock: drain pagevecs asynchronously kosaki.motohiro
2012-01-04  1:17             ` Minchan Kim
2012-01-04  2:38               ` KOSAKI Motohiro
2012-01-10  8:53                 ` Tao Ma
2012-01-04  2:56             ` Hugh Dickins
2012-01-04 22:05             ` Andrew Morton
2012-01-04 23:33               ` KOSAKI Motohiro
2012-01-05  0:19                 ` Hugh Dickins
2012-01-01  7:30           ` [PATCH 2/2] sysvshm: SHM_LOCK use lru_add_drain_all_async() kosaki.motohiro
2012-01-04  1:51             ` Hugh Dickins
2012-01-04  2:19               ` KOSAKI Motohiro
2012-01-04  5:17                 ` Hugh Dickins
2012-01-04  8:34                   ` KOSAKI Motohiro
2012-01-06  6:13           ` [PATCH] mm: do not drain pagevecs for mlock Tao Ma
2012-01-06  6:18             ` KOSAKI Motohiro
2012-01-06  6:30               ` Tao Ma
2012-01-06  6:33                 ` KOSAKI Motohiro
2012-01-06  6:46                   ` Tao Ma
2012-01-09 23:58                     ` KOSAKI Motohiro
2012-01-10  2:08                       ` Tao Ma
2012-01-09  7:25           ` Tao Ma
2011-12-30 10:14         ` KOSAKI Motohiro
