linux-mm.kvack.org archive mirror
From: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
To: Tao Ma <tm@tao.ma>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	David Rientjes <rientjes@google.com>,
	Minchan Kim <minchan.kim@gmail.com>, Mel Gorman <mel@csn.ul.ie>,
	Johannes Weiner <jweiner@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
Date: Mon, 09 Jan 2012 18:58:22 -0500
Message-ID: <4F0B7F1E.40504@gmail.com>
In-Reply-To: <4F0698D8.3000300@tao.ma>

(1/6/12 1:46 AM), Tao Ma wrote:
> On 01/06/2012 02:33 PM, KOSAKI Motohiro wrote:
>> (1/6/12 1:30 AM), Tao Ma wrote:
>>> On 01/06/2012 02:18 PM, KOSAKI Motohiro wrote:
>>>> 2012/1/6 Tao Ma<tm@tao.ma>:
>>>>> Hi Kosaki,
>>>>> On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>>>>>>> Because your test program is too artificial. 20sec/100000times =
>>>>>>>> 200usec. And your program repeatedly mlocks and munlocks the exact
>>>>>>>> same address. So, yes, if lru_add_drain_all() is removed, it becomes
>>>>>>>> nearly a no-op, but it's a worthless comparison. No practical
>>>>>>>> program uses mlock in such a strange way.
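For reference, here is a minimal sketch of this kind of microbenchmark. It is
not the original test program: the region size and the single mlock/munlock
pair per iteration are assumptions, and the original reportedly issues three
mlock calls per iteration.

/* Illustrative microbenchmark: mlock/munlock the same region repeatedly
 * and report the average cost per mlock/munlock pair. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define ITERATIONS	100000
#define REGION_SIZE	(8 * 4096)	/* small, to stay under RLIMIT_MEMLOCK */

int main(void)
{
	struct timespec start, end;
	double elapsed_us;
	void *buf;
	long i;

	buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, REGION_SIZE);	/* fault the pages in up front */

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++) {
		if (mlock(buf, REGION_SIZE) || munlock(buf, REGION_SIZE)) {
			perror("mlock/munlock");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	elapsed_us = (end.tv_sec - start.tv_sec) * 1e6 +
		     (end.tv_nsec - start.tv_nsec) / 1e3;
	printf("%.1f usec per mlock/munlock pair\n", elapsed_us / ITERATIONS);
	return 0;
}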
>>>>>>> Yes, I should say it is artificial. But mlock did cause the problem
>>>>>>> in our production system, and perf shows that mlock consumes far more
>>>>>>> system time than anything else. That's why we created this program:
>>>>>>> to test whether mlock really is that slow. And we compared the result
>>>>>>> with RHEL5 (2.6.18), which runs much, much faster.
>>>>>>>
>>>>>>> And according to the commit log you described, we can remove
>>>>>>> lru_add_drain_all safely here, so why add it? At least removing it
>>>>>>> makes mlock much faster compared to the vanilla kernel.
>>>>>>
>>>>>> If we remove it, we lose a way to test mlock: the "Mlocked" field of
>>>>>> /proc/meminfo will very easily show an inaccurate number. So, if the
>>>>>> 200usec is unavoidable, I'll ack you. But I'm not convinced yet.
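For illustration only, the accounting in question can be observed by reading
the "Mlocked:" line of /proc/meminfo around an mlock call. This is a rough
sketch, not part of the patch; the region size is an arbitrary assumption.

/* Sketch: print the "Mlocked:" line of /proc/meminfo before and after
 * locking a small region.  If newly locked pages are still sitting in
 * per-CPU pagevecs when the counter is read, the reported number can lag
 * behind what was actually mlocked. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void print_mlocked(const char *tag)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		return;
	}
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Mlocked:", 8))
			printf("%s %s", tag, line);
	fclose(f);
}

int main(void)
{
	static char buf[32 * 1024];	/* small, within typical RLIMIT_MEMLOCK */

	print_mlocked("before:");
	if (mlock(buf, sizeof(buf))) {
		perror("mlock");
		return 1;
	}
	print_mlocked("after: ");
	munlock(buf, sizeof(buf));
	return 0;
}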
>>>>> Do you find something new for this?
>>>>
>>>> No.
>>>>
>>>> Or more exactly, 200usec was my calculation mistake. Your program calls
>>>> mlock 3 times per iteration, so the correct cost is 66usec.
>>> Yes, so mlock can only do about 15000 calls/s; a single call is slower
>>> than the whole I/O time of a not-very-fast SSD, and I don't think that is
>>> tolerable. I guess we should remove it, right? Or do you have any other
>>> suggestion that I can try?
>>
>> Read the whole thread.
> I have read the whole thread, and you only said that the test case is
> artificial; there is no suggestion or patch about how to resolve it. As I
> have said, it is very time-consuming, the penalty grows with the number of
> CPU cores, and the I/O time of an SSD can be shorter than it. So do you
> think 66 usec is OK for a memory operation?

I don't think you've read the thread at all. Please read akpm's comment:

http://www.spinics.net/lists/linux-mm/msg28290.html






Thread overview: 28+ messages
2011-12-30  6:36 [PATCH] mm: do not drain pagevecs for mlock Tao Ma
2011-12-30  8:11 ` KOSAKI Motohiro
2011-12-30  8:48   ` Tao Ma
2011-12-30  9:31     ` KOSAKI Motohiro
2011-12-30  9:45       ` Tao Ma
2011-12-30 10:07         ` KOSAKI Motohiro
2012-01-01  7:30           ` [PATCH 1/2] mm,mlock: drain pagevecs asynchronously kosaki.motohiro
2012-01-04  1:17             ` Minchan Kim
2012-01-04  2:38               ` KOSAKI Motohiro
2012-01-10  8:53                 ` Tao Ma
2012-01-04  2:56             ` Hugh Dickins
2012-01-04 22:05             ` Andrew Morton
2012-01-04 23:33               ` KOSAKI Motohiro
2012-01-05  0:19                 ` Hugh Dickins
2012-01-01  7:30           ` [PATCH 2/2] sysvshm: SHM_LOCK use lru_add_drain_all_async() kosaki.motohiro
2012-01-04  1:51             ` Hugh Dickins
2012-01-04  2:19               ` KOSAKI Motohiro
2012-01-04  5:17                 ` Hugh Dickins
2012-01-04  8:34                   ` KOSAKI Motohiro
2012-01-06  6:13           ` [PATCH] mm: do not drain pagevecs for mlock Tao Ma
2012-01-06  6:18             ` KOSAKI Motohiro
2012-01-06  6:30               ` Tao Ma
2012-01-06  6:33                 ` KOSAKI Motohiro
2012-01-06  6:46                   ` Tao Ma
2012-01-09 23:58                     ` KOSAKI Motohiro [this message]
2012-01-10  2:08                       ` Tao Ma
2012-01-09  7:25           ` Tao Ma
2011-12-30 10:14         ` KOSAKI Motohiro
