From: Konstantin Khlebnikov <khlebnikov@openvz.org>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Ying Han <yinghan@google.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 9/10] mm/memcg: move lru_lock into lruvec
Date: Wed, 22 Feb 2012 01:35:25 +0400	[thread overview]
Message-ID: <4F440E1D.7050004@openvz.org> (raw)
In-Reply-To: <alpine.LSU.2.00.1202211205280.1858@eggly.anvils>

Hugh Dickins wrote:
> On Tue, 21 Feb 2012, Konstantin Khlebnikov wrote:
>>
>> On lumpy/compaction isolation you do:
>>
>> if (!PageLRU(page))
>> 	continue
>>
>> __isolate_lru_page()
>>
>> page_relock_rcu_vec()
>> 	rcu_read_lock()
>> 	rcu_dereference()...
>> 	spin_lock()...
>> 	rcu_read_unlock()
>>
>> You protect page_relock_rcu_vec() by switching the pointers back to root.
>>
>> I do:
>>
>> catch_page_lru()
>> 	rcu_read_lock()
>> 	if (!PageLRU(page))
>> 		return false
>> 	rcu_dereference()...
>> 	spin_lock()...
>> 	rcu_read_unlock()
>> 	if (PageLRU())
>> 		return true
>> if true
>> 	__isolate_lru_page()
>>
>> I protect my catch_page_lruvec() with a PageLRU() check done inside a single
>> rcu read-side section together with taking the lock.
>> Thus my code is better, because it does not require switching pointers back
>> to the root memcg.
>
> That sounds much better, yes - if it does work reliably.
>
> I'll have to come back to think about your locking later too;
> or maybe that's exactly where I need to look, when investigating
> the mm_inline.h:41 BUG.

The pages_count[] updates look correct.
This really may be a bug in the locking, and this VM_BUG_ON catches it before
the list debugging would.
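
For clarity, the scheme I describe above is roughly the following. This is
only a simplified sketch: the page->lruvec field and the lru_lock placement
are illustrative here, not the exact code from my patch set.

static struct lruvec *catch_page_lruvec(struct page *page)
{
	struct lruvec *lruvec;

	rcu_read_lock();
	if (!PageLRU(page)) {
		rcu_read_unlock();
		return NULL;
	}
	/* the page -> lruvec link may still change until we hold the lock */
	lruvec = rcu_dereference(page->lruvec);		/* illustrative field */
	spin_lock(&lruvec->lru_lock);
	rcu_read_unlock();
	/* recheck under the lock: the page may have left this lru meanwhile */
	if (PageLRU(page) && page->lruvec == lruvec)
		return lruvec;	/* caller may now __isolate_lru_page() */
	spin_unlock(&lruvec->lru_lock);
	return NULL;
}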

>
> But at first sight, I have to say I'm very suspicious: I've never found
> PageLRU a good enough test for whether we need such a lock, because of
> races with those pages on percpu lruvec about to be put on the lru.
>
> But maybe once I look closer, I'll find that's handled by your changes
> away from lruvec; though I'd have thought the same issue exists,
> independent of whether the pending pages are in vector or list.

Are you talking about my per-cpu page lists for lru adding?
That is just an unnecessary patch; I don't know why I included it in the v2 set.
It does not protect anything.

>
> Hugh
>
>>
>> Meanwhile, after seeing your patches, I realized that this rcu protection is
>> required only for lock-by-pfn in lumpy/compaction isolation.
>> Thus my locking should be simplified and optimized.
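
To make the lock-by-pfn point concrete: only a pfn walker (lumpy reclaim or
compaction) needs this catch-and-recheck dance, because it starts from a bare
pfn instead of a page that is already linked into a known lruvec.
Schematically, using the illustrative helper sketched above (this is not the
actual isolate_lru_pages() code):

static void scan_pfn_range(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page;
		struct lruvec *lruvec;

		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);

		/* cheap unlocked filter before taking any lock */
		if (!PageLRU(page))
			continue;

		lruvec = catch_page_lruvec(page);
		if (!lruvec)
			continue;	/* lost the race, page left the lru */

		/*
		 * PageLRU is stable here because we hold the right
		 * lruvec lru_lock; __isolate_lru_page() (or whatever
		 * else the caller wants to do) would go here.
		 */
		spin_unlock(&lruvec->lru_lock);
	}
}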

