cgroups.vger.kernel.org archive mirror
* [BUGFIX][PATCH] add mem_cgroup_replace_page_cache.
@ 2011-12-06  3:39 KAMEZAWA Hiroyuki
       [not found] ` <20111206123923.1432ab52.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-06  3:39 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
  Cc: Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Michal Hocko,
	Hugh Dickins


Hm, is this too naive? A better idea is welcome.
==
From 33638351c5cd28af9f47f9ab1c44eeb1f63d9964 Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Date: Tue, 6 Dec 2011 12:32:32 +0900
Subject: [PATCH] memcg: add mem_cgroup_replace_page_cache() for fixing LRU issue.

commit ef6a3c6311 adds a function replace_page_cache_page(). This
function replaces a page in the radix-tree with a new page.
When doing this, the memory cgroup needs to fix up the accounting
information; memcg needs to check the PCG_USED bit etc.

In some (many?) cases, 'newpage' is already on the LRU before
replace_page_cache() is called, so memcg's LRU accounting information
must be fixed, too.

This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
In that function, the old page is unaccounted without touching res_counter
and the new page is accounted to the memcg (of the old page). While
overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
avoid races with LRU handling.

Background:
  replace_page_cache_page() is called by FUSE code in its splice() handling.
  Here, 'newpage' replaces oldpage, but this newpage is not a newly
  allocated page and may already be on the LRU. LRU mis-accounting is
  critical for memory cgroup because rmdir() checks that the whole LRU is
  empty and there is no accounting leak. If a page is on a different LRU
  than it should be, rmdir() will fail.
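
For illustration, here is a minimal sketch of the kind of emptiness check
rmdir() depends on (it reuses the MEM_CGROUP_ZSTAT counter seen elsewhere
in this series, but it is illustrative only, not the exact mm/memcontrol.c
code):

	/*
	 * Illustrative only: rmdir() on a memcg can only succeed once
	 * every per-zone LRU counter has dropped to zero.  A page linked
	 * on one memcg's LRU while accounted to another leaves a counter
	 * that never reaches zero, so rmdir() keeps failing.
	 */
	static bool memcg_lru_is_empty(struct mem_cgroup_per_zone *mz)
	{
		enum lru_list lru;

		for_each_lru(lru)
			if (MEM_CGROUP_ZSTAT(mz, lru))
				return false;
		return true;
	}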

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
---
 include/linux/memcontrol.h |    6 ++++++
 mm/filemap.c               |   18 ++----------------
 mm/memcontrol.c            |   41 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4b70e05..bd3b102 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -123,6 +123,8 @@ struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
 					struct task_struct *p);
+extern void mem_cgroup_replace_page_cache(struct page *oldpage,
+					struct page *newpage);
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 extern int do_swap_account;
@@ -382,6 +384,10 @@ static inline
 void mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
+static inline void mem_cgroup_replace_page_cache(struct page *oldpage,
+				struct page *newpage)
+{
+}
 #endif /* CONFIG_CGROUP_MEM_CONT */
 
 #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
diff --git a/mm/filemap.c b/mm/filemap.c
index a7b572b..4642211 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -393,24 +393,11 @@ EXPORT_SYMBOL(filemap_write_and_wait_range);
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
 	int error;
-	struct mem_cgroup *memcg = NULL;
 
 	VM_BUG_ON(!PageLocked(old));
 	VM_BUG_ON(!PageLocked(new));
 	VM_BUG_ON(new->mapping);
 
-	/*
-	 * This is not page migration, but prepare_migration and
-	 * end_migration does enough work for charge replacement.
-	 *
-	 * In the longer term we probably want a specialized function
-	 * for moving the charge from old to new in a more efficient
-	 * manner.
-	 */
-	error = mem_cgroup_prepare_migration(old, new, &memcg, gfp_mask);
-	if (error)
-		return error;
-
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (!error) {
 		struct address_space *mapping = old->mapping;
@@ -432,13 +419,12 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 		if (PageSwapBacked(new))
 			__inc_zone_page_state(new, NR_SHMEM);
 		spin_unlock_irq(&mapping->tree_lock);
+		/* mem_cgroup code must not be called under tree_lock */
+		mem_cgroup_replace_page_cache(old, new);
 		radix_tree_preload_end();
 		if (freepage)
 			freepage(old);
 		page_cache_release(old);
-		mem_cgroup_end_migration(memcg, old, new, true);
-	} else {
-		mem_cgroup_end_migration(memcg, old, new, false);
 	}
 
 	return error;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8880a32..a9e92a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3306,6 +3306,47 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
 	cgroup_release_and_wakeup_rmdir(&memcg->css);
 }
 
+/*
+ * At replace page cache, newpage is not under any memcg but it's on
+ * the LRU. So, this function doesn't touch res_counter but handles
+ * the LRU correctly.
+ */
+void mem_cgroup_replace_page_cache(struct page *oldpage,
+				  struct page *newpage)
+{
+	struct mem_cgroup *memcg;
+	struct page_cgroup *pc;
+	struct zone *zone;
+	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
+	unsigned long flags;
+
+	pc = lookup_page_cgroup(oldpage);
+	/* fix accounting on old pages */
+	lock_page_cgroup(pc);
+	memcg = pc->mem_cgroup;
+	mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), -1);
+	ClearPageCgroupUsed(pc);
+	unlock_page_cgroup(pc);
+
+	if (PageSwapBacked(oldpage))
+		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
+
+	zone = page_zone(newpage);
+	pc = lookup_page_cgroup(newpage);
+	/*
+	 * Even if newpage->mapping was NULL before starting replacement,
+	 * the newpage may be on LRU(or pagevec for LRU) already. We lock
+	 * LRU while we overwrite pc->mem_cgroup.
+	 */
+	spin_lock_irqsave(&zone->lru_lock, flags);
+	if (PageLRU(newpage))
+		del_page_from_lru_list(zone, newpage, page_lru(newpage));
+	__mem_cgroup_commit_charge(memcg, newpage, 1, pc, type);
+	if (PageLRU(newpage))
+		add_page_to_lru_list(zone, newpage, page_lru(newpage));
+	spin_unlock_irqrestore(&zone->lru_lock, flags);
+}
+
 #ifdef CONFIG_DEBUG_VM
 static struct page_cgroup *lookup_page_cgroup_used(struct page *page)
 {
-- 
1.7.4.1


--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC][PATCH 1/4] memcg: simplify page cache charging
       [not found] ` <20111206123923.1432ab52.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
@ 2011-12-06 10:12   ` KAMEZAWA Hiroyuki
  2011-12-06 10:13     ` [RFC][PATCH 2/4] memcg: simplify corner case handling of LRU and charge races KAMEZAWA Hiroyuki
       [not found]     ` <20111206191211.3be32ccb.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  2011-12-07  9:21   ` [BUGFIX][PATCH] add mem_cgroup_replace_page_cache Johannes Weiner
  2011-12-07 11:14   ` Michal Hocko
  2 siblings, 2 replies; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-06 10:12 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Michal Hocko,
	Hugh Dickins

This is an add-on patch series to mem_cgroup_replace_page_cache(), finally
introducing a new LRU rule under memcg. After this, LRU handling becomes
much simpler. (All patches are based on 3.2.0-rc4-next-20111205+.)

But this is experimental... I may have missed some important corner cases.
==
From 028eddd3fea1634e23b18a96ba5653e8c54b437b Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Date: Tue, 6 Dec 2011 14:57:01 +0900
Subject: [PATCH 1/4] memcg: simplify page cache charging.

Because of commit ef6a3c6311, FUSE uses replace_page_cache() instead
of add_to_page_cache(), so mem_cgroup_cache_charge() is no longer
called against FUSE's pages from splice.

Now, mem_cgroup_cache_charge() doesn't receive a page that is already
on the LRU unless that page is SwapCache.
To verify this, WARN_ON_ONCE(PageLRU(page)) is added.
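
In other words, after this patch the charge path reduces to roughly the
following (an illustrative condensation of the diff below, not literal
kernel code):

	type = page_is_file_cache(page) ? MEM_CGROUP_CHARGE_TYPE_CACHE
					: MEM_CGROUP_CHARGE_TYPE_SHMEM;
	if (!PageSwapCache(page)) {
		/* fresh page cache: must never arrive already on the LRU */
		ret = mem_cgroup_charge_common(page, mm, gfp_mask, type);
		WARN_ON_ONCE(PageLRU(page));
	} else {
		/* swapcache/shmem: may already be on the LRU */
		ret = mem_cgroup_try_charge_swapin(mm, page, gfp_mask, &memcg);
		if (!ret)
			__mem_cgroup_commit_charge_swapin(page, memcg, type);
	}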

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
---
 mm/memcontrol.c |   31 +++++++++----------------------
 1 files changed, 9 insertions(+), 22 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a9e92a6..947c62c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2710,6 +2710,7 @@ int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask)
 {
 	struct mem_cgroup *memcg = NULL;
+	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
 	int ret;
 
 	if (mem_cgroup_disabled())
@@ -2719,31 +2720,17 @@ int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 
 	if (unlikely(!mm))
 		mm = &init_mm;
+	if (!page_is_file_cache(page))
+		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
 
-	if (page_is_file_cache(page)) {
-		ret = __mem_cgroup_try_charge(mm, gfp_mask, 1, &memcg, true);
-		if (ret || !memcg)
-			return ret;
-
-		/*
-		 * FUSE reuses pages without going through the final
-		 * put that would remove them from the LRU list, make
-		 * sure that they get relinked properly.
-		 */
-		__mem_cgroup_commit_charge_lrucare(page, memcg,
-					MEM_CGROUP_CHARGE_TYPE_CACHE);
-		return ret;
-	}
-	/* shmem */
-	if (PageSwapCache(page)) {
+	if (!PageSwapCache(page)) {
+		ret = mem_cgroup_charge_common(page, mm, gfp_mask, type);
+		WARN_ON_ONCE(PageLRU(page));
+	} else { /* page is swapcache/shmem */
 		ret = mem_cgroup_try_charge_swapin(mm, page, gfp_mask, &memcg);
 		if (!ret)
-			__mem_cgroup_commit_charge_swapin(page, memcg,
-					MEM_CGROUP_CHARGE_TYPE_SHMEM);
-	} else
-		ret = mem_cgroup_charge_common(page, mm, gfp_mask,
-					MEM_CGROUP_CHARGE_TYPE_SHMEM);
-
+			__mem_cgroup_commit_charge_swapin(page, memcg, type);
+	}
 	return ret;
 }
 
-- 
1.7.4.1


--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC][PATCH 2/4] memcg: simplify corner case handling of LRU and charge races
  2011-12-06 10:12   ` [RFC][PATCH 1/4] memcg: simplify page cache charging KAMEZAWA Hiroyuki
@ 2011-12-06 10:13     ` KAMEZAWA Hiroyuki
       [not found]     ` <20111206191211.3be32ccb.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  1 sibling, 0 replies; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-06 10:13 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel@vger.kernel.org, Miklos Szeredi,
	akpm@linux-foundation.org, linux-mm@kvack.org, cgroups,
	hannes@cmpxchg.org, Michal Hocko, Hugh Dickins

From 2949dd497b4b87d9a5a6352053d247b5924516ea Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 6 Dec 2011 15:08:55 +0900
Subject: [PATCH 2/4] memcg: simplify corner case handling of LRU and charge races.

This patch simplifies LRU handling of the racy case (memcg+SwapCache).
At charge time, SwapCache tends to be on the LRU already. So, before
overwriting pc->mem_cgroup, the page must be removed from the LRU and
added back to the LRU afterwards.

This patch does:
	spin_lock(zone->lru_lock);
	if (PageLRU(page))
		remove from LRU
	overwrite pc->mem_cgroup
	if (PageLRU(page))
		add to new LRU.
	spin_unlock(zone->lru_lock);
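
zone->lru_lock can also be taken from interrupt context (for example,
page rotation at I/O completion), so the real code uses the IRQ-safe
lock variant; the hunk below therefore ends up as:

	spin_lock_irqsave(&zone->lru_lock, flags);
	if (PageLRU(page))
		del_page_from_lru_list(zone, page, page_lru(page));
	__mem_cgroup_commit_charge(memcg, page, 1, pc, ctype);
	if (PageLRU(page))
		add_page_to_lru_list(zone, page, page_lru(page));
	spin_unlock_irqrestore(&zone->lru_lock, flags);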

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c |   90 +++++--------------------------------------------------
 1 files changed, 8 insertions(+), 82 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 947c62c..66a2a59 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1071,86 +1071,6 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
 }
 
 /*
- * At handling SwapCache and other FUSE stuff, pc->mem_cgroup may be changed
- * while it's linked to lru because the page may be reused after it's fully
- * uncharged. To handle that, unlink page_cgroup from LRU when charge it again.
- * It's done under lock_page and expected that zone->lru_lock isnever held.
- */
-static void mem_cgroup_lru_del_before_commit(struct page *page)
-{
-	enum lru_list lru;
-	unsigned long flags;
-	struct zone *zone = page_zone(page);
-	struct page_cgroup *pc = lookup_page_cgroup(page);
-
-	/*
-	 * Doing this check without taking ->lru_lock seems wrong but this
-	 * is safe. Because if page_cgroup's USED bit is unset, the page
-	 * will not be added to any memcg's LRU. If page_cgroup's USED bit is
-	 * set, the commit after this will fail, anyway.
-	 * This all charge/uncharge is done under some mutual execustion.
-	 * So, we don't need to taking care of changes in USED bit.
-	 */
-	if (likely(!PageLRU(page)))
-		return;
-
-	spin_lock_irqsave(&zone->lru_lock, flags);
-	lru = page_lru(page);
-	/*
-	 * The uncharged page could still be registered to the LRU of
-	 * the stale pc->mem_cgroup.
-	 *
-	 * As pc->mem_cgroup is about to get overwritten, the old LRU
-	 * accounting needs to be taken care of.  Let root_mem_cgroup
-	 * babysit the page until the new memcg is responsible for it.
-	 *
-	 * The PCG_USED bit is guarded by lock_page() as the page is
-	 * swapcache/pagecache.
-	 */
-	if (PageLRU(page) && PageCgroupAcctLRU(pc) && !PageCgroupUsed(pc)) {
-		del_page_from_lru_list(zone, page, lru);
-		add_page_to_lru_list(zone, page, lru);
-	}
-	spin_unlock_irqrestore(&zone->lru_lock, flags);
-}
-
-static void mem_cgroup_lru_add_after_commit(struct page *page)
-{
-	enum lru_list lru;
-	unsigned long flags;
-	struct zone *zone = page_zone(page);
-	struct page_cgroup *pc = lookup_page_cgroup(page);
-	/*
-	 * putback:				charge:
-	 * SetPageLRU				SetPageCgroupUsed
-	 * smp_mb				smp_mb
-	 * PageCgroupUsed && add to memcg LRU	PageLRU && add to memcg LRU
-	 *
-	 * Ensure that one of the two sides adds the page to the memcg
-	 * LRU during a race.
-	 */
-	smp_mb();
-	/* taking care of that the page is added to LRU while we commit it */
-	if (likely(!PageLRU(page)))
-		return;
-	spin_lock_irqsave(&zone->lru_lock, flags);
-	lru = page_lru(page);
-	/*
-	 * If the page is not on the LRU, someone will soon put it
-	 * there.  If it is, and also already accounted for on the
-	 * memcg-side, it must be on the right lruvec as setting
-	 * pc->mem_cgroup and PageCgroupUsed is properly ordered.
-	 * Otherwise, root_mem_cgroup has been babysitting the page
-	 * during the charge.  Move it to the new memcg now.
-	 */
-	if (PageLRU(page) && !PageCgroupAcctLRU(pc)) {
-		del_page_from_lru_list(zone, page, lru);
-		add_page_to_lru_list(zone, page, lru);
-	}
-	spin_unlock_irqrestore(&zone->lru_lock, flags);
-}
-
-/*
  * Checks whether given mem is same or in the root_mem_cgroup's
  * hierarchy subtree
  */
@@ -2695,14 +2615,20 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 					enum charge_type ctype)
 {
 	struct page_cgroup *pc = lookup_page_cgroup(page);
+	struct zone *zone = page_zone(page);
+	unsigned long flags;
 	/*
 	 * In some case, SwapCache, FUSE(splice_buf->radixtree), the page
 	 * is already on LRU. It means the page may on some other page_cgroup's
 	 * LRU. Take care of it.
 	 */
-	mem_cgroup_lru_del_before_commit(page);
+	spin_lock_irqsave(&zone->lru_lock, flags);
+	if (PageLRU(page))
+		del_page_from_lru_list(zone, page, page_lru(page));
 	__mem_cgroup_commit_charge(memcg, page, 1, pc, ctype);
-	mem_cgroup_lru_add_after_commit(page);
+	if (PageLRU(page))
+		add_page_to_lru_list(zone, page, page_lru(page));
+	spin_unlock_irqrestore(&zone->lru_lock, flags);
 	return;
 }
 
-- 
1.7.4.1


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC][PATCH 3/4] memcg: clear pc->mem_cgroup if necessary
       [not found]     ` <20111206191211.3be32ccb.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
@ 2011-12-06 10:15       ` KAMEZAWA Hiroyuki
  2011-12-06 10:17       ` [RFC][PATCH 4/4] memcg: new LRU rule KAMEZAWA Hiroyuki
  1 sibling, 0 replies; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-06 10:15 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Michal Hocko,
	Hugh Dickins

From f0c8f600f0d0e5500aaad493fb5740ebe5ee1225 Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Date: Tue, 6 Dec 2011 17:51:55 +0900
Subject: [PATCH 3/4] memcg: clear pc->mem_cgroup if necessary.

At swap-in, the page is added to the LRU before it is accounted by memcg.
This causes a page_cgroup->mem_cgroup vs. LRU mismatch. This problem
is currently handled by the PCG_ACCT_LRU flag bit in page_cgroup->flags.

This patch is a preparation for the new approach. It resets
pc->mem_cgroup if a newly allocated page may be added to the LRU before
being charged. This will be required for removing the PCG_ACCT_LRU bit.
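
A rough timeline of the swap-in window this patch targets (an
illustrative sequence simplified from mm/swap_state.c, not literal
code):

	new_page = alloc_page_vma(gfp_mask, vma, addr);
	mem_cgroup_reset_owner(new_page);	/* added by this patch */
	/* the page becomes visible and reaches the LRU before any charge */
	add_to_swap_cache(new_page, entry, gfp_mask);
	lru_cache_add_anon(new_page);
	...
	/* only later, at fault time, does memcg charge the page */
	mem_cgroup_try_charge_swapin(mm, new_page, gfp_mask, &memcg);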

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
---
 include/linux/memcontrol.h |    5 +++++
 mm/ksm.c                   |    1 +
 mm/memcontrol.c            |   14 ++++++++++++++
 mm/swap_state.c            |    1 +
 4 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bd3b102..7428409 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -126,6 +126,7 @@ extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
 extern void mem_cgroup_replace_page_cache(struct page *oldpage,
 					struct page *newpage);
 
+extern void mem_cgroup_reset_owner(struct page *page);
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 extern int do_swap_account;
 #endif
@@ -388,6 +389,10 @@ static inline void mem_cgroup_replace_page_cache(struct page *oldpage,
 				struct page *newpage)
 {
 }
+
+static inline void mem_cgroup_reset_owner(struct page *page)
+{
+}
 #endif /* CONFIG_CGROUP_MEM_CONT */
 
 #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
diff --git a/mm/ksm.c b/mm/ksm.c
index a6d3fb7..480983d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1571,6 +1571,7 @@ struct page *ksm_does_need_to_copy(struct page *page,
 
 	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
 	if (new_page) {
+		mem_cgroup_reset_owner(new_page);
 		copy_user_highpage(new_page, page, address, vma);
 
 		SetPageDirty(new_page);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 66a2a59..b8706d8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3017,6 +3017,20 @@ void mem_cgroup_uncharge_swap(swp_entry_t ent)
 	rcu_read_unlock();
 }
 
+/*
+ * A function for resetting pc->mem_cgroup for newly allocated pages.
+ * This function should be called if the newpage will be added to LRU
+ * before start accounting.
+ */
+void mem_cgroup_reset_owner(struct page *newpage)
+{
+	struct page_cgroup *pc;
+
+	pc = lookup_page_cgroup(newpage);
+	VM_BUG_ON(PageCgroupUsed(pc));
+	pc->mem_cgroup = root_mem_cgroup;
+}
+
 /**
  * mem_cgroup_move_swap_account - move swap charge and swap_cgroup's record.
  * @entry: swap entry to be moved
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 78cc4d1..747539e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -301,6 +301,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			new_page = alloc_page_vma(gfp_mask, vma, addr);
 			if (!new_page)
 				break;		/* Out of memory */
+			mem_cgroup_reset_owner(new_page);
 		}
 
 		/*
-- 
1.7.4.1


--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC][PATCH 4/4] memcg: new LRU rule
       [not found]     ` <20111206191211.3be32ccb.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  2011-12-06 10:15       ` [RFC][PATCH 3/4] memcg: clear pc->mem_cgroup if necessary KAMEZAWA Hiroyuki
@ 2011-12-06 10:17       ` KAMEZAWA Hiroyuki
  1 sibling, 0 replies; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-06 10:17 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Michal Hocko,
	Hugh Dickins

I have not measured performance yet, sorry.
==
From 3f2539d695084eb8b83ec08347587b1e1f2efa9a Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Date: Tue, 6 Dec 2011 19:09:06 +0900
Subject: [PATCH 4/4] memcg: new LRU rule

Currently, at LRU handling, the memory cgroup needs to do complicated work
to see a valid pc->mem_cgroup, which may be overwritten.

This patch relaxes the protocol. It guarantees that
   - when pc->mem_cgroup is overwritten, the page must not be on the LRU.

With this, the LRU routines can trust pc->mem_cgroup and don't need to
check bits in pc->flags. This new rule may add a small overhead to
swap-in but, in most cases, LRU handling gets faster and overhead is
reduced.

After this patch, the PCG_ACCT_LRU bit is obsolete and is removed.
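
The new rule could be stated as an assertion at every place that
rewrites the owner (an illustrative sketch with a hypothetical helper
name, not code from this patch):

	/*
	 * Illustrative: after this series, whoever overwrites
	 * pc->mem_cgroup must hold the page off the LRU, so LRU code
	 * can dereference pc->mem_cgroup without further checks.
	 */
	static void memcg_set_owner(struct page *page, struct page_cgroup *pc,
				    struct mem_cgroup *memcg)
	{
		VM_BUG_ON(PageLRU(page));
		pc->mem_cgroup = memcg;
	}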

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
---
 include/linux/page_cgroup.h |    8 ----
 mm/memcontrol.c             |   81 +++++++++++-------------------------------
 2 files changed, 21 insertions(+), 68 deletions(-)

diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index aaa60da..2cddacf 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -10,8 +10,6 @@ enum {
 	/* flags for mem_cgroup and file and I/O status */
 	PCG_MOVE_LOCK, /* For race between move_account v.s. following bits */
 	PCG_FILE_MAPPED, /* page is accounted as "mapped" */
-	/* No lock in page_cgroup */
-	PCG_ACCT_LRU, /* page has been accounted for (under lru_lock) */
 	__NR_PCG_FLAGS,
 };
 
@@ -75,12 +73,6 @@ TESTPCGFLAG(Used, USED)
 CLEARPCGFLAG(Used, USED)
 SETPCGFLAG(Used, USED)
 
-SETPCGFLAG(AcctLRU, ACCT_LRU)
-CLEARPCGFLAG(AcctLRU, ACCT_LRU)
-TESTPCGFLAG(AcctLRU, ACCT_LRU)
-TESTCLEARPCGFLAG(AcctLRU, ACCT_LRU)
-
-
 SETPCGFLAG(FileMapped, FILE_MAPPED)
 CLEARPCGFLAG(FileMapped, FILE_MAPPED)
 TESTPCGFLAG(FileMapped, FILE_MAPPED)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b8706d8..0814cda 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -974,30 +974,7 @@ struct lruvec *mem_cgroup_lru_add_list(struct zone *zone, struct page *page,
 		return &zone->lruvec;
 
 	pc = lookup_page_cgroup(page);
-	VM_BUG_ON(PageCgroupAcctLRU(pc));
-	/*
-	 * putback:				charge:
-	 * SetPageLRU				SetPageCgroupUsed
-	 * smp_mb				smp_mb
-	 * PageCgroupUsed && add to memcg LRU	PageLRU && add to memcg LRU
-	 *
-	 * Ensure that one of the two sides adds the page to the memcg
-	 * LRU during a race.
-	 */
-	smp_mb();
-	/*
-	 * If the page is uncharged, it may be freed soon, but it
-	 * could also be swap cache (readahead, swapoff) that needs to
-	 * be reclaimable in the future.  root_mem_cgroup will babysit
-	 * it for the time being.
-	 */
-	if (PageCgroupUsed(pc)) {
-		/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
-		smp_rmb();
-		memcg = pc->mem_cgroup;
-		SetPageCgroupAcctLRU(pc);
-	} else
-		memcg = root_mem_cgroup;
+	memcg = pc->mem_cgroup;
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* compound_order() is stabilized through lru_lock */
 	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
@@ -1024,18 +1001,7 @@ void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
 		return;
 
 	pc = lookup_page_cgroup(page);
-	/*
-	 * root_mem_cgroup babysits uncharged LRU pages, but
-	 * PageCgroupUsed is cleared when the page is about to get
-	 * freed.  PageCgroupAcctLRU remembers whether the
-	 * LRU-accounting happened against pc->mem_cgroup or
-	 * root_mem_cgroup.
-	 */
-	if (TestClearPageCgroupAcctLRU(pc)) {
-		VM_BUG_ON(!pc->mem_cgroup);
-		memcg = pc->mem_cgroup;
-	} else
-		memcg = root_mem_cgroup;
+	memcg = pc->mem_cgroup;
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* huge page split is done under lru_lock. so, we have no races. */
 	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
@@ -2377,6 +2343,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 
 	mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), nr_pages);
 	unlock_page_cgroup(pc);
+	WARN_ON_ONCE(PageLRU(page));
 	/*
 	 * "charge_statistics" updated event counter. Then, check it.
 	 * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree.
@@ -2388,7 +2355,7 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 #define PCGF_NOCOPY_AT_SPLIT ((1 << PCG_LOCK) | (1 << PCG_MOVE_LOCK) |\
-			(1 << PCG_ACCT_LRU) | (1 << PCG_MIGRATION))
+			(1 << PCG_MIGRATION))
 /*
  * Because tail pages are not marked as "used", set it. We're under
  * zone->lru_lock, 'splitting on pmd' and compound_lock.
@@ -2399,6 +2366,8 @@ void mem_cgroup_split_huge_fixup(struct page *head)
 {
 	struct page_cgroup *head_pc = lookup_page_cgroup(head);
 	struct page_cgroup *pc;
+	struct mem_cgroup_per_zone *mz;
+	enum lru_list lru;
 	int i;
 
 	if (mem_cgroup_disabled())
@@ -2414,16 +2383,12 @@ void mem_cgroup_split_huge_fixup(struct page *head)
 		pc->flags = head_pc->flags & ~PCGF_NOCOPY_AT_SPLIT;
 	}
 
-	if (PageCgroupAcctLRU(head_pc)) {
-		enum lru_list lru;
-		struct mem_cgroup_per_zone *mz;
-		/*
-		 * We hold lru_lock, then, reduce counter directly.
-		 */
-		lru = page_lru(head);
-		mz = page_cgroup_zoneinfo(head_pc->mem_cgroup, head);
-		MEM_CGROUP_ZSTAT(mz, lru) -= HPAGE_PMD_NR - 1;
-	}
+	/*
+	 * We hold lru_lock, then, reduce counter directly.
+	 */
+	lru = page_lru(head);
+	mz = page_cgroup_zoneinfo(head_pc->mem_cgroup, head);
+	MEM_CGROUP_ZSTAT(mz, lru) -= HPAGE_PMD_NR - 1;
 }
 #endif
 
@@ -2617,17 +2582,23 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 	struct page_cgroup *pc = lookup_page_cgroup(page);
 	struct zone *zone = page_zone(page);
 	unsigned long flags;
+	bool removed = false;
 	/*
 	 * In some case, SwapCache, FUSE(splice_buf->radixtree), the page
 	 * is already on LRU. It means the page may on some other page_cgroup's
 	 * LRU. Take care of it.
 	 */
 	spin_lock_irqsave(&zone->lru_lock, flags);
-	if (PageLRU(page))
+	if (PageLRU(page)) {
 		del_page_from_lru_list(zone, page, page_lru(page));
+		ClearPageLRU(page);
+		removed = true;
+	}
 	__mem_cgroup_commit_charge(memcg, page, 1, pc, ctype);
-	if (PageLRU(page))
+	if (removed) {
+		SetPageLRU(page);
 		add_page_to_lru_list(zone, page, page_lru(page));
+	}
 	spin_unlock_irqrestore(&zone->lru_lock, flags);
 	return;
 }
@@ -3243,9 +3214,7 @@ void mem_cgroup_replace_page_cache(struct page *oldpage,
 {
 	struct mem_cgroup *memcg;
 	struct page_cgroup *pc;
-	struct zone *zone;
 	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
-	unsigned long flags;
 
 	pc = lookup_page_cgroup(oldpage);
 	/* fix accounting on old pages */
@@ -3258,20 +3227,12 @@ void mem_cgroup_replace_page_cache(struct page *oldpage,
 	if (PageSwapBacked(oldpage))
 		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
 
-	zone = page_zone(newpage);
-	pc = lookup_page_cgroup(newpage);
 	/*
 	 * Even if newpage->mapping was NULL before starting replacement,
 	 * the newpage may be on LRU(or pagevec for LRU) already. We lock
 	 * LRU while we overwrite pc->mem_cgroup.
 	 */
-	spin_lock_irqsave(&zone->lru_lock, flags);
-	if (PageLRU(newpage))
-		del_page_from_lru_list(zone, newpage, page_lru(newpage));
-	__mem_cgroup_commit_charge(memcg, newpage, 1, pc, type);
-	if (PageLRU(newpage))
-		add_page_to_lru_list(zone, newpage, page_lru(newpage));
-	spin_unlock_irqrestore(&zone->lru_lock, flags);
+	__mem_cgroup_commit_charge_lrucare(newpage, memcg, type);
 }
 
 #ifdef CONFIG_DEBUG_VM
-- 
1.7.4.1


--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [BUGFIX][PATCH] add mem_cgroup_replace_page_cache.
       [not found] ` <20111206123923.1432ab52.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  2011-12-06 10:12   ` [RFC][PATCH 1/4] memcg: simplify page cache charging KAMEZAWA Hiroyuki
@ 2011-12-07  9:21   ` Johannes Weiner
  2011-12-07 11:14   ` Michal Hocko
  2 siblings, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2011-12-07  9:21 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA, Michal Hocko, Hugh Dickins

On Tue, Dec 06, 2011 at 12:39:23PM +0900, KAMEZAWA Hiroyuki wrote:
> 
> Hm, is this too naive? A better idea is welcome.
> ==
> >From 33638351c5cd28af9f47f9ab1c44eeb1f63d9964 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Date: Tue, 6 Dec 2011 12:32:32 +0900
> Subject: [PATCH] memcg: add mem_cgroup_replace_page_cache() for fixing LRU issue.
> 
> commit ef6a3c6311 adds a function replace_page_cache_page(). This
> function replaces a page in the radix-tree with a new page.
> When doing this, the memory cgroup needs to fix up the accounting
> information; memcg needs to check the PCG_USED bit etc.
> 
> In some (many?) cases, 'newpage' is already on the LRU before
> replace_page_cache() is called, so memcg's LRU accounting information
> must be fixed, too.
> 
> This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
> In that function, the old page is unaccounted without touching res_counter
> and the new page is accounted to the memcg (of the old page). While
> overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
> avoid races with LRU handling.
> 
> Background:
>   replace_page_cache_page() is called by FUSE code in its splice() handling.
>   Here, 'newpage' replaces oldpage, but this newpage is not a newly
>   allocated page and may already be on the LRU. LRU mis-accounting is
>   critical for memory cgroup because rmdir() checks that the whole LRU is
>   empty and there is no accounting leak. If a page is on a different LRU
>   than it should be, rmdir() will fail.
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>

I think this is okay.  It's a tiny bit unfortunate that the migration
code is more or less duplicated with some optimizations, but I fear
the other solutions would be more complex and thus not adequate as a
bug fix.

Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [BUGFIX][PATCH] add mem_cgroup_replace_page_cache.
       [not found] ` <20111206123923.1432ab52.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  2011-12-06 10:12   ` [RFC][PATCH 1/4] memcg: simplify page cache charging KAMEZAWA Hiroyuki
  2011-12-07  9:21   ` [BUGFIX][PATCH] add mem_cgroup_replace_page_cache Johannes Weiner
@ 2011-12-07 11:14   ` Michal Hocko
       [not found]     ` <20111207111455.GA18249-VqjxzfR4DlwKmadIfiO5sKVXKuFTiq87@public.gmane.org>
  2 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2011-12-07 11:14 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Hugh Dickins

On Tue 06-12-11 12:39:23, KAMEZAWA Hiroyuki wrote:
> 
> Hm, is this too naive? A better idea is welcome.
> ==
> From 33638351c5cd28af9f47f9ab1c44eeb1f63d9964 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Date: Tue, 6 Dec 2011 12:32:32 +0900
> Subject: [PATCH] memcg: add mem_cgroup_replace_page_cache() for fixing LRU issue.
> 
> commit ef6a3c6311 adds a function replace_page_cache_page(). This
> function replaces a page in the radix-tree with a new page.
> When doing this, the memory cgroup needs to fix up the accounting
> information; memcg needs to check the PCG_USED bit etc.
> 
> In some (many?) cases, 'newpage' is already on the LRU before
> replace_page_cache() is called, so memcg's LRU accounting information
> must be fixed, too.
> 
> This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
> In that function, the old page is unaccounted without touching res_counter
> and the new page is accounted to the memcg (of the old page). While
> overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
> avoid races with LRU handling.
> 
> Background:
>   replace_page_cache_page() is called by FUSE code in its splice() handling.
>   Here, 'newpage' replaces oldpage, but this newpage is not a newly
>   allocated page and may already be on the LRU. LRU mis-accounting is
>   critical for memory cgroup because rmdir() checks that the whole LRU is
>   empty and there is no accounting leak. If a page is on a different LRU
>   than it should be, rmdir() will fail.
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> ---
>  include/linux/memcontrol.h |    6 ++++++
>  mm/filemap.c               |   18 ++----------------
>  mm/memcontrol.c            |   41 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 49 insertions(+), 16 deletions(-)
> 
[...]
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8880a32..a9e92a6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3306,6 +3306,47 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
>  	cgroup_release_and_wakeup_rmdir(&memcg->css);
>  }
>  
> +/*
> + * At replace page cache, newpage is not under any memcg but it's on
> + * the LRU. So, this function doesn't touch res_counter but handles
> + * the LRU correctly.

Could you add:
Both pages are locked so we cannot race with uncharge

> + */
> +void mem_cgroup_replace_page_cache(struct page *oldpage,
> +				  struct page *newpage)
> +{
> +	struct mem_cgroup *memcg;
> +	struct page_cgroup *pc;
> +	struct zone *zone;
> +	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
> +	unsigned long flags;
> +

You are missing 
	if (mem_cgroup_disabled())
		return;

> +	pc = lookup_page_cgroup(oldpage);
> +	/* fix accounting on old pages */
> +	lock_page_cgroup(pc);
> +	memcg = pc->mem_cgroup;
> +	mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), -1);
> +	ClearPageCgroupUsed(pc);
> +	unlock_page_cgroup(pc);
> +
> +	if (PageSwapBacked(oldpage))
> +		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
> +
> +	zone = page_zone(newpage);
> +	pc = lookup_page_cgroup(newpage);
> +	/*
> +	 * Even if newpage->mapping was NULL before starting replacement,
> +	 * the newpage may be on LRU(or pagevec for LRU) already. We lock
> +	 * LRU while we overwrite pc->mem_cgroup.
> +	 */
> +	spin_lock_irqsave(&zone->lru_lock, flags);
> +	if (PageLRU(newpage))
> +		del_page_from_lru_list(zone, newpage, page_lru(newpage));
> +	__mem_cgroup_commit_charge(memcg, newpage, 1, pc, type);
> +	if (PageLRU(newpage))
> +		add_page_to_lru_list(zone, newpage, page_lru(newpage));
> +	spin_unlock_irqrestore(&zone->lru_lock, flags);
> +}
> +

Other than that looks ok.

Thanks
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [BUGFIX][PATCH v2] add mem_cgroup_replace_page_cache.
       [not found]     ` <20111207111455.GA18249-VqjxzfR4DlwKmadIfiO5sKVXKuFTiq87@public.gmane.org>
@ 2011-12-08  7:18       ` KAMEZAWA Hiroyuki
       [not found]         ` <20111208161829.b6101de6.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-08  7:18 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Hugh Dickins

On Wed, 7 Dec 2011 12:14:55 +0100
Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org> wrote:

> Other than that looks ok.
> 

Thank you for the review. v2 is below. This patch is against the latest linux-next.
==
From 82067c96323cf464d9b18867025414526fc7ce84 Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Date: Thu, 8 Dec 2011 16:31:15 +0900
Subject: [PATCH] [BUGFIX][PATCH v2] memcg: add mem_cgroup_replace_page_cache() for fixing LRU issue.

commit ef6a3c6311 adds a function replace_page_cache_page(). This
function replaces a page in the radix-tree with a new page.
When doing this, the memory cgroup needs to fix up the accounting
information; memcg needs to check the PCG_USED bit etc.

In some (many?) cases, 'newpage' is already on the LRU before
replace_page_cache() is called, so memcg's LRU accounting information
must be fixed, too.

This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
In that function, the old page is unaccounted without touching res_counter
and the new page is accounted to the memcg (of the old page). While
overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
avoid races with LRU handling.

Background:
  replace_page_cache_page() is called by FUSE code in its splice() handling.
  Here, 'newpage' replaces oldpage, but this newpage is not a newly
  allocated page and may already be on the LRU. LRU mis-accounting is
  critical for memory cgroup because rmdir() checks that the whole LRU is
  empty and there is no accounting leak. If a page is on a different LRU
  than it should be, rmdir() will fail.

Changelog: v1 -> v2
  - fixed the missing mem_cgroup_disabled() check.
  - added comments.

Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
---
 include/linux/memcontrol.h |    6 ++++++
 mm/filemap.c               |   18 ++----------------
 mm/memcontrol.c            |   44 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 52 insertions(+), 16 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4b70e05..bd3b102 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -123,6 +123,8 @@ struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
 					struct task_struct *p);
+extern void mem_cgroup_replace_page_cache(struct page *oldpage,
+					struct page *newpage);
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
 extern int do_swap_account;
@@ -382,6 +384,10 @@ static inline
 void mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
+static inline void mem_cgroup_replace_page_cache(struct page *oldpage,
+				struct page *newpage)
+{
+}
 #endif /* CONFIG_CGROUP_MEM_CONT */
 
 #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
diff --git a/mm/filemap.c b/mm/filemap.c
index a7b572b..4642211 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -393,24 +393,11 @@ EXPORT_SYMBOL(filemap_write_and_wait_range);
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
 	int error;
-	struct mem_cgroup *memcg = NULL;
 
 	VM_BUG_ON(!PageLocked(old));
 	VM_BUG_ON(!PageLocked(new));
 	VM_BUG_ON(new->mapping);
 
-	/*
-	 * This is not page migration, but prepare_migration and
-	 * end_migration does enough work for charge replacement.
-	 *
-	 * In the longer term we probably want a specialized function
-	 * for moving the charge from old to new in a more efficient
-	 * manner.
-	 */
-	error = mem_cgroup_prepare_migration(old, new, &memcg, gfp_mask);
-	if (error)
-		return error;
-
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (!error) {
 		struct address_space *mapping = old->mapping;
@@ -432,13 +419,12 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 		if (PageSwapBacked(new))
 			__inc_zone_page_state(new, NR_SHMEM);
 		spin_unlock_irq(&mapping->tree_lock);
+		/* mem_cgroup code must not be called under tree_lock */
+		mem_cgroup_replace_page_cache(old, new);
 		radix_tree_preload_end();
 		if (freepage)
 			freepage(old);
 		page_cache_release(old);
-		mem_cgroup_end_migration(memcg, old, new, true);
-	} else {
-		mem_cgroup_end_migration(memcg, old, new, false);
 	}
 
 	return error;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8880a32..52edaef 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3306,6 +3306,50 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
 	cgroup_release_and_wakeup_rmdir(&memcg->css);
 }
 
+/*
+ * At replace page cache, newpage is not under any memcg but it's on
+ * the LRU. So, this function doesn't touch res_counter but handles
+ * the LRU correctly. Both pages are locked so we cannot race with uncharge.
+ */
+void mem_cgroup_replace_page_cache(struct page *oldpage,
+				  struct page *newpage)
+{
+	struct mem_cgroup *memcg;
+	struct page_cgroup *pc;
+	struct zone *zone;
+	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
+	unsigned long flags;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	pc = lookup_page_cgroup(oldpage);
+	/* fix accounting on old pages */
+	lock_page_cgroup(pc);
+	memcg = pc->mem_cgroup;
+	mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), -1);
+	ClearPageCgroupUsed(pc);
+	unlock_page_cgroup(pc);
+
+	if (PageSwapBacked(oldpage))
+		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
+
+	zone = page_zone(newpage);
+	pc = lookup_page_cgroup(newpage);
+	/*
+	 * Even if newpage->mapping was NULL before starting replacement,
+	 * the newpage may be on LRU(or pagevec for LRU) already. We lock
+	 * LRU while we overwrite pc->mem_cgroup.
+	 */
+	spin_lock_irqsave(&zone->lru_lock, flags);
+	if (PageLRU(newpage))
+		del_page_from_lru_list(zone, newpage, page_lru(newpage));
+	__mem_cgroup_commit_charge(memcg, newpage, 1, pc, type);
+	if (PageLRU(newpage))
+		add_page_to_lru_list(zone, newpage, page_lru(newpage));
+	spin_unlock_irqrestore(&zone->lru_lock, flags);
+}
+
 #ifdef CONFIG_DEBUG_VM
 static struct page_cgroup *lookup_page_cgroup_used(struct page *page)
 {
-- 
1.7.4.1


--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [BUGFIX][PATCH v2] add mem_cgroup_replace_page_cache.
       [not found]         ` <20111208161829.b6101de6.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
@ 2011-12-08  9:31           ` Michal Hocko
  2011-12-09 20:37           ` Andrew Morton
  1 sibling, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2011-12-08  9:31 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Hugh Dickins

On Thu 08-12-11 16:18:29, KAMEZAWA Hiroyuki wrote:
> On Wed, 7 Dec 2011 12:14:55 +0100
> Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org> wrote:
> 
> > Other than that looks ok.
> > 
> 
> Thank you for the review. v2 is below. This patch is against the latest linux-next.
> ==
> From 82067c96323cf464d9b18867025414526fc7ce84 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Date: Thu, 8 Dec 2011 16:31:15 +0900
> Subject: [PATCH] [BUGFIX][PATCH v2] memcg: add mem_cgroup_replace_page_cache() for fixing LRU issue.
> 
> commit ef6a3c6311 adds a function replace_page_cache_page(). This
> function replaces a page in the radix-tree with a new page.
> When doing this, the memory cgroup needs to fix up the accounting
> information; memcg needs to check the PCG_USED bit etc.
> 
> In some (many?) cases, 'newpage' is already on the LRU before
> replace_page_cache() is called, so memcg's LRU accounting information
> must be fixed, too.
> 
> This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
> In that function, the old page is unaccounted without touching res_counter
> and the new page is accounted to the memcg (of the old page). While
> overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
> avoid races with LRU handling.
> 
> Background:
>   replace_page_cache_page() is called by FUSE code in its splice() handling.
>   Here, 'newpage' replaces oldpage, but this newpage is not a newly
>   allocated page and may already be on the LRU. LRU mis-accounting is
>   critical for memory cgroup because rmdir() checks that the whole LRU is
>   empty and there is no accounting leak. If a page is on a different LRU
>   than it should be, rmdir() will fail.
> 
> Changelog: v1 -> v2
>   - fixed the missing mem_cgroup_disabled() check.
>   - added comments.
> 
> Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>

Looks good now

Acked-by: Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org>

Thanks
> ---
>  include/linux/memcontrol.h |    6 ++++++
>  mm/filemap.c               |   18 ++----------------
>  mm/memcontrol.c            |   44 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 52 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 4b70e05..bd3b102 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -123,6 +123,8 @@ struct zone_reclaim_stat*
>  mem_cgroup_get_reclaim_stat_from_page(struct page *page);
>  extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
>  					struct task_struct *p);
> +extern void mem_cgroup_replace_page_cache(struct page *oldpage,
> +					struct page *newpage);
>  
>  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
>  extern int do_swap_account;
> @@ -382,6 +384,10 @@ static inline
>  void mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx)
>  {
>  }
> +static inline void mem_cgroup_replace_page_cache(struct page *oldpage,
> +				struct page *newpage)
> +{
> +}
>  #endif /* CONFIG_CGROUP_MEM_CONT */
>  
>  #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index a7b572b..4642211 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -393,24 +393,11 @@ EXPORT_SYMBOL(filemap_write_and_wait_range);
>  int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
>  {
>  	int error;
> -	struct mem_cgroup *memcg = NULL;
>  
>  	VM_BUG_ON(!PageLocked(old));
>  	VM_BUG_ON(!PageLocked(new));
>  	VM_BUG_ON(new->mapping);
>  
> -	/*
> -	 * This is not page migration, but prepare_migration and
> -	 * end_migration does enough work for charge replacement.
> -	 *
> -	 * In the longer term we probably want a specialized function
> -	 * for moving the charge from old to new in a more efficient
> -	 * manner.
> -	 */
> -	error = mem_cgroup_prepare_migration(old, new, &memcg, gfp_mask);
> -	if (error)
> -		return error;
> -
>  	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
>  	if (!error) {
>  		struct address_space *mapping = old->mapping;
> @@ -432,13 +419,12 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
>  		if (PageSwapBacked(new))
>  			__inc_zone_page_state(new, NR_SHMEM);
>  		spin_unlock_irq(&mapping->tree_lock);
> +		/* mem_cgroup code must not be called under tree_lock */
> +		mem_cgroup_replace_page_cache(old, new);
>  		radix_tree_preload_end();
>  		if (freepage)
>  			freepage(old);
>  		page_cache_release(old);
> -		mem_cgroup_end_migration(memcg, old, new, true);
> -	} else {
> -		mem_cgroup_end_migration(memcg, old, new, false);
>  	}
>  
>  	return error;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 8880a32..52edaef 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3306,6 +3306,50 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
>  	cgroup_release_and_wakeup_rmdir(&memcg->css);
>  }
>  
> +/*
> + * At replace page cache, newpage is not under any memcg but it's on
> + * the LRU. So, this function doesn't touch res_counter but handles
> + * the LRU correctly. Both pages are locked so we cannot race with uncharge.
> + */
> +void mem_cgroup_replace_page_cache(struct page *oldpage,
> +				  struct page *newpage)
> +{
> +	struct mem_cgroup *memcg;
> +	struct page_cgroup *pc;
> +	struct zone *zone;
> +	enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE;
> +	unsigned long flags;
> +
> +	if (mem_cgroup_disabled())
> +		return;
> +
> +	pc = lookup_page_cgroup(oldpage);
> +	/* fix accounting on old pages */
> +	lock_page_cgroup(pc);
> +	memcg = pc->mem_cgroup;
> +	mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), -1);
> +	ClearPageCgroupUsed(pc);
> +	unlock_page_cgroup(pc);
> +
> +	if (PageSwapBacked(oldpage))
> +		type = MEM_CGROUP_CHARGE_TYPE_SHMEM;
> +
> +	zone = page_zone(newpage);
> +	pc = lookup_page_cgroup(newpage);
> +	/*
> +	 * Even if newpage->mapping was NULL before starting replacement,
> +	 * the newpage may be on LRU(or pagevec for LRU) already. We lock
> +	 * LRU while we overwrite pc->mem_cgroup.
> +	 */
> +	spin_lock_irqsave(&zone->lru_lock, flags);
> +	if (PageLRU(newpage))
> +		del_page_from_lru_list(zone, newpage, page_lru(newpage));
> +	__mem_cgroup_commit_charge(memcg, newpage, 1, pc, type);
> +	if (PageLRU(newpage))
> +		add_page_to_lru_list(zone, newpage, page_lru(newpage));
> +	spin_unlock_irqrestore(&zone->lru_lock, flags);
> +}
> +
>  #ifdef CONFIG_DEBUG_VM
>  static struct page_cgroup *lookup_page_cgroup_used(struct page *page)
>  {
> -- 
> 1.7.4.1
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe cgroups" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [BUGFIX][PATCH v2] add mem_cgroup_replace_page_cache.
       [not found]         ` <20111208161829.b6101de6.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
  2011-12-08  9:31           ` Michal Hocko
@ 2011-12-09 20:37           ` Andrew Morton
  2011-12-12  0:48             ` KAMEZAWA Hiroyuki
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2011-12-09 20:37 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: Michal Hocko,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Miklos Szeredi, linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org, Hugh Dickins

On Thu, 8 Dec 2011 16:18:29 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org> wrote:

> commit ef6a3c6311 adds a function replace_page_cache_page(). This
> function replaces a page in the radix-tree with a new page.
> When doing this, the memory cgroup needs to fix up the accounting
> information; memcg needs to check the PCG_USED bit etc.
> 
> In some (many?) cases, 'newpage' is already on the LRU before
> replace_page_cache() is called, so memcg's LRU accounting information
> must be fixed, too.
> 
> This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
> In that function, the old page is unaccounted without touching res_counter
> and the new page is accounted to the memcg (of the old page). While
> overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
> avoid races with LRU handling.
> 
> Background:
>   replace_page_cache_page() is called by FUSE code in its splice() handling.
>   Here, 'newpage' replaces oldpage, but this newpage is not a newly
>   allocated page and may already be on the LRU. LRU mis-accounting is
>   critical for memory cgroup because rmdir() checks that the whole LRU is
>   empty and there is no accounting leak. If a page is on a different LRU
>   than it should be, rmdir() will fail.
> 
> Changelog: v1 -> v2
>   - fixed the missing mem_cgroup_disabled() check.
>   - added comments.
> 
> Acked-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> ---
>  include/linux/memcontrol.h |    6 ++++++
>  mm/filemap.c               |   18 ++----------------
>  mm/memcontrol.c            |   44 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 52 insertions(+), 16 deletions(-)

It's a relatively intrusive patch and I'm a bit concerned about
feeding it into 3.2.

How serious is the bug, and which kernel version(s) do you think we
should fix it in?

--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [BUGFIX][PATCH v2] add mem_cgroup_replace_page_cache.
  2011-12-09 20:37           ` Andrew Morton
@ 2011-12-12  0:48             ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 11+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-12-12  0:48 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, linux-kernel@vger.kernel.org, Miklos Szeredi,
	linux-mm@kvack.org, cgroups, hannes@cmpxchg.org, Hugh Dickins

On Fri, 9 Dec 2011 12:37:01 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Thu, 8 Dec 2011 16:18:29 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> 
> > commit ef6a3c6311 adds a function replace_page_cache_page(). This
> > function replaces a page in the radix-tree with a new page.
> > When doing this, the memory cgroup needs to fix up the accounting
> > information; memcg needs to check the PCG_USED bit etc.
> > 
> > In some (many?) cases, 'newpage' is already on the LRU before
> > replace_page_cache() is called, so memcg's LRU accounting information
> > must be fixed, too.
> > 
> > This patch adds mem_cgroup_replace_page_cache() and removes the old hooks.
> > In that function, the old page is unaccounted without touching res_counter
> > and the new page is accounted to the memcg (of the old page). While
> > overwriting pc->mem_cgroup of the newpage, zone->lru_lock is taken to
> > avoid races with LRU handling.
> > 
> > Background:
> >   replace_page_cache_page() is called by FUSE code in its splice() handling.
> >   Here, 'newpage' replaces oldpage, but this newpage is not a newly
> >   allocated page and may already be on the LRU. LRU mis-accounting is
> >   critical for memory cgroup because rmdir() checks that the whole LRU is
> >   empty and there is no accounting leak. If a page is on a different LRU
> >   than it should be, rmdir() will fail.
> > 
> > Changelog: v1 -> v2
> >   - fixed the missing mem_cgroup_disabled() check.
> >   - added comments.
> > 
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > ---
> >  include/linux/memcontrol.h |    6 ++++++
> >  mm/filemap.c               |   18 ++----------------
> >  mm/memcontrol.c            |   44 ++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 52 insertions(+), 16 deletions(-)
> 
> It's a relatively intrusive patch and I'm a bit concerned about
> feeding it into 3.2.
> 
> How serious is the bug, and which kernel version(s) do you think we
> should fix it in?

This bug was added by commit ef6a3c63112e (March 2011), but there has been
no bug report yet. I guess there are not many people who use memcg and FUSE
at the same time with upstream kernels.

The result of this bug is that the admin cannot destroy a memcg because of
an accounting leak. So, no panic, no deadlock. And even if an active cgroup
exists, umount can succeed. So there is no problem at shutdown.

I would like this fix to be merged when/after the unify-lru work goes
upstream.

Thanks,
-Kame

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2011-12-12  0:48 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-12-06  3:39 [BUGFIX][PATCH] add mem_cgroup_replace_page_cache KAMEZAWA Hiroyuki
     [not found] ` <20111206123923.1432ab52.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
2011-12-06 10:12   ` [RFC][PATCH 1/4] memcg: simplify page cache charging KAMEZAWA Hiroyuki
2011-12-06 10:13     ` [RFC][PATCH 2/4] memcg: simplify corner case handling of LRU and charge races KAMEZAWA Hiroyuki
     [not found]     ` <20111206191211.3be32ccb.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
2011-12-06 10:15       ` [RFC][PATCH 3/4] memcg: clear pc->mem_cgroup if necessary KAMEZAWA Hiroyuki
2011-12-06 10:17       ` [RFC][PATCH 4/4] memcg: new LRU rule KAMEZAWA Hiroyuki
2011-12-07  9:21   ` [BUGFIX][PATCH] add mem_cgroup_replace_page_cache Johannes Weiner
2011-12-07 11:14   ` Michal Hocko
     [not found]     ` <20111207111455.GA18249-VqjxzfR4DlwKmadIfiO5sKVXKuFTiq87@public.gmane.org>
2011-12-08  7:18       ` [BUGFIX][PATCH v2] " KAMEZAWA Hiroyuki
     [not found]         ` <20111208161829.b6101de6.kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
2011-12-08  9:31           ` Michal Hocko
2011-12-09 20:37           ` Andrew Morton
2011-12-12  0:48             ` KAMEZAWA Hiroyuki

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).