linux-mm.kvack.org archive mirror
* [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting
@ 2012-03-08 18:03 Konstantin Khlebnikov
  2012-03-08 18:04 ` [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:03 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

v5:
* rebase to next-20120308
* reworked cleanup for __isolate_lru_page()
* bloat-o-meter results for each patch

---

Hugh Dickins (2):
      mm/memcg: scanning_global_lru means mem_cgroup_disabled
      mm/memcg: move reclaim_stat into lruvec

Konstantin Khlebnikov (5):
      mm: push lru index into shrink_[in]active_list()
      mm: rework __isolate_lru_page() page lru filter
      mm: rework reclaim_stat counters
      mm/memcg: rework inactive_ratio calculation
      mm/memcg: use vm_swappiness from target memory cgroup


 include/linux/memcontrol.h |   25 -----
 include/linux/mm_inline.h  |    2 
 include/linux/mmzone.h     |   51 ++++-----
 include/linux/swap.h       |    2 
 mm/compaction.c            |    4 -
 mm/memcontrol.c            |   86 ++++------------
 mm/page_alloc.c            |   50 ---------
 mm/swap.c                  |   43 +++-----
 mm/vmscan.c                |  241 +++++++++++++++++++++-----------------------
 mm/vmstat.c                |    6 -
 10 files changed, 178 insertions(+), 332 deletions(-)

-- 
Signature


* [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:26   ` KAMEZAWA Hiroyuki
  2012-03-08 18:04 ` [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec Konstantin Khlebnikov
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

From: Hugh Dickins <hughd@google.com>

Although one has to admire the skill with which it has been concealed,
scanning_global_lru(mz) is actually just an interesting way to test
mem_cgroup_disabled().  Too many developer hours have been wasted on
confusing it with global_reclaim(): just use mem_cgroup_disabled().

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/0 grow/shrink: 3/1 up/down: 17/-16 (1)
function                                     old     new   delta
inactive_anon_is_low                         101     110      +9
zone_nr_lru_pages                            108     114      +6
get_reclaim_stat                              44      46      +2
shrink_inactive_list                        1227    1211     -16
---
 mm/vmscan.c |   18 ++++--------------
 1 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 57d8ef6..8d1745c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -164,26 +164,16 @@ static bool global_reclaim(struct scan_control *sc)
 {
 	return !sc->target_mem_cgroup;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return !mz->mem_cgroup;
-}
 #else
 static bool global_reclaim(struct scan_control *sc)
 {
 	return true;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return true;
-}
 #endif
 
 static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
 
 	return &mz->zone->reclaim_stat;
@@ -192,7 +182,7 @@ static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,
 				       enum lru_list lru)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
 						    zone_to_nid(mz->zone),
 						    zone_idx(mz->zone),
@@ -1804,7 +1794,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 	if (!total_swap_pages)
 		return 0;
 
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
 						       mz->zone);
 
@@ -1843,7 +1833,7 @@ static int inactive_file_is_low_global(struct zone *zone)
  */
 static int inactive_file_is_low(struct mem_cgroup_zone *mz)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_inactive_file_is_low(mz->mem_cgroup,
 						       mz->zone);
 


* [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
  2012-03-08 18:04 ` [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:27   ` KAMEZAWA Hiroyuki
  2012-03-08 18:04 ` [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list() Konstantin Khlebnikov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

From: Hugh Dickins <hughd@google.com>

With mem_cgroup_disabled() now explicit, it becomes clear that the
zone_reclaim_stat structure actually belongs in lruvec, per-zone
when memcg is disabled but per-memcg per-zone when it's enabled.

We can delete mem_cgroup_get_reclaim_stat(), and change
update_page_reclaim_stat() to update just the one set of stats,
the one which get_scan_count() will actually use.
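
For orientation, here is the net result of the diff below condensed into one
sketch; the struct members and the helper body are taken straight from the
hunks, only the comments are added:

        struct lruvec {
                struct list_head        lists[NR_LRU_LISTS];
                struct zone_reclaim_stat reclaim_stat;  /* moved here from zone / mem_cgroup_per_zone */
        };

        /* Global and per-memcg reclaim now share one lookup: */
        static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
        {
                return &mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup)->reclaim_stat;
        }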

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/1 grow/shrink: 3/6 up/down: 21/-108 (-87)
function                                     old     new   delta
shrink_inactive_list                        1211    1227     +16
release_pages                                542     545      +3
lru_deactivate_fn                            498     500      +2
put_page_testzero                             33      30      -3
page_evictable                               173     170      -3
mem_cgroup_get_reclaim_stat_from_page        114     111      -3
mem_control_stat_show                        762     754      -8
update_page_reclaim_stat                     103      89     -14
get_reclaim_stat                              46      30     -16
mem_cgroup_get_reclaim_stat                   61       -     -61
---
 include/linux/memcontrol.h |    9 ---------
 include/linux/mmzone.h     |   29 ++++++++++++++---------------
 mm/memcontrol.c            |   27 +++++++--------------------
 mm/page_alloc.c            |    8 ++++----
 mm/swap.c                  |   14 ++++----------
 mm/vmscan.c                |    5 +----
 6 files changed, 30 insertions(+), 62 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 250a78b..4c4b968 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -120,8 +120,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone);
 struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
@@ -350,13 +348,6 @@ mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
 	return 0;
 }
 
-
-static inline struct zone_reclaim_stat*
-mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return NULL;
-}
-
 static inline struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f10a54c..aa881de 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -159,8 +159,22 @@ static inline int is_unevictable_lru(enum lru_list lru)
 	return (lru == LRU_UNEVICTABLE);
 }
 
+struct zone_reclaim_stat {
+	/*
+	 * The pageout code in vmscan.c keeps track of how many of the
+	 * mem/swap backed and file backed pages are refeferenced.
+	 * The higher the rotated/scanned ratio, the more valuable
+	 * that cache is.
+	 *
+	 * The anon LRU stats live in [0], file LRU stats in [1]
+	 */
+	unsigned long		recent_rotated[2];
+	unsigned long		recent_scanned[2];
+};
+
 struct lruvec {
 	struct list_head lists[NR_LRU_LISTS];
+	struct zone_reclaim_stat reclaim_stat;
 };
 
 /* Mask used at gathering information at once (see memcontrol.c) */
@@ -287,19 +301,6 @@ enum zone_type {
 #error ZONES_SHIFT -- too many zones configured adjust calculation
 #endif
 
-struct zone_reclaim_stat {
-	/*
-	 * The pageout code in vmscan.c keeps track of how many of the
-	 * mem/swap backed and file backed pages are refeferenced.
-	 * The higher the rotated/scanned ratio, the more valuable
-	 * that cache is.
-	 *
-	 * The anon LRU stats live in [0], file LRU stats in [1]
-	 */
-	unsigned long		recent_rotated[2];
-	unsigned long		recent_scanned[2];
-};
-
 struct zone {
 	/* Fields commonly accessed by the page allocator */
 
@@ -374,8 +375,6 @@ struct zone {
 	spinlock_t		lru_lock;
 	struct lruvec		lruvec;
 
-	struct zone_reclaim_stat reclaim_stat;
-
 	unsigned long		pages_scanned;	   /* since last reclaim */
 	unsigned long		flags;		   /* zone flags, see below */
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a288855..6864f57 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -138,7 +138,6 @@ struct mem_cgroup_per_zone {
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
-	struct zone_reclaim_stat reclaim_stat;
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long long	usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
@@ -1213,16 +1212,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
 	return (active > inactive);
 }
 
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone)
-{
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
-
-	return &mz->reclaim_stat;
-}
-
 struct zone_reclaim_stat *
 mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 {
@@ -1238,7 +1227,7 @@ mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
 	smp_rmb();
 	mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
-	return &mz->reclaim_stat;
+	return &mz->lruvec.reclaim_stat;
 }
 
 #define mem_cgroup_from_res_counter(counter, member)	\
@@ -4196,21 +4185,19 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 	{
 		int nid, zid;
 		struct mem_cgroup_per_zone *mz;
+		struct zone_reclaim_stat *rstat;
 		unsigned long recent_rotated[2] = {0, 0};
 		unsigned long recent_scanned[2] = {0, 0};
 
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+				rstat = &mz->lruvec.reclaim_stat;
 
-				recent_rotated[0] +=
-					mz->reclaim_stat.recent_rotated[0];
-				recent_rotated[1] +=
-					mz->reclaim_stat.recent_rotated[1];
-				recent_scanned[0] +=
-					mz->reclaim_stat.recent_scanned[0];
-				recent_scanned[1] +=
-					mz->reclaim_stat.recent_scanned[1];
+				recent_rotated[0] += rstat->recent_rotated[0];
+				recent_rotated[1] += rstat->recent_rotated[1];
+				recent_scanned[0] += rstat->recent_scanned[0];
+				recent_scanned[1] += rstat->recent_scanned[1];
 			}
 		cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
 		cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2b7f07b..ab2d210 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4365,10 +4365,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		zone_pcp_init(zone);
 		for_each_lru(lru)
 			INIT_LIST_HEAD(&zone->lruvec.lists[lru]);
-		zone->reclaim_stat.recent_rotated[0] = 0;
-		zone->reclaim_stat.recent_rotated[1] = 0;
-		zone->reclaim_stat.recent_scanned[0] = 0;
-		zone->reclaim_stat.recent_scanned[1] = 0;
+		zone->lruvec.reclaim_stat.recent_rotated[0] = 0;
+		zone->lruvec.reclaim_stat.recent_rotated[1] = 0;
+		zone->lruvec.reclaim_stat.recent_scanned[0] = 0;
+		zone->lruvec.reclaim_stat.recent_scanned[1] = 0;
 		zap_zone_vm_stats(zone);
 		zone->flags = 0;
 		if (!size)
diff --git a/mm/swap.c b/mm/swap.c
index 5c13f13..60d14da 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -279,21 +279,15 @@ void rotate_reclaimable_page(struct page *page)
 static void update_page_reclaim_stat(struct zone *zone, struct page *page,
 				     int file, int rotated)
 {
-	struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
-	struct zone_reclaim_stat *memcg_reclaim_stat;
+	struct zone_reclaim_stat *reclaim_stat;
 
-	memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
+	reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
+	if (!reclaim_stat)
+		reclaim_stat = &zone->lruvec.reclaim_stat;
 
 	reclaim_stat->recent_scanned[file]++;
 	if (rotated)
 		reclaim_stat->recent_rotated[file]++;
-
-	if (!memcg_reclaim_stat)
-		return;
-
-	memcg_reclaim_stat->recent_scanned[file]++;
-	if (rotated)
-		memcg_reclaim_stat->recent_rotated[file]++;
 }
 
 static void __activate_page(struct page *page, void *arg)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d1745c..05c1157 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -173,10 +173,7 @@ static bool global_reclaim(struct scan_control *sc)
 
 static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 {
-	if (!mem_cgroup_disabled())
-		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
-
-	return &mz->zone->reclaim_stat;
+	return &mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup)->reclaim_stat;
 }
 
 static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,


* [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list()
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
  2012-03-08 18:04 ` [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
  2012-03-08 18:04 ` [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:28   ` KAMEZAWA Hiroyuki
  2012-03-08 18:04 ` [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter Konstantin Khlebnikov
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

Let's pass the lru list index down the call stack to isolate_lru_pages();
this is better than reconstructing it from individual bits.
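
For reference, the encoding this relies on, as defined by enum lru_list in
include/linux/mmzone.h (a sketch; only the comments are added):

        /*
         * The lru index already carries both properties:
         *      lru = LRU_BASE + file * LRU_FILE + active * LRU_ACTIVE
         * i.e. LRU_INACTIVE_ANON = 0, LRU_ACTIVE_ANON = 1,
         *      LRU_INACTIVE_FILE = 2, LRU_ACTIVE_FILE = 3,
         * so the callee can recover whatever it needs with existing helpers:
         */
        int file = is_file_lru(lru);                    /* file vs anon */
        enum lru_list inactive = lru - LRU_ACTIVE;      /* active list -> its inactive twin */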

v5:
* moved this patch earlier in the series

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/0 grow/shrink: 1/3 up/down: 32/-105 (-73)
function                                     old     new   delta
shrink_inactive_list                        1227    1259     +32
static.isolate_lru_pages                    1071    1055     -16
shrink_mem_cgroup_zone                      1538    1507     -31
static.shrink_active_list                    895     837     -58
---
 mm/vmscan.c |   41 +++++++++++++++++------------------------
 1 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 05c1157..9769970 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1127,15 +1127,14 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
  * @nr_scanned:	The number of pages that were scanned.
  * @sc:		The scan_control struct for this reclaim session
  * @mode:	One of the LRU isolation modes
- * @active:	True [1] if isolating active pages
- * @file:	True [1] if isolating file [!anon] pages
+ * @lru		LRU list id for isolating
  *
  * returns how many pages were moved onto *@dst.
  */
 static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		struct mem_cgroup_zone *mz, struct list_head *dst,
 		unsigned long *nr_scanned, struct scan_control *sc,
-		isolate_mode_t mode, int active, int file)
+		isolate_mode_t mode, enum lru_list lru)
 {
 	struct lruvec *lruvec;
 	struct list_head *src;
@@ -1144,13 +1143,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	unsigned long nr_lumpy_dirty = 0;
 	unsigned long nr_lumpy_failed = 0;
 	unsigned long scan;
-	int lru = LRU_BASE;
+	int file = is_file_lru(lru);
 
 	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
-	if (active)
-		lru += LRU_ACTIVE;
-	if (file)
-		lru += LRU_FILE;
 	src = &lruvec->lists[lru];
 
 	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
@@ -1487,7 +1482,7 @@ static inline bool should_reclaim_stall(unsigned long nr_taken,
  */
 static noinline_for_stack unsigned long
 shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
-		     struct scan_control *sc, int priority, int file)
+		     struct scan_control *sc, int priority, enum lru_list lru)
 {
 	LIST_HEAD(page_list);
 	unsigned long nr_scanned;
@@ -1498,6 +1493,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
 	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
+	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 
@@ -1523,7 +1519,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	spin_lock_irq(&zone->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, mz, &page_list, &nr_scanned,
-				     sc, isolate_mode, 0, file);
+				     sc, isolate_mode, lru);
 	if (global_reclaim(sc)) {
 		zone->pages_scanned += nr_scanned;
 		if (current_is_kswapd())
@@ -1661,7 +1657,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct mem_cgroup_zone *mz,
 			       struct scan_control *sc,
-			       int priority, int file)
+			       int priority, enum lru_list lru)
 {
 	unsigned long nr_taken;
 	unsigned long nr_scanned;
@@ -1673,6 +1669,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	unsigned long nr_rotated = 0;
 	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
+	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 
 	lru_add_drain();
@@ -1685,17 +1682,14 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	spin_lock_irq(&zone->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, mz, &l_hold, &nr_scanned, sc,
-				     isolate_mode, 1, file);
+				     isolate_mode, lru);
 	if (global_reclaim(sc))
 		zone->pages_scanned += nr_scanned;
 
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	__count_zone_vm_events(PGREFILL, zone, nr_scanned);
-	if (file)
-		__mod_zone_page_state(zone, NR_ACTIVE_FILE, -nr_taken);
-	else
-		__mod_zone_page_state(zone, NR_ACTIVE_ANON, -nr_taken);
+	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -nr_taken);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
 	spin_unlock_irq(&zone->lru_lock);
 
@@ -1750,10 +1744,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	move_active_pages_to_lru(zone, &l_active, &l_hold,
-						LRU_ACTIVE + file * LRU_FILE);
-	move_active_pages_to_lru(zone, &l_inactive, &l_hold,
-						LRU_BASE   + file * LRU_FILE);
+	move_active_pages_to_lru(zone, &l_active, &l_hold, lru);
+	move_active_pages_to_lru(zone, &l_inactive, &l_hold, lru - LRU_ACTIVE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&zone->lru_lock);
 
@@ -1853,11 +1845,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 
 	if (is_active_lru(lru)) {
 		if (inactive_list_is_low(mz, file))
-			shrink_active_list(nr_to_scan, mz, sc, priority, file);
+			shrink_active_list(nr_to_scan, mz, sc, priority, lru);
 		return 0;
 	}
 
-	return shrink_inactive_list(nr_to_scan, mz, sc, priority, file);
+	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
 }
 
 static int vmscan_swappiness(struct mem_cgroup_zone *mz,
@@ -2108,7 +2100,8 @@ restart:
 	 * rebalance the anon lru active/inactive ratio.
 	 */
 	if (inactive_anon_is_low(mz))
-		shrink_active_list(SWAP_CLUSTER_MAX, mz, sc, priority, 0);
+		shrink_active_list(SWAP_CLUSTER_MAX, mz,
+				   sc, priority, LRU_ACTIVE_ANON);
 
 	/* reclaim/compaction might need reclaim to continue */
 	if (should_continue_reclaim(mz, nr_reclaimed,
@@ -2550,7 +2543,7 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc,
 
 		if (inactive_anon_is_low(&mz))
 			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
-					   sc, priority, 0);
+					   sc, priority, LRU_ACTIVE_ANON);
 
 		memcg = mem_cgroup_iter(NULL, memcg, NULL);
 	} while (memcg);


* [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
                   ` (2 preceding siblings ...)
  2012-03-08 18:04 ` [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list() Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:30   ` KAMEZAWA Hiroyuki
  2012-03-08 18:04 ` [PATCH v5 5/7] mm: rework reclaim_stat counters Konstantin Khlebnikov
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

This patch adds an lru list bit mask in the lower byte of isolate_mode_t,
which allows the checks in __isolate_lru_page() to be simplified.
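
A condensed sketch of the resulting interface, with the constants and checks
taken from the hunks below (illustrative only):

        /* The low byte of isolate_mode_t now holds one bit per LRU list. */
        isolate_mode = BIT(lru);                        /* isolate only from this list */
        if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
                isolate_mode |= LRU_ALL_EVICTABLE;      /* lumpy: any evictable list */

        /* ... and the per-page filter in __isolate_lru_page() becomes: */
        if (!(mode & BIT(page_lru(page))))
                return -EINVAL;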

v5:
* lru bit mask instead of special file/anon active/inactive bits
* mark page_lru() as __always_inline, which helps gcc generate more compact code

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/0 grow/shrink: 3/4 up/down: 35/-67 (-32)
function                                     old     new   delta
static.shrink_active_list                    837     853     +16
__isolate_lru_page                           301     317     +16
page_evictable                               170     173      +3
__remove_mapping                             322     319      -3
mem_cgroup_lru_del                            73      65      -8
static.isolate_lru_pages                    1055    1035     -20
__mem_cgroup_commit_charge                   676     640     -36
---
 include/linux/mm_inline.h |    2 +-
 include/linux/mmzone.h    |   12 +++++-------
 include/linux/swap.h      |    2 +-
 mm/compaction.c           |    4 ++--
 mm/vmscan.c               |   44 +++++++++++++++-----------------------------
 5 files changed, 24 insertions(+), 40 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 227fd3e..71e7d76 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -85,7 +85,7 @@ static inline enum lru_list page_off_lru(struct page *page)
  * Returns the LRU list a page should be on, as an index
  * into the array of LRU lists.
  */
-static inline enum lru_list page_lru(struct page *page)
+static __always_inline enum lru_list page_lru(struct page *page)
 {
 	enum lru_list lru;
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aa881de..3370a8c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -183,16 +183,14 @@ struct lruvec {
 #define LRU_ALL_EVICTABLE (LRU_ALL_FILE | LRU_ALL_ANON)
 #define LRU_ALL	     ((1 << NR_LRU_LISTS) - 1)
 
-/* Isolate inactive pages */
-#define ISOLATE_INACTIVE	((__force isolate_mode_t)0x1)
-/* Isolate active pages */
-#define ISOLATE_ACTIVE		((__force isolate_mode_t)0x2)
+/* Mask of LRU lists allowed for isolation */
+#define ISOLATE_LRU_MASK	((__force isolate_mode_t)0xFF)
 /* Isolate clean file */
-#define ISOLATE_CLEAN		((__force isolate_mode_t)0x4)
+#define ISOLATE_CLEAN		((__force isolate_mode_t)0x100)
 /* Isolate unmapped file */
-#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x8)
+#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x200)
 /* Isolate for asynchronous migration */
-#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x10)
+#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x400)
 
 /* LRU Isolation modes. */
 typedef unsigned __bitwise__ isolate_mode_t;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ba2c8d7..dc6e6a3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -254,7 +254,7 @@ static inline void lru_cache_add_file(struct page *page)
 /* linux/mm/vmscan.c */
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file);
+extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
 						  gfp_t gfp_mask, bool noswap);
 extern unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
diff --git a/mm/compaction.c b/mm/compaction.c
index 74a8c82..5b02dbd 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -261,7 +261,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	unsigned long last_pageblock_nr = 0, pageblock_nr;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct list_head *migratelist = &cc->migratepages;
-	isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
+	isolate_mode_t mode = LRU_ALL_EVICTABLE;
 
 	/* Do not scan outside zone boundaries */
 	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
@@ -375,7 +375,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			mode |= ISOLATE_ASYNC_MIGRATE;
 
 		/* Try isolate the page */
-		if (__isolate_lru_page(page, mode, 0) != 0)
+		if (__isolate_lru_page(page, mode) != 0)
 			continue;
 
 		VM_BUG_ON(PageTransCompound(page));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9769970..0966f11 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1028,36 +1028,20 @@ keep_lumpy:
  *
  * returns 0 on success, -ve errno on failure.
  */
-int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
+int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
-	bool all_lru_mode;
 	int ret = -EINVAL;
 
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
 		return ret;
 
-	all_lru_mode = (mode & (ISOLATE_ACTIVE|ISOLATE_INACTIVE)) ==
-		(ISOLATE_ACTIVE|ISOLATE_INACTIVE);
-
-	/*
-	 * When checking the active state, we need to be sure we are
-	 * dealing with comparible boolean values.  Take the logical not
-	 * of each.
-	 */
-	if (!all_lru_mode && !PageActive(page) != !(mode & ISOLATE_ACTIVE))
+	/* Isolate pages only from allowed LRU lists */
+	if (!(mode & BIT(page_lru(page))))
 		return ret;
 
-	if (!all_lru_mode && !!page_is_file_cache(page) != file)
-		return ret;
-
-	/*
-	 * When this function is being called for lumpy reclaim, we
-	 * initially look into all LRU pages, active, inactive and
-	 * unevictable; only give shrink_page_list evictable pages.
-	 */
-	if (PageUnevictable(page))
-		return ret;
+	/* All possible LRU lists must fit into isolation mask area */
+	BUILD_BUG_ON(LRU_ALL & ~ISOLATE_LRU_MASK);
 
 	ret = -EBUSY;
 
@@ -1160,7 +1144,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		VM_BUG_ON(!PageLRU(page));
 
-		switch (__isolate_lru_page(page, mode, file)) {
+		switch (__isolate_lru_page(page, mode)) {
 		case 0:
 			mem_cgroup_lru_del(page);
 			list_move(&page->lru, dst);
@@ -1218,7 +1202,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			    !PageSwapCache(cursor_page))
 				break;
 
-			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+			if (__isolate_lru_page(cursor_page, mode) == 0) {
 				unsigned int isolated_pages;
 
 				mem_cgroup_lru_del(cursor_page);
@@ -1492,7 +1476,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
-	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
+	isolate_mode_t isolate_mode;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
@@ -1505,12 +1489,13 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 			return SWAP_CLUSTER_MAX;
 	}
 
-	set_reclaim_mode(priority, sc, false);
-	if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
-		isolate_mode |= ISOLATE_ACTIVE;
-
 	lru_add_drain();
 
+	set_reclaim_mode(priority, sc, false);
+
+	isolate_mode = BIT(lru);
+	if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
+		isolate_mode |= LRU_ALL_EVICTABLE;
 	if (!sc->may_unmap)
 		isolate_mode |= ISOLATE_UNMAPPED;
 	if (!sc->may_writepage)
@@ -1668,12 +1653,13 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	struct page *page;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	unsigned long nr_rotated = 0;
-	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
+	isolate_mode_t isolate_mode;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 
 	lru_add_drain();
 
+	isolate_mode = BIT(lru);
 	if (!sc->may_unmap)
 		isolate_mode |= ISOLATE_UNMAPPED;
 	if (!sc->may_writepage)


* [PATCH v5 5/7] mm: rework reclaim_stat counters
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
                   ` (3 preceding siblings ...)
  2012-03-08 18:04 ` [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:32   ` KAMEZAWA Hiroyuki
  2012-03-08 18:04 ` [PATCH v5 6/7] mm/memcg: rework inactive_ratio calculation Konstantin Khlebnikov
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

Currently there are two types of reclaim-stat counters:
recent_scanned (pages picked from the lru),
recent_rotated (pages put back onto the active lru).
The reclaimer uses the ratio recent_rotated / recent_scanned
to balance pressure between file and anon pages.

But a page picked from the lru is either reclaimed or put back onto an lru, thus:
recent_scanned == recent_rotated[inactive] + recent_rotated[active] + reclaimed
This can be called "The Law of Conservation of Memory" =)

Thus per-lru-list recent_rotated counters are enough: reclaimed pages can be
counted as rotation onto the inactive lru. After that the reclaimer can use this ratio:
recent_rotated[active] / (recent_rotated[active] + recent_rotated[inactive])

After this patch struct zone_reclaim_stat has only one array, recent_rotated,
which is indexed directly by lru list index:

before patch:

LRU_ACTIVE_ANON   -> LRU_ACTIVE_ANON   : recent_scanned[ANON]++, recent_rotated[ANON]++
LRU_INACTIVE_ANON -> LRU_ACTIVE_ANON   : recent_scanned[ANON]++, recent_rotated[ANON]++
LRU_ACTIVE_ANON   -> LRU_INACTIVE_ANON : recent_scanned[ANON]++
LRU_INACTIVE_ANON -> LRU_INACTIVE_ANON : recent_scanned[ANON]++

after patch:

LRU_ACTIVE_ANON   -> LRU_ACTIVE_ANON   : recent_rotated[LRU_ACTIVE_ANON]++
LRU_INACTIVE_ANON -> LRU_ACTIVE_ANON   : recent_rotated[LRU_ACTIVE_ANON]++
LRU_ACTIVE_ANON   -> LRU_INACTIVE_ANON : recent_rotated[LRU_INACTIVE_ANON]++
LRU_INACTIVE_ANON -> LRU_INACTIVE_ANON : recent_rotated[LRU_INACTIVE_ANON]++

recent_scanned[ANON] === recent_rotated[LRU_ACTIVE_ANON] + recent_rotated[LRU_INACTIVE_ANON]
recent_rotated[ANON] === recent_rotated[LRU_ACTIVE_ANON]

(and the same for FILE/LRU_ACTIVE_FILE/LRU_INACTIVE_FILE)
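
A worked example of the identity, with made-up numbers: suppose 100 anon pages
are isolated from the inactive list in one pass, of which 30 are reclaimed,
50 are rotated onto the active list and 20 go back onto the inactive list. Then

        recent_rotated[LRU_ACTIVE_ANON]   = 50
        recent_rotated[LRU_INACTIVE_ANON] = 20 + 30 = 50        (putback + reclaimed)

which reproduces the old counters:

        recent_scanned[ANON] = 50 + 50 = 100
        recent_rotated[ANON] = 50

so get_scan_count() still has everything it needs.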

v5:
* resolve conflict with "memcg: fix GPF when cgroup removal races with last exit"

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/0 grow/shrink: 3/8 up/down: 32/-135 (-103)
function                                     old     new   delta
shrink_mem_cgroup_zone                      1507    1526     +19
free_area_init_node                          852     862     +10
put_page_testzero                             30      33      +3
mem_control_stat_show                        754     750      -4
putback_inactive_pages                       635     629      -6
lru_add_page_tail                            364     349     -15
__pagevec_lru_add_fn                         266     249     -17
lru_deactivate_fn                            500     482     -18
__activate_page                              365     347     -18
update_page_reclaim_stat                      89      64     -25
static.shrink_active_list                    853     821     -32
---
 include/linux/mmzone.h |   11 +++++------
 mm/memcontrol.c        |   29 +++++++++++++++++------------
 mm/page_alloc.c        |    6 ++----
 mm/swap.c              |   29 ++++++++++-------------------
 mm/vmscan.c            |   42 ++++++++++++++++++++++--------------------
 5 files changed, 56 insertions(+), 61 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3370a8c..6d40cc8 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -137,12 +137,14 @@ enum lru_list {
 	LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
 	LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
 	LRU_UNEVICTABLE,
-	NR_LRU_LISTS
+	NR_LRU_LISTS,
+	NR_EVICTABLE_LRU_LISTS = LRU_UNEVICTABLE,
 };
 
 #define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)
 
-#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
+#define for_each_evictable_lru(lru) \
+	for (lru = 0; lru < NR_EVICTABLE_LRU_LISTS; lru++)
 
 static inline int is_file_lru(enum lru_list lru)
 {
@@ -165,11 +167,8 @@ struct zone_reclaim_stat {
 	 * mem/swap backed and file backed pages are refeferenced.
 	 * The higher the rotated/scanned ratio, the more valuable
 	 * that cache is.
-	 *
-	 * The anon LRU stats live in [0], file LRU stats in [1]
 	 */
-	unsigned long		recent_rotated[2];
-	unsigned long		recent_scanned[2];
+	unsigned long		recent_rotated[NR_EVICTABLE_LRU_LISTS];
 };
 
 struct lruvec {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6864f57..02af4a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4183,26 +4183,31 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 
 #ifdef CONFIG_DEBUG_VM
 	{
-		int nid, zid;
+		int nid, zid, lru;
 		struct mem_cgroup_per_zone *mz;
 		struct zone_reclaim_stat *rstat;
-		unsigned long recent_rotated[2] = {0, 0};
-		unsigned long recent_scanned[2] = {0, 0};
+		unsigned long recent_rotated[NR_EVICTABLE_LRU_LISTS];
 
+		memset(recent_rotated, 0, sizeof(recent_rotated));
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 				rstat = &mz->lruvec.reclaim_stat;
-
-				recent_rotated[0] += rstat->recent_rotated[0];
-				recent_rotated[1] += rstat->recent_rotated[1];
-				recent_scanned[0] += rstat->recent_scanned[0];
-				recent_scanned[1] += rstat->recent_scanned[1];
+				for_each_evictable_lru(lru)
+					recent_rotated[lru] +=
+						rstat->recent_rotated[lru];
 			}
-		cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
-		cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
-		cb->fill(cb, "recent_scanned_anon", recent_scanned[0]);
-		cb->fill(cb, "recent_scanned_file", recent_scanned[1]);
+
+		cb->fill(cb, "recent_rotated_anon",
+				recent_rotated[LRU_ACTIVE_ANON]);
+		cb->fill(cb, "recent_rotated_file",
+				recent_rotated[LRU_ACTIVE_FILE]);
+		cb->fill(cb, "recent_scanned_anon",
+				recent_rotated[LRU_ACTIVE_ANON] +
+				recent_rotated[LRU_INACTIVE_ANON]);
+		cb->fill(cb, "recent_scanned_file",
+				recent_rotated[LRU_ACTIVE_FILE] +
+				recent_rotated[LRU_INACTIVE_FILE]);
 	}
 #endif
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ab2d210..ea40034 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4365,10 +4365,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		zone_pcp_init(zone);
 		for_each_lru(lru)
 			INIT_LIST_HEAD(&zone->lruvec.lists[lru]);
-		zone->lruvec.reclaim_stat.recent_rotated[0] = 0;
-		zone->lruvec.reclaim_stat.recent_rotated[1] = 0;
-		zone->lruvec.reclaim_stat.recent_scanned[0] = 0;
-		zone->lruvec.reclaim_stat.recent_scanned[1] = 0;
+		memset(&zone->lruvec.reclaim_stat, 0,
+				sizeof(struct zone_reclaim_stat));
 		zap_zone_vm_stats(zone);
 		zone->flags = 0;
 		if (!size)
diff --git a/mm/swap.c b/mm/swap.c
index 60d14da..9051079 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -277,7 +277,7 @@ void rotate_reclaimable_page(struct page *page)
 }
 
 static void update_page_reclaim_stat(struct zone *zone, struct page *page,
-				     int file, int rotated)
+				     enum lru_list lru)
 {
 	struct zone_reclaim_stat *reclaim_stat;
 
@@ -285,9 +285,7 @@ static void update_page_reclaim_stat(struct zone *zone, struct page *page,
 	if (!reclaim_stat)
 		reclaim_stat = &zone->lruvec.reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
-	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+	reclaim_stat->recent_rotated[lru]++;
 }
 
 static void __activate_page(struct page *page, void *arg)
@@ -295,7 +293,6 @@ static void __activate_page(struct page *page, void *arg)
 	struct zone *zone = page_zone(page);
 
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_cache(page);
 		int lru = page_lru_base_type(page);
 		del_page_from_lru_list(zone, page, lru);
 
@@ -304,7 +301,7 @@ static void __activate_page(struct page *page, void *arg)
 		add_page_to_lru_list(zone, page, lru);
 		__count_vm_event(PGACTIVATE);
 
-		update_page_reclaim_stat(zone, page, file, 1);
+		update_page_reclaim_stat(zone, page, lru);
 	}
 }
 
@@ -482,7 +479,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	update_page_reclaim_stat(zone, page, file, 0);
+	update_page_reclaim_stat(zone, page, lru);
 }
 
 /*
@@ -646,9 +643,7 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct zone* zone,
 		       struct page *page, struct page *page_tail)
 {
-	int uninitialized_var(active);
 	enum lru_list lru;
-	const int file = 0;
 
 	VM_BUG_ON(!PageHead(page));
 	VM_BUG_ON(PageCompound(page_tail));
@@ -660,12 +655,9 @@ void lru_add_page_tail(struct zone* zone,
 	if (page_evictable(page_tail, NULL)) {
 		if (PageActive(page)) {
 			SetPageActive(page_tail);
-			active = 1;
 			lru = LRU_ACTIVE_ANON;
-		} else {
-			active = 0;
+		} else
 			lru = LRU_INACTIVE_ANON;
-		}
 	} else {
 		SetPageUnevictable(page_tail);
 		lru = LRU_UNEVICTABLE;
@@ -687,8 +679,8 @@ void lru_add_page_tail(struct zone* zone,
 		list_move_tail(&page_tail->lru, list_head);
 	}
 
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(zone, page_tail, file, active);
+	if (!is_unevictable_lru(lru))
+		update_page_reclaim_stat(zone, page_tail, lru);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -696,18 +688,17 @@ static void __pagevec_lru_add_fn(struct page *page, void *arg)
 {
 	enum lru_list lru = (enum lru_list)arg;
 	struct zone *zone = page_zone(page);
-	int file = is_file_lru(lru);
-	int active = is_active_lru(lru);
 
 	VM_BUG_ON(PageActive(page));
 	VM_BUG_ON(PageUnevictable(page));
 	VM_BUG_ON(PageLRU(page));
 
 	SetPageLRU(page);
-	if (active)
+	if (is_active_lru(lru))
 		SetPageActive(page);
+
 	add_page_to_lru_list(zone, page, lru);
-	update_page_reclaim_stat(zone, page, file, active);
+	update_page_reclaim_stat(zone, page, lru);
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0966f11..ab5c0f6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1348,11 +1348,7 @@ putback_inactive_pages(struct mem_cgroup_zone *mz,
 		SetPageLRU(page);
 		lru = page_lru(page);
 		add_page_to_lru_list(zone, page, lru);
-		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
-			int numpages = hpage_nr_pages(page);
-			reclaim_stat->recent_rotated[file] += numpages;
-		}
+		reclaim_stat->recent_rotated[lru] += hpage_nr_pages(page);
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1533,8 +1529,11 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 
 	spin_lock_irq(&zone->lru_lock);
 
-	reclaim_stat->recent_scanned[0] += nr_anon;
-	reclaim_stat->recent_scanned[1] += nr_file;
+	/*
+	 * Count reclaimed pages as rotated, this helps balance scan pressure
+	 * between file and anonymous pages in get_scan_ratio.
+	 */
+	reclaim_stat->recent_rotated[lru] += nr_reclaimed;
 
 	if (current_is_kswapd())
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
@@ -1672,8 +1671,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	if (global_reclaim(sc))
 		zone->pages_scanned += nr_scanned;
 
-	reclaim_stat->recent_scanned[file] += nr_taken;
-
 	__count_zone_vm_events(PGREFILL, zone, nr_scanned);
 	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -nr_taken);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
@@ -1728,7 +1725,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 * helps balance scan pressure between file and anonymous pages in
 	 * get_scan_ratio.
 	 */
-	reclaim_stat->recent_rotated[file] += nr_rotated;
+	reclaim_stat->recent_rotated[lru] += nr_rotated;
 
 	move_active_pages_to_lru(zone, &l_active, &l_hold, lru);
 	move_active_pages_to_lru(zone, &l_inactive, &l_hold, lru - LRU_ACTIVE);
@@ -1861,6 +1858,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	unsigned long anon_prio, file_prio;
 	unsigned long ap, fp;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
+	unsigned long *recent_rotated = reclaim_stat->recent_rotated;
 	u64 fraction[2], denominator;
 	enum lru_list lru;
 	int noswap = 0;
@@ -1926,14 +1924,16 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * anon in [0], file in [1]
 	 */
 	spin_lock_irq(&mz->zone->lru_lock);
-	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
-		reclaim_stat->recent_scanned[0] /= 2;
-		reclaim_stat->recent_rotated[0] /= 2;
+	if (unlikely(recent_rotated[LRU_INACTIVE_ANON] +
+		     recent_rotated[LRU_ACTIVE_ANON] > anon / 4)) {
+		recent_rotated[LRU_INACTIVE_ANON] /= 2;
+		recent_rotated[LRU_ACTIVE_ANON] /= 2;
 	}
 
-	if (unlikely(reclaim_stat->recent_scanned[1] > file / 4)) {
-		reclaim_stat->recent_scanned[1] /= 2;
-		reclaim_stat->recent_rotated[1] /= 2;
+	if (unlikely(recent_rotated[LRU_INACTIVE_FILE] +
+		     recent_rotated[LRU_ACTIVE_FILE] > file / 4)) {
+		recent_rotated[LRU_INACTIVE_FILE] /= 2;
+		recent_rotated[LRU_ACTIVE_FILE] /= 2;
 	}
 
 	/*
@@ -1941,11 +1941,13 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * proportional to the fraction of recently scanned pages on
 	 * each list that were recently referenced and in active use.
 	 */
-	ap = (anon_prio + 1) * (reclaim_stat->recent_scanned[0] + 1);
-	ap /= reclaim_stat->recent_rotated[0] + 1;
+	ap = (anon_prio + 1) * (recent_rotated[LRU_INACTIVE_ANON] +
+				recent_rotated[LRU_ACTIVE_ANON] + 1);
+	ap /= recent_rotated[LRU_ACTIVE_ANON] + 1;
 
-	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
-	fp /= reclaim_stat->recent_rotated[1] + 1;
+	fp = (file_prio + 1) * (recent_rotated[LRU_INACTIVE_FILE] +
+				recent_rotated[LRU_ACTIVE_FILE] + 1);
+	fp /= recent_rotated[LRU_ACTIVE_FILE] + 1;
 	spin_unlock_irq(&mz->zone->lru_lock);
 
 	fraction[0] = ap;


* [PATCH v5 6/7] mm/memcg: rework inactive_ratio calculation
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
                   ` (4 preceding siblings ...)
  2012-03-08 18:04 ` [PATCH v5 5/7] mm: rework reclaim_stat counters Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-08 18:04 ` [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup Konstantin Khlebnikov
  2012-03-11 14:36 ` [PATCH v5 4.5/7] mm: optimize isolate_lru_pages() Konstantin Khlebnikov
  7 siblings, 0 replies; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

This patch removes the precalculated zone->inactive_ratio.
It is now always calculated in inactive_anon_is_low() from the current lru size.
After that we can merge the memcg and non-memcg cases and drop the duplicated code.

We can drop the precalculated ratio because calculating it is fast enough to do
each time. Besides, the precalculation uses the zone size as its basis, and this
estimate does not always match the lru size, for example when a significant
proportion of memory is occupied by kernel objects.
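
A worked example of the on-the-fly calculation, with made-up numbers: for a zone
with 4GB of anon pages on its LRU,

        gb    = (active + inactive) >> (30 - PAGE_SHIFT) = 4
        ratio = int_sqrt(10 * gb) = int_sqrt(40)         = 6

so inactive_anon_is_low() reports "low" once inactive * 6 < active, i.e. once
less than roughly 1/7 of the anon LRU sits on the inactive list. This matches
the old precalculated table (kept as a comment in the diff below) for a zone
whose memory is entirely on the LRU.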

One userspace-visible change:
this patch removes the "inactive_ratio" field from /proc/zoneinfo.
This field was introduced fairly recently (in commit v2.6.27-5589-g556adec),
and it has lost its meaning now that the LRU lists are split into per-memcg parts.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 1/3 grow/shrink: 5/5 up/down: 327/-549 (-222)
function                                     old     new   delta
static.inactive_anon_is_low                    -     249    +249
shrink_mem_cgroup_zone                      1526    1583     +57
balance_pgdat                               1938    1954     +16
__remove_mapping                             319     322      +3
destroy_compound_page                        155     156      +1
__free_pages_bootmem                         138     139      +1
free_area_init                                49      47      -2
__alloc_pages_nodemask                      2084    2082      -2
zoneinfo_show_print                          467     459      -8
get_page_from_freelist                      1985    1969     -16
init_per_zone_wmark_min                      136      70     -66
inactive_anon_is_low                         110       -    -110
mem_cgroup_inactive_file_is_low              155       -    -155
mem_cgroup_inactive_anon_is_low              190       -    -190
---
 include/linux/memcontrol.h |   16 --------
 include/linux/mmzone.h     |    7 ----
 mm/memcontrol.c            |   38 -------------------
 mm/page_alloc.c            |   44 ----------------------
 mm/vmscan.c                |   88 ++++++++++++++++++++++++++++----------------
 mm/vmstat.c                |    6 +--
 6 files changed, 58 insertions(+), 141 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4c4b968..d189ac5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -113,10 +113,6 @@ void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *);
 /*
  * For memory reclaim.
  */
-int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
-				    struct zone *zone);
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
-				    struct zone *zone);
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
@@ -329,18 +325,6 @@ static inline bool mem_cgroup_disabled(void)
 	return true;
 }
 
-static inline int
-mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return 1;
-}
-
-static inline int
-mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return 1;
-}
-
 static inline unsigned long
 mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
 				unsigned int lru_mask)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6d40cc8..30f2bd4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -378,13 +378,6 @@ struct zone {
 	/* Zone statistics */
 	atomic_long_t		vm_stat[NR_VM_ZONE_STAT_ITEMS];
 
-	/*
-	 * The target ratio of ACTIVE_ANON to INACTIVE_ANON pages on
-	 * this zone's LRU.  Maintained by the pageout code.
-	 */
-	unsigned int inactive_ratio;
-
-
 	ZONE_PADDING(_pad2_)
 	/* Rarely used or read-mostly fields */
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 02af4a6..bf5121b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1174,44 +1174,6 @@ int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg)
 	return ret;
 }
 
-int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	unsigned long inactive_ratio;
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-	unsigned long inactive;
-	unsigned long active;
-	unsigned long gb;
-
-	inactive = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-						BIT(LRU_INACTIVE_ANON));
-	active = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-					      BIT(LRU_ACTIVE_ANON));
-
-	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb)
-		inactive_ratio = int_sqrt(10 * gb);
-	else
-		inactive_ratio = 1;
-
-	return inactive * inactive_ratio < active;
-}
-
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	unsigned long active;
-	unsigned long inactive;
-	int zid = zone_idx(zone);
-	int nid = zone_to_nid(zone);
-
-	inactive = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-						BIT(LRU_INACTIVE_FILE));
-	active = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-					      BIT(LRU_ACTIVE_FILE));
-
-	return (active > inactive);
-}
-
 struct zone_reclaim_stat *
 mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ea40034..2e90931 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5051,49 +5051,6 @@ void setup_per_zone_wmarks(void)
 }
 
 /*
- * The inactive anon list should be small enough that the VM never has to
- * do too much work, but large enough that each inactive page has a chance
- * to be referenced again before it is swapped out.
- *
- * The inactive_anon ratio is the target ratio of ACTIVE_ANON to
- * INACTIVE_ANON pages on this zone's LRU, maintained by the
- * pageout code. A zone->inactive_ratio of 3 means 3:1 or 25% of
- * the anonymous pages are kept on the inactive list.
- *
- * total     target    max
- * memory    ratio     inactive anon
- * -------------------------------------
- *   10MB       1         5MB
- *  100MB       1        50MB
- *    1GB       3       250MB
- *   10GB      10       0.9GB
- *  100GB      31         3GB
- *    1TB     101        10GB
- *   10TB     320        32GB
- */
-static void __meminit calculate_zone_inactive_ratio(struct zone *zone)
-{
-	unsigned int gb, ratio;
-
-	/* Zone size in gigabytes */
-	gb = zone->present_pages >> (30 - PAGE_SHIFT);
-	if (gb)
-		ratio = int_sqrt(10 * gb);
-	else
-		ratio = 1;
-
-	zone->inactive_ratio = ratio;
-}
-
-static void __meminit setup_per_zone_inactive_ratio(void)
-{
-	struct zone *zone;
-
-	for_each_zone(zone)
-		calculate_zone_inactive_ratio(zone);
-}
-
-/*
  * Initialise min_free_kbytes.
  *
  * For small machines we want it small (128k min).  For large machines
@@ -5131,7 +5088,6 @@ int __meminit init_per_zone_wmark_min(void)
 	setup_per_zone_wmarks();
 	refresh_zone_stat_thresholds();
 	setup_per_zone_lowmem_reserve();
-	setup_per_zone_inactive_ratio();
 	return 0;
 }
 module_init(init_per_zone_wmark_min)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ab5c0f6..9a41769 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1736,29 +1736,38 @@ static void shrink_active_list(unsigned long nr_to_scan,
 }
 
 #ifdef CONFIG_SWAP
-static int inactive_anon_is_low_global(struct zone *zone)
-{
-	unsigned long active, inactive;
-
-	active = zone_page_state(zone, NR_ACTIVE_ANON);
-	inactive = zone_page_state(zone, NR_INACTIVE_ANON);
-
-	if (inactive * zone->inactive_ratio < active)
-		return 1;
-
-	return 0;
-}
-
 /**
  * inactive_anon_is_low - check if anonymous pages need to be deactivated
  * @zone: zone to check
- * @sc:   scan control of this context
  *
  * Returns true if the zone does not have enough inactive anon pages,
  * meaning some active anon pages need to be deactivated.
+ *
+ * The inactive anon list should be small enough that the VM never has to
+ * do too much work, but large enough that each inactive page has a chance
+ * to be referenced again before it is swapped out.
+ *
+ * The inactive_anon ratio is the target ratio of ACTIVE_ANON to
+ * INACTIVE_ANON pages on this zone's LRU, maintained by the
+ * pageout code. A zone->inactive_ratio of 3 means 3:1 or 25% of
+ * the anonymous pages are kept on the inactive list.
+ *
+ * total     target    max
+ * memory    ratio     inactive anon
+ * -------------------------------------
+ *   10MB       1         5MB
+ *  100MB       1        50MB
+ *    1GB       3       250MB
+ *   10GB      10       0.9GB
+ *  100GB      31         3GB
+ *    1TB     101        10GB
+ *   10TB     320        32GB
  */
 static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 {
+	unsigned long active, inactive;
+	unsigned int gb, ratio;
+
 	/*
 	 * If we don't have swap space, anonymous page deactivation
 	 * is pointless.
@@ -1766,11 +1775,26 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 	if (!total_swap_pages)
 		return 0;
 
-	if (!mem_cgroup_disabled())
-		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
-						       mz->zone);
+	if (mem_cgroup_disabled()) {
+		active = zone_page_state(mz->zone, NR_ACTIVE_ANON);
+		inactive = zone_page_state(mz->zone, NR_INACTIVE_ANON);
+	} else {
+		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_ACTIVE_ANON));
+		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_INACTIVE_ANON));
+	}
+
+	/* Total size in gigabytes */
+	gb = (active + inactive) >> (30 - PAGE_SHIFT);
+	if (gb)
+		ratio = int_sqrt(10 * gb);
+	else
+		ratio = 1;
 
-	return inactive_anon_is_low_global(mz->zone);
+	return inactive * ratio < active;
 }
 #else
 static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz)
@@ -1779,16 +1803,6 @@ static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 }
 #endif
 
-static int inactive_file_is_low_global(struct zone *zone)
-{
-	unsigned long active, inactive;
-
-	active = zone_page_state(zone, NR_ACTIVE_FILE);
-	inactive = zone_page_state(zone, NR_INACTIVE_FILE);
-
-	return (active > inactive);
-}
-
 /**
  * inactive_file_is_low - check if file pages need to be deactivated
  * @mz: memory cgroup and zone to check
@@ -1805,11 +1819,21 @@ static int inactive_file_is_low_global(struct zone *zone)
  */
 static int inactive_file_is_low(struct mem_cgroup_zone *mz)
 {
-	if (!mem_cgroup_disabled())
-		return mem_cgroup_inactive_file_is_low(mz->mem_cgroup,
-						       mz->zone);
+	unsigned long active, inactive;
+
+	if (mem_cgroup_disabled()) {
+		active = zone_page_state(mz->zone, NR_ACTIVE_FILE);
+		inactive = zone_page_state(mz->zone, NR_INACTIVE_FILE);
+	} else {
+		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_ACTIVE_FILE));
+		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_INACTIVE_FILE));
+	}
 
-	return inactive_file_is_low_global(mz->zone);
+	return inactive < active;
 }
 
 static int inactive_list_is_low(struct mem_cgroup_zone *mz, int file)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index f600557..2c813e1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1017,11 +1017,9 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 	}
 	seq_printf(m,
 		   "\n  all_unreclaimable: %u"
-		   "\n  start_pfn:         %lu"
-		   "\n  inactive_ratio:    %u",
+		   "\n  start_pfn:         %lu",
 		   zone->all_unreclaimable,
-		   zone->zone_start_pfn,
-		   zone->inactive_ratio);
+		   zone->zone_start_pfn);
 	seq_putc(m, '\n');
 }
 

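For reference (an illustration, not part of the patch): the table in the comment
above follows ratio = int_sqrt(10 * gb), where gb is the total anon size in
gigabytes, i.e. (active + inactive) >> (30 - PAGE_SHIFT). A small user-space
sketch, with isqrt() standing in for the kernel's int_sqrt(), reproduces those
numbers:

	#include <stdio.h>

	/* naive integer square root, stand-in for the kernel's int_sqrt() */
	static unsigned long isqrt(unsigned long x)
	{
		unsigned long r = 0;

		while ((r + 1) * (r + 1) <= x)
			r++;
		return r;
	}

	int main(void)
	{
		/* anon size in GB; 10MB and 100MB both round down to 0 */
		unsigned long sizes_gb[] = { 0, 1, 10, 100, 1024, 10240 };
		unsigned int i;

		for (i = 0; i < 6; i++) {
			unsigned long gb = sizes_gb[i];
			unsigned long ratio = gb ? isqrt(10 * gb) : 1;

			printf("%6lu GB -> ratio %lu\n", gb, ratio);
		}
		return 0;	/* prints ratios 1, 3, 10, 31, 101, 320 */
	}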

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
                   ` (5 preceding siblings ...)
  2012-03-08 18:04 ` [PATCH v5 6/7] mm/memcg: rework inactive_ratio calculation Konstantin Khlebnikov
@ 2012-03-08 18:04 ` Konstantin Khlebnikov
  2012-03-09  1:33   ` KAMEZAWA Hiroyuki
  2012-03-11 14:36 ` [PATCH v5 4.5/7] mm: optimize isolate_lru_pages() Konstantin Khlebnikov
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-08 18:04 UTC (permalink / raw)
  To: Andrew Morton, Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, linux-kernel

Use vm_swappiness from the memory cgroup which triggered this memory reclaim.
This is more reasonable and allows us to kill one function argument.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
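
For illustration (not part of the patch): with the default vm_swappiness of 60,
get_scan_count() below computes anon_prio = 60 and file_prio = 200 - 60 = 140,
so, other things being equal, file pages receive roughly 2.3 times the scan
pressure of anon pages. The change here only affects which swappiness value
feeds that calculation during memcg reclaim: the swappiness of the cgroup that
triggered the reclaim (sc->target_mem_cgroup) rather than that of the cgroup
currently being scanned (mz->mem_cgroup).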

---

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-2 (-2)
function                                     old     new   delta
shrink_mem_cgroup_zone                      1583    1581      -2
---
 mm/vmscan.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9a41769..95719f3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1859,12 +1859,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
 }
 
-static int vmscan_swappiness(struct mem_cgroup_zone *mz,
-			     struct scan_control *sc)
+static int vmscan_swappiness(struct scan_control *sc)
 {
 	if (global_reclaim(sc))
 		return vm_swappiness;
-	return mem_cgroup_swappiness(mz->mem_cgroup);
+	return mem_cgroup_swappiness(sc->target_mem_cgroup);
 }
 
 /*
@@ -1933,8 +1932,8 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * With swappiness at 100, anonymous and file have the same priority.
 	 * This scanning priority is essentially the inverse of IO cost.
 	 */
-	anon_prio = vmscan_swappiness(mz, sc);
-	file_prio = 200 - vmscan_swappiness(mz, sc);
+	anon_prio = vmscan_swappiness(sc);
+	file_prio = 200 - vmscan_swappiness(sc);
 
 	/*
 	 * OK, so we have swap space and a fair amount of page cache


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-08 18:04 ` [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
@ 2012-03-09  1:26   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:26 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:01 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> From: Hugh Dickins <hughd@google.com>
> 
> Although one has to admire the skill with which it has been concealed,
> scanning_global_lru(mz) is actually just an interesting way to test
> mem_cgroup_disabled().  Too many developer hours have been wasted on
> confusing it with global_reclaim(): just use mem_cgroup_disabled().
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>


Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

If no changes since v4, please show Acks you got.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec
  2012-03-08 18:04 ` [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec Konstantin Khlebnikov
@ 2012-03-09  1:27   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:27 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:06 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> From: Hugh Dickins <hughd@google.com>
> 
> With mem_cgroup_disabled() now explicit, it becomes clear that the
> zone_reclaim_stat structure actually belongs in lruvec, per-zone
> when memcg is disabled but per-memcg per-zone when it's enabled.
> 
> We can delete mem_cgroup_get_reclaim_stat(), and change
> update_page_reclaim_stat() to update just the one set of stats,
> the one which get_scan_count() will actually use.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> 

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list()
  2012-03-08 18:04 ` [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list() Konstantin Khlebnikov
@ 2012-03-09  1:28   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:28 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:10 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> Let's toss the lru index down the call stack to isolate_lru_pages();
> this is better than reconstructing it from individual bits.
> 
> v5:
> * move patch upper
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> 
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter
  2012-03-08 18:04 ` [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter Konstantin Khlebnikov
@ 2012-03-09  1:30   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:30 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:15 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> This patch adds an lru bit mask into the lower byte of isolate_mode_t,
> which allows simplifying the checks in __isolate_lru_page().
> 
> v5:
> * lru bit mask instead of special file/anon active/inactive bits
> * mark page_lru() as __always_inline, it helps gcc generate more compact code
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> 

Nice !

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 5/7] mm: rework reclaim_stat counters
  2012-03-08 18:04 ` [PATCH v5 5/7] mm: rework reclaim_stat counters Konstantin Khlebnikov
@ 2012-03-09  1:32   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:32 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:19 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> Currently there are two types of reclaim-stat counters:
> recent_scanned (pages picked from the lru),
> recent_rotated (pages put back to the active lru).
> The reclaimer uses the ratio recent_rotated / recent_scanned
> for balancing pressure between file and anon pages.
> 
> But if we pick a page from the lru we can either reclaim it or put it back, thus:
> recent_scanned == recent_rotated[inactive] + recent_rotated[active] + reclaimed
> This can be called "The Law of Conservation of Memory" =)
> 
> Thus recent_rotated counters for each lru list are enough, and reclaimed pages
> can be counted as rotation into the inactive lru. After that the reclaimer can use this ratio:
> recent_rotated[active] / (recent_rotated[active] + recent_rotated[inactive])
> 
> After this patch struct zone_reclaim_stat has only one array: recent_rotated,
> which is directly indexed by lru list index:
> 
> before patch:
> 
> LRU_ACTIVE_ANON   -> LRU_ACTIVE_ANON   : recent_scanned[ANON]++, recent_rotated[ANON]++
> LRU_INACTIVE_ANON -> LRU_ACTIVE_ANON   : recent_scanned[ANON]++, recent_rotated[ANON]++
> LRU_ACTIVE_ANON   -> LRU_INACTIVE_ANON : recent_scanned[ANON]++
> LRU_INACTIVE_ANON -> LRU_INACTIVE_ANON : recent_scanned[ANON]++
> 
> after patch:
> 
> LRU_ACTIVE_ANON   -> LRU_ACTIVE_ANON   : recent_rotated[LRU_ACTIVE_ANON]++
> LRU_INACTIVE_ANON -> LRU_ACTIVE_ANON   : recent_rotated[LRU_ACTIVE_ANON]++
> LRU_ACTIVE_ANON   -> LRU_INACTIVE_ANON : recent_rotated[LRU_INACTIVE_ANON]++
> LRU_INACTIVE_ANON -> LRU_INACTIVE_ANON : recent_rotated[LRU_INACTIVE_ANON]++
> 
> recent_scanned[ANON] === recent_rotated[LRU_ACTIVE_ANON] + recent_rotated[LRU_INACTIVE_ANON]
> recent_rotated[ANON] === recent_rotated[LRU_ACTIVE_ANON]
> 
> (and the same for FILE/LRU_ACTIVE_FILE/LRU_INACTIVE_FILE)
> 
> v5:
> * resolve conflict with "memcg: fix GPF when cgroup removal races with last exit"
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

Nice description.

Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
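
To put numbers on the new scheme (an illustration, not from the original mail):
suppose 1000 anon pages are taken from the lru, 600 end up back on the active
list, 300 are rotated to the inactive list and 100 are reclaimed. The old
counters would record recent_scanned[ANON] = 1000 and recent_rotated[ANON] = 600;
the new ones record recent_rotated[LRU_ACTIVE_ANON] = 600 and
recent_rotated[LRU_INACTIVE_ANON] = 400 (300 rotated + 100 reclaimed), so the
balancing ratio 600 / (600 + 400) = 0.6 equals the old
recent_rotated / recent_scanned value.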



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup
  2012-03-08 18:04 ` [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup Konstantin Khlebnikov
@ 2012-03-09  1:33   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-09  1:33 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andrew Morton, Hugh Dickins, Johannes Weiner, linux-mm,
	linux-kernel

On Thu, 08 Mar 2012 22:04:30 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> Use vm_swappiness from the memory cgroup which triggered this memory reclaim.
> This is more reasonable and allows us to kill one function argument.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Thank you for your work!


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 4.5/7] mm: optimize isolate_lru_pages()
  2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
                   ` (6 preceding siblings ...)
  2012-03-08 18:04 ` [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup Konstantin Khlebnikov
@ 2012-03-11 14:36 ` Konstantin Khlebnikov
  2012-03-13  5:26   ` KAMEZAWA Hiroyuki
  7 siblings, 1 reply; 16+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-11 14:36 UTC (permalink / raw)
  To: Hugh Dickins, Johannes Weiner, KAMEZAWA Hiroyuki
  Cc: linux-mm, Andrew Morton, linux-kernel

This patch moves lru checks from __isolate_lru_page() to its callers.

They aren't required for non-lumpy reclaim: all pages come from the right lru.
Page isolation for memory compaction should skip only unevictable pages.
Thus we need to check the page lru only when isolating pages for lumpy reclaim.

Plus this patch kills mem_cgroup_lru_del() and uses mem_cgroup_lru_del_list()
instead, because now we already have the lru list index.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

add/remove: 0/1 grow/shrink: 2/1 up/down: 101/-164 (-63)
function                                     old     new   delta
static.isolate_lru_pages                    1018    1103     +85
compact_zone                                2230    2246     +16
mem_cgroup_lru_del                            65       -     -65
__isolate_lru_page                           287     188     -99
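
For orientation (a summary of the hunks below, not new code): compaction now
skips only unevictable pages (!PageLRU(page) || PageUnevictable(page)); the
lumpy-reclaim cursor pages are still checked against the isolation mode
(PageLRU() plus mode & BIT(page_lru())); and the regular per-lru scan only
asserts VM_BUG_ON(page_lru(page) != lru), since its pages already come from
the requested lru.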
---
 include/linux/memcontrol.h |    5 -----
 mm/compaction.c            |    3 ++-
 mm/memcontrol.c            |    5 -----
 mm/vmscan.c                |   38 +++++++++++++++++---------------------
 4 files changed, 19 insertions(+), 32 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4c4b968..8af2a61 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -66,7 +66,6 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
 struct lruvec *mem_cgroup_lru_add_list(struct zone *, struct page *,
 				       enum lru_list);
 void mem_cgroup_lru_del_list(struct page *, enum lru_list);
-void mem_cgroup_lru_del(struct page *);
 struct lruvec *mem_cgroup_lru_move_lists(struct zone *, struct page *,
 					 enum lru_list, enum lru_list);
 
@@ -259,10 +258,6 @@ static inline void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
 {
 }
 
-static inline void mem_cgroup_lru_del(struct page *page)
-{
-}
-
 static inline struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
 						       struct page *page,
 						       enum lru_list from,
diff --git a/mm/compaction.c b/mm/compaction.c
index 5b02dbd..c2e783d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -358,7 +358,8 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			continue;
 		}
 
-		if (!PageLRU(page))
+		/* Isolate only evictable pages */
+		if (!PageLRU(page) || PageUnevictable(page))
 			continue;
 
 		/*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6864f57..6f62621 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1095,11 +1095,6 @@ void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
 	mz->lru_size[lru] -= 1 << compound_order(page);
 }
 
-void mem_cgroup_lru_del(struct page *page)
-{
-	mem_cgroup_lru_del_list(page, page_lru(page));
-}
-
 /**
  * mem_cgroup_lru_move_lists - account for moving a page between lrus
  * @zone: zone of the page
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0966f11..6a13a05 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1030,21 +1030,11 @@ keep_lumpy:
  */
 int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
-	int ret = -EINVAL;
-
-	/* Only take pages on the LRU. */
-	if (!PageLRU(page))
-		return ret;
-
-	/* Isolate pages only from allowed LRU lists */
-	if (!(mode & BIT(page_lru(page))))
-		return ret;
+	int ret = -EBUSY;
 
 	/* All possible LRU lists must fit into isolation mask area */
 	BUILD_BUG_ON(LRU_ALL & ~ISOLATE_LRU_MASK);
 
-	ret = -EBUSY;
-
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
 	 * wants to isolate pages it will be able to operate on without
@@ -1143,21 +1133,16 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		prefetchw_prev_lru_page(page, src, flags);
 
 		VM_BUG_ON(!PageLRU(page));
+		VM_BUG_ON(page_lru(page) != lru);
 
-		switch (__isolate_lru_page(page, mode)) {
-		case 0:
-			mem_cgroup_lru_del(page);
+		if (__isolate_lru_page(page, mode) == 0) {
+			mem_cgroup_lru_del_list(page, lru);
 			list_move(&page->lru, dst);
 			nr_taken += hpage_nr_pages(page);
-			break;
-
-		case -EBUSY:
+		} else {
 			/* else it is being freed elsewhere */
 			list_move(&page->lru, src);
 			continue;
-
-		default:
-			BUG();
 		}
 
 		if (!sc->order || !(sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM))
@@ -1178,6 +1163,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		end_pfn = pfn + (1 << sc->order);
 		for (; pfn < end_pfn; pfn++) {
 			struct page *cursor_page;
+			enum lru_list cursor_lru;
 
 			/* The target page is in the block, ignore it. */
 			if (unlikely(pfn == page_pfn))
@@ -1202,10 +1188,19 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			    !PageSwapCache(cursor_page))
 				break;
 
+			if (!PageLRU(cursor_page))
+				goto skip_free;
+
+			/* Isolate pages only from allowed LRU lists */
+			cursor_lru = page_lru(cursor_page);
+			if (!(mode & BIT(cursor_lru)))
+				goto skip_free;
+
 			if (__isolate_lru_page(cursor_page, mode) == 0) {
 				unsigned int isolated_pages;
 
-				mem_cgroup_lru_del(cursor_page);
+				mem_cgroup_lru_del_list(cursor_page,
+							cursor_lru);
 				list_move(&cursor_page->lru, dst);
 				isolated_pages = hpage_nr_pages(cursor_page);
 				nr_taken += isolated_pages;
@@ -1227,6 +1222,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 				 * track the head status without a
 				 * page pin.
 				 */
+skip_free:
 				if (!PageTail(cursor_page) &&
 				    !atomic_read(&cursor_page->_count))
 					continue;


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v5 4.5/7] mm: optimize isolate_lru_pages()
  2012-03-11 14:36 ` [PATCH v5 4.5/7] mm: optimize isolate_lru_pages() Konstantin Khlebnikov
@ 2012-03-13  5:26   ` KAMEZAWA Hiroyuki
  0 siblings, 0 replies; 16+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-13  5:26 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Hugh Dickins, Johannes Weiner, linux-mm, Andrew Morton,
	linux-kernel

On Sun, 11 Mar 2012 18:36:16 +0400
Konstantin Khlebnikov <khlebnikov@openvz.org> wrote:

> This patch moves lru checks from __isolate_lru_page() to its callers.
> 
> They aren't required for non-lumpy reclaim: all pages come from the right lru.
> Page isolation for memory compaction should skip only unevictable pages.
> Thus we need to check the page lru only when isolating pages for lumpy reclaim.
> 
> Plus this patch kills mem_cgroup_lru_del() and uses mem_cgroup_lru_del_list()
> instead, because now we already have the lru list index.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> 
> add/remove: 0/1 grow/shrink: 2/1 up/down: 101/-164 (-63)
> function                                     old     new   delta
> static.isolate_lru_pages                    1018    1103     +85
> compact_zone                                2230    2246     +16
> mem_cgroup_lru_del                            65       -     -65
> __isolate_lru_page                           287     188     -99


Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2012-03-13  5:28 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-03-08 18:03 [PATCH v5 0/7] mm: some cleanup/rework before lru_lock splitting Konstantin Khlebnikov
2012-03-08 18:04 ` [PATCH v5 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
2012-03-09  1:26   ` KAMEZAWA Hiroyuki
2012-03-08 18:04 ` [PATCH v5 2/7] mm/memcg: move reclaim_stat into lruvec Konstantin Khlebnikov
2012-03-09  1:27   ` KAMEZAWA Hiroyuki
2012-03-08 18:04 ` [PATCH v5 3/7] mm: push lru index into shrink_[in]active_list() Konstantin Khlebnikov
2012-03-09  1:28   ` KAMEZAWA Hiroyuki
2012-03-08 18:04 ` [PATCH v5 4/7] mm: rework __isolate_lru_page() page lru filter Konstantin Khlebnikov
2012-03-09  1:30   ` KAMEZAWA Hiroyuki
2012-03-08 18:04 ` [PATCH v5 5/7] mm: rework reclaim_stat counters Konstantin Khlebnikov
2012-03-09  1:32   ` KAMEZAWA Hiroyuki
2012-03-08 18:04 ` [PATCH v5 6/7] mm/memcg: rework inactive_ratio calculation Konstantin Khlebnikov
2012-03-08 18:04 ` [PATCH v5 7/7] mm/memcg: use vm_swappiness from target memory cgroup Konstantin Khlebnikov
2012-03-09  1:33   ` KAMEZAWA Hiroyuki
2012-03-11 14:36 ` [PATCH v5 4.5/7] mm: optimize isolate_lru_pages() Konstantin Khlebnikov
2012-03-13  5:26   ` KAMEZAWA Hiroyuki

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).