public inbox for linux-kernel@vger.kernel.org
* [PATCH 00/12] Balancing the scan rate of major caches
@ 2005-12-01 10:18 Wu Fengguang
  2005-12-01 10:18 ` [PATCH 01/12] vm: kswapd incmin Wu Fengguang
                   ` (11 more replies)
  0 siblings, 12 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm

Hi all,

This patch balances the aging rates of active_list/inactive_list/slab.

It started out as an effort to enable adaptive read-ahead to handle large
numbers of concurrent readers. It then turned out to involve much more, and
deserves a standalone patchset that addresses the balancing problem as a whole.


The whole picture of balancing:

- In each node, inactive_list scan rates are synced with each other.
  This is done in the direct/kswapd reclaim path.

- In each zone, active_list scan rate always follows that of inactive_list.

- Slab cache scan rates always follow that of the current node.
  If the shrinkers are not NUMA aware, they will effectively sync their scan
  rates with that of the most scanned node.


The patches can be grouped as follows:

- balancing patches
vm-kswapd-incmin.patch
mm-balance-zone-aging-supporting-facilities.patch
mm-balance-zone-aging-in-direct-reclaim.patch
mm-balance-zone-aging-in-kswapd-reclaim.patch
mm-balance-slab-aging.patch
mm-balance-active-inactive-list-aging.patch

- pure code cleanups
mm-remove-unnecessary-variable-and-loop.patch
mm-remove-swap-cluster-max-from-scan-control.patch
mm-accumulate-nr-scanned-reclaimed-in-scan-control.patch
mm-turn-bool-variables-into-flags-in-scan-control.patch

- debug code
mm-page-reclaim-debug-traces.patch

- a minor fix
mm-scan-accounting-fix.patch

Thanks,
Wu Fengguang

-- 
Dept. Automation                University of Science and Technology of China

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 01/12] vm: kswapd incmin
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:33   ` Andrew Morton
  2005-12-01 10:18 ` [PATCH 02/12] mm: supporting variables and functions for balanced zone aging Wu Fengguang
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: vm-kswapd-incmin.patch --]
[-- Type: text/plain, Size: 4827 bytes --]

Explicitly teach kswapd about the incremental min logic instead of just scanning
all zones under the first low zone. This should apply more even pressure across
the zones.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---


 mm/vmscan.c |  111 ++++++++++++++++++++----------------------------------------
 1 files changed, 37 insertions(+), 74 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1310,101 +1310,65 @@ loop_again:
 	}
 
 	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
-		int end_zone = 0;	/* Inclusive.  0 = ZONE_DMA */
 		unsigned long lru_pages = 0;
+		int first_low_zone = 0;
+
+		all_zones_ok = 1;
+		sc.nr_scanned = 0;
+		sc.nr_reclaimed = 0;
+		sc.priority = priority;
+		sc.swap_cluster_max = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
 
 		/* The swap token gets in the way of swapout... */
 		if (!priority)
 			disable_swap_token();
 
-		all_zones_ok = 1;
-
-		if (nr_pages == 0) {
-			/*
-			 * Scan in the highmem->dma direction for the highest
-			 * zone which needs scanning
-			 */
-			for (i = pgdat->nr_zones - 1; i >= 0; i--) {
-				struct zone *zone = pgdat->node_zones + i;
+		/* Scan in the highmem->dma direction */
+		for (i = pgdat->nr_zones - 1; i >= 0; i--) {
+			struct zone *zone = pgdat->node_zones + i;
 
-				if (!populated_zone(zone))
-					continue;
+			if (!populated_zone(zone))
+				continue;
 
-				if (zone->all_unreclaimable &&
-						priority != DEF_PRIORITY)
+			if (nr_pages == 0) {	/* Not software suspend */
+				if (zone_watermark_ok(zone, order,
+					zone->pages_high, first_low_zone, 0))
 					continue;
 
-				if (!zone_watermark_ok(zone, order,
-						zone->pages_high, 0, 0)) {
-					end_zone = i;
-					goto scan;
-				}
+				all_zones_ok = 0;
+				if (first_low_zone < i)
+					first_low_zone = i;
 			}
-			goto out;
-		} else {
-			end_zone = pgdat->nr_zones - 1;
-		}
-scan:
-		for (i = 0; i <= end_zone; i++) {
-			struct zone *zone = pgdat->node_zones + i;
-
-			lru_pages += zone->nr_active + zone->nr_inactive;
-		}
-
-		/*
-		 * Now scan the zone in the dma->highmem direction, stopping
-		 * at the last zone which needs scanning.
-		 *
-		 * We do this because the page allocator works in the opposite
-		 * direction.  This prevents the page allocator from allocating
-		 * pages behind kswapd's direction of progress, which would
-		 * cause too much scanning of the lower zones.
-		 */
-		for (i = 0; i <= end_zone; i++) {
-			struct zone *zone = pgdat->node_zones + i;
-			int nr_slab;
-
-			if (!populated_zone(zone))
-				continue;
 
 			if (zone->all_unreclaimable && priority != DEF_PRIORITY)
 				continue;
 
-			if (nr_pages == 0) {	/* Not software suspend */
-				if (!zone_watermark_ok(zone, order,
-						zone->pages_high, end_zone, 0))
-					all_zones_ok = 0;
-			}
 			zone->temp_priority = priority;
 			if (zone->prev_priority > priority)
 				zone->prev_priority = priority;
-			sc.nr_scanned = 0;
-			sc.nr_reclaimed = 0;
-			sc.priority = priority;
-			sc.swap_cluster_max = nr_pages? nr_pages : SWAP_CLUSTER_MAX;
-			atomic_inc(&zone->reclaim_in_progress);
+			lru_pages += zone->nr_active + zone->nr_inactive;
+
 			shrink_zone(zone, &sc);
-			atomic_dec(&zone->reclaim_in_progress);
-			reclaim_state->reclaimed_slab = 0;
-			nr_slab = shrink_slab(sc.nr_scanned, GFP_KERNEL,
-						lru_pages);
-			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
-			total_reclaimed += sc.nr_reclaimed;
-			total_scanned += sc.nr_scanned;
-			if (zone->all_unreclaimable)
-				continue;
-			if (nr_slab == 0 && zone->pages_scanned >=
+
+			if (zone->pages_scanned >=
 				    (zone->nr_active + zone->nr_inactive) * 4)
 				zone->all_unreclaimable = 1;
-			/*
-			 * If we've done a decent amount of scanning and
-			 * the reclaim ratio is low, start doing writepage
-			 * even in laptop mode
-			 */
-			if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
-			    total_scanned > total_reclaimed+total_reclaimed/2)
-				sc.may_writepage = 1;
 		}
+		reclaim_state->reclaimed_slab = 0;
+		shrink_slab(sc.nr_scanned, GFP_KERNEL, lru_pages);
+		sc.nr_reclaimed += reclaim_state->reclaimed_slab;
+		total_reclaimed += sc.nr_reclaimed;
+		total_scanned += sc.nr_scanned;
+
+		/*
+		 * If we've done a decent amount of scanning and
+		 * the reclaim ratio is low, start doing writepage
+		 * even in laptop mode
+		 */
+		if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
+		    total_scanned > total_reclaimed+total_reclaimed/2)
+			sc.may_writepage = 1;
+
 		if (nr_pages && to_free > total_reclaimed)
 			continue;	/* swsusp: need to do more work */
 		if (all_zones_ok)
@@ -1425,7 +1389,6 @@ scan:
 		if ((total_reclaimed >= SWAP_CLUSTER_MAX) && (!nr_pages))
 			break;
 	}
-out:
 	for (i = 0; i < pgdat->nr_zones; i++) {
 		struct zone *zone = pgdat->node_zones + i;
 

--


* [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
  2005-12-01 10:18 ` [PATCH 01/12] vm: kswapd incmin Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:37   ` Andrew Morton
  2005-12-01 10:18 ` [PATCH 03/12] mm: balance zone aging in direct reclaim path Wu Fengguang
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-balance-zone-aging-supporting-facilities.patch --]
[-- Type: text/plain, Size: 5239 bytes --]

The zone aging rates are currently imbalanced: the gap can be as large as 3x,
which can severely damage read-ahead requests and shorten their effective
lifetime.

This patch adds three variables in struct zone
	- aging_total
	- aging_milestone
	- page_age
to keep track of page aging rate, and keep it in sync on page reclaim time.

The aging_total is just a per-zone counterpart to the per-cpu
pgscan_{kswapd,direct}_{zone name}. But it is not directly comparable between
zones, so aging_milestone/page_age are maintained on top of aging_total.

The page_age is a normalized value that can be compared directly between zones
with the helper macro pages_more_aged(). The goal of the balancing logic is to
keep this normalized value in sync across zones.

One can check the balanced aging progress by running:
                        tar c / | cat > /dev/null &
                        watch -n1 'grep "age " /proc/zoneinfo'

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 include/linux/mmzone.h |   14 ++++++++++++++
 mm/page_alloc.c        |   11 +++++++++++
 mm/vmscan.c            |   39 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 64 insertions(+)

--- linux.orig/include/linux/mmzone.h
+++ linux/include/linux/mmzone.h
@@ -149,6 +149,20 @@ struct zone {
 	unsigned long		pages_scanned;	   /* since last reclaim */
 	int			all_unreclaimable; /* All pages pinned */
 
+	/* Fields for balanced page aging:
+	 * aging_total     - The accumulated number of activities that may
+	 *                   cause page aging, that is, make some pages closer
+	 *                   to the tail of inactive_list.
+	 * aging_milestone - A snapshot of total_scan every time a full
+	 *                   inactive_list of pages become aged.
+	 * page_age        - A normalized value showing the percent of pages
+	 *                   have been aged.  It is compared between zones to
+	 *                   balance the rate of page aging.
+	 */
+	unsigned long		aging_total;
+	unsigned long		aging_milestone;
+	unsigned long		page_age;
+
 	/*
 	 * Does the allocator try to reclaim pages from the zone as soon
 	 * as it fails a watermark_ok() in __alloc_pages?
--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -123,6 +123,44 @@ static long total_memory;
 static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
 
+#ifdef CONFIG_HIGHMEM64G
+#define		PAGE_AGE_SHIFT  8
+#elif BITS_PER_LONG == 32
+#define		PAGE_AGE_SHIFT  12
+#elif BITS_PER_LONG == 64
+#define		PAGE_AGE_SHIFT  20
+#else
+#error unknown BITS_PER_LONG
+#endif
+#define		PAGE_AGE_MASK   ((1 << PAGE_AGE_SHIFT) - 1)
+
+/*
+ * The simplified code is: (a->page_age > b->page_age)
+ * The complexity deals with the wrap-around problem.
+ * Two page ages not close enough should also be ignored:
+ * they are out of sync and the comparison may be nonsense.
+ */
+#define pages_more_aged(a, b) 						\
+	((b->page_age - a->page_age) & PAGE_AGE_MASK) >			\
+			PAGE_AGE_MASK - (1 << (PAGE_AGE_SHIFT - 3))	\
+
+/*
+ * Keep track of the percent of cold pages that have been scanned / aged.
+ * It's not really ##%, but a high resolution normalized value.
+ */
+static inline void update_zone_age(struct zone *z, int nr_scan)
+{
+	unsigned long len = z->nr_inactive | 1;
+
+	z->aging_total += nr_scan;
+
+	if (z->aging_total - z->aging_milestone > len)
+		z->aging_milestone += len;
+
+	z->page_age = ((z->aging_total - z->aging_milestone)
+						<< PAGE_AGE_SHIFT) / len;
+}
+
 /*
  * Add a shrinker callback to be called from the vm
  */
@@ -888,6 +926,7 @@ static void shrink_cache(struct zone *zo
 					     &page_list, &nr_scan);
 		zone->nr_inactive -= nr_taken;
 		zone->pages_scanned += nr_scan;
+		update_zone_age(zone, nr_scan);
 		spin_unlock_irq(&zone->lru_lock);
 
 		if (nr_taken == 0)
--- linux.orig/mm/page_alloc.c
+++ linux/mm/page_alloc.c
@@ -1521,6 +1521,8 @@ void show_free_areas(void)
 			" active:%lukB"
 			" inactive:%lukB"
 			" present:%lukB"
+			" aging:%lukB"
+			" age:%lu"
 			" pages_scanned:%lu"
 			" all_unreclaimable? %s"
 			"\n",
@@ -1532,6 +1534,8 @@ void show_free_areas(void)
 			K(zone->nr_active),
 			K(zone->nr_inactive),
 			K(zone->present_pages),
+			K(zone->aging_total),
+			zone->page_age,
 			zone->pages_scanned,
 			(zone->all_unreclaimable ? "yes" : "no")
 			);
@@ -2145,6 +2149,9 @@ static void __init free_area_init_core(s
 		zone->nr_scan_inactive = 0;
 		zone->nr_active = 0;
 		zone->nr_inactive = 0;
+		zone->aging_total = 0;
+		zone->aging_milestone = 0;
+		zone->page_age = 0;
 		atomic_set(&zone->reclaim_in_progress, 0);
 		if (!size)
 			continue;
@@ -2293,6 +2300,8 @@ static int zoneinfo_show(struct seq_file
 			   "\n        high     %lu"
 			   "\n        active   %lu"
 			   "\n        inactive %lu"
+			   "\n        aging    %lu"
+			   "\n        age      %lu"
 			   "\n        scanned  %lu (a: %lu i: %lu)"
 			   "\n        spanned  %lu"
 			   "\n        present  %lu",
@@ -2302,6 +2311,8 @@ static int zoneinfo_show(struct seq_file
 			   zone->pages_high,
 			   zone->nr_active,
 			   zone->nr_inactive,
+			   zone->aging_total,
+			   zone->page_age,
 			   zone->pages_scanned,
 			   zone->nr_scan_active, zone->nr_scan_inactive,
 			   zone->spanned_pages,

--


* [PATCH 03/12] mm: balance zone aging in direct reclaim path
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
  2005-12-01 10:18 ` [PATCH 01/12] vm: kswapd incmin Wu Fengguang
  2005-12-01 10:18 ` [PATCH 02/12] mm: supporting variables and functions for balanced zone aging Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 04/12] mm: balance zone aging in kswapd " Wu Fengguang
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-balance-zone-aging-in-direct-reclaim.patch --]
[-- Type: text/plain, Size: 2369 bytes --]

Add 10 extra priorities to the direct page reclaim path, which makes for 10
rounds of balancing effort (reclaiming only from the least aged local/headless
zone) before falling back to the reclaim-all scheme.

Ten rounds should yield enough free pages in normal cases, which prevents
unnecessarily disturbing remote nodes. If we further restricted the first
round of page allocation to local zones, we might get what the early zone
reclaim patch wanted: memory affinity/locality.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   31 ++++++++++++++++++++++++++++---
 1 files changed, 28 insertions(+), 3 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1186,6 +1186,7 @@ static void
 shrink_caches(struct zone **zones, struct scan_control *sc)
 {
 	int i;
+	struct zone *z = NULL;
 
 	for (i = 0; zones[i] != NULL; i++) {
 		struct zone *zone = zones[i];
@@ -1200,11 +1201,34 @@ shrink_caches(struct zone **zones, struc
 		if (zone->prev_priority > sc->priority)
 			zone->prev_priority = sc->priority;
 
-		if (zone->all_unreclaimable && sc->priority != DEF_PRIORITY)
+		if (zone->all_unreclaimable && sc->priority < DEF_PRIORITY)
 			continue;	/* Let kswapd poll it */
 
+		/*
+		 * Balance page aging in local zones and following headless
+		 * zones.
+		 */
+		if (sc->priority > DEF_PRIORITY) {
+			if (zone->zone_pgdat != zones[0]->zone_pgdat) {
+				cpumask_t cpu = node_to_cpumask(
+						zone->zone_pgdat->node_id);
+				if (!cpus_empty(cpu))
+					break;
+			}
+
+			if (!z)
+				z = zone;
+			else if (pages_more_aged(z, zone))
+				z = zone;
+
+			continue;
+		}
+
 		shrink_zone(zone, sc);
 	}
+
+	if (z)
+		shrink_zone(z, sc);
 }
  
 /*
@@ -1248,7 +1272,8 @@ int try_to_free_pages(struct zone **zone
 		lru_pages += zone->nr_active + zone->nr_inactive;
 	}
 
-	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
+	/* The added 10 priorities are for scan rate balancing */
+	for (priority = DEF_PRIORITY + 10; priority >= 0; priority--) {
 		sc.nr_mapped = read_page_state(nr_mapped);
 		sc.nr_scanned = 0;
 		sc.nr_reclaimed = 0;
@@ -1282,7 +1307,7 @@ int try_to_free_pages(struct zone **zone
 		}
 
 		/* Take a nap, wait for some writeback to complete */
-		if (sc.nr_scanned && priority < DEF_PRIORITY - 2)
+		if (sc.nr_scanned && priority < DEF_PRIORITY)
 			blk_congestion_wait(WRITE, HZ/10);
 	}
 out:

--


* [PATCH 04/12] mm: balance zone aging in kswapd reclaim path
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (2 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 03/12] mm: balance zone aging in direct reclaim path Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 05/12] mm: balance slab aging Wu Fengguang
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-balance-zone-aging-in-kswapd-reclaim.patch --]
[-- Type: text/plain, Size: 2852 bytes --]

The kswapd reclaim has had a single goal:
	reclaim from zones to make their watermarks ok.

Now add another, weak goal (it will not set all_zones_ok=0):
	reclaim from the least aged zone to help balance the aging rates.

Two major aspects of this algorithm:
- reclaim the least aged zone until it catches up with the most aged zone
- reclaim for a weaker watermark by calling zone_watermark_ok() with classzone_idx=0

That guarantees that reclaims-for-aging outnumber reclaims-for-watermark if
there is ever a big imbalance, thus eliminating the chance of growing gaps.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   39 ++++++++++++++++++++++++++++++---------
 1 files changed, 30 insertions(+), 9 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1356,6 +1356,8 @@ static int balance_pgdat(pg_data_t *pgda
 	int total_scanned, total_reclaimed;
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	struct scan_control sc;
+	struct zone *youngest_zone = NULL;
+	struct zone *oldest_zone = NULL;
 
 loop_again:
 	total_scanned = 0;
@@ -1371,11 +1373,20 @@ loop_again:
 		struct zone *zone = pgdat->node_zones + i;
 
 		zone->temp_priority = DEF_PRIORITY;
+
+		if (zone->present_pages == 0)
+			continue;
+
+		if (!oldest_zone)
+			youngest_zone = oldest_zone = zone;
+		else if (pages_more_aged(zone, oldest_zone))
+			oldest_zone = zone;
+		else if (pages_more_aged(youngest_zone, zone))
+			youngest_zone = zone;
 	}
 
 	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
 		unsigned long lru_pages = 0;
-		int first_low_zone = 0;
 
 		all_zones_ok = 1;
 		sc.nr_scanned = 0;
@@ -1387,21 +1398,31 @@ loop_again:
 		if (!priority)
 			disable_swap_token();
 
-		/* Scan in the highmem->dma direction */
-		for (i = pgdat->nr_zones - 1; i >= 0; i--) {
+		/*
+		 * Now scan the zone in the dma->highmem direction, stopping
+		 * at the last zone which needs scanning.
+		 *
+		 * We do this because the page allocator works in the opposite
+		 * direction.  This prevents the page allocator from allocating
+		 * pages behind kswapd's direction of progress, which would
+		 * cause too much scanning of the lower zones.
+		 */
+		for (i = 0; i < pgdat->nr_zones; i++) {
 			struct zone *zone = pgdat->node_zones + i;
 
 			if (!populated_zone(zone))
 				continue;
 
 			if (nr_pages == 0) {	/* Not software suspend */
-				if (zone_watermark_ok(zone, order,
-					zone->pages_high, first_low_zone, 0))
+				if (!zone_watermark_ok(zone, order,
+							zone->pages_high,
+							0, 0)) {
+					all_zones_ok = 0;
+				} else if (zone == youngest_zone &&
+						pages_more_aged(oldest_zone,
+								youngest_zone)) {
+				} else
 					continue;
-
-				all_zones_ok = 0;
-				if (first_low_zone < i)
-					first_low_zone = i;
 			}
 
 			if (zone->all_unreclaimable && priority != DEF_PRIORITY)

--


* [PATCH 05/12] mm: balance slab aging
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (3 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 04/12] mm: balance zone aging in kswapd " Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 06/12] mm: balance active/inactive list scan rates Wu Fengguang
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-balance-slab-aging.patch --]
[-- Type: text/plain, Size: 7965 bytes --]

The current slab shrinking code is way too fragile.
Let it manage its aging pace by itself, and provide a simple and robust interface.

The design considerations:
- use the same syncing facilities as that of the zones
- keep the age of slabs in line with that of the largest zone
  this in effect makes the aging rate of slabs follow that of the most aged node.

- reserve a minimal number of unused slabs
  the size of reservation depends on vm pressure

- shrink more slab caches only when vm pressure is high
  the old logic, `mmap pages found' - `shrink more caches' - `avoid swapping',
  does not sound quite logical, so that code is removed.

- let sc->nr_scanned record the exact number of cold pages scanned
  it is no longer used by the slab cache shrinking algorithm, but is useful
  for other algorithms (e.g. the active_list/inactive_list balancing).

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 include/linux/mm.h |    4 +
 mm/vmscan.c        |  118 +++++++++++++++++++++++------------------------------
 2 files changed, 55 insertions(+), 67 deletions(-)

--- linux.orig/include/linux/mm.h
+++ linux/include/linux/mm.h
@@ -798,7 +798,9 @@ struct shrinker {
 	shrinker_t		shrinker;
 	struct list_head	list;
 	int			seeks;	/* seeks to recreate an obj */
-	long			nr;	/* objs pending delete */
+	unsigned long		aging_total;
+	unsigned long		aging_milestone;
+	unsigned long		page_age;
 	struct shrinker_stats	*s_stats;
 };
 
--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -161,6 +161,18 @@ static inline void update_zone_age(struc
 						<< PAGE_AGE_SHIFT) / len;
 }
 
+static inline void update_slab_age(struct shrinker *s,
+					unsigned long len, int nr_scan)
+{
+	s->aging_total += nr_scan;
+
+	if (s->aging_total - s->aging_milestone > len)
+		s->aging_milestone += len;
+
+	s->page_age = ((s->aging_total - s->aging_milestone)
+						<< PAGE_AGE_SHIFT) / len;
+}
+
 /*
  * Add a shrinker callback to be called from the vm
  */
@@ -172,7 +184,9 @@ struct shrinker *set_shrinker(int seeks,
         if (shrinker) {
 	        shrinker->shrinker = theshrinker;
 	        shrinker->seeks = seeks;
-	        shrinker->nr = 0;
+	        shrinker->aging_total = 0;
+	        shrinker->aging_milestone = 0;
+	        shrinker->page_age = 0;
 		shrinker->s_stats = alloc_percpu(struct shrinker_stats);
 		if (!shrinker->s_stats) {
 			kfree(shrinker);
@@ -208,80 +222,61 @@ EXPORT_SYMBOL(remove_shrinker);
  * percentages of the lru and ageable caches.  This should balance the seeks
  * generated by these structures.
  *
- * If the vm encounted mapped pages on the LRU it increase the pressure on
- * slab to avoid swapping.
- *
- * We do weird things to avoid (scanned*seeks*entries) overflowing 32 bits.
- *
- * `lru_pages' represents the number of on-LRU pages in all the zones which
- * are eligible for the caller's allocation attempt.  It is used for balancing
- * slab reclaim versus page reclaim.
+ * If the vm pressure is high, shrink the slabs more.
  *
  * Returns the number of slab objects which we shrunk.
  */
-static int shrink_slab(unsigned long scanned, gfp_t gfp_mask,
-			unsigned long lru_pages)
+static int shrink_slab(gfp_t gfp_mask)
 {
 	struct shrinker *shrinker;
-	int ret = 0;
-
-	if (scanned == 0)
-		scanned = SWAP_CLUSTER_MAX;
+	struct pglist_data *pgdat;
+	struct zone *zone;
+	int n;
 
 	if (!down_read_trylock(&shrinker_rwsem))
 		return 1;	/* Assume we'll be able to shrink next time */
 
-	list_for_each_entry(shrinker, &shrinker_list, list) {
-		unsigned long long delta;
-		unsigned long total_scan;
-		unsigned long max_pass = (*shrinker->shrinker)(0, gfp_mask);
-
-		delta = (4 * scanned) / shrinker->seeks;
-		delta *= max_pass;
-		do_div(delta, lru_pages + 1);
-		shrinker->nr += delta;
-		if (shrinker->nr < 0) {
-			printk(KERN_ERR "%s: nr=%ld\n",
-					__FUNCTION__, shrinker->nr);
-			shrinker->nr = max_pass;
-		}
+	/* find the major zone for the slabs to catch up age with */
+	pgdat = NODE_DATA(numa_node_id());
+	zone = pgdat->node_zones;
+	for (n = 1; n < pgdat->nr_zones; n++) {
+		struct zone *z = pgdat->node_zones + n;
 
-		/*
-		 * Avoid risking looping forever due to too large nr value:
-		 * never try to free more than twice the estimate number of
-		 * freeable entries.
-		 */
-		if (shrinker->nr > max_pass * 2)
-			shrinker->nr = max_pass * 2;
-
-		total_scan = shrinker->nr;
-		shrinker->nr = 0;
+		if (zone->present_pages < z->present_pages)
+			zone = z;
+	}
 
-		while (total_scan >= SHRINK_BATCH) {
-			long this_scan = SHRINK_BATCH;
-			int shrink_ret;
+	n = 0;
+	list_for_each_entry(shrinker, &shrinker_list, list) {
+		while (pages_more_aged(zone, shrinker)) {
 			int nr_before;
+			int nr_after;
 
 			nr_before = (*shrinker->shrinker)(0, gfp_mask);
-			shrink_ret = (*shrinker->shrinker)(this_scan, gfp_mask);
-			if (shrink_ret == -1)
+			if (nr_before <= SHRINK_BATCH * zone->prev_priority)
+				break;
+
+			nr_after = (*shrinker->shrinker)(SHRINK_BATCH, gfp_mask);
+			if (nr_after == -1)
 				break;
-			if (shrink_ret < nr_before) {
-				ret += nr_before - shrink_ret;
-				shrinker_stat_add(shrinker, nr_freed,
-					(nr_before - shrink_ret));
+
+			if (nr_after < nr_before) {
+				int nr_freed = nr_before - nr_after;
+
+				n += nr_freed;
+				shrinker_stat_add(shrinker, nr_freed, nr_freed);
 			}
-			shrinker_stat_add(shrinker, nr_req, this_scan);
-			mod_page_state(slabs_scanned, this_scan);
-			total_scan -= this_scan;
+			shrinker_stat_add(shrinker, nr_req, SHRINK_BATCH);
+			mod_page_state(slabs_scanned, SHRINK_BATCH);
+			update_slab_age(shrinker, nr_before * DEF_PRIORITY,
+						SHRINK_BATCH * shrinker->seeks *
+							zone->prev_priority);
 
 			cond_resched();
 		}
-
-		shrinker->nr += total_scan;
 	}
 	up_read(&shrinker_rwsem);
-	return ret;
+	return n;
 }
 
 /* Called without lock on whether page is mapped, so answer is unstable */
@@ -484,11 +479,6 @@ static int shrink_list(struct list_head 
 
 		BUG_ON(PageActive(page));
 
-		sc->nr_scanned++;
-		/* Double the slab pressure for mapped and swapcache pages */
-		if (page_mapped(page) || PageSwapCache(page))
-			sc->nr_scanned++;
-
 		if (PageWriteback(page))
 			goto keep_locked;
 
@@ -933,6 +923,7 @@ static void shrink_cache(struct zone *zo
 			goto done;
 
 		max_scan -= nr_scan;
+		sc->nr_scanned += nr_scan;
 		if (current_is_kswapd())
 			mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
 		else
@@ -1251,7 +1242,6 @@ int try_to_free_pages(struct zone **zone
 	int total_scanned = 0, total_reclaimed = 0;
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	struct scan_control sc;
-	unsigned long lru_pages = 0;
 	int i;
 
 	delay_prefetch();
@@ -1269,7 +1259,6 @@ int try_to_free_pages(struct zone **zone
 			continue;
 
 		zone->temp_priority = DEF_PRIORITY;
-		lru_pages += zone->nr_active + zone->nr_inactive;
 	}
 
 	/* The added 10 priorities are for scan rate balancing */
@@ -1282,7 +1271,7 @@ int try_to_free_pages(struct zone **zone
 		if (!priority)
 			disable_swap_token();
 		shrink_caches(zones, &sc);
-		shrink_slab(sc.nr_scanned, gfp_mask, lru_pages);
+		shrink_slab(gfp_mask);
 		if (reclaim_state) {
 			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
 			reclaim_state->reclaimed_slab = 0;
@@ -1386,8 +1375,6 @@ loop_again:
 	}
 
 	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
-		unsigned long lru_pages = 0;
-
 		all_zones_ok = 1;
 		sc.nr_scanned = 0;
 		sc.nr_reclaimed = 0;
@@ -1431,7 +1418,6 @@ loop_again:
 			zone->temp_priority = priority;
 			if (zone->prev_priority > priority)
 				zone->prev_priority = priority;
-			lru_pages += zone->nr_active + zone->nr_inactive;
 
 			shrink_zone(zone, &sc);
 
@@ -1440,7 +1426,7 @@ loop_again:
 				zone->all_unreclaimable = 1;
 		}
 		reclaim_state->reclaimed_slab = 0;
-		shrink_slab(sc.nr_scanned, GFP_KERNEL, lru_pages);
+		shrink_slab(GFP_KERNEL);
 		sc.nr_reclaimed += reclaim_state->reclaimed_slab;
 		total_reclaimed += sc.nr_reclaimed;
 		total_scanned += sc.nr_scanned;

--


* [PATCH 06/12] mm: balance active/inactive list scan rates
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (4 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 05/12] mm: balance slab aging Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 11:39   ` Peter Zijlstra
  2005-12-01 10:18 ` [PATCH 07/12] mm: remove unnecessary variable and loop Wu Fengguang
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm

[-- Attachment #1: mm-balance-active-inactive-list-aging.patch --]
[-- Type: text/plain, Size: 9397 bytes --]

shrink_zone() has two major design goals:
1) let active/inactive lists have equal scan rates
2) do the scans in small chunks

But the implementation has some problems:
- it is reluctant to scan small zones
  the callers often have to dip into low priorities to free memory.

- the balance is quite rough
  the break statement in the loop defeats it.

- it may scan too few pages in one batch
  refill_inactive_zone() can be called twice to scan 32 and then just 1 page.

The new design:
1) keep perfect balance
   let active_list follow inactive_list in scan rate

2) always scan in SWAP_CLUSTER_MAX sized chunks
   simple and efficient

3) will scan at least one chunk
   the expected behavior from the callers

The perfect balance may or may not yield better performance, though it
a) is a more understandable and dependable behavior
b) together with inter-zone balancing, makes memory behave consistently
   across zones

The atomic reclaim_in_progress is there to prevent most concurrent reclaims.
If concurrent reclaims do happen, there will be no fatal errors.


I tested the patch with the following commands:
	dd if=/dev/zero of=hot bs=1M seek=800 count=1
	dd if=/dev/zero of=cold bs=1M seek=50000 count=1
	./test-aging.sh; ./active-inactive-aging-rate.sh

Before the patch:
-----------------------------------------------------------------------------
active/inactive sizes on 2.6.14-2-686-smp:
0/1000          = 0 / 1241
563/1000        = 73343 / 130108
887/1000        = 137348 / 154816

active/inactive scan rates:
dma      38/1000        = 7731 / (198924 + 0)
normal   465/1000       = 2979780 / (6394740 + 0)
high     680/1000       = 4354230 / (6396786 + 0)

             total       used       free     shared    buffers     cached
Mem:          2027       1978         49          0          4       1923
-/+ buffers/cache:         49       1977
Swap:            0          0          0
-----------------------------------------------------------------------------

After the patch, the scan rates and the size ratios are kept roughly the same
for all zones:
-----------------------------------------------------------------------------
active/inactive sizes on 2.6.15-rc3-mm1:
0/1000          = 0 / 961
236/1000        = 38385 / 162429
319/1000        = 70607 / 221101

active/inactive scan rates:
dma      0/1000         = 0 / (42176 + 0)
normal   234/1000       = 1714688 / (7303456 + 1088)
high     317/1000       = 3151936 / (9933792 + 96)
             
             total       used       free     shared    buffers     cached
Mem:          2020       1969         50          0          5       1908
-/+ buffers/cache:         54       1965
Swap:            0          0          0
-----------------------------------------------------------------------------

script test-aging.sh:
------------------------------
#!/bin/zsh
cp cold /dev/null&

while {pidof cp > /dev/null};
do
        cp hot /dev/null
done
------------------------------

script active-inactive-aging-rate.sh:
-----------------------------------------------------------------------------
#!/bin/bash

echo active/inactive sizes on `uname -r`:
egrep '(active|inactive)' /proc/zoneinfo |
while true
do
	read name value
	[[ -z $name ]] && break
	eval $name=$value
	[[ $name = "inactive" ]] && echo -e "$((active * 1000 / (1 + inactive)))/1000  \t= $active / $inactive"
done

while true
do
	read name value
	[[ -z $name ]] && break
	eval $name=$value
done < /proc/vmstat

echo
echo active/inactive scan rates:
echo -e "dma \t $((pgrefill_dma * 1000 / (1 + pgscan_kswapd_dma + pgscan_direct_dma)))/1000 \t= $pgrefill_dma / ($pgscan_kswapd_dma + $pgscan_direct_dma)"
echo -e "normal \t $((pgrefill_normal * 1000 / (1 + pgscan_kswapd_normal + pgscan_direct_normal)))/1000 \t= $pgrefill_normal / ($pgscan_kswapd_normal + $pgscan_direct_normal)"
echo -e "high \t $((pgrefill_high * 1000 / (1 + pgscan_kswapd_high + pgscan_direct_high)))/1000 \t= $pgrefill_high / ($pgscan_kswapd_high + $pgscan_direct_high)"

echo
free -m
-----------------------------------------------------------------------------

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 include/linux/mmzone.h |    3 --
 include/linux/swap.h   |    2 -
 mm/page_alloc.c        |    5 +---
 mm/vmscan.c            |   52 +++++++++++++++++++++++++++----------------------
 4 files changed, 33 insertions(+), 29 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -911,7 +911,7 @@ static void shrink_cache(struct zone *zo
 		int nr_scan;
 		int nr_freed;
 
-		nr_taken = isolate_lru_pages(sc->swap_cluster_max,
+		nr_taken = isolate_lru_pages(sc->nr_to_scan,
 					     &zone->inactive_list,
 					     &page_list, &nr_scan);
 		zone->nr_inactive -= nr_taken;
@@ -1105,56 +1105,56 @@ refill_inactive_zone(struct zone *zone, 
 
 /*
  * This is a basic per-zone page freer.  Used by both kswapd and direct reclaim.
+ * The reclaim process:
+ * a) scan always in batch of SWAP_CLUSTER_MAX pages
+ * b) scan inactive list at least one batch
+ * c) balance the scan rate of active/inactive list
+ * d) finish on either scanned or reclaimed enough pages
  */
 static void
 shrink_zone(struct zone *zone, struct scan_control *sc)
 {
+	unsigned long long next_scan_active;
 	unsigned long nr_active;
 	unsigned long nr_inactive;
 
 	atomic_inc(&zone->reclaim_in_progress);
 
+	next_scan_active = sc->nr_scanned;
+
 	/*
 	 * Add one to `nr_to_scan' just to make sure that the kernel will
 	 * slowly sift through the active list.
 	 */
-	zone->nr_scan_active += (zone->nr_active >> sc->priority) + 1;
-	nr_active = zone->nr_scan_active;
-	if (nr_active >= sc->swap_cluster_max)
-		zone->nr_scan_active = 0;
-	else
-		nr_active = 0;
-
-	zone->nr_scan_inactive += (zone->nr_inactive >> sc->priority) + 1;
-	nr_inactive = zone->nr_scan_inactive;
-	if (nr_inactive >= sc->swap_cluster_max)
-		zone->nr_scan_inactive = 0;
-	else
-		nr_inactive = 0;
+	nr_active = zone->nr_scan_active + 1;
+	nr_inactive = (zone->nr_inactive >> sc->priority) + SWAP_CLUSTER_MAX;
+	nr_inactive &= ~(SWAP_CLUSTER_MAX - 1);
 
+	sc->nr_to_scan = SWAP_CLUSTER_MAX;
 	sc->nr_to_reclaim = sc->swap_cluster_max;
 
-	while (nr_active || nr_inactive) {
-		if (nr_active) {
-			sc->nr_to_scan = min(nr_active,
-					(unsigned long)sc->swap_cluster_max);
-			nr_active -= sc->nr_to_scan;
+	while (nr_active >= SWAP_CLUSTER_MAX * 1024 || nr_inactive) {
+		if (nr_active >= SWAP_CLUSTER_MAX * 1024) {
+			nr_active -= SWAP_CLUSTER_MAX * 1024;
 			refill_inactive_zone(zone, sc);
 		}
 
 		if (nr_inactive) {
-			sc->nr_to_scan = min(nr_inactive,
-					(unsigned long)sc->swap_cluster_max);
-			nr_inactive -= sc->nr_to_scan;
+			nr_inactive -= SWAP_CLUSTER_MAX;
 			shrink_cache(zone, sc);
 			if (sc->nr_to_reclaim <= 0)
 				break;
 		}
 	}
 
-	throttle_vm_writeout();
+	next_scan_active = (sc->nr_scanned - next_scan_active) * 1024ULL *
+					(unsigned long long)zone->nr_active;
+	do_div(next_scan_active, zone->nr_inactive | 1);
+	zone->nr_scan_active = nr_active + (unsigned long)next_scan_active;
 
 	atomic_dec(&zone->reclaim_in_progress);
+
+	throttle_vm_writeout();
 }
 
 /*
@@ -1195,6 +1195,9 @@ shrink_caches(struct zone **zones, struc
 		if (zone->all_unreclaimable && sc->priority < DEF_PRIORITY)
 			continue;	/* Let kswapd poll it */
 
+		if (atomic_read(&zone->reclaim_in_progress))
+			continue;
+
 		/*
 		 * Balance page aging in local zones and following headless
 		 * zones.
@@ -1415,6 +1418,9 @@ loop_again:
 			if (zone->all_unreclaimable && priority != DEF_PRIORITY)
 				continue;
 
+			if (atomic_read(&zone->reclaim_in_progress))
+				continue;
+
 			zone->temp_priority = priority;
 			if (zone->prev_priority > priority)
 				zone->prev_priority = priority;
--- linux.orig/mm/page_alloc.c
+++ linux/mm/page_alloc.c
@@ -2146,7 +2146,6 @@ static void __init free_area_init_core(s
 		INIT_LIST_HEAD(&zone->active_list);
 		INIT_LIST_HEAD(&zone->inactive_list);
 		zone->nr_scan_active = 0;
-		zone->nr_scan_inactive = 0;
 		zone->nr_active = 0;
 		zone->nr_inactive = 0;
 		zone->aging_total = 0;
@@ -2302,7 +2301,7 @@ static int zoneinfo_show(struct seq_file
 			   "\n        inactive %lu"
 			   "\n        aging    %lu"
 			   "\n        age      %lu"
-			   "\n        scanned  %lu (a: %lu i: %lu)"
+			   "\n        scanned  %lu (a: %lu)"
 			   "\n        spanned  %lu"
 			   "\n        present  %lu",
 			   zone->free_pages,
@@ -2314,7 +2313,7 @@ static int zoneinfo_show(struct seq_file
 			   zone->aging_total,
 			   zone->page_age,
 			   zone->pages_scanned,
-			   zone->nr_scan_active, zone->nr_scan_inactive,
+			   zone->nr_scan_active / 1024,
 			   zone->spanned_pages,
 			   zone->present_pages);
 		seq_printf(m,
--- linux.orig/include/linux/swap.h
+++ linux/include/linux/swap.h
@@ -111,7 +111,7 @@ enum {
 	SWP_SCANNING	= (1 << 8),	/* refcount in scan_swap_map */
 };
 
-#define SWAP_CLUSTER_MAX 32
+#define SWAP_CLUSTER_MAX 32		/* must be power of 2 */
 
 #define SWAP_MAP_MAX	0x7fff
 #define SWAP_MAP_BAD	0x8000
--- linux.orig/include/linux/mmzone.h
+++ linux/include/linux/mmzone.h
@@ -142,8 +142,7 @@ struct zone {
 	spinlock_t		lru_lock;	
 	struct list_head	active_list;
 	struct list_head	inactive_list;
-	unsigned long		nr_scan_active;
-	unsigned long		nr_scan_inactive;
+	unsigned long		nr_scan_active;	/* x1024 to be more precise */
 	unsigned long		nr_active;
 	unsigned long		nr_inactive;
 	unsigned long		pages_scanned;	   /* since last reclaim */

--

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 07/12] mm: remove unnecessary variable and loop
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (5 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 06/12] mm: balance active/inactive list scan rates Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 08/12] mm: remove swap_cluster_max from scan_control Wu Fengguang
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-remove-unnecessary-variable-and-loop.patch --]
[-- Type: text/plain, Size: 3826 bytes --]

Now that shrink_zone() always passes a single SWAP_CLUSTER_MAX chunk,
shrink_cache() and refill_inactive_zone() no longer need their loops.

Simplify them to scan one chunk per call.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   92 ++++++++++++++++++++++++++++--------------------------------
 1 files changed, 43 insertions(+), 49 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -899,63 +899,58 @@ static void shrink_cache(struct zone *zo
 {
 	LIST_HEAD(page_list);
 	struct pagevec pvec;
-	int max_scan = sc->nr_to_scan;
+	struct page *page;
+	int nr_taken;
+	int nr_scan;
+	int nr_freed;
 
 	pagevec_init(&pvec, 1);
 
 	lru_add_drain();
 	spin_lock_irq(&zone->lru_lock);
-	while (max_scan > 0) {
-		struct page *page;
-		int nr_taken;
-		int nr_scan;
-		int nr_freed;
-
-		nr_taken = isolate_lru_pages(sc->nr_to_scan,
-					     &zone->inactive_list,
-					     &page_list, &nr_scan);
-		zone->nr_inactive -= nr_taken;
-		zone->pages_scanned += nr_scan;
-		update_zone_age(zone, nr_scan);
-		spin_unlock_irq(&zone->lru_lock);
+	nr_taken = isolate_lru_pages(sc->nr_to_scan,
+				     &zone->inactive_list,
+				     &page_list, &nr_scan);
+	zone->nr_inactive -= nr_taken;
+	zone->pages_scanned += nr_scan;
+	update_zone_age(zone, nr_scan);
+	spin_unlock_irq(&zone->lru_lock);
 
-		if (nr_taken == 0)
-			goto done;
+	if (nr_taken == 0)
+		return;
 
-		max_scan -= nr_scan;
-		sc->nr_scanned += nr_scan;
-		if (current_is_kswapd())
-			mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
-		else
-			mod_page_state_zone(zone, pgscan_direct, nr_scan);
-		nr_freed = shrink_list(&page_list, sc);
-		if (current_is_kswapd())
-			mod_page_state(kswapd_steal, nr_freed);
-		mod_page_state_zone(zone, pgsteal, nr_freed);
-		sc->nr_to_reclaim -= nr_freed;
+	sc->nr_scanned += nr_scan;
+	if (current_is_kswapd())
+		mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
+	else
+		mod_page_state_zone(zone, pgscan_direct, nr_scan);
+	nr_freed = shrink_list(&page_list, sc);
+	if (current_is_kswapd())
+		mod_page_state(kswapd_steal, nr_freed);
+	mod_page_state_zone(zone, pgsteal, nr_freed);
+	sc->nr_to_reclaim -= nr_freed;
 
-		spin_lock_irq(&zone->lru_lock);
-		/*
-		 * Put back any unfreeable pages.
-		 */
-		while (!list_empty(&page_list)) {
-			page = lru_to_page(&page_list);
-			if (TestSetPageLRU(page))
-				BUG();
-			list_del(&page->lru);
-			if (PageActive(page))
-				add_page_to_active_list(zone, page);
-			else
-				add_page_to_inactive_list(zone, page);
-			if (!pagevec_add(&pvec, page)) {
-				spin_unlock_irq(&zone->lru_lock);
-				__pagevec_release(&pvec);
-				spin_lock_irq(&zone->lru_lock);
-			}
+	spin_lock_irq(&zone->lru_lock);
+	/*
+	 * Put back any unfreeable pages.
+	 */
+	while (!list_empty(&page_list)) {
+		page = lru_to_page(&page_list);
+		if (TestSetPageLRU(page))
+			BUG();
+		list_del(&page->lru);
+		if (PageActive(page))
+			add_page_to_active_list(zone, page);
+		else
+			add_page_to_inactive_list(zone, page);
+		if (!pagevec_add(&pvec, page)) {
+			spin_unlock_irq(&zone->lru_lock);
+			__pagevec_release(&pvec);
+			spin_lock_irq(&zone->lru_lock);
 		}
-  	}
+	}
 	spin_unlock_irq(&zone->lru_lock);
-done:
+
 	pagevec_release(&pvec);
 }
 
@@ -982,7 +977,6 @@ refill_inactive_zone(struct zone *zone, 
 	int pgmoved;
 	int pgdeactivate = 0;
 	int pgscanned;
-	int nr_pages = sc->nr_to_scan;
 	LIST_HEAD(l_hold);	/* The pages which were snipped off */
 	LIST_HEAD(l_inactive);	/* Pages to go onto the inactive_list */
 	LIST_HEAD(l_active);	/* Pages to go onto the active_list */
@@ -995,7 +989,7 @@ refill_inactive_zone(struct zone *zone, 
 
 	lru_add_drain();
 	spin_lock_irq(&zone->lru_lock);
-	pgmoved = isolate_lru_pages(nr_pages, &zone->active_list,
+	pgmoved = isolate_lru_pages(sc->nr_to_scan, &zone->active_list,
 				    &l_hold, &pgscanned);
 	zone->pages_scanned += pgscanned;
 	zone->nr_active -= pgmoved;

--


* [PATCH 08/12] mm: remove swap_cluster_max from scan_control
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (6 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 07/12] mm: remove unnecessary variable and loop Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 09/12] mm: accumulate sc.nr_scanned/sc.nr_reclaimed Wu Fengguang
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-remove-swap-cluster-max-from-scan-control.patch --]
[-- Type: text/plain, Size: 2354 bytes --]

The use of sc.swap_cluster_max is weird and redundant.

The callers should just set sc.priority/sc.nr_to_reclaim, and let
shrink_zone() decide the proper loop parameters.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   15 ++++-----------
 1 files changed, 4 insertions(+), 11 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -76,12 +76,6 @@ struct scan_control {
 
 	/* Can pages be swapped as part of reclaim? */
 	int may_swap;
-
-	/* This context's SWAP_CLUSTER_MAX. If freeing memory for
-	 * suspend, we effectively ignore SWAP_CLUSTER_MAX.
-	 * In this context, it doesn't matter that we scan the
-	 * whole list at once. */
-	int swap_cluster_max;
 };
 
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -1125,7 +1119,6 @@ shrink_zone(struct zone *zone, struct sc
 	nr_inactive &= ~(SWAP_CLUSTER_MAX - 1);
 
 	sc->nr_to_scan = SWAP_CLUSTER_MAX;
-	sc->nr_to_reclaim = sc->swap_cluster_max;
 
 	while (nr_active >= SWAP_CLUSTER_MAX * 1024 || nr_inactive) {
 		if (nr_active >= SWAP_CLUSTER_MAX * 1024) {
@@ -1264,7 +1257,7 @@ int try_to_free_pages(struct zone **zone
 		sc.nr_scanned = 0;
 		sc.nr_reclaimed = 0;
 		sc.priority = priority;
-		sc.swap_cluster_max = SWAP_CLUSTER_MAX;
+		sc.nr_to_reclaim = SWAP_CLUSTER_MAX;
 		if (!priority)
 			disable_swap_token();
 		shrink_caches(zones, &sc);
@@ -1275,7 +1268,7 @@ int try_to_free_pages(struct zone **zone
 		}
 		total_scanned += sc.nr_scanned;
 		total_reclaimed += sc.nr_reclaimed;
-		if (total_reclaimed >= sc.swap_cluster_max) {
+		if (total_reclaimed >= SWAP_CLUSTER_MAX) {
 			ret = 1;
 			goto out;
 		}
@@ -1287,7 +1280,7 @@ int try_to_free_pages(struct zone **zone
 		 * that's undesirable in laptop mode, where we *want* lumpy
 		 * writeout.  So in laptop mode, write out the whole world.
 		 */
-		if (total_scanned > sc.swap_cluster_max + sc.swap_cluster_max/2) {
+		if (total_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
 			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
 			sc.may_writepage = 1;
 		}
@@ -1376,7 +1369,7 @@ loop_again:
 		sc.nr_scanned = 0;
 		sc.nr_reclaimed = 0;
 		sc.priority = priority;
-		sc.swap_cluster_max = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
+		sc.nr_to_reclaim = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
 
 		/* The swap token gets in the way of swapout... */
 		if (!priority)

--


* [PATCH 09/12] mm: accumulate sc.nr_scanned/sc.nr_reclaimed
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (7 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 08/12] mm: remove swap_cluster_max from scan_control Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 10/12] mm: merge sc.may_writepage and sc.may_swap into sc.flags Wu Fengguang
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-accumulate-nr-scanned-reclaimed-in-scan-control.patch --]
[-- Type: text/plain, Size: 4573 bytes --]

Now that there is no need to track nr_scanned/nr_reclaimed for every single
round of shrink_zone(), remove the total_scanned/total_reclaimed variables
and let sc.nr_scanned/sc.nr_reclaimed accumulate between rounds.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   36 ++++++++++++++----------------------
 1 files changed, 14 insertions(+), 22 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1229,7 +1229,6 @@ int try_to_free_pages(struct zone **zone
 {
 	int priority;
 	int ret = 0;
-	int total_scanned = 0, total_reclaimed = 0;
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	struct scan_control sc;
 	int i;
@@ -1239,6 +1238,8 @@ int try_to_free_pages(struct zone **zone
 	sc.gfp_mask = gfp_mask;
 	sc.may_writepage = 0;
 	sc.may_swap = 1;
+	sc.nr_scanned = 0;
+	sc.nr_reclaimed = 0;
 
 	inc_page_state(allocstall);
 
@@ -1254,8 +1255,6 @@ int try_to_free_pages(struct zone **zone
 	/* The added 10 priorities are for scan rate balancing */
 	for (priority = DEF_PRIORITY + 10; priority >= 0; priority--) {
 		sc.nr_mapped = read_page_state(nr_mapped);
-		sc.nr_scanned = 0;
-		sc.nr_reclaimed = 0;
 		sc.priority = priority;
 		sc.nr_to_reclaim = SWAP_CLUSTER_MAX;
 		if (!priority)
@@ -1266,9 +1265,7 @@ int try_to_free_pages(struct zone **zone
 			sc.nr_reclaimed += reclaim_state->reclaimed_slab;
 			reclaim_state->reclaimed_slab = 0;
 		}
-		total_scanned += sc.nr_scanned;
-		total_reclaimed += sc.nr_reclaimed;
-		if (total_reclaimed >= SWAP_CLUSTER_MAX) {
+		if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX) {
 			ret = 1;
 			goto out;
 		}
@@ -1280,13 +1277,13 @@ int try_to_free_pages(struct zone **zone
 		 * that's undesirable in laptop mode, where we *want* lumpy
 		 * writeout.  So in laptop mode, write out the whole world.
 		 */
-		if (total_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
-			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+		if (sc.nr_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
+			wakeup_pdflush(laptop_mode ? 0 : sc.nr_scanned);
 			sc.may_writepage = 1;
 		}
 
 		/* Take a nap, wait for some writeback to complete */
-		if (sc.nr_scanned && priority < DEF_PRIORITY)
+		if (priority < DEF_PRIORITY)
 			blk_congestion_wait(WRITE, HZ/10);
 	}
 out:
@@ -1332,19 +1329,18 @@ static int balance_pgdat(pg_data_t *pgda
 	int all_zones_ok;
 	int priority;
 	int i;
-	int total_scanned, total_reclaimed;
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	struct scan_control sc;
 	struct zone *youngest_zone = NULL;
 	struct zone *oldest_zone = NULL;
 
 loop_again:
-	total_scanned = 0;
-	total_reclaimed = 0;
 	sc.gfp_mask = GFP_KERNEL;
 	sc.may_writepage = 0;
 	sc.may_swap = 1;
 	sc.nr_mapped = read_page_state(nr_mapped);
+	sc.nr_scanned = 0;
+	sc.nr_reclaimed = 0;
 
 	inc_page_state(pageoutrun);
 
@@ -1366,8 +1362,6 @@ loop_again:
 
 	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
 		all_zones_ok = 1;
-		sc.nr_scanned = 0;
-		sc.nr_reclaimed = 0;
 		sc.priority = priority;
 		sc.nr_to_reclaim = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
 
@@ -1421,19 +1415,17 @@ loop_again:
 		reclaim_state->reclaimed_slab = 0;
 		shrink_slab(GFP_KERNEL);
 		sc.nr_reclaimed += reclaim_state->reclaimed_slab;
-		total_reclaimed += sc.nr_reclaimed;
-		total_scanned += sc.nr_scanned;
 
 		/*
 		 * If we've done a decent amount of scanning and
 		 * the reclaim ratio is low, start doing writepage
 		 * even in laptop mode
 		 */
-		if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
-		    total_scanned > total_reclaimed+total_reclaimed/2)
+		if (sc.nr_scanned > SWAP_CLUSTER_MAX * 2 &&
+		    sc.nr_scanned > sc.nr_reclaimed + sc.nr_reclaimed / 2)
 			sc.may_writepage = 1;
 
-		if (nr_pages && to_free > total_reclaimed)
+		if (nr_pages && to_free > sc.nr_reclaimed)
 			continue;	/* swsusp: need to do more work */
 		if (all_zones_ok)
 			break;		/* kswapd: all done */
@@ -1441,7 +1433,7 @@ loop_again:
 		 * OK, kswapd is getting into trouble.  Take a nap, then take
 		 * another pass across the zones.
 		 */
-		if (total_scanned && priority < DEF_PRIORITY - 2)
+		if (priority < DEF_PRIORITY - 2)
 			blk_congestion_wait(WRITE, HZ/10);
 
 		/*
@@ -1450,7 +1442,7 @@ loop_again:
 		 * matches the direct reclaim path behaviour in terms of impact
 		 * on zone->*_priority.
 		 */
-		if ((total_reclaimed >= SWAP_CLUSTER_MAX) && (!nr_pages))
+		if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX && !nr_pages)
 			break;
 	}
 	for (i = 0; i < pgdat->nr_zones; i++) {
@@ -1463,7 +1455,7 @@ loop_again:
 		goto loop_again;
 	}
 
-	return total_reclaimed;
+	return sc.nr_reclaimed;
 }
 
 /*

--


* [PATCH 10/12] mm: merge sc.may_writepage and sc.may_swap into sc.flags
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (8 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 09/12] mm: accumulate sc.nr_scanned/sc.nr_reclaimed Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 11/12] mm: add page reclaim debug traces Wu Fengguang
  2005-12-01 10:18 ` [PATCH 12/12] mm: fix minor scan count bugs Wu Fengguang
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-turn-bool-variables-into-flags-in-scan-control.patch --]
[-- Type: text/plain, Size: 2406 bytes --]

Turn bool values into flags to make struct scan_control more compact.
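A minimal sketch of what the flag packing amounts to, using the constants from
the patch (the new_scan_flags() helper is invented for illustration):

```python
# Bit flags from the patch; the two booleans collapse into one word.
SC_MAY_WRITEPAGE = 0x1
SC_MAY_SWAP      = 0x2   # can pages be swapped as part of reclaim?

def new_scan_flags(may_swap=True):
    # Both reclaim entry points start out with sc.flags = SC_MAY_SWAP.
    return SC_MAY_SWAP if may_swap else 0

flags = new_scan_flags()       # sc.flags = SC_MAY_SWAP
flags |= SC_MAY_WRITEPAGE      # sc.flags |= SC_MAY_WRITEPAGE
```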

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   22 ++++++++++------------
 1 files changed, 10 insertions(+), 12 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -72,12 +72,12 @@ struct scan_control {
 	/* This context's GFP mask */
 	gfp_t gfp_mask;
 
-	int may_writepage;
-
-	/* Can pages be swapped as part of reclaim? */
-	int may_swap;
+	unsigned long flags;
 };
 
+#define SC_MAY_WRITEPAGE	0x1
+#define SC_MAY_SWAP		0x2	/* Can pages be swapped as part of reclaim? */
+
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
 
 #ifdef ARCH_HAS_PREFETCH
@@ -487,7 +487,7 @@ static int shrink_list(struct list_head 
 		 * Try to allocate it some swap space here.
 		 */
 		if (PageAnon(page) && !PageSwapCache(page)) {
-			if (!sc->may_swap)
+			if (!(sc->flags & SC_MAY_SWAP))
 				goto keep_locked;
 			if (!add_to_swap(page, GFP_ATOMIC))
 				goto activate_locked;
@@ -518,7 +518,7 @@ static int shrink_list(struct list_head 
 				goto keep_locked;
 			if (!may_enter_fs)
 				goto keep_locked;
-			if (laptop_mode && !sc->may_writepage)
+			if (laptop_mode && !(sc->flags & SC_MAY_WRITEPAGE))
 				goto keep_locked;
 
 			/* Page is dirty, try to write it out here */
@@ -1236,8 +1236,7 @@ int try_to_free_pages(struct zone **zone
 	delay_prefetch();
 
 	sc.gfp_mask = gfp_mask;
-	sc.may_writepage = 0;
-	sc.may_swap = 1;
+	sc.flags = SC_MAY_SWAP;
 	sc.nr_scanned = 0;
 	sc.nr_reclaimed = 0;
 
@@ -1279,7 +1278,7 @@ int try_to_free_pages(struct zone **zone
 		 */
 		if (sc.nr_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
 			wakeup_pdflush(laptop_mode ? 0 : sc.nr_scanned);
-			sc.may_writepage = 1;
+			sc.flags |= SC_MAY_WRITEPAGE;
 		}
 
 		/* Take a nap, wait for some writeback to complete */
@@ -1336,8 +1335,7 @@ static int balance_pgdat(pg_data_t *pgda
 
 loop_again:
 	sc.gfp_mask = GFP_KERNEL;
-	sc.may_writepage = 0;
-	sc.may_swap = 1;
+	sc.flags = SC_MAY_SWAP;
 	sc.nr_mapped = read_page_state(nr_mapped);
 	sc.nr_scanned = 0;
 	sc.nr_reclaimed = 0;
@@ -1423,7 +1421,7 @@ loop_again:
 		 */
 		if (sc.nr_scanned > SWAP_CLUSTER_MAX * 2 &&
 		    sc.nr_scanned > sc.nr_reclaimed + sc.nr_reclaimed / 2)
-			sc.may_writepage = 1;
+			sc.flags |= SC_MAY_WRITEPAGE;
 
 		if (nr_pages && to_free > sc.nr_reclaimed)
 			continue;	/* swsusp: need to do more work */

--


* [PATCH 11/12] mm: add page reclaim debug traces
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (9 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 10/12] mm: merge sc.may_writepage and sc.may_swap into sc.flags Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  2005-12-01 10:18 ` [PATCH 12/12] mm: fix minor scan count bugs Wu Fengguang
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-page-reclaim-debug-traces.patch --]
[-- Type: text/plain, Size: 5168 bytes --]

Show the detailed steps of direct/kswapd page reclaim.

To enable the printk traces:
# echo y > /debug/debug_page_reclaim

Sample lines:

reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 32-32, age 2626, active to scan 6542, hot+cold+free pages 8842+283558+352
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2626, active to scan 8018, hot+cold+free pages 1693+200036+10360
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2627, active to scan 7564, hot+cold+free pages 8842+283526+384
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2627, active to scan 8296, hot+cold+free pages 1693+200018+10360
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-63, age 2628, active to scan 8587, hot+cold+free pages 8843+283495+416
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2628, active to scan 8574, hot+cold+free pages 1693+200014+10392
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-63, age 2628, active to scan 9610, hot+cold+free pages 8844+283465+448
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2628, active to scan 8852, hot+cold+free pages 1693+199996+10424
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2629, active to scan 10633, hot+cold+free pages 8844+283433+480
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2629, active to scan 9130, hot+cold+free pages 1693+199992+10456
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2630, active to scan 11656, hot+cold+free pages 8844+283401+512
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2630, active to scan 9408, hot+cold+free pages 1693+199974+10488
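Since the trace lines share one printk format, they are easy to post-process
offline; a minimal parsing sketch (Python; the regex is derived from the
format string in the patch):

```python
import re

# One capture group per field of the debug_reclaim_report() printk.
TRACE = re.compile(
    r"reclaim zone(?P<zone>\d+) from (?P<who>\w+) for (?P<why>\w+), "
    r"prio (?P<prio>\d+), scan-reclaimed (?P<scan>\d+)-(?P<reclaimed>\d+), "
    r"age (?P<age>\d+), active to scan (?P<active_to_scan>\d+), "
    r"hot\+cold\+free pages (?P<hot>\d+)\+(?P<cold>\d+)\+(?P<free>\d+)")

line = ("reclaim zone3 from kswapd for watermark, prio 12, "
        "scan-reclaimed 32-32, age 2626, active to scan 6542, "
        "hot+cold+free pages 8842+283558+352")
m = TRACE.match(line)
```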

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---


 mm/vmscan.c |   70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 69 insertions(+), 1 deletion(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -38,6 +38,7 @@
 #include <asm/div64.h>
 
 #include <linux/swapops.h>
+#include <linux/debugfs.h>
 
 /* possible outcome of pageout() */
 typedef enum {
@@ -78,6 +79,62 @@ struct scan_control {
 #define SC_MAY_WRITEPAGE	0x1
 #define SC_MAY_SWAP		0x2	/* Can pages be swapped as part of reclaim? */
 
+#define SC_RECLAIM_FROM_KSWAPD		0x10
+#define SC_RECLAIM_FROM_DIRECT		0x20
+#define SC_RECLAIM_FOR_WATERMARK	0x40
+#define SC_RECLAIM_FOR_AGING		0x80
+#define SC_RECLAIM_MASK			0xF0
+
+#ifdef CONFIG_DEBUG_FS
+static u32 debug_page_reclaim;
+
+static inline void debug_reclaim(struct scan_control *sc, unsigned long flags)
+{
+	sc->flags = (sc->flags & ~SC_RECLAIM_MASK) | flags;
+}
+
+static inline void debug_reclaim_report(struct scan_control *sc, struct zone *z)
+{
+	if (!debug_page_reclaim)
+		return;
+
+	printk(KERN_DEBUG "reclaim zone%d from %s for %s, "
+			"prio %d, scan-reclaimed %lu-%lu, age %lu, "
+			"active to scan %lu, "
+			"hot+cold+free pages %lu+%lu+%lu\n",
+			zone_idx(z),
+			(sc->flags & SC_RECLAIM_FROM_KSWAPD) ? "kswapd" :
+			((sc->flags & SC_RECLAIM_FROM_DIRECT) ? "direct" :
+								"early"),
+			(sc->flags & SC_RECLAIM_FOR_AGING) ?
+							"aging" : "watermark",
+			sc->priority, sc->nr_scanned, sc->nr_reclaimed,
+			z->page_age,
+			z->nr_scan_active,
+			z->nr_active, z->nr_inactive, z->free_pages);
+
+	if (atomic_read(&z->reclaim_in_progress))
+		printk(KERN_WARNING "reclaim_in_progress=%d\n",
+					atomic_read(&z->reclaim_in_progress));
+}
+
+static inline void debug_reclaim_init(void)
+{
+	debugfs_create_bool("debug_page_reclaim", 0644, NULL,
+							&debug_page_reclaim);
+}
+#else
+static inline void debug_reclaim(struct scan_control *sc, int flags)
+{
+}
+static inline void debug_reclaim_report(struct scan_control *sc, struct zone *z)
+{
+}
+static inline void debug_reclaim_init(void)
+{
+}
+#endif
+
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
 
 #ifdef ARCH_HAS_PREFETCH
@@ -1141,6 +1198,7 @@ shrink_zone(struct zone *zone, struct sc
 
 	atomic_dec(&zone->reclaim_in_progress);
 
+	debug_reclaim_report(sc, zone);
 	throttle_vm_writeout();
 }
 
@@ -1205,11 +1263,14 @@ shrink_caches(struct zone **zones, struc
 			continue;
 		}
 
+		debug_reclaim(sc, SC_RECLAIM_FROM_DIRECT);
 		shrink_zone(zone, sc);
 	}
 
-	if (z)
+	if (z) {
+		debug_reclaim(sc, SC_RECLAIM_FROM_DIRECT|SC_RECLAIM_FOR_AGING);
 		shrink_zone(z, sc);
+	}
 }
  
 /*
@@ -1386,10 +1447,16 @@ loop_again:
 				if (!zone_watermark_ok(zone, order,
 							zone->pages_high,
 							0, 0)) {
+					debug_reclaim(&sc,
+							SC_RECLAIM_FROM_KSWAPD|
+							SC_RECLAIM_FOR_WATERMARK);
 					all_zones_ok = 0;
 				} else if (zone == youngest_zone &&
 						pages_more_aged(oldest_zone,
 								youngest_zone)) {
+					debug_reclaim(&sc,
+							SC_RECLAIM_FROM_KSWAPD|
+							SC_RECLAIM_FOR_AGING);
 				} else
 					continue;
 			}
@@ -1611,6 +1678,7 @@ static int __init kswapd_init(void)
 		= find_task_by_pid(kernel_thread(kswapd, pgdat, CLONE_KERNEL));
 	total_memory = nr_free_pagecache_pages();
 	hotcpu_notifier(cpu_callback, 0);
+	debug_reclaim_init();
 	return 0;
 }
 

--


* [PATCH 12/12] mm: fix minor scan count bugs
  2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
                   ` (10 preceding siblings ...)
  2005-12-01 10:18 ` [PATCH 11/12] mm: add page reclaim debug traces Wu Fengguang
@ 2005-12-01 10:18 ` Wu Fengguang
  11 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 10:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm,
	Wu Fengguang

[-- Attachment #1: mm-scan-accounting-fix.patch --]
[-- Type: text/plain, Size: 1100 bytes --]

- in isolate_lru_pages(): the reported scan count is one too high, because
  scan++ in the loop condition also fires on the final, failing test. Fix it.
- in shrink_cache(): 0 pages taken does not mean 0 pages scanned, so account
  the scans before the early return. Fix it.
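An illustrative model of the off-by-one in the old loop condition versus the
patched loop (Python, not the kernel code):

```python
def scans_buggy(nr_to_scan, pages):
    # Mirrors: while (scan++ < nr_to_scan && !list_empty(src))
    # The post-increment bumps `scan` on every evaluation, including
    # the final failing one, so scan ends up examined + 1.
    scan = examined = 0
    while True:
        cond = scan < nr_to_scan
        scan += 1
        if not (cond and examined < pages):
            break
        examined += 1
    return scan, examined

def scans_fixed(nr_to_scan, pages):
    # Mirrors the patched loop: scan++ moved into the body.
    scan = examined = 0
    while scan < nr_to_scan and examined < pages:
        scan += 1
        examined += 1
    return scan, examined
```

With a full list the buggy version reports one extra scan, and it does the
same when the list runs dry early.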

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---

 mm/vmscan.c |   10 ++++++----
 1 files changed, 6 insertions(+), 4 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -920,7 +920,8 @@ static int isolate_lru_pages(int nr_to_s
 	struct page *page;
 	int scan = 0;
 
-	while (scan++ < nr_to_scan && !list_empty(src)) {
+	while (scan < nr_to_scan && !list_empty(src)) {
+		scan++;
 		page = lru_to_page(src);
 		prefetchw_prev_lru_page(page, src, flags);
 
@@ -967,14 +968,15 @@ static void shrink_cache(struct zone *zo
 	update_zone_age(zone, nr_scan);
 	spin_unlock_irq(&zone->lru_lock);
 
-	if (nr_taken == 0)
-		return;
-
 	sc->nr_scanned += nr_scan;
 	if (current_is_kswapd())
 		mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
 	else
 		mod_page_state_zone(zone, pgscan_direct, nr_scan);
+
+	if (nr_taken == 0)
+		return;
+
 	nr_freed = shrink_list(&page_list, sc);
 	if (current_is_kswapd())
 		mod_page_state(kswapd_steal, nr_freed);

--

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 01/12] vm: kswapd incmin
  2005-12-01 10:18 ` [PATCH 01/12] vm: kswapd incmin Wu Fengguang
@ 2005-12-01 10:33   ` Andrew Morton
  2005-12-01 11:40     ` Wu Fengguang
  0 siblings, 1 reply; 40+ messages in thread
From: Andrew Morton @ 2005-12-01 10:33 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	marcelo.tosatti, magnus.damm, wfg

Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
>
> Explicitly teach kswapd about the incremental min logic instead of just scanning
>  all zones under the first low zone. This should keep more even pressure applied
>  on the zones.

I spat this back a while ago.  See the changelog (below) for the logic
which you're removing.

This change appears to go back to performing reclaim in the highmem->lowmem
direction.  Page reclaim might go all lumpy again.


Shouldn't first_low_zone be initialised to ZONE_HIGHMEM (or pgdat->nr_zones
- 1) rather than to 0, or something?  I don't understand why we're passing
zero as the classzone_idx into zone_watermark_ok() in the first go around
the loop.

And this bit, which Nick didn't reply to (wimp!).  I think it's a bug.



Looking at it, I am confused.

 In the first loop:

 			for (i = pgdat->nr_zones - 1; i >= 0; i--) {
 				struct zone *zone = pgdat->node_zones + i;
 	...
 				if (!zone_watermark_ok(zone, order,
 						zone->pages_high, 0, 0)) {
 					end_zone = i;
 					goto scan;
 				}

 end_zone gets the value of the highest-numbered zone which needs scanning. 
 Where `0' corresponds to ZONE_DMA.  (correct?)

 In the second loop:

 		for (i = 0; i <= end_zone; i++) {
 			struct zone *zone = pgdat->node_zones + i;

 Shouldn't that be

 		for (i = end_zone; ...; i++)

 or am I on crack?




 As kswapd is now scanning zones in the highmem->normal->dma direction it can
 get into competition with the page allocator: kswapd keeps on trying to free
 pages from highmem, then moves on to lowmem.  By the time kswapd has
 done proportional scanning in lowmem, someone has come in and allocated a few
 pages from highmem.  So kswapd goes back and frees some highmem, then some
 lowmem again.  But nobody has allocated any lowmem yet.  So we keep on and on
 scanning lowmem in response to highmem page allocations.

 With a simple `dd' on a 1G box we get:

  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy wa id
  0  3      0  59340   4628 922348    0    0     4 28188 1072   808  0 10 46 44
  0  3      0  29932   4660 951760    0    0     0 30752 1078   441  1  6 30 64
  0  3      0  57568   4556 924052    0    0     0 30748 1075   478  0  8 43 49
  0  3      0  29664   4584 952176    0    0     0 30752 1075   472  0  6 34 60
  0  3      0   5304   4620 976280    0    0     4 40484 1073   456  1  7 52 41
  0  3      0 104856   4508 877112    0    0     0 18452 1074    97  0  7 67 26
  0  3      0  70768   4540 911488    0    0     0 35876 1078   746  0  7 34 59
  1  2      0  42544   4568 939680    0    0     0 21524 1073   556  0  5 43 51
  0  3      0   5520   4608 976428    0    0     4 37924 1076   836  0  7 41 51
  0  2      0   4848   4632 976812    0    0    32 12308 1092    94  0  1 33 66

 Simple fix: go back to scanning the zones in the dma->normal->highmem
 direction so we meet the page allocator in the middle somewhere.

  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy wa id
  1  3      0   5152   3468 976548    0    0     4 37924 1071   650  0  8 64 28
  1  2      0   4888   3496 976588    0    0     0 23576 1075   726  0  6 66 27
  0  3      0   5336   3532 976348    0    0     0 31264 1072   708  0  8 60 32
  0  3      0   6168   3560 975504    0    0     0 40992 1072   683  0  6 63 31
  0  3      0   4560   3580 976844    0    0     0 18448 1073   233  0  4 59 37
  0  3      0   5840   3624 975712    0    0     4 26660 1072   800  1  8 46 45
  0  3      0   4816   3648 976640    0    0     0 40992 1073   526  0  6 47 47
  0  3      0   5456   3672 976072    0    0     0 19984 1070   320  0  5 60 35



 ---

  25-akpm/mm/vmscan.c |   37 +++++++++++++++++++++++++++++++++++--
  1 files changed, 35 insertions(+), 2 deletions(-)

 diff -puN mm/vmscan.c~kswapd-avoid-higher-zones-reverse-direction mm/vmscan.c
 --- 25/mm/vmscan.c~kswapd-avoid-higher-zones-reverse-direction	2004-03-12 01:33:09.000000000 -0800
 +++ 25-akpm/mm/vmscan.c	2004-03-12 01:33:09.000000000 -0800
 @@ -924,8 +924,41 @@ static int balance_pgdat(pg_data_t *pgda
  	for (priority = DEF_PRIORITY; priority; priority--) {
  		int all_zones_ok = 1;
  		int pages_scanned = 0;
 +		int end_zone = 0;	/* Inclusive.  0 = ZONE_DMA */
  
 -		for (i = pgdat->nr_zones - 1; i >= 0; i--) {
 +
 +		if (nr_pages == 0) {
 +			/*
 +			 * Scan in the highmem->dma direction for the highest
 +			 * zone which needs scanning
 +			 */
 +			for (i = pgdat->nr_zones - 1; i >= 0; i--) {
 +				struct zone *zone = pgdat->node_zones + i;
 +
 +				if (zone->all_unreclaimable &&
 +						priority != DEF_PRIORITY)
 +					continue;
 +
 +				if (zone->free_pages <= zone->pages_high) {
 +					end_zone = i;
 +					goto scan;
 +				}
 +			}
 +			goto out;
 +		} else {
 +			end_zone = pgdat->nr_zones - 1;
 +		}
 +scan:
 +		/*
 +		 * Now scan the zone in the dma->highmem direction, stopping
 +		 * at the last zone which needs scanning.
 +		 *
 +		 * We do this because the page allocator works in the opposite
 +		 * direction.  This prevents the page allocator from allocating
 +		 * pages behind kswapd's direction of progress, which would
 +		 * cause too much scanning of the lower zones.
 +		 */
 +		for (i = 0; i <= end_zone; i++) {
  			struct zone *zone = pgdat->node_zones + i;
  			int total_scanned = 0;
  			int max_scan;
 @@ -965,7 +998,7 @@ static int balance_pgdat(pg_data_t *pgda
  		if (pages_scanned)
  			blk_congestion_wait(WRITE, HZ/10);
  	}
 -
 +out:
  	for (i = 0; i < pgdat->nr_zones; i++) {
  		struct zone *zone = pgdat->node_zones + i;
  

 _


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 10:18 ` [PATCH 02/12] mm: supporting variables and functions for balanced zone aging Wu Fengguang
@ 2005-12-01 10:37   ` Andrew Morton
  2005-12-01 12:11     ` Wu Fengguang
  2005-12-01 22:28     ` Marcelo Tosatti
  0 siblings, 2 replies; 40+ messages in thread
From: Andrew Morton @ 2005-12-01 10:37 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	marcelo.tosatti, magnus.damm, wfg

Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
>
>  The zone aging rates are currently imbalanced,

ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
getting in and working out why.  I certainly wouldn't want to go and add
all this stuff without having a good understanding of _why_ it's out of
whack.  Perhaps it's just some silly bug, like the thing I pointed at in
the previous email.

> the gap can be as large as 3 times,

What's the testcase?

> which can severely damage read-ahead requests and shorten their
>  effective life time.

Have you any performance numbers for this?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/12] mm: balance active/inactive list scan rates
  2005-12-01 10:18 ` [PATCH 06/12] mm: balance active/inactive list scan rates Wu Fengguang
@ 2005-12-01 11:39   ` Peter Zijlstra
  0 siblings, 0 replies; 40+ messages in thread
From: Peter Zijlstra @ 2005-12-01 11:39 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: linux-kernel, Andrew Morton, Christoph Lameter, Rik van Riel,
	Nick Piggin, Andrea Arcangeli, Marcelo Tosatti, Magnus Damm

On Thu, 2005-12-01 at 18:18 +0800, Wu Fengguang wrote:
> plain text document attachment
> (mm-balance-active-inactive-list-aging.patch)
> shrink_zone() has two major design goals:
> 1) let active/inactive lists have equal scan rates
> 2) do the scans in small chunks
> 

> The new design:
> 1) keep perfect balance
>    let active_list follow inactive_list in scan rate
> 
> 2) always scan in SWAP_CLUSTER_MAX sized chunks
>    simple and efficient
> 
> 3) will scan at least one chunk
>    the expected behavior from the callers
> 
> The perfect balance may or may not yield better performance, though it
> a) is a more understandable and dependable behavior
> b) together with inter-zone balancing, makes the zoned memories consistent

Nice, this patch effectively separates zone balancing from
active/inactive balancing. I was thinking about doing this this morning
in order to nicely abstract out all the page-replacement code.

Thanks!





^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 01/12] vm: kswapd incmin
  2005-12-01 10:33   ` Andrew Morton
@ 2005-12-01 11:40     ` Wu Fengguang
  0 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 11:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	marcelo.tosatti, magnus.damm

On Thu, Dec 01, 2005 at 02:33:30AM -0800, Andrew Morton wrote:
> I spat this back a while ago.  See the changelog (below) for the logic
> which you're removing.
>
> This change appears to go back to performing reclaim in the highmem->lowmem
> direction.  Page reclaim might go all lumpy again.
> 
> Shouldn't first_low_zone be initialised to ZONE_HIGHMEM (or pgdat->nr_zones
> - 1) rather than to 0, or something?  I don't understand why we're passing
> zero as the classzone_idx into zone_watermark_ok() in the first go around
> the loop.

Sorry, I should note that I'm mainly taking its zone-range --> zones-under-watermark
cleanups. The scan order is reverted to DMA->HighMem in
mm-balance-zone-aging-in-kswapd-reclaim.patch, and the first_low_zone logic is
replaced with a quite different one there.

My thinking is that the overall reclaim-for-watermark should be weakened and
just do minimal watermark-safeguard work, so that it will not be a major force
of imbalance.

Assume there are three zones. The dynamics go something like:

HighMem exhausted --> reclaim from it --> become more aged --> reclaim the
other two zones for aging

DMA reclaimed --> age leaps ahead --> reclaim Normal zone for aging, while
HighMem is being reclaimed for watermark

In the kswapd path, if there are N rounds of reclaim-for-watermark with
all_zones_ok=0, there could be N+1 rounds of reclaim-for-aging with the
additional 1 time of all_zones_ok=1. With this the force of balance outperforms
the force of imbalance.

In the direct path, there are 10 rounds of aging before unconditionally
reclaiming from all zones, which is pretty much a force of balance.

In summary:
- HighMem zone is normally first exhausted and mostly reclaimed for watermark.
- DMA zone is now mainly reclaimed for aging.
- Normal zone will be mostly reclaimed for aging, sometimes for watermark.
 
Thanks,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 10:37   ` Andrew Morton
@ 2005-12-01 12:11     ` Wu Fengguang
  2005-12-01 22:28     ` Marcelo Tosatti
  1 sibling, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-01 12:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	marcelo.tosatti, magnus.damm

On Thu, Dec 01, 2005 at 02:37:14AM -0800, Andrew Morton wrote:
> Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
> >
> >  The zone aging rates are currently imbalanced,
> 
> ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
> getting in and working out why.  I certainly wouldn't want to go and add
> all this stuff without having a good understanding of _why_ it's out of
> whack.  Perhaps it's just some silly bug, like the thing I pointed at in
> the previous email.

Yep, my rule is that if ever the DMA zone is reclaimed for watermark, it will
be running wild ;) So I leave it out by setting classzone_idx=0, and let the
age balancing code catch it up. This scheme works fine: tested to be OK from
64M to 2G of memory.

> > the gap can be as large as 3 times,
> 
> What's the testcase?
> 
> > which can severely damage read-ahead requests and shorten their
> >  effective life time.
> 
> Have you any performance numbers for this?

That was months ago. If I remember right, the number of concurrent readers the
adaptive read-ahead code could handle without much thrashing was raised from ~100
to 800 by the balancing work.

This is my original announce back then:

    The page aging problem showed up when I was testing many slow reads with
    limited memory. Pages in the DMA zone were found to be aged about 3
    times faster than those in the Normal zone in systems with 64-512M memory.
    That is a BIG threat to the read-ahead pages. So I added some code to
    make the aging rates synchronized. You can see the effect by running:
    $ tar c / | cat > /dev/null &
    $ watch -n1 'grep "age " /proc/zoneinfo'
    There are still some extra DMA scans in the direct page reclaim path.
    It tends to happen in large-memory systems and is therefore not a big
    problem.

And here is some numbers collected by Magnus Damm:
http://lkml.org/lkml/2005/10/25/50

Good night!
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 10:37   ` Andrew Morton
  2005-12-01 12:11     ` Wu Fengguang
@ 2005-12-01 22:28     ` Marcelo Tosatti
  2005-12-01 23:03       ` Andrew Morton
  1 sibling, 1 reply; 40+ messages in thread
From: Marcelo Tosatti @ 2005-12-01 22:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Wu Fengguang, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

Hi Andrew,

On Thu, Dec 01, 2005 at 02:37:14AM -0800, Andrew Morton wrote:
> Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
> >
> >  The zone aging rates are currently imbalanced,
> 
> ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
> getting in and working out why.  I certainly wouldn't want to go and add
> all this stuff without having a good understanding of _why_ it's out of
> whack.  Perhaps it's just some silly bug, like the thing I pointed at in
> the previous email.

I think that the problem is caused by the interaction between 
the way reclaiming is quantified and parallel allocators.

The zones have different sizes, and each zone reclaim iteration
scans the same number of pages. It is unfair.

On top of that, kswapd is likely to block while doing its job, 
which means that allocators have a chance to run.

It seems that scaling the number of isolated pages to zone
size fixes the imbalance, making the Normal zone
be _more_ scanned than DMA. Which is expected, since the
lower zone protection logic decreases allocation pressure
from DMA, sending it straight to the Normal zone (therefore
zeroing lower_zone_protection should make the scanning
proportionally equal).

Test used was FFSB using the following profile on a 128MB 
P-3 machine:

num_filesystems=1
num_threadgroups=1
directio=0
time=300

[filesystem0]
location=/mnt/hda4/
num_files=20
num_dirs=10
max_filesize=91534338
min_filesize=65535
[end0]

[threadgroup0]
num_threads=10
write_size=2816
write_blocksize=4096
read_size=2816
read_blocksize=4096
create_weight=100
write_weight=30
read_weight=100
[end0]

And the scanning ratios are:

Normal: 114688kB
DMA: 16384kB

Normal/DMA ratio = 114688 / 16384 = 7.000

******* 2.6.14 vanilla ********

* kswapd scanning rates
pgscan_kswapd_normal 450483
pgscan_kswapd_dma 84645
pgscan_kswapd Normal/DMA = (450483 / 84645) = 5.322

* direct scanning rates
pgscan_direct_normal 23826
pgscan_direct_dma 4224
pgscan_direct Normal/DMA = (23826 / 4224) = 5.640

* global (kswapd+direct) scanning rates
pgscan_normal = (450483 + 23826) = 474309
pgscan_dma = (84645 + 4224) = 88869
pgscan Normal/DMA = (474309 / 88869) = 5.337

pgalloc_normal = 794293
pgalloc_dma = 123805
pgalloc_normal_dma_ratio = (794293/123805) = 6.415

****** 2.6.14 isolate relative ***** 

* kswapd scanning rates
pgscan_kswapd_normal 664883
pgscan_kswapd_dma 82845
pgscan_kswapd Normal/DMA (664883/82845) = 8.025

* direct scanning rates
pgscan_direct_normal 13485
pgscan_direct_dma 1745
pgscan_direct Normal/DMA = (13485/1745) = 7.727

* global (kswapd+direct) scanning rates
pgscan_normal = (664883 + 13485) = 678368
pgscan_dma = (82845 + 1745) = 84590
pgscan Normal/DMA = (678368 / 84590) = 8.019

pgalloc_normal 699927
pgalloc_dma 66313
pgalloc_normal_dma_ratio = (699927/66313) = 10.554


I think it becomes pretty clear that this is really 
the case. On vanilla, for each DMA allocation, there 
are 6.415 NORMAL allocations, while the NORMAL zone 
is 7.000 times the size of DMA.

With the patch, there are 10.5 NORMAL allocations for each
DMA one.

Batching of reclaim is affected by the relative
isolation (batches are now smaller), though.

--- mm/vmscan.c.orig	2006-01-01 12:44:39.000000000 -0200
+++ mm/vmscan.c	2006-01-01 16:43:54.000000000 -0200
@@ -616,8 +616,12 @@
 {
 	LIST_HEAD(page_list);
 	struct pagevec pvec;
+	int nr_to_isolate;
 	int max_scan = sc->nr_to_scan;
 
+	nr_to_isolate = (sc->swap_cluster_max * zone->present_pages)
+			/ total_memory;
+
 	pagevec_init(&pvec, 1);
 
 	lru_add_drain();
@@ -628,7 +632,8 @@
 		int nr_scan;
 		int nr_freed;
 
-		nr_taken = isolate_lru_pages(sc->swap_cluster_max,
+
+		nr_taken = isolate_lru_pages(nr_to_isolate,
 					     &zone->inactive_list,
 					     &page_list, &nr_scan);
 		zone->nr_inactive -= nr_taken;




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 22:28     ` Marcelo Tosatti
@ 2005-12-01 23:03       ` Andrew Morton
  2005-12-02  1:19         ` Wu Fengguang
  2005-12-02  1:26         ` Marcelo Tosatti
  0 siblings, 2 replies; 40+ messages in thread
From: Andrew Morton @ 2005-12-01 23:03 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: wfg, linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	magnus.damm

Marcelo Tosatti <marcelo.tosatti@cyclades.com> wrote:
>
> Hi Andrew,
> 
> On Thu, Dec 01, 2005 at 02:37:14AM -0800, Andrew Morton wrote:
> > Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
> > >
> > >  The zone aging rates are currently imbalanced,
> > 
> > ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
> > getting in and working out why.  I certainly wouldn't want to go and add
> > all this stuff without having a good understanding of _why_ it's out of
> > whack.  Perhaps it's just some silly bug, like the thing I pointed at in
> > the previous email.
> 
> I think that the problem is caused by the interaction between 
> the way reclaiming is quantified and parallel allocators.

Could be.  But what about the bug which I think is there?  That'll cause
overscanning of the DMA zone.

> The zones have different sizes, and each zone reclaim iteration
> scans the same number of pages. It is unfair.

Nope.  See how shrink_zone() bases nr_active and nr_inactive on
zone->nr_active and zone->nr_inactive.  These calculations are intended to
cause the number of scanned pages in each zone to be

	(zone->nr_active + zone->nr_inactive) >> sc->priority.

> On top of that, kswapd is likely to block while doing its job, 
> which means that allocators have a chance to run.

kswapd should only block under rare circumstances - huge amounts of dirty
pages coming off the tail of the LRU.

> --- mm/vmscan.c.orig	2006-01-01 12:44:39.000000000 -0200
> +++ mm/vmscan.c	2006-01-01 16:43:54.000000000 -0200
> @@ -616,8 +616,12 @@
>  {

Please use `diff -p'.


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 23:03       ` Andrew Morton
@ 2005-12-02  1:19         ` Wu Fengguang
  2005-12-02  1:30           ` Andrew Morton
  2005-12-02  5:49           ` Andrew Morton
  2005-12-02  1:26         ` Marcelo Tosatti
  1 sibling, 2 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  1:19 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Marcelo Tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

On Thu, Dec 01, 2005 at 03:03:49PM -0800, Andrew Morton wrote:
> > > ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
> > > getting in and working out why.  I certainly wouldn't want to go and add
> > > all this stuff without having a good understanding of _why_ it's out of
> > > whack.  Perhaps it's just some silly bug, like the thing I pointed at in
> > > the previous email.
> > 
> > I think that the problem is caused by the interaction between 
> > the way reclaiming is quantified and parallel allocators.
> 
> Could be.  But what about the bug which I think is there?  That'll cause
> overscanning of the DMA zone.

Take for example these numbers:
--------------------------------------------------------------------------------
active/inactive sizes on 2.6.14-1-k7-smp:
43/1000         = 116 / 2645
819/1000        = 54023 / 65881

active/inactive scan rates:
dma      480/1000       = 31364 / (58377 + 6963)
normal   985/1000       = 719219 / (645051 + 84579)
high     0/1000         = 0 / (0 + 0)
             
             total       used       free     shared    buffers     cached
Mem:           503        497          6          0          0        328
-/+ buffers/cache:        168        335
Swap:          127          2        125
--------------------------------------------------------------------------------

cold-page-scan-rate = K * (direct-reclaim-count * direct-scan-prob +
                           kswapd-reclaim-count * kswapd-scan-prob) * shrink-zone-prob

(direct-reclaim-count : kswapd-reclaim-count) depends on memory pressure.
Here it is
        DMA:    8 = 58377 / 6963
        Normal: 7 = 645051 / 84579

(direct-scan-prob) is roughly equal for all zones.
(kswapd-scan-prob) is expected to be equal too.

So the equation can be simplified to:
cold-page-scan-rate ~= C * shrink-zone-prob

It depends largely on the shrink_zone() function:

    843         zone->nr_scan_inactive += (zone->nr_inactive >> sc->priority) + 1;
    844         nr_inactive = zone->nr_scan_inactive;
    845         if (nr_inactive >= sc->swap_cluster_max)
    846                 zone->nr_scan_inactive = 0;
    847         else
    848                 nr_inactive = 0;
    849         
    850         sc->nr_to_reclaim = sc->swap_cluster_max;
    851         
    852         while (nr_active || nr_inactive) {
                        //...
    860                 if (nr_inactive) {
    861                         sc->nr_to_scan = min(nr_inactive,
    862                                         (unsigned long)sc->swap_cluster_max);
    863                         nr_inactive -= sc->nr_to_scan;
    864                         shrink_cache(zone, sc);
    865                         if (sc->nr_to_reclaim <= 0)
    866                                 break;
    867                 }
    868         }

Line 843 is the core of the scan balancing logic:

priority                12      11      10

On each call nr_scan_inactive is increased by:
DMA(2k pages)           +1      +2      +3
Normal(64k pages)      +17      +33     +65 

Rounding up to SWAP_CLUSTER_MAX=32, we get (scan batches/accumulation rounds):
DMA                     1/32    1/16    2/11
Normal                  2/2     2/1     3/1
DMA:Normal ratio        1:32    1:32    2:33

This keeps the scan rate roughly balanced(i.e. 1:32) in low vm pressure.

But lines 865-866 together with line 846 make most shrink_zone() invocations
run only one batch of scanning. The numbers become:

DMA                     1/32    1/16    1/11
Normal                  1/2     1/1     1/1
DMA:Normal ratio        1:16    1:16    1:11

Now the scan ratio turns into something between 2:1 ~ 3:1 !

Another problem is that the equation in line 843 is quite coarse: zones anywhere
from 64k to 127k pages end up with the same scan rate, leading to a large variance.

Thanks,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-01 23:03       ` Andrew Morton
  2005-12-02  1:19         ` Wu Fengguang
@ 2005-12-02  1:26         ` Marcelo Tosatti
  2005-12-02  3:40           ` Andrew Morton
  1 sibling, 1 reply; 40+ messages in thread
From: Marcelo Tosatti @ 2005-12-02  1:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: wfg, linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	magnus.damm


On Thu, Dec 01, 2005 at 03:03:49PM -0800, Andrew Morton wrote:
> Marcelo Tosatti <marcelo.tosatti@cyclades.com> wrote:
> >
> > Hi Andrew,
> > 
> > On Thu, Dec 01, 2005 at 02:37:14AM -0800, Andrew Morton wrote:
> > > Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
> > > >
> > > >  The zone aging rates are currently imbalanced,
> > > 
> > > ZONE_DMA is out of whack.  It shouldn't be, and I'm not aware of anyone
> > > getting in and working out why.  I certainly wouldn't want to go and add
> > > all this stuff without having a good understanding of _why_ it's out of
> > > whack.  Perhaps it's just some silly bug, like the thing I pointed at in
> > > the previous email.
> > 
> > I think that the problem is caused by the interaction between 
> > the way reclaiming is quantified and parallel allocators.
> 
> Could be.  But what about the bug which I think is there?  That'll cause
> overscanning of the DMA zone. 

There were about 12MB of inactive pages on the DMA zone. Your hypothesis
was that there were no LRU pages to be scanned on the DMA zone?

> > The zones have different sizes, and each zone reclaim iteration
> > scans the same number of pages. It is unfair.
> 
> Nope.  See how shrink_zone() bases nr_active and nr_inactive on
> zone->nr_active and zone_nr_inactive.  These calculations are intended to
> cause the number of scanned pages in each zone to be
> 
> 	(zone->nr-active + zone->nr_inactive) >> sc->priority.  

True... Well, I don't know, then.

> > On top of that, kswapd is likely to block while doing its job, 
> > which means that allocators have a chance to run.
> 
> kswapd should only block under rare circumstances - huge amounts of dirty
> pages coming off the tail of the LRU. 

Alright. I don't know - what could be the problem, then? 

> > --- mm/vmscan.c.orig	2006-01-01 12:44:39.000000000 -0200
> > +++ mm/vmscan.c	2006-01-01 16:43:54.000000000 -0200
> > @@ -616,8 +616,12 @@
> >  {
> 
> Please use `diff -p'.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  1:19         ` Wu Fengguang
@ 2005-12-02  1:30           ` Andrew Morton
  2005-12-02  2:04             ` Wu Fengguang
  2005-12-02  5:49           ` Andrew Morton
  1 sibling, 1 reply; 40+ messages in thread
From: Andrew Morton @ 2005-12-02  1:30 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
>
>    850         sc->nr_to_reclaim = sc->swap_cluster_max;
>      851         
>      852         while (nr_active || nr_inactive) {
>                          //...
>      860                 if (nr_inactive) {
>      861                         sc->nr_to_scan = min(nr_inactive,
>      862                                         (unsigned long)sc->swap_cluster_max);
>      863                         nr_inactive -= sc->nr_to_scan;
>      864                         shrink_cache(zone, sc);
>      865                         if (sc->nr_to_reclaim <= 0)
>      866                                 break;
>      867                 }
>      868         }
> 
>  Line 843 is the core of the scan balancing logic:
> 
>  priority                12      11      10
> 
>  On each call nr_scan_inactive is increased by:
>  DMA(2k pages)           +1      +2      +3
>  Normal(64k pages)      +17      +33     +65 
> 
>  Round it up to SWAP_CLUSTER_MAX=32, we get (scan batches/accumulate rounds):
>  DMA                     1/32    1/16    2/11
>  Normal                  2/2     2/1     3/1
>  DMA:Normal ratio        1:32    1:32    2:33
> 
>  This keeps the scan rate roughly balanced(i.e. 1:32) in low vm pressure.
> 
>  But lines 865-866 together with line 846 make most shrink_zone() invocations
>  only run one batch of scan. The numbers become:

True.  Need to go into a huddle with the changelogs, but I have a feeling
that lines 865 and 866 aren't very important.  What happens if we remove
them?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  1:30           ` Andrew Morton
@ 2005-12-02  2:04             ` Wu Fengguang
  2005-12-02  2:18               ` Andrea Arcangeli
  2005-12-02  2:27               ` Nick Piggin
  0 siblings, 2 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  2:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

On Thu, Dec 01, 2005 at 05:30:15PM -0800, Andrew Morton wrote:
> >  But lines 865-866 together with line 846 make most shrink_zone() invocations
> >  only run one batch of scan. The numbers become:
> 
> True.  Need to go into a huddle with the changelogs, but I have a feeling
> that lines 865 and 866 aren't very important.  What happens if we remove
> them?

Maybe the answer is: can we accept freeing 15M of memory at one time for a 64G zone?
(Or can we simply increase DEF_PRIORITY?)

btw, maybe it's time to lower low_mem_reserve.
There should be no need to keep ~50M of free memory with the balancing patch.

Regards,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:04             ` Wu Fengguang
@ 2005-12-02  2:18               ` Andrea Arcangeli
  2005-12-02  2:37                 ` Wu Fengguang
  2005-12-02  4:45                 ` Andrew Morton
  2005-12-02  2:27               ` Nick Piggin
  1 sibling, 2 replies; 40+ messages in thread
From: Andrea Arcangeli @ 2005-12-02  2:18 UTC (permalink / raw)
  To: Wu Fengguang, Andrew Morton, marcelo.tosatti, linux-kernel,
	christoph, riel, a.p.zijlstra, npiggin, magnus.damm

On Fri, Dec 02, 2005 at 10:04:07AM +0800, Wu Fengguang wrote:
> btw, maybe it's time to lower the low_mem_reserve.
> There should be no need to keep ~50M free memory with the balancing patch.

low_mem_reserve is independent of shrink_cache, because shrink_cache can't
free unfreeable pinned memory.

If you want to remove low_mem_reserve you'd better start by adding
migration of memory across the zones with pte updates etc. That would
at least mitigate the effect of anonymous memory w/o swap. But
low_mem_reserve is still needed for all other kinds of allocations, like
kmalloc or pci_alloc_consistent (i.e. not relocatable), etc.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:04             ` Wu Fengguang
  2005-12-02  2:18               ` Andrea Arcangeli
@ 2005-12-02  2:27               ` Nick Piggin
  2005-12-02  2:36                 ` Andrea Arcangeli
  2005-12-02  2:43                 ` Wu Fengguang
  1 sibling, 2 replies; 40+ messages in thread
From: Nick Piggin @ 2005-12-02  2:27 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Andrew Morton, marcelo.tosatti, linux-kernel, christoph, riel,
	a.p.zijlstra, npiggin, andrea, magnus.damm

Wu Fengguang wrote:
> On Thu, Dec 01, 2005 at 05:30:15PM -0800, Andrew Morton wrote:
> 
>>> But lines 865-866 together with line 846 make most shrink_zone() invocations
>>> only run one batch of scan. The numbers become:
>>
>>True.  Need to go into a huddle with the changelogs, but I have a feeling
>>that lines 865 and 866 aren't very important.  What happens if we remove
>>them?
> 
> 
> Maybe the answer is: can we accept freeing 15M of memory at one time from a 64G zone?
> (Or can we simply increase DEF_PRIORITY?)
> 

0.02% of the memory? Why not? I think you should be more worried
about what happens when the priority winds up.

I think your proposal to synch reclaim rates between zones is fine
when all pages have similar properties, but could behave strangely
when you do have different requirements on different zones.

> btw, maybe it's time to lower the low_mem_reserve.
> There should be no need to keep ~50M free memory with the balancing patch.
> 

min_free_kbytes? This number really isn't anything to do with balancing
and more to do with the amount of reserve kept for things like GFP_ATOMIC
and recursive allocations. Let's not lower it ;)

-- 
SUSE Labs, Novell Inc.


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:27               ` Nick Piggin
@ 2005-12-02  2:36                 ` Andrea Arcangeli
  2005-12-02  2:43                 ` Wu Fengguang
  1 sibling, 0 replies; 40+ messages in thread
From: Andrea Arcangeli @ 2005-12-02  2:36 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Wu Fengguang, Andrew Morton, marcelo.tosatti, linux-kernel,
	christoph, riel, a.p.zijlstra, npiggin, magnus.damm

On Fri, Dec 02, 2005 at 01:27:06PM +1100, Nick Piggin wrote:
> min_free_kbytes? This number really isn't anything to do with balancing
> and more to do with the amount of reserve kept for things like GFP_ATOMIC
> and recursive allocations. Let's not lower it ;)

Agreed. Or at the very least that should be discussed in a separate
thread, it has no relation with shrink_cache changes or anything else
related to zone aging IMHO.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:18               ` Andrea Arcangeli
@ 2005-12-02  2:37                 ` Wu Fengguang
  2005-12-02  2:52                   ` Andrea Arcangeli
  2005-12-02  4:45                 ` Andrew Morton
  1 sibling, 1 reply; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  2:37 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: Andrew Morton, marcelo.tosatti, linux-kernel, christoph, riel,
	a.p.zijlstra, npiggin, magnus.damm

On Fri, Dec 02, 2005 at 03:18:11AM +0100, Andrea Arcangeli wrote:
> On Fri, Dec 02, 2005 at 10:04:07AM +0800, Wu Fengguang wrote:
> > btw, maybe it's time to lower the low_mem_reserve.
> > There should be no need to keep ~50M free memory with the balancing patch.
> 
> low_mem_reserve is independent of shrink_cache, because shrink_cache can't
> free unfreeable pinned memory.
> 
> If you want to remove low_mem_reserve you'd better start by adding
> migration of memory across the zones with pte updates etc... That would
> at least mitigate the effect of anonymous memory w/o swap. But
> low_mem_reserve is still needed for all other kinds of allocations like
> kmalloc or pci_alloc_consistent (i.e. not relocatable) etc...

Thanks for the clarification, I was worrying too much ;)

Regards,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:27               ` Nick Piggin
  2005-12-02  2:36                 ` Andrea Arcangeli
@ 2005-12-02  2:43                 ` Wu Fengguang
  1 sibling, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  2:43 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, marcelo.tosatti, linux-kernel, christoph, riel,
	a.p.zijlstra, npiggin, andrea, magnus.damm

On Fri, Dec 02, 2005 at 01:27:06PM +1100, Nick Piggin wrote:
> Wu Fengguang wrote:
> >On Thu, Dec 01, 2005 at 05:30:15PM -0800, Andrew Morton wrote:
> >
> >>>But lines 865-866 together with line 846 make most shrink_zone() 
> >>>invocations
> >>>only run one batch of scan. The numbers become:
> >>
> >>True.  Need to go into a huddle with the changelogs, but I have a feeling
> >>that lines 865 and 866 aren't very important.  What happens if we remove
> >>them?
> >
> >
> >Maybe the answer is: can we accept freeing 15M of memory at one time from
> >a 64G zone?
> >(Or can we simply increase DEF_PRIORITY?)
> >
> 
> 0.02% of the memory? Why not? I think you should be more worried
> about what happens when the priority winds up.

Yes, sounds reasonable.

> I think your proposal to synch reclaim rates between zones is fine
> when all pages have similar properties, but could behave strangely
> when you do have different requirements on different zones.

Thanks.
That requirement might be addressed by disabling the feature on specific zones,
or giving them a shrinker.seeks-like ratio, or something else...

> >btw, maybe it's time to lower the low_mem_reserve.
> >There should be no need to keep ~50M free memory with the balancing patch.
> >
> 
> min_free_kbytes? This number really isn't anything to do with balancing
> and more to do with the amount of reserve kept for things like GFP_ATOMIC
> and recursive allocations. Let's not lower it ;)

ok :)

Regards,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:37                 ` Wu Fengguang
@ 2005-12-02  2:52                   ` Andrea Arcangeli
  0 siblings, 0 replies; 40+ messages in thread
From: Andrea Arcangeli @ 2005-12-02  2:52 UTC (permalink / raw)
  To: Wu Fengguang, Andrew Morton, marcelo.tosatti, linux-kernel,
	christoph, riel, a.p.zijlstra, npiggin, magnus.damm

On Fri, Dec 02, 2005 at 10:37:27AM +0800, Wu Fengguang wrote:
> Thanks for the clarification, I was worrying too much ;)

You're welcome. I'm also not concerned, because the cost is linear with the
amount of memory (and the cost has an upper bound, namely the size of the
lower zones, so it's not like struct page, where a fixed percentage of RAM
is guaranteed to be lost). So it's generally not noticeable at runtime, and
it matters most on big systems (where in turn the cost is higher).

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  1:26         ` Marcelo Tosatti
@ 2005-12-02  3:40           ` Andrew Morton
  0 siblings, 0 replies; 40+ messages in thread
From: Andrew Morton @ 2005-12-02  3:40 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: wfg, linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	magnus.damm

Marcelo Tosatti <marcelo.tosatti@cyclades.com> wrote:
>
>  > Could be.  But what about the bug which I think is there?  That'll cause
>  > overscanning of the DMA zone. 
> 
>  There were about 12Mb of inactive pages on the DMA zone. You're hypothesis 
>  was that there were no LRU pages to be scanned on DMA zone?

No, my hypothesis was that balance_pgdat() had a bug.  Looking at it again,
I don't see it any more..

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  2:18               ` Andrea Arcangeli
  2005-12-02  2:37                 ` Wu Fengguang
@ 2005-12-02  4:45                 ` Andrew Morton
  2005-12-02  6:38                   ` Wu Fengguang
  1 sibling, 1 reply; 40+ messages in thread
From: Andrew Morton @ 2005-12-02  4:45 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: wfg, marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, magnus.damm

Andrea Arcangeli <andrea@suse.de> wrote:
>
> low_mem_reserve

I've a suspicion that the addition of the dma32 zone might have
broken this.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  1:19         ` Wu Fengguang
  2005-12-02  1:30           ` Andrew Morton
@ 2005-12-02  5:49           ` Andrew Morton
  2005-12-02  7:18             ` Wu Fengguang
  2005-12-02 15:13             ` Marcelo Tosatti
  1 sibling, 2 replies; 40+ messages in thread
From: Andrew Morton @ 2005-12-02  5:49 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
>
>      865                         if (sc->nr_to_reclaim <= 0)
>      866                                 break;
>      867                 }
>      868         }
> 
>  Line 843 is the core of the scan balancing logic:
> 
>  priority                12      11      10
> 
>  On each call nr_scan_inactive is increased by:
>  DMA(2k pages)           +1      +2      +3
>  Normal(64k pages)      +17      +33     +65 
> 
>  Round it up to SWAP_CLUSTER_MAX=32, we get (scan batches/accumulate rounds):
>  DMA                     1/32    1/16    2/11
>  Normal                  2/2     2/1     3/1
>  DMA:Normal ratio        1:32    1:32    2:33
> 
>  This keeps the scan rate roughly balanced(i.e. 1:32) in low vm pressure.
> 
>  But lines 865-866 together with line 846 make most shrink_zone() invocations
>  only run one batch of scan.

Yes, this seems to be the problem.  Sigh.  By the time 2.6.8 came around I
just didn't have time to do the amount of testing which any page reclaim
tweak necessitates.



From: Andrew Morton <akpm@osdl.org>

Revert a patch which went into 2.6.8-rc1.  The changelog for that patch was:

  The shrink_zone() logic can, under some circumstances, cause far too many
  pages to be reclaimed.  Say, we're scanning at high priority and suddenly
  hit a large number of reclaimable pages on the LRU.

  Change things so we bale out when SWAP_CLUSTER_MAX pages have been
  reclaimed.

Problem is, this change caused significant imbalance in inter-zone scan
balancing by truncating scans of larger zones.

Suppose, for example, ZONE_HIGHMEM is 10x the size of ZONE_NORMAL.  The zone
balancing algorithm would require that if we're scanning 100 pages of
ZONE_HIGHMEM, we should scan 10 pages of ZONE_NORMAL.  But this logic will
cause the scanning of ZONE_HIGHMEM to bale out after only 32 pages are
reclaimed.  Thus effectively causing smaller zones to be scanned relatively
harder than large ones.

Now I need to remember what the workload was which caused me to write this
patch originally, then fix it up in a different way...

Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/vmscan.c |    8 --------
 1 files changed, 8 deletions(-)

diff -puN mm/vmscan.c~vmscan-balancing-fix mm/vmscan.c
--- devel/mm/vmscan.c~vmscan-balancing-fix	2005-12-01 21:20:44.000000000 -0800
+++ devel-akpm/mm/vmscan.c	2005-12-01 21:21:38.000000000 -0800
@@ -63,9 +63,6 @@ struct scan_control {
 
 	unsigned long nr_mapped;	/* From page_state */
 
-	/* How many pages shrink_cache() should reclaim */
-	int nr_to_reclaim;
-
 	/* Ask shrink_caches, or shrink_zone to scan at this priority */
 	unsigned int priority;
 
@@ -901,7 +898,6 @@ static void shrink_cache(struct zone *zo
 		if (current_is_kswapd())
 			mod_page_state(kswapd_steal, nr_freed);
 		mod_page_state_zone(zone, pgsteal, nr_freed);
-		sc->nr_to_reclaim -= nr_freed;
 
 		spin_lock_irq(&zone->lru_lock);
 		/*
@@ -1101,8 +1097,6 @@ shrink_zone(struct zone *zone, struct sc
 	else
 		nr_inactive = 0;
 
-	sc->nr_to_reclaim = sc->swap_cluster_max;
-
 	while (nr_active || nr_inactive) {
 		if (nr_active) {
 			sc->nr_to_scan = min(nr_active,
@@ -1116,8 +1110,6 @@ shrink_zone(struct zone *zone, struct sc
 					(unsigned long)sc->swap_cluster_max);
 			nr_inactive -= sc->nr_to_scan;
 			shrink_cache(zone, sc);
-			if (sc->nr_to_reclaim <= 0)
-				break;
 		}
 	}
 
_


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  4:45                 ` Andrew Morton
@ 2005-12-02  6:38                   ` Wu Fengguang
  0 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  6:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, marcelo.tosatti, linux-kernel, christoph, riel,
	a.p.zijlstra, npiggin, magnus.damm

On Thu, Dec 01, 2005 at 08:45:49PM -0800, Andrew Morton wrote:
> Andrea Arcangeli <andrea@suse.de> wrote:
> >
> > low_mem_reserve
> 
> I've a suspicion that the addition of the dma32 zone might have
> broken this.

And there is a danger when the last zone is not the largest zone; that
breaks my assumption. Either we should remove these two lines in
shrink_zone():

>      865                         if (sc->nr_to_reclaim <= 0)
>      866                                 break;

Or explicitly add more weight to the balancing efforts with
mm-add-weight-to-reclaim-for-aging.patch below.

Thanks,
Wu

Subject: mm: add more weight to reclaim for aging
Cc: Marcelo Tosatti <marcelo.tosatti@cyclades.com>, Magnus Damm <magnus.damm@gmail.com>
Cc: Nick Piggin <npiggin@suse.de>, Andrea Arcangeli <andrea@suse.de>

Let HighMem be the last zone; in normal cases we get:
- the HighMem zone is the largest zone
- the HighMem zone is mainly reclaimed for the watermark, while the other
  zones are almost always reclaimed for aging
- while HighMem is reclaimed N times for the watermark, the other zones have
  N+1 chances to reclaim for aging
- shrink_zone() only scans one chunk of SWAP_CLUSTER_MAX pages to get
  SWAP_CLUSTER_MAX free pages

In the above situation, the force of balancing will win out over the force of
unbalancing. But if HighMem (or whatever the last zone is) is not the largest
zone, the other, larger zones can no longer catch up.

This patch multiplies the force of balancing by 8, which should be more than
enough.  It just prevents shrink_zone() from returning prematurely, and will
not cause the DMA zone to be scanned by more than SWAP_CLUSTER_MAX at one
time in normal cases.

Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1453,12 +1453,14 @@ loop_again:
 							SC_RECLAIM_FROM_KSWAPD|
 							SC_RECLAIM_FOR_WATERMARK);
 					all_zones_ok = 0;
+					sc.nr_to_reclaim = SWAP_CLUSTER_MAX;
 				} else if (zone == youngest_zone &&
 						pages_more_aged(oldest_zone,
 								youngest_zone)) {
 					debug_reclaim(&sc,
 							SC_RECLAIM_FROM_KSWAPD|
 							SC_RECLAIM_FOR_AGING);
+					sc.nr_to_reclaim = SWAP_CLUSTER_MAX * 8;
 				} else
 					continue;
 			}

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  5:49           ` Andrew Morton
@ 2005-12-02  7:18             ` Wu Fengguang
  2005-12-02  7:27               ` Andrew Morton
  2005-12-02 15:13             ` Marcelo Tosatti
  1 sibling, 1 reply; 40+ messages in thread
From: Wu Fengguang @ 2005-12-02  7:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

On Thu, Dec 01, 2005 at 09:49:31PM -0800, Andrew Morton wrote:
> From: Andrew Morton <akpm@osdl.org>
> 
> Revert a patch which went into 2.6.8-rc1.  The changelog for that patch was:
> 
>   The shrink_zone() logic can, under some circumstances, cause far too many
>   pages to be reclaimed.  Say, we're scanning at high priority and suddenly
>   hit a large number of reclaimable pages on the LRU.
> 
>   Change things so we bale out when SWAP_CLUSTER_MAX pages have been
>   reclaimed.
> 
> Problem is, this change caused significant imbalance in inter-zone scan
> balancing by truncating scans of larger zones.
> 
> Suppose, for example, ZONE_HIGHMEM is 10x the size of ZONE_NORMAL.  The zone
> balancing algorithm would require that if we're scanning 100 pages of
> ZONE_HIGHMEM, we should scan 10 pages of ZONE_NORMAL.  But this logic will
> cause the scanning of ZONE_HIGHMEM to bale out after only 32 pages are
> reclaimed.  Thus effectively causing smaller zones to be scanned relatively
> harder than large ones.
> 
> Now I need to remember what the workload was which caused me to write this
> patch originally, then fix it up in a different way...

Maybe it's a situation like this:

__|____|________|________________|________________________________|________________________________________________________________|________________________________________________________________________________________________________________________________|________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        _: pinned chunk
        -: reclaimable chunk
        |: shrink_zone() invocation
        
First we run into a large range of pinned chunks, which lowers the scan
priority.  Then we hit plenty of reclaimable chunks, and boom...

Thanks,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  7:18             ` Wu Fengguang
@ 2005-12-02  7:27               ` Andrew Morton
  0 siblings, 0 replies; 40+ messages in thread
From: Andrew Morton @ 2005-12-02  7:27 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: marcelo.tosatti, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
>
>  First we run into a large range of pinned chunks, which lowers the scan
>  priority.  Then we hit plenty of reclaimable chunks, and boom...

It doesn't have to be that complex - the unreclaimable pages could be
referenced, or under writeback or even simply dirty.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02  5:49           ` Andrew Morton
  2005-12-02  7:18             ` Wu Fengguang
@ 2005-12-02 15:13             ` Marcelo Tosatti
  2005-12-02 21:39               ` Andrew Morton
  1 sibling, 1 reply; 40+ messages in thread
From: Marcelo Tosatti @ 2005-12-02 15:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Wu Fengguang, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

On Thu, Dec 01, 2005 at 09:49:31PM -0800, Andrew Morton wrote:
> Wu Fengguang <wfg@mail.ustc.edu.cn> wrote:
> >
> >      865                         if (sc->nr_to_reclaim <= 0)
> >      866                                 break;
> >      867                 }
> >      868         }
> > 
> >  Line 843 is the core of the scan balancing logic:
> > 
> >  priority                12      11      10
> > 
> >  On each call nr_scan_inactive is increased by:
> >  DMA(2k pages)           +1      +2      +3
> >  Normal(64k pages)      +17      +33     +65 
> > 
> >  Round it up to SWAP_CLUSTER_MAX=32, we get (scan batches/accumulate rounds):
> >  DMA                     1/32    1/16    2/11
> >  Normal                  2/2     2/1     3/1
> >  DMA:Normal ratio        1:32    1:32    2:33
> > 
> >  This keeps the scan rate roughly balanced(i.e. 1:32) in low vm pressure.
> > 
> >  But lines 865-866 together with line 846 make most shrink_zone() invocations
> >  only run one batch of scan.
> 
> Yes, this seems to be the problem.  Sigh.  By the time 2.6.8 came around I
> just didn't have time to do the amount of testing which any page reclaim
> tweak necessitates.

Hi Andrew,

It all makes sense to me (Wu's description of the problem and your patch),
but it's still no good with respect to fair scanning. Moreover, the patch
hurts interactivity _badly_, and I'm not sure why (ssh login into the box
with the FFSB testcase running takes more than a minute, while vanilla takes
a few dozen seconds).

An interesting part of "diff -u 2614-vanilla.vmstat 2614-akpm.vmstat" follows
(the snapshots were not retrieved at the exact same point in the benchmark
run, but that should not matter much):

-slabs_scanned 37632
-kswapd_steal 731859
-kswapd_inodesteal 1363
-pageoutrun 26573
-allocstall 636
-pgrotated 1898
+slabs_scanned 2688
+kswapd_steal 502946
+kswapd_inodesteal 1
+pageoutrun 10612
+allocstall 90
+pgrotated 68

Note how direct reclaim (and slabs_scanned) are hugely affected. 

Normal: 114688kB
DMA: 16384kB

Normal/DMA ratio = 114688 / 16384 = 7.000

******* 2.6.14 vanilla ********

* kswapd scanning rates
pgscan_kswapd_normal 450483
pgscan_kswapd_dma 84645
pgscan_kswapd Normal/DMA = (450483 / 84645) = 5.322

* direct scanning rates
pgscan_direct_normal 23826
pgscan_direct_dma 4224
pgscan_direct Normal/DMA = (23826 / 4224) = 5.640

* global (kswapd+direct) scanning rates
pgscan_normal = (450483 + 23826) = 474309
pgscan_dma = (84645 + 4224) = 88869
pgscan Normal/DMA = (474309 / 88869) = 5.337

pgalloc_normal = 794293
pgalloc_dma = 123805
pgalloc_normal_dma_ratio = (794293/123805) = 6.415

******* 2.6.14 akpm-no-nr_to_reclaim ********

* kswapd scanning rates
pgscan_kswapd_normal 441936
pgscan_kswapd_dma 80520
pgscan_kswapd Normal/DMA = (441936 / 80520) = 5.488

* direct scanning rates
pgscan_direct_normal 7392
pgscan_direct_dma 1188
pgscan_direct Normal/DMA = (7392/1188) = 6.222

* global (kswapd+direct) scanning rates
pgscan_normal = (441936 + 7392) = 449328
pgscan_dma = (80520 + 1188) = 81708
pgscan Normal/DMA = (449328 / 81708) = 5.499

pgalloc_normal = 559994
pgalloc_dma = 84883
pgalloc_normal_dma_ratio = (559994 / 84883) = 6.597

****** 2.6.14 isolate relative ***** 

* kswapd scanning rates
pgscan_kswapd_normal 664883
pgscan_kswapd_dma 82845
pgscan_kswapd Normal/DMA (664883/82845) = 8.025

* direct scanning rates
pgscan_direct_normal 13485
pgscan_direct_dma 1745
pgscan_direct Normal/DMA = (13485/1745) = 7.727

* global (kswapd+direct) scanning rates
pgscan_normal = (664883 + 13485) = 678368
pgscan_dma = (82845 + 1745) = 84590
pgscan Normal/DMA = (678368 / 84590) = 8.019

pgalloc_normal 699927
pgalloc_dma 66313
pgalloc_normal_dma_ratio = (699927/66313) = 10.554


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02 15:13             ` Marcelo Tosatti
@ 2005-12-02 21:39               ` Andrew Morton
  2005-12-03  0:26                 ` Marcelo Tosatti
  0 siblings, 1 reply; 40+ messages in thread
From: Andrew Morton @ 2005-12-02 21:39 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: wfg, linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	magnus.damm

Marcelo Tosatti <marcelo.tosatti@cyclades.com> wrote:
>
> 
> It all makes sense to me (Wu's description of the problem and your patch), 
> but still no good with reference to fair scanning.

Not so.  On a 4G x86 box doing a simple 8GB write this patch took the
highmem/normal scanning ratio from 0.7 to 3.5.  On that setup the highmem
zone has 3.6x as many pages as the normal zone, so it's bang-on-target.

There's not a lot of point in jumping straight into the complex stresstests
without having first tested the simple stuff.

> Moreover the patch hurts 
> interactivity _badly_, not sure why (ssh into the box with FFSB testcase 
> takes more than one minute to login, while vanilla takes few dozens of seconds). 

Well, we know that the revert reintroduces an overscanning problem.

How are you invoking FFSB?  Exactly?  On what sort of machine, with how
much memory?

> Follows an interesting part of "diff -u 2614-vanilla.vmstat 2614-akpm.vmstat"
> (they were not retrieve at the exact same point in the benchmark run, but 
> that should not matter much):
> 
> -slabs_scanned 37632
> -kswapd_steal 731859
> -kswapd_inodesteal 1363
> -pageoutrun 26573
> -allocstall 636
> -pgrotated 1898
> +slabs_scanned 2688
> +kswapd_steal 502946
> +kswapd_inodesteal 1
> +pageoutrun 10612
> +allocstall 90
> +pgrotated 68
>
> Note how direct reclaim (and slabs_scanned) are hugely affected. 

hm.  allocstall is much lower and pgrotated has improved and direct reclaim
has improved.  All of which would indicate that kswapd is doing more work. 
Yet kswapd reclaimed fewer pages.  It's hard to say what's going on, as these
numbers came from different stages of the test.

> 
> Normal: 114688kB
> DMA: 16384kB
> 
> Normal/DMA ratio = 114688 / 16384 = 7.000
>
> pgscan_kswapd Normal/DMA = (450483 / 84645) = 5.322
> pgscan_direct Normal/DMA = (23826 / 4224) = 5.640
> pgscan Normal/DMA = (474309 / 88869) = 5.337
> pgscan_kswapd Normal/DMA = (441936 / 80520) = 5.488
> pgscan_direct Normal/DMA = (7392/1188) = 6.222
> pgscan Normal/DMA = (449328 / 81708) = 5.499
> pgalloc_normal_dma_ratio = (559994 / 84883) = 6.597
> pgscan_kswapd Normal/DMA (664883/82845) = 8.025
> pgscan_direct Normal/DMA = (13485/1745) = 7.727
> pgscan Normal/DMA = (678368 / 84590) = 8.019
> pgalloc_normal_dma_ratio = (699927/66313) = 10.554

All of these look close enough to me.  10-20% over- or under-scanning of
the teeny DMA zone doesn't seem very important.  Getting normal-vs-highmem
right is more important.

It's hard to say what effect the watermark thingies have on all of this. 
I'd sugget that you start out with much less complex tests and see if `echo
10000 10000 10000 > /proc/sys/vm/lowmem_reserve_ratio' changes anything. 
(I have that in my rc.local - the thing is a daft waste of memory).

I'd be more concerned about the interactivity thing, although it sounds
like the machine is so overloaded with this test that it'd be fairly
pointless to try to tune that workload first.  It's more important to tune
the system for more typical heavy loads.

Also, the choice of IO scheduler matters.  Which are you using?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-02 21:39               ` Andrew Morton
@ 2005-12-03  0:26                 ` Marcelo Tosatti
  2005-12-04  6:06                   ` Wu Fengguang
  0 siblings, 1 reply; 40+ messages in thread
From: Marcelo Tosatti @ 2005-12-03  0:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: wfg, linux-kernel, christoph, riel, a.p.zijlstra, npiggin, andrea,
	magnus.damm

On Fri, Dec 02, 2005 at 01:39:17PM -0800, Andrew Morton wrote:
> Marcelo Tosatti <marcelo.tosatti@cyclades.com> wrote:
> >
> > 
> > It all makes sense to me (Wu's description of the problem and your patch), 
> > but still no good with reference to fair scanning.
> 
> Not so.  On a 4G x86 box doing a simple 8GB write this patch took the
> highmem/normal scanning ratio from 0.7 to 3.5.  On that setup the highmem
> zone has 3.6x as many pages as the normal zone, so it's bang-on-target.

Humpf!  What are the pgalloc dma/normal/highmem numbers under such a test?

Does this machine need bounce buffers for disk I/O?

> There's not a lot of point in jumping straight into the complex stresstests
> without having first tested the simple stuff.

It's not a really complex stress test, though yours is simpler. There are 10
threads operating on 20 files. You can reproduce the load using the
following FFSB profile (I remake the filesystem each time; the results are
pretty stable):

num_filesystems=1
num_threadgroups=1
directio=0
time=300

[filesystem0]
location=/mnt/hda4/
num_files=20
num_dirs=10
max_filesize=91534338
min_filesize=65535
[end0]

[threadgroup0]
num_threads=10
write_size=2816
write_blocksize=4096
read_size=2816
read_blocksize=4096
create_weight=100
write_weight=30
read_weight=100
[end0]


> > Moreover the patch hurts 
> > interactivity _badly_, not sure why (ssh into the box with FFSB testcase 
> > takes more than one minute to login, while vanilla takes few dozens of seconds). 
> 
> Well, we know that the revert reintroduces an overscanning problem.

Can you remember more precisely the testcase for which you added the
"truncate reclaim" logic?

> How are you invoking FFSB?  Exactly?  On what sort of machine, with how
> much memory?

It's a single-processor 1GHz+ Pentium III booted with mem=128M, with a 4-5
year old IDE disk.

> > Follows an interesting part of "diff -u 2614-vanilla.vmstat 2614-akpm.vmstat"
> > (they were not retrieve at the exact same point in the benchmark run, but 
> > that should not matter much):
> > 
> > -slabs_scanned 37632
> > -kswapd_steal 731859
> > -kswapd_inodesteal 1363
> > -pageoutrun 26573
> > -allocstall 636
> > -pgrotated 1898
> > +slabs_scanned 2688
> > +kswapd_steal 502946
> > +kswapd_inodesteal 1
> > +pageoutrun 10612
> > +allocstall 90
> > +pgrotated 68
> >
> > Note how direct reclaim (and slabs_scanned) are hugely affected. 
> 
> hm.  allocstall is much lower and pgrotated has improved and direct reclaim
> has improved.  All of which would indicate that kswapd is doing more work. 
> Yet kswapd reclaimed less pages.  It's hard to say what's going on as these
> numbers came from different stages of the test.

I have a feeling they came from roughly equivalent stages (FFSB is a cyclic
test; there aren't many "phases" after the initial creation of the files).

Feel free to reproduce the testcase, you simply need the FFSB profile 
above and mem=128M.

It seems very fragile (Wu's patches attempt to address that) in general: you
tweak it here and watch it go nuts there.

> > Normal: 114688kB
> > DMA: 16384kB
> > 
> > Normal/DMA ratio = 114688 / 16384 = 7.000
> >
> > pgscan_kswapd Normal/DMA = (450483 / 88869) = 5.069
> > pgscan_direct Normal/DMA = (23826 / 4224) = 5.640
> > pgscan Normal/DMA = (474309 / 88869) = 5.337
> > pgscan_kswapd Normal/DMA = (441936 / 80520) = 5.488
> > pgscan_direct Normal/DMA = (7392/1188) = 6.222
> > pgscan Normal/DMA = (449328 / 81708) = 5.499
> > pgalloc_normal_dma_ratio = (559994 / 84883) = 6.597
> > pgscan_kswapd Normal/DMA (664883/82845) = 8.025
> > pgscan_direct Normal/DMA = (13485/1745) = 7.727
> > pgscan Normal/DMA = (678368 / 84590) = 8.019
> > pgalloc_normal_dma_ratio = (699927/66313) = 10.554
> 
> All of these look close enough to me.  10-20% over- or under-scanning of
> the teeny DMA zone doesn't seem very important.

Hopefully yes. The lowmem_reserve[] logic is there to _avoid_ over-allocation
(over-scanning) of the DMA zone by GFP_NORMAL allocations, isn't it?

Note, there should be no DMA-limited hardware on this box (I'm using PIO for
the IDE disk). BTW, why do you need lowmem_reserve for the DMA zone if you
don't have 16MB-capped ISA devices on your system?

> Getting normal-vs-highmem right is more important.
> 
> It's hard to say what effect the watermark thingies have on all of this. 
> I'd sugget that you start out with much less complex tests and see if `echo
> 10000 10000 10000 > /proc/sys/vm/lowmem_reserve_ratio' changes anything. 
> (I have that in my rc.local - the thing is a daft waste of memory).
> 
> I'd be more concerned about the interactivity thing, although it sounds
> like the machine is so overloaded with this test that it'd be fairly
> pointless to try to tune that workload first.  It's more important to tune
> the system for more typical heavy loads.

What made me notice it was the huge interactivity difference between
vanilla and your patch; again, I'm not really sure about its root cause.

> Also, the choice of IO scheduler matters.  Which are you using?

The default for 2.6.14. That's AS, right?

I'll see if I can do more tests next week.

Best wishes.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 02/12] mm: supporting variables and functions for balanced zone aging
  2005-12-03  0:26                 ` Marcelo Tosatti
@ 2005-12-04  6:06                   ` Wu Fengguang
  0 siblings, 0 replies; 40+ messages in thread
From: Wu Fengguang @ 2005-12-04  6:06 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Andrew Morton, linux-kernel, christoph, riel, a.p.zijlstra,
	npiggin, andrea, magnus.damm

On Fri, Dec 02, 2005 at 10:26:14PM -0200, Marcelo Tosatti wrote:
> It seems very fragile (Wu's patches attempt to address that) in general: you
> tweak it here and watch it go nuts there.

The patch still has problems; it can lead to more page allocations on
remote nodes.

For NUMA systems, basically HPC applications want locality, while file servers
want cache consistency. Worse still, the two types of applications can coexist
in a single system. The general solution may be classifying pages into two types:

local  pages: mostly local accessed, and low latency is first priority
global pages: for consistent file caching

Reclaims from global pages should be balanced globally to make a seamless
single global cache. We can allocate special zones to hold the global pages,
and keep the reclaims from them in sync. Nick, are you working on this?
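
The "reclaims in sync" idea above can be sketched as a toy model (illustrative
only; the zone names and numbers are hypothetical, and the real patches work in
terms of incremental scan counters, not a single pass): each zone holding
global pages is scanned in proportion to its size, so every cached page ages at
the same global rate regardless of which node it sits on:

```python
# Toy model of globally balanced aging: split one reclaim pass so that
# scanned/size is equal across all zones holding global pages.
zones = {"node0/Normal": 28672, "node1/Normal": 28672, "node0/DMA": 4096}

def balanced_scan(zones, total_to_scan):
    """Return per-zone scan counts proportional to zone size."""
    total_size = sum(zones.values())
    return {name: total_to_scan * size // total_size
            for name, size in zones.items()}

plan = balanced_scan(zones, total_to_scan=3000)
# Every zone gets the same scanned/size ratio (~4.9% here), which is
# exactly what keeps per-page aging rates in sync across nodes.
for name, n in plan.items():
    print(f"{name}: scan {n} of {zones[name]} pages")
```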

Thanks,
Wu

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2005-12-04  5:56 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-12-01 10:18 [PATCH 00/12] Balancing the scan rate of major caches Wu Fengguang
2005-12-01 10:18 ` [PATCH 01/12] vm: kswapd incmin Wu Fengguang
2005-12-01 10:33   ` Andrew Morton
2005-12-01 11:40     ` Wu Fengguang
2005-12-01 10:18 ` [PATCH 02/12] mm: supporting variables and functions for balanced zone aging Wu Fengguang
2005-12-01 10:37   ` Andrew Morton
2005-12-01 12:11     ` Wu Fengguang
2005-12-01 22:28     ` Marcelo Tosatti
2005-12-01 23:03       ` Andrew Morton
2005-12-02  1:19         ` Wu Fengguang
2005-12-02  1:30           ` Andrew Morton
2005-12-02  2:04             ` Wu Fengguang
2005-12-02  2:18               ` Andrea Arcangeli
2005-12-02  2:37                 ` Wu Fengguang
2005-12-02  2:52                   ` Andrea Arcangeli
2005-12-02  4:45                 ` Andrew Morton
2005-12-02  6:38                   ` Wu Fengguang
2005-12-02  2:27               ` Nick Piggin
2005-12-02  2:36                 ` Andrea Arcangeli
2005-12-02  2:43                 ` Wu Fengguang
2005-12-02  5:49           ` Andrew Morton
2005-12-02  7:18             ` Wu Fengguang
2005-12-02  7:27               ` Andrew Morton
2005-12-02 15:13             ` Marcelo Tosatti
2005-12-02 21:39               ` Andrew Morton
2005-12-03  0:26                 ` Marcelo Tosatti
2005-12-04  6:06                   ` Wu Fengguang
2005-12-02  1:26         ` Marcelo Tosatti
2005-12-02  3:40           ` Andrew Morton
2005-12-01 10:18 ` [PATCH 03/12] mm: balance zone aging in direct reclaim path Wu Fengguang
2005-12-01 10:18 ` [PATCH 04/12] mm: balance zone aging in kswapd " Wu Fengguang
2005-12-01 10:18 ` [PATCH 05/12] mm: balance slab aging Wu Fengguang
2005-12-01 10:18 ` [PATCH 06/12] mm: balance active/inactive list scan rates Wu Fengguang
2005-12-01 11:39   ` Peter Zijlstra
2005-12-01 10:18 ` [PATCH 07/12] mm: remove unnecessary variable and loop Wu Fengguang
2005-12-01 10:18 ` [PATCH 08/12] mm: remove swap_cluster_max from scan_control Wu Fengguang
2005-12-01 10:18 ` [PATCH 09/12] mm: accumulate sc.nr_scanned/sc.nr_reclaimed Wu Fengguang
2005-12-01 10:18 ` [PATCH 10/12] mm: merge sc.may_writepage and sc.may_swap into sc.flags Wu Fengguang
2005-12-01 10:18 ` [PATCH 11/12] mm: add page reclaim debug traces Wu Fengguang
2005-12-01 10:18 ` [PATCH 12/12] mm: fix minor scan count bugs Wu Fengguang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox