* [PATCH 00/13] Balancing the scan rate of major caches V2
@ 2005-12-06 13:56 Wu Fengguang
2005-12-06 13:56 ` [PATCH 01/13] mm: restore sc.nr_to_reclaim Wu Fengguang
` (12 more replies)
0 siblings, 13 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra
Changes since V1:
- patches broken up more cleanly
- replace pages_more_aged with age_ge/age_gt
- expanded shrink_slab interface
- rewrite kswapd rebalance logic to be simple and robust
This patch balances the aging rates of active_list/inactive_list/slab.
It started out as an effort to enable the adaptive read-ahead to handle large
numbers of concurrent readers. Then I found it involves much more, and
deserves a standalone patchset to address the balancing problem as a whole.
The whole picture of balancing:
- In each node, inactive_list scan rates are synced with each other
This is done in the direct/kswapd reclaim paths.
- In each zone, active_list scan rate always follows that of inactive_list
- Slab cache scan rates always follow that of the current node.
Since shrink_slab() can be called from different CPUs, this effectively
syncs slab cache scan rates with that of the most scanned node.
The patches can be grouped as follows:
- balancing work
mm-revert-vmscan-balancing-fix.patch
mm-simplify-kswapd-reclaim-code.patch
mm-balance-zone-aging-supporting-facilities.patch
mm-balance-zone-aging-in-direct-reclaim.patch
mm-balance-zone-aging-in-kswapd-reclaim.patch
mm-balance-slab-aging.patch
mm-balance-active-inactive-list-aging.patch
- pure code cleanups
mm-remove-unnecessary-variable-and-loop.patch
mm-remove-swap-cluster-max-from-scan-control.patch
mm-accumulate-nr-scanned-reclaimed-in-scan-control.patch
mm-turn-bool-variables-into-flags-in-scan-control.patch
- debug code
mm-page-reclaim-debug-traces.patch
- a minor fix
mm-scan-accounting-fix.patch
Thanks,
Wu Fengguang
--
Dept. Automation University of Science and Technology of China
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 01/13] mm: restore sc.nr_to_reclaim
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 02/13] mm: simplify kswapd reclaim code Wu Fengguang
` (11 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-revert-vmscan-balancing-fix.patch --]
[-- Type: text/plain, Size: 1331 bytes --]
Keep it until the real fine-grained scan patch is ready :)
The following patches really need small scan quantities, at least in
normal situations.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 8 ++++++++
1 files changed, 8 insertions(+)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -63,6 +63,9 @@ struct scan_control {
unsigned long nr_mapped; /* From page_state */
+ /* How many pages shrink_cache() should reclaim */
+ int nr_to_reclaim;
+
/* Ask shrink_caches, or shrink_zone to scan at this priority */
unsigned int priority;
@@ -898,6 +901,7 @@ static void shrink_cache(struct zone *zo
if (current_is_kswapd())
mod_page_state(kswapd_steal, nr_freed);
mod_page_state_zone(zone, pgsteal, nr_freed);
+ sc->nr_to_reclaim -= nr_freed;
spin_lock_irq(&zone->lru_lock);
/*
@@ -1097,6 +1101,8 @@ shrink_zone(struct zone *zone, struct sc
else
nr_inactive = 0;
+ sc->nr_to_reclaim = sc->swap_cluster_max;
+
while (nr_active || nr_inactive) {
if (nr_active) {
sc->nr_to_scan = min(nr_active,
@@ -1110,6 +1116,8 @@ shrink_zone(struct zone *zone, struct sc
(unsigned long)sc->swap_cluster_max);
nr_inactive -= sc->nr_to_scan;
shrink_cache(zone, sc);
+ if (sc->nr_to_reclaim <= 0)
+ break;
}
}
--
* [PATCH 02/13] mm: simplify kswapd reclaim code
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
2005-12-06 13:56 ` [PATCH 01/13] mm: restore sc.nr_to_reclaim Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 03/13] mm: supporting variables and functions for balanced zone aging Wu Fengguang
` (10 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Nick Piggin, Wu Fengguang
[-- Attachment #1: mm-simplify-kswapd-reclaim-code.patch --]
[-- Type: text/plain, Size: 4423 bytes --]
Simplify the kswapd reclaim code for the new balancing logic.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 100 ++++++++++++++++++++----------------------------------------
1 files changed, 34 insertions(+), 66 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -1309,47 +1309,18 @@ loop_again:
}
for (priority = DEF_PRIORITY; priority >= 0; priority--) {
- int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */
unsigned long lru_pages = 0;
+ all_zones_ok = 1;
+ sc.nr_scanned = 0;
+ sc.nr_reclaimed = 0;
+ sc.priority = priority;
+ sc.swap_cluster_max = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
+
/* The swap token gets in the way of swapout... */
if (!priority)
disable_swap_token();
- all_zones_ok = 1;
-
- if (nr_pages == 0) {
- /*
- * Scan in the highmem->dma direction for the highest
- * zone which needs scanning
- */
- for (i = pgdat->nr_zones - 1; i >= 0; i--) {
- struct zone *zone = pgdat->node_zones + i;
-
- if (!populated_zone(zone))
- continue;
-
- if (zone->all_unreclaimable &&
- priority != DEF_PRIORITY)
- continue;
-
- if (!zone_watermark_ok(zone, order,
- zone->pages_high, 0, 0)) {
- end_zone = i;
- goto scan;
- }
- }
- goto out;
- } else {
- end_zone = pgdat->nr_zones - 1;
- }
-scan:
- for (i = 0; i <= end_zone; i++) {
- struct zone *zone = pgdat->node_zones + i;
-
- lru_pages += zone->nr_active + zone->nr_inactive;
- }
-
/*
* Now scan the zone in the dma->highmem direction, stopping
* at the last zone which needs scanning.
@@ -1359,51 +1330,49 @@ scan:
* pages behind kswapd's direction of progress, which would
* cause too much scanning of the lower zones.
*/
- for (i = 0; i <= end_zone; i++) {
+ for (i = 0; i < pgdat->nr_zones; i++) {
struct zone *zone = pgdat->node_zones + i;
- int nr_slab;
if (!populated_zone(zone))
continue;
+ if (nr_pages == 0) { /* Not software suspend */
+ if (zone_watermark_ok(zone, order,
+ zone->pages_high, 0, 0))
+ continue;
+
+ all_zones_ok = 0;
+ }
+
if (zone->all_unreclaimable && priority != DEF_PRIORITY)
continue;
- if (nr_pages == 0) { /* Not software suspend */
- if (!zone_watermark_ok(zone, order,
- zone->pages_high, end_zone, 0))
- all_zones_ok = 0;
- }
zone->temp_priority = priority;
if (zone->prev_priority > priority)
zone->prev_priority = priority;
- sc.nr_scanned = 0;
- sc.nr_reclaimed = 0;
- sc.priority = priority;
- sc.swap_cluster_max = nr_pages? nr_pages : SWAP_CLUSTER_MAX;
- atomic_inc(&zone->reclaim_in_progress);
+ lru_pages += zone->nr_active + zone->nr_inactive;
+
shrink_zone(zone, &sc);
- atomic_dec(&zone->reclaim_in_progress);
- reclaim_state->reclaimed_slab = 0;
- nr_slab = shrink_slab(sc.nr_scanned, GFP_KERNEL,
- lru_pages);
- sc.nr_reclaimed += reclaim_state->reclaimed_slab;
- total_reclaimed += sc.nr_reclaimed;
- total_scanned += sc.nr_scanned;
- if (zone->all_unreclaimable)
- continue;
- if (nr_slab == 0 && zone->pages_scanned >=
+
+ if (zone->pages_scanned >=
(zone->nr_active + zone->nr_inactive) * 4)
zone->all_unreclaimable = 1;
- /*
- * If we've done a decent amount of scanning and
- * the reclaim ratio is low, start doing writepage
- * even in laptop mode
- */
- if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
- total_scanned > total_reclaimed+total_reclaimed/2)
- sc.may_writepage = 1;
}
+ reclaim_state->reclaimed_slab = 0;
+ shrink_slab(sc.nr_scanned, GFP_KERNEL, lru_pages);
+ sc.nr_reclaimed += reclaim_state->reclaimed_slab;
+ total_reclaimed += sc.nr_reclaimed;
+ total_scanned += sc.nr_scanned;
+
+ /*
+ * If we've done a decent amount of scanning and
+ * the reclaim ratio is low, start doing writepage
+ * even in laptop mode
+ */
+ if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
+ total_scanned > total_reclaimed+total_reclaimed/2)
+ sc.may_writepage = 1;
+
if (nr_pages && to_free > total_reclaimed)
continue; /* swsusp: need to do more work */
if (all_zones_ok)
@@ -1424,7 +1393,6 @@ scan:
if ((total_reclaimed >= SWAP_CLUSTER_MAX) && (!nr_pages))
break;
}
-out:
for (i = 0; i < pgdat->nr_zones; i++) {
struct zone *zone = pgdat->node_zones + i;
--
* [PATCH 03/13] mm: supporting variables and functions for balanced zone aging
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
2005-12-06 13:56 ` [PATCH 01/13] mm: restore sc.nr_to_reclaim Wu Fengguang
2005-12-06 13:56 ` [PATCH 02/13] mm: simplify kswapd reclaim code Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 04/13] mm: balance zone aging in direct reclaim path Wu Fengguang
` (9 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-balance-zone-aging-supporting-facilities.patch --]
[-- Type: text/plain, Size: 5485 bytes --]
The zone aging rates are currently imbalanced; the gap can be as large as 3
times, which can severely damage read-ahead requests and shorten their
effective lifetime.
This patch adds three variables in struct zone
- aging_total
- aging_milestone
- page_age
to keep track of page aging rate, and keep it in sync on page reclaim time.
The aging_total is just a per-zone counterpart to the per-cpu
pgscan_{kswapd,direct}_{zone name}. But it is not directly comparable between
zones, so aging_milestone/page_age are maintained based on aging_total.
The page_age is a normalized value that can be directly compared between zones
with the helper macros age_ge/age_gt. The goal of the balancing logic is to
keep this normalized value in sync between zones.
One can check the balanced aging progress by running:
tar c / | cat > /dev/null &
watch -n1 'grep "age " /proc/zoneinfo'
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
include/linux/mmzone.h | 14 ++++++++++++++
mm/page_alloc.c | 11 +++++++++++
mm/vmscan.c | 43 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 68 insertions(+)
--- linux-2.6.15-rc5-mm1.orig/include/linux/mmzone.h
+++ linux-2.6.15-rc5-mm1/include/linux/mmzone.h
@@ -149,6 +149,20 @@ struct zone {
unsigned long pages_scanned; /* since last reclaim */
int all_unreclaimable; /* All pages pinned */
+ /* Fields for balanced page aging:
+ * aging_total - The accumulated number of activities that may
+ * cause page aging, that is, make some pages closer
+ * to the tail of inactive_list.
+ * aging_milestone - A snapshot of total_scan every time a full
+ * inactive_list of pages become aged.
+ * page_age - A normalized value showing the percent of pages
+ * have been aged. It is compared between zones to
+ * balance the rate of page aging.
+ */
+ unsigned long aging_total;
+ unsigned long aging_milestone;
+ unsigned long page_age;
+
/*
* Does the allocator try to reclaim pages from the zone as soon
* as it fails a watermark_ok() in __alloc_pages?
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -123,6 +123,48 @@ static long total_memory;
static LIST_HEAD(shrinker_list);
static DECLARE_RWSEM(shrinker_rwsem);
+#ifdef CONFIG_HIGHMEM64G
+#define PAGE_AGE_SHIFT 8
+#elif BITS_PER_LONG == 32
+#define PAGE_AGE_SHIFT 12
+#elif BITS_PER_LONG == 64
+#define PAGE_AGE_SHIFT 20
+#else
+#error unknown BITS_PER_LONG
+#endif
+#define PAGE_AGE_SIZE (1 << PAGE_AGE_SHIFT)
+#define PAGE_AGE_MASK (PAGE_AGE_SIZE - 1)
+
+/*
+ * The simplified code is:
+ * age_ge: (a->page_age >= b->page_age)
+ * age_gt: (a->page_age > b->page_age)
+ * The complexity deals with the wrap-around problem.
+ * Two page ages not close enough(gap >= 1/8) should also be ignored:
+ * they are out of sync and the comparison may be nonsense.
+ */
+#define age_ge(a, b) \
+ (((a->page_age - b->page_age) & PAGE_AGE_MASK) < PAGE_AGE_SIZE / 8)
+#define age_gt(a, b) \
+ (((b->page_age - a->page_age) & PAGE_AGE_MASK) > PAGE_AGE_SIZE * 7 / 8)
+
+/*
+ * Keep track of the percent of cold pages that have been scanned / aged.
+ * It's not really ##%, but a high resolution normalized value.
+ */
+static inline void update_zone_age(struct zone *z, int nr_scan)
+{
+ unsigned long len = z->nr_inactive | 1;
+
+ z->aging_total += nr_scan;
+
+ if (z->aging_total - z->aging_milestone > len)
+ z->aging_milestone += len;
+
+ z->page_age = ((z->aging_total - z->aging_milestone)
+ << PAGE_AGE_SHIFT) / len;
+}
+
/*
* Add a shrinker callback to be called from the vm
*/
@@ -887,6 +929,7 @@ static void shrink_cache(struct zone *zo
&page_list, &nr_scan);
zone->nr_inactive -= nr_taken;
zone->pages_scanned += nr_scan;
+ update_zone_age(zone, nr_scan);
spin_unlock_irq(&zone->lru_lock);
if (nr_taken == 0)
--- linux-2.6.15-rc5-mm1.orig/mm/page_alloc.c
+++ linux-2.6.15-rc5-mm1/mm/page_alloc.c
@@ -1522,6 +1522,8 @@ void show_free_areas(void)
" active:%lukB"
" inactive:%lukB"
" present:%lukB"
+ " aging:%lukB"
+ " age:%lu"
" pages_scanned:%lu"
" all_unreclaimable? %s"
"\n",
@@ -1533,6 +1535,8 @@ void show_free_areas(void)
K(zone->nr_active),
K(zone->nr_inactive),
K(zone->present_pages),
+ K(zone->aging_total),
+ zone->page_age,
zone->pages_scanned,
(zone->all_unreclaimable ? "yes" : "no")
);
@@ -2144,6 +2148,9 @@ static void __init free_area_init_core(s
zone->nr_scan_inactive = 0;
zone->nr_active = 0;
zone->nr_inactive = 0;
+ zone->aging_total = 0;
+ zone->aging_milestone = 0;
+ zone->page_age = 0;
atomic_set(&zone->reclaim_in_progress, 0);
if (!size)
continue;
@@ -2292,6 +2299,8 @@ static int zoneinfo_show(struct seq_file
"\n high %lu"
"\n active %lu"
"\n inactive %lu"
+ "\n aging %lu"
+ "\n age %lu"
"\n scanned %lu (a: %lu i: %lu)"
"\n spanned %lu"
"\n present %lu",
@@ -2301,6 +2310,8 @@ static int zoneinfo_show(struct seq_file
zone->pages_high,
zone->nr_active,
zone->nr_inactive,
+ zone->aging_total,
+ zone->page_age,
zone->pages_scanned,
zone->nr_scan_active, zone->nr_scan_inactive,
zone->spanned_pages,
--
* [PATCH 04/13] mm: balance zone aging in direct reclaim path
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (2 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 03/13] mm: supporting variables and functions for balanced zone aging Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 05/13] mm: balance zone aging in kswapd " Wu Fengguang
` (8 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-balance-zone-aging-in-direct-reclaim.patch --]
[-- Type: text/plain, Size: 2390 bytes --]
Add 10 extra priorities to the direct page reclaim path, which makes 10 rounds
of balancing effort (reclaiming only from the least-aged local/headless zone)
before falling back to the reclaim-all scheme.
Ten rounds should be enough to get sufficient free pages in normal cases, which
prevents unnecessarily disturbing remote nodes. If we further restrict the
first round of page allocation to local zones, we might get what the early zone
reclaim patch wanted: memory affinity/locality.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 31 ++++++++++++++++++++++++++++---
1 files changed, 28 insertions(+), 3 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -1189,6 +1189,7 @@ static void
shrink_caches(struct zone **zones, struct scan_control *sc)
{
int i;
+ struct zone *z = NULL;
for (i = 0; zones[i] != NULL; i++) {
struct zone *zone = zones[i];
@@ -1203,11 +1204,34 @@ shrink_caches(struct zone **zones, struc
if (zone->prev_priority > sc->priority)
zone->prev_priority = sc->priority;
- if (zone->all_unreclaimable && sc->priority != DEF_PRIORITY)
+ if (zone->all_unreclaimable && sc->priority < DEF_PRIORITY)
continue; /* Let kswapd poll it */
+ /*
+ * Balance page aging in local zones and following headless
+ * zones.
+ */
+ if (sc->priority > DEF_PRIORITY) {
+ if (zone->zone_pgdat != zones[0]->zone_pgdat) {
+ cpumask_t cpu = node_to_cpumask(
+ zone->zone_pgdat->node_id);
+ if (!cpus_empty(cpu))
+ break;
+ }
+
+ if (!z)
+ z = zone;
+ else if (age_gt(z, zone))
+ z = zone;
+
+ continue;
+ }
+
shrink_zone(zone, sc);
}
+
+ if (z)
+ shrink_zone(z, sc);
}
/*
@@ -1251,7 +1275,8 @@ int try_to_free_pages(struct zone **zone
lru_pages += zone->nr_active + zone->nr_inactive;
}
- for (priority = DEF_PRIORITY; priority >= 0; priority--) {
+ /* The added 10 priorities are for scan rate balancing */
+ for (priority = DEF_PRIORITY + 10; priority >= 0; priority--) {
sc.nr_mapped = read_page_state(nr_mapped);
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
@@ -1285,7 +1310,7 @@ int try_to_free_pages(struct zone **zone
}
/* Take a nap, wait for some writeback to complete */
- if (sc.nr_scanned && priority < DEF_PRIORITY - 2)
+ if (sc.nr_scanned && priority < DEF_PRIORITY)
blk_congestion_wait(WRITE, HZ/10);
}
out:
--
* [PATCH 05/13] mm: balance zone aging in kswapd reclaim path
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (3 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 04/13] mm: balance zone aging in direct reclaim path Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 14:19 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 06/13] mm: balance slab aging Wu Fengguang
` (7 subsequent siblings)
12 siblings, 1 reply; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-balance-zone-aging-in-kswapd-reclaim.patch --]
[-- Type: text/plain, Size: 4182 bytes --]
The vm subsystem is rather complex. System memory is divided into zones;
lower zones act as fallbacks of higher zones in memory allocation. The page
reclaim algorithm should generally keep zone aging rates in sync. But if a
zone under its watermark has many unreclaimable pages, it has to be scanned
much more to get enough free pages. While doing this,
- lower zones should also be scanned more, since their pages are also usable
for higher zone allocations.
- higher zones should not be scanned just to keep the aging in sync, which
can evict large amounts of pages without solving the problem (and may well
worsen it).
With that in mind, the patch does the rebalance in kswapd as follows:
1) reclaim from the lowest zone when
- under pages_high
- under pages_high+lowmem_reserve, and less/equal aged than highest
zone(or out of sync with it)
2) reclaim from higher zones when
- under pages_high+lowmem_reserve, and less/equal aged than its
immediate lower neighbor(or out of sync with it)
Note that the zone age is a normalized value in the range 0-4096 on i386/4G;
4096 corresponds to a full scan of one zone. The comparison of ages is only
deemed valid if the gap is less than 4096/8; otherwise the zones are regarded
as out of sync.
On exit, the code ensures:
1) the lowest zone will be pages_high ok
2) at least one zone will be pages_high+lowmem_reserve ok
3) a very strong force of rebalancing with the exception of
- some lower zones are unreclaimable: we must let them go ahead
alone, leaving higher zones back
- shrink_zone() scans too much and creates huge imbalance in one
run (Nick is working on this)
The logic can deal with known normal/abnormal situations gracefully:
1) Normal case
- zone ages are cyclically tied together: overtaking each other, and
keeping close enough
2) A zone is unreclaimable, is scanned much more, and becomes out of sync
- if ever a troublesome zone is being overscanned, the logic brings
its lower neighbors ahead together, leaving higher neighbors back.
- the aging tie between the two groups is broken, and the relevant
zones are reclaimed when pages_high+lowmem_reserve not ok, just as
before the patch.
- at some point the zone ages meet again and things go back to normal
- a possibly better strategy, once the pressure disappears, might be
reluctant to reclaim from the already overscanned lower group, and
let the higher group slowly catch up.
3) Zone is truncated
- will not reclaim from it until under watermark
With this patch, the meaning of zone->pages_high+lowmem_reserve changed from
the _required_ watermark to the _recommended_ watermark. Someone might be
willing to increase them somehow.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 25 ++++++++++++++++++++-----
1 files changed, 20 insertions(+), 5 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -1359,6 +1359,7 @@ static int balance_pgdat(pg_data_t *pgda
int total_scanned, total_reclaimed;
struct reclaim_state *reclaim_state = current->reclaim_state;
struct scan_control sc;
+ struct zone *prev_zone = pgdat->node_zones;
loop_again:
total_scanned = 0;
@@ -1374,6 +1375,9 @@ loop_again:
struct zone *zone = pgdat->node_zones + i;
zone->temp_priority = DEF_PRIORITY;
+
+ if (populated_zone(zone))
+ prev_zone = zone;
}
for (priority = DEF_PRIORITY; priority >= 0; priority--) {
@@ -1404,14 +1408,25 @@ loop_again:
if (!populated_zone(zone))
continue;
- if (nr_pages == 0) { /* Not software suspend */
- if (zone_watermark_ok(zone, order,
- zone->pages_high, 0, 0))
- continue;
+ if (nr_pages) /* software suspend */
+ goto scan_swspd;
- all_zones_ok = 0;
+ if (zone < prev_zone &&
+ !zone_watermark_ok(zone, order,
+ zone->pages_high, 0, 0)) {
+ } else if (!age_gt(zone, prev_zone) &&
+ !zone_watermark_ok(zone, order,
+ zone->pages_high,
+ pgdat->nr_zones - 1, 0)) {
+ } else {
+ prev_zone = zone;
+ continue;
}
+ prev_zone = zone;
+ all_zones_ok = 0;
+
+scan_swspd:
if (zone->all_unreclaimable && priority != DEF_PRIORITY)
continue;
--
* [PATCH 06/13] mm: balance slab aging
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (4 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 05/13] mm: balance zone aging in kswapd " Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 07/13] mm: balance active/inactive list scan rates Wu Fengguang
` (6 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-balance-slab-aging.patch --]
[-- Type: text/plain, Size: 8679 bytes --]
The current slab shrinking code is way too fragile.
Let it manage its aging pace by itself, and provide a simple and robust interface.
The design considerations:
- use the same syncing facilities as that of the zones
- keep the age of slabs in line with that of the largest zone
this in effect makes the aging rate of slabs follow that of the most aged node.
- reserve a minimal number of unused slabs
the size of reservation depends on vm pressure
- shrink more slab caches only when vm pressure is high
the old logic (`mmap pages found' -> `shrink more caches' -> `avoid swapping')
does not sound quite logical, so that code is removed.
- let sc->nr_scanned record the exact number of cold pages scanned
it is no longer used by the slab cache shrinking algorithm, but is good for
other algorithms (e.g. the active_list/inactive_list balancing).
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
fs/drop-pagecache.c | 2
include/linux/mm.h | 7 +--
mm/vmscan.c | 106 +++++++++++++++++++++-------------------------------
3 files changed, 48 insertions(+), 67 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/include/linux/mm.h
+++ linux-2.6.15-rc5-mm1/include/linux/mm.h
@@ -798,7 +798,9 @@ struct shrinker {
shrinker_t shrinker;
struct list_head list;
int seeks; /* seeks to recreate an obj */
- long nr; /* objs pending delete */
+ unsigned long aging_total;
+ unsigned long aging_milestone;
+ unsigned long page_age;
struct shrinker_stats *s_stats;
};
@@ -1080,8 +1082,7 @@ int in_gate_area_no_task(unsigned long a
int drop_pagecache_sysctl_handler(struct ctl_table *, int, struct file *,
void __user *, size_t *, loff_t *);
-int shrink_slab(unsigned long scanned, gfp_t gfp_mask,
- unsigned long lru_pages);
+int shrink_slab(struct zone *zone, int priority, gfp_t gfp_mask);
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -165,6 +165,18 @@ static inline void update_zone_age(struc
<< PAGE_AGE_SHIFT) / len;
}
+static inline void update_slab_age(struct shrinker *s,
+ unsigned long len, int nr_scan)
+{
+ s->aging_total += nr_scan;
+
+ if (s->aging_total - s->aging_milestone > len)
+ s->aging_milestone += len;
+
+ s->page_age = ((s->aging_total - s->aging_milestone)
+ << PAGE_AGE_SHIFT) / len;
+}
+
/*
* Add a shrinker callback to be called from the vm
*/
@@ -176,7 +188,9 @@ struct shrinker *set_shrinker(int seeks,
if (shrinker) {
shrinker->shrinker = theshrinker;
shrinker->seeks = seeks;
- shrinker->nr = 0;
+ shrinker->aging_total = 0;
+ shrinker->aging_milestone = 0;
+ shrinker->page_age = 0;
shrinker->s_stats = alloc_percpu(struct shrinker_stats);
if (!shrinker->s_stats) {
kfree(shrinker);
@@ -204,6 +218,7 @@ void remove_shrinker(struct shrinker *sh
EXPORT_SYMBOL(remove_shrinker);
#define SHRINK_BATCH 128
+#define SLAB_RESERVE 1000
/*
* Call the shrink functions to age shrinkable caches
*
@@ -212,76 +227,49 @@ EXPORT_SYMBOL(remove_shrinker);
* percentages of the lru and ageable caches. This should balance the seeks
* generated by these structures.
*
- * If the vm encounted mapped pages on the LRU it increase the pressure on
- * slab to avoid swapping.
+ * @priority reflects the vm pressure, the lower the value, the more to
+ * shrink.
*
- * We do weird things to avoid (scanned*seeks*entries) overflowing 32 bits.
- *
- * `lru_pages' represents the number of on-LRU pages in all the zones which
- * are eligible for the caller's allocation attempt. It is used for balancing
- * slab reclaim versus page reclaim.
+ * @zone is better to be the least over-scanned one (normally the highest
+ * zone).
*
* Returns the number of slab objects which we shrunk.
*/
-int shrink_slab(unsigned long scanned, gfp_t gfp_mask, unsigned long lru_pages)
+int shrink_slab(struct zone *zone, int priority, gfp_t gfp_mask)
{
struct shrinker *shrinker;
int ret = 0;
- if (scanned == 0)
- scanned = SWAP_CLUSTER_MAX;
-
if (!down_read_trylock(&shrinker_rwsem))
return 1; /* Assume we'll be able to shrink next time */
list_for_each_entry(shrinker, &shrinker_list, list) {
- unsigned long long delta;
- unsigned long total_scan;
- unsigned long max_pass = (*shrinker->shrinker)(0, gfp_mask);
-
- delta = (4 * scanned) / shrinker->seeks;
- delta *= max_pass;
- do_div(delta, lru_pages + 1);
- shrinker->nr += delta;
- if (shrinker->nr < 0) {
- printk(KERN_ERR "%s: nr=%ld\n",
- __FUNCTION__, shrinker->nr);
- shrinker->nr = max_pass;
- }
-
- /*
- * Avoid risking looping forever due to too large nr value:
- * never try to free more than twice the estimate number of
- * freeable entries.
- */
- if (shrinker->nr > max_pass * 2)
- shrinker->nr = max_pass * 2;
-
- total_scan = shrinker->nr;
- shrinker->nr = 0;
-
- while (total_scan >= SHRINK_BATCH) {
- long this_scan = SHRINK_BATCH;
- int shrink_ret;
+ while (!zone || age_gt(zone, shrinker)) {
int nr_before;
+ int nr_after;
nr_before = (*shrinker->shrinker)(0, gfp_mask);
- shrink_ret = (*shrinker->shrinker)(this_scan, gfp_mask);
- if (shrink_ret == -1)
+ if (nr_before < SLAB_RESERVE * priority / DEF_PRIORITY)
+ break;
+
+ nr_after = (*shrinker->shrinker)(SHRINK_BATCH, gfp_mask);
+ if (nr_after == -1)
break;
- if (shrink_ret < nr_before) {
- ret += nr_before - shrink_ret;
- shrinker_stat_add(shrinker, nr_freed,
- (nr_before - shrink_ret));
+
+ if (nr_after < nr_before) {
+ int nr_freed = nr_before - nr_after;
+
+ ret += nr_freed;
+ shrinker_stat_add(shrinker, nr_freed, nr_freed);
}
- shrinker_stat_add(shrinker, nr_req, this_scan);
- mod_page_state(slabs_scanned, this_scan);
- total_scan -= this_scan;
+ shrinker_stat_add(shrinker, nr_req, SHRINK_BATCH);
+ mod_page_state(slabs_scanned, SHRINK_BATCH);
+ update_slab_age(shrinker, nr_before * DEF_PRIORITY * 2,
+ SHRINK_BATCH * shrinker->seeks *
+ (DEF_PRIORITY + priority));
cond_resched();
}
-
- shrinker->nr += total_scan;
}
up_read(&shrinker_rwsem);
return ret;
@@ -487,11 +475,6 @@ static int shrink_list(struct list_head
BUG_ON(PageActive(page));
- sc->nr_scanned++;
- /* Double the slab pressure for mapped and swapcache pages */
- if (page_mapped(page) || PageSwapCache(page))
- sc->nr_scanned++;
-
if (PageWriteback(page))
goto keep_locked;
@@ -936,6 +919,7 @@ static void shrink_cache(struct zone *zo
goto done;
max_scan -= nr_scan;
+ sc->nr_scanned += nr_scan;
if (current_is_kswapd())
mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
else
@@ -1254,7 +1238,6 @@ int try_to_free_pages(struct zone **zone
int total_scanned = 0, total_reclaimed = 0;
struct reclaim_state *reclaim_state = current->reclaim_state;
struct scan_control sc;
- unsigned long lru_pages = 0;
int i;
delay_prefetch();
@@ -1272,7 +1255,6 @@ int try_to_free_pages(struct zone **zone
continue;
zone->temp_priority = DEF_PRIORITY;
- lru_pages += zone->nr_active + zone->nr_inactive;
}
/* The added 10 priorities are for scan rate balancing */
@@ -1285,7 +1267,8 @@ int try_to_free_pages(struct zone **zone
if (!priority)
disable_swap_token();
shrink_caches(zones, &sc);
- shrink_slab(sc.nr_scanned, gfp_mask, lru_pages);
+ if (zone_idx(zones[0]))
+ shrink_slab(zones[0], priority, gfp_mask);
if (reclaim_state) {
sc.nr_reclaimed += reclaim_state->reclaimed_slab;
reclaim_state->reclaimed_slab = 0;
@@ -1381,8 +1364,6 @@ loop_again:
}
for (priority = DEF_PRIORITY; priority >= 0; priority--) {
- unsigned long lru_pages = 0;
-
all_zones_ok = 1;
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
@@ -1433,7 +1414,6 @@ scan_swspd:
zone->temp_priority = priority;
if (zone->prev_priority > priority)
zone->prev_priority = priority;
- lru_pages += zone->nr_active + zone->nr_inactive;
shrink_zone(zone, &sc);
@@ -1442,7 +1422,7 @@ scan_swspd:
zone->all_unreclaimable = 1;
}
reclaim_state->reclaimed_slab = 0;
- shrink_slab(sc.nr_scanned, GFP_KERNEL, lru_pages);
+ shrink_slab(prev_zone, priority, GFP_KERNEL);
sc.nr_reclaimed += reclaim_state->reclaimed_slab;
total_reclaimed += sc.nr_reclaimed;
total_scanned += sc.nr_scanned;
--- linux-2.6.15-rc5-mm1.orig/fs/drop-pagecache.c
+++ linux-2.6.15-rc5-mm1/fs/drop-pagecache.c
@@ -47,7 +47,7 @@ static void drop_slab(void)
int nr_objects;
do {
- nr_objects = shrink_slab(1000, GFP_KERNEL, 1000);
+ nr_objects = shrink_slab(NULL, 0, GFP_KERNEL);
} while (nr_objects > 10);
}
--
* [PATCH 07/13] mm: balance active/inactive list scan rates
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (5 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 06/13] mm: balance slab aging Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 08/13] mm: remove unnecessary variable and loop Wu Fengguang
` (5 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli
[-- Attachment #1: mm-balance-active-inactive-list-aging.patch --]
[-- Type: text/plain, Size: 9517 bytes --]
shrink_zone() has two major design goals:
1) let active/inactive lists have equal scan rates
2) do the scans in small chunks
But the implementation has some problems:
- reluctant to scan small zones
the callers often have to dip into low priorities to free memory.
- the balance is quite rough
the break statement in the loop disrupts it.
- may scan too few pages in one batch
refill_inactive_zone() can be called twice in a row to scan 32 pages and then just 1 page.
The new design:
1) keep perfect balance
let active_list follow inactive_list in scan rate
2) always scan in SWAP_CLUSTER_MAX sized chunks
simple and efficient
3) will scan at least one chunk
the expected behavior from the callers
The perfect balance may or may not yield better performance, but it
a) is a more understandable and dependable behavior
b) together with inter-zone balancing, makes aging behavior consistent across zones
The atomic reclaim_in_progress counter is there to prevent most concurrent reclaims.
If concurrent reclaims do happen, no fatal errors will result.
I tested the patch with the following commands:
dd if=/dev/zero of=hot bs=1M seek=800 count=1
dd if=/dev/zero of=cold bs=1M seek=50000 count=1
./test-aging.sh; ./active-inactive-aging-rate.sh
Before the patch:
-----------------------------------------------------------------------------
active/inactive sizes on 2.6.14-2-686-smp:
0/1000 = 0 / 1241
563/1000 = 73343 / 130108
887/1000 = 137348 / 154816
active/inactive scan rates:
dma 38/1000 = 7731 / (198924 + 0)
normal 465/1000 = 2979780 / (6394740 + 0)
high 680/1000 = 4354230 / (6396786 + 0)
total used free shared buffers cached
Mem: 2027 1978 49 0 4 1923
-/+ buffers/cache: 49 1977
Swap: 0 0 0
-----------------------------------------------------------------------------
After the patch, the scan rates and the size ratios are kept roughly the same
for all zones:
-----------------------------------------------------------------------------
active/inactive sizes on 2.6.15-rc3-mm1:
0/1000 = 0 / 961
236/1000 = 38385 / 162429
319/1000 = 70607 / 221101
active/inactive scan rates:
dma 0/1000 = 0 / (42176 + 0)
normal 234/1000 = 1714688 / (7303456 + 1088)
high 317/1000 = 3151936 / (9933792 + 96)
total used free shared buffers cached
Mem: 2020 1969 50 0 5 1908
-/+ buffers/cache: 54 1965
Swap: 0 0 0
-----------------------------------------------------------------------------
script test-aging.sh:
------------------------------
#!/bin/zsh
cp cold /dev/null&
while {pidof cp > /dev/null};
do
cp hot /dev/null
done
------------------------------
script active-inactive-aging-rate.sh:
-----------------------------------------------------------------------------
#!/bin/sh
echo active/inactive sizes on `uname -r`:
egrep '(active|inactive)' /proc/zoneinfo |
while true
do
read name value
[[ -z $name ]] && break
eval $name=$value
[[ $name = "inactive" ]] && echo -e "$((active * 1000 / (1 + inactive)))/1000 \t= $active / $inactive"
done
while true
do
read name value
[[ -z $name ]] && break
eval $name=$value
done < /proc/vmstat
echo
echo active/inactive scan rates:
echo -e "dma \t $((pgrefill_dma * 1000 / (1 + pgscan_kswapd_dma + pgscan_direct_dma)))/1000 \t= $pgrefill_dma / ($pgscan_kswapd_dma + $pgscan_direct_dma)"
echo -e "normal \t $((pgrefill_normal * 1000 / (1 + pgscan_kswapd_normal + pgscan_direct_normal)))/1000 \t= $pgrefill_normal / ($pgscan_kswapd_normal + $pgscan_direct_normal)"
echo -e "high \t $((pgrefill_high * 1000 / (1 + pgscan_kswapd_high + pgscan_direct_high)))/1000 \t= $pgrefill_high / ($pgscan_kswapd_high + $pgscan_direct_high)"
echo
free -m
-----------------------------------------------------------------------------
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
include/linux/mmzone.h | 3 --
include/linux/swap.h | 2 -
mm/page_alloc.c | 5 +---
mm/vmscan.c | 52 +++++++++++++++++++++++++++----------------------
4 files changed, 33 insertions(+), 29 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -907,7 +907,7 @@ static void shrink_cache(struct zone *zo
int nr_scan;
int nr_freed;
- nr_taken = isolate_lru_pages(sc->swap_cluster_max,
+ nr_taken = isolate_lru_pages(sc->nr_to_scan,
&zone->inactive_list,
&page_list, &nr_scan);
zone->nr_inactive -= nr_taken;
@@ -1101,56 +1101,56 @@ refill_inactive_zone(struct zone *zone,
/*
* This is a basic per-zone page freer. Used by both kswapd and direct reclaim.
+ * The reclaim process:
+ * a) scan always in batch of SWAP_CLUSTER_MAX pages
+ * b) scan inactive list at least one batch
+ * c) balance the scan rate of active/inactive list
+ * d) finish on either scanned or reclaimed enough pages
*/
static void
shrink_zone(struct zone *zone, struct scan_control *sc)
{
+ unsigned long long next_scan_active;
unsigned long nr_active;
unsigned long nr_inactive;
atomic_inc(&zone->reclaim_in_progress);
+ next_scan_active = sc->nr_scanned;
+
/*
* Add one to `nr_to_scan' just to make sure that the kernel will
* slowly sift through the active list.
*/
- zone->nr_scan_active += (zone->nr_active >> sc->priority) + 1;
- nr_active = zone->nr_scan_active;
- if (nr_active >= sc->swap_cluster_max)
- zone->nr_scan_active = 0;
- else
- nr_active = 0;
-
- zone->nr_scan_inactive += (zone->nr_inactive >> sc->priority) + 1;
- nr_inactive = zone->nr_scan_inactive;
- if (nr_inactive >= sc->swap_cluster_max)
- zone->nr_scan_inactive = 0;
- else
- nr_inactive = 0;
+ nr_active = zone->nr_scan_active + 1;
+ nr_inactive = (zone->nr_inactive >> sc->priority) + SWAP_CLUSTER_MAX;
+ nr_inactive &= ~(SWAP_CLUSTER_MAX - 1);
+ sc->nr_to_scan = SWAP_CLUSTER_MAX;
sc->nr_to_reclaim = sc->swap_cluster_max;
- while (nr_active || nr_inactive) {
- if (nr_active) {
- sc->nr_to_scan = min(nr_active,
- (unsigned long)sc->swap_cluster_max);
- nr_active -= sc->nr_to_scan;
+ while (nr_active >= SWAP_CLUSTER_MAX * 1024 || nr_inactive) {
+ if (nr_active >= SWAP_CLUSTER_MAX * 1024) {
+ nr_active -= SWAP_CLUSTER_MAX * 1024;
refill_inactive_zone(zone, sc);
}
if (nr_inactive) {
- sc->nr_to_scan = min(nr_inactive,
- (unsigned long)sc->swap_cluster_max);
- nr_inactive -= sc->nr_to_scan;
+ nr_inactive -= SWAP_CLUSTER_MAX;
shrink_cache(zone, sc);
if (sc->nr_to_reclaim <= 0)
break;
}
}
- throttle_vm_writeout();
+ next_scan_active = (sc->nr_scanned - next_scan_active) * 1024ULL *
+ (unsigned long long)zone->nr_active;
+ do_div(next_scan_active, zone->nr_inactive | 1);
+ zone->nr_scan_active = nr_active + (unsigned long)next_scan_active;
atomic_dec(&zone->reclaim_in_progress);
+
+ throttle_vm_writeout();
}
/*
@@ -1191,6 +1191,9 @@ shrink_caches(struct zone **zones, struc
if (zone->all_unreclaimable && sc->priority < DEF_PRIORITY)
continue; /* Let kswapd poll it */
+ if (atomic_read(&zone->reclaim_in_progress))
+ continue;
+
/*
* Balance page aging in local zones and following headless
* zones.
@@ -1411,6 +1414,9 @@ scan_swspd:
if (zone->all_unreclaimable && priority != DEF_PRIORITY)
continue;
+ if (atomic_read(&zone->reclaim_in_progress))
+ continue;
+
zone->temp_priority = priority;
if (zone->prev_priority > priority)
zone->prev_priority = priority;
--- linux-2.6.15-rc5-mm1.orig/mm/page_alloc.c
+++ linux-2.6.15-rc5-mm1/mm/page_alloc.c
@@ -2145,7 +2145,6 @@ static void __init free_area_init_core(s
INIT_LIST_HEAD(&zone->active_list);
INIT_LIST_HEAD(&zone->inactive_list);
zone->nr_scan_active = 0;
- zone->nr_scan_inactive = 0;
zone->nr_active = 0;
zone->nr_inactive = 0;
zone->aging_total = 0;
@@ -2301,7 +2300,7 @@ static int zoneinfo_show(struct seq_file
"\n inactive %lu"
"\n aging %lu"
"\n age %lu"
- "\n scanned %lu (a: %lu i: %lu)"
+ "\n scanned %lu (a: %lu)"
"\n spanned %lu"
"\n present %lu",
zone->free_pages,
@@ -2313,7 +2312,7 @@ static int zoneinfo_show(struct seq_file
zone->aging_total,
zone->page_age,
zone->pages_scanned,
- zone->nr_scan_active, zone->nr_scan_inactive,
+ zone->nr_scan_active / 1024,
zone->spanned_pages,
zone->present_pages);
seq_printf(m,
--- linux-2.6.15-rc5-mm1.orig/include/linux/swap.h
+++ linux-2.6.15-rc5-mm1/include/linux/swap.h
@@ -111,7 +111,7 @@ enum {
SWP_SCANNING = (1 << 8), /* refcount in scan_swap_map */
};
-#define SWAP_CLUSTER_MAX 32
+#define SWAP_CLUSTER_MAX 32 /* must be power of 2 */
#define SWAP_MAP_MAX 0x7fff
#define SWAP_MAP_BAD 0x8000
--- linux-2.6.15-rc5-mm1.orig/include/linux/mmzone.h
+++ linux-2.6.15-rc5-mm1/include/linux/mmzone.h
@@ -142,8 +142,7 @@ struct zone {
spinlock_t lru_lock;
struct list_head active_list;
struct list_head inactive_list;
- unsigned long nr_scan_active;
- unsigned long nr_scan_inactive;
+ unsigned long nr_scan_active; /* x1024 to be more precise */
unsigned long nr_active;
unsigned long nr_inactive;
unsigned long pages_scanned; /* since last reclaim */
--
* [PATCH 08/13] mm: remove unnecessary variable and loop
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (6 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 07/13] mm: balance active/inactive list scan rates Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 09/13] mm: remove swap_cluster_max from scan_control Wu Fengguang
` (4 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-remove-unnecessary-variable-and-loop.patch --]
[-- Type: text/plain, Size: 3856 bytes --]
shrink_cache() and refill_inactive_zone() do not need loops.
Simplify them to scan one chunk at a time.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 92 ++++++++++++++++++++++++++++--------------------------------
1 files changed, 43 insertions(+), 49 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -895,63 +895,58 @@ static void shrink_cache(struct zone *zo
{
LIST_HEAD(page_list);
struct pagevec pvec;
- int max_scan = sc->nr_to_scan;
+ struct page *page;
+ int nr_taken;
+ int nr_scan;
+ int nr_freed;
pagevec_init(&pvec, 1);
lru_add_drain();
spin_lock_irq(&zone->lru_lock);
- while (max_scan > 0) {
- struct page *page;
- int nr_taken;
- int nr_scan;
- int nr_freed;
-
- nr_taken = isolate_lru_pages(sc->nr_to_scan,
- &zone->inactive_list,
- &page_list, &nr_scan);
- zone->nr_inactive -= nr_taken;
- zone->pages_scanned += nr_scan;
- update_zone_age(zone, nr_scan);
- spin_unlock_irq(&zone->lru_lock);
+ nr_taken = isolate_lru_pages(sc->nr_to_scan,
+ &zone->inactive_list,
+ &page_list, &nr_scan);
+ zone->nr_inactive -= nr_taken;
+ zone->pages_scanned += nr_scan;
+ update_zone_age(zone, nr_scan);
+ spin_unlock_irq(&zone->lru_lock);
- if (nr_taken == 0)
- goto done;
+ if (nr_taken == 0)
+ return;
- max_scan -= nr_scan;
- sc->nr_scanned += nr_scan;
- if (current_is_kswapd())
- mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
- else
- mod_page_state_zone(zone, pgscan_direct, nr_scan);
- nr_freed = shrink_list(&page_list, sc);
- if (current_is_kswapd())
- mod_page_state(kswapd_steal, nr_freed);
- mod_page_state_zone(zone, pgsteal, nr_freed);
- sc->nr_to_reclaim -= nr_freed;
+ sc->nr_scanned += nr_scan;
+ if (current_is_kswapd())
+ mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
+ else
+ mod_page_state_zone(zone, pgscan_direct, nr_scan);
+ nr_freed = shrink_list(&page_list, sc);
+ if (current_is_kswapd())
+ mod_page_state(kswapd_steal, nr_freed);
+ mod_page_state_zone(zone, pgsteal, nr_freed);
+ sc->nr_to_reclaim -= nr_freed;
- spin_lock_irq(&zone->lru_lock);
- /*
- * Put back any unfreeable pages.
- */
- while (!list_empty(&page_list)) {
- page = lru_to_page(&page_list);
- if (TestSetPageLRU(page))
- BUG();
- list_del(&page->lru);
- if (PageActive(page))
- add_page_to_active_list(zone, page);
- else
- add_page_to_inactive_list(zone, page);
- if (!pagevec_add(&pvec, page)) {
- spin_unlock_irq(&zone->lru_lock);
- __pagevec_release(&pvec);
- spin_lock_irq(&zone->lru_lock);
- }
+ spin_lock_irq(&zone->lru_lock);
+ /*
+ * Put back any unfreeable pages.
+ */
+ while (!list_empty(&page_list)) {
+ page = lru_to_page(&page_list);
+ if (TestSetPageLRU(page))
+ BUG();
+ list_del(&page->lru);
+ if (PageActive(page))
+ add_page_to_active_list(zone, page);
+ else
+ add_page_to_inactive_list(zone, page);
+ if (!pagevec_add(&pvec, page)) {
+ spin_unlock_irq(&zone->lru_lock);
+ __pagevec_release(&pvec);
+ spin_lock_irq(&zone->lru_lock);
}
- }
+ }
spin_unlock_irq(&zone->lru_lock);
-done:
+
pagevec_release(&pvec);
}
@@ -978,7 +973,6 @@ refill_inactive_zone(struct zone *zone,
int pgmoved;
int pgdeactivate = 0;
int pgscanned;
- int nr_pages = sc->nr_to_scan;
LIST_HEAD(l_hold); /* The pages which were snipped off */
LIST_HEAD(l_inactive); /* Pages to go onto the inactive_list */
LIST_HEAD(l_active); /* Pages to go onto the active_list */
@@ -991,7 +985,7 @@ refill_inactive_zone(struct zone *zone,
lru_add_drain();
spin_lock_irq(&zone->lru_lock);
- pgmoved = isolate_lru_pages(nr_pages, &zone->active_list,
+ pgmoved = isolate_lru_pages(sc->nr_to_scan, &zone->active_list,
&l_hold, &pgscanned);
zone->pages_scanned += pgscanned;
zone->nr_active -= pgmoved;
--
* [PATCH 09/13] mm: remove swap_cluster_max from scan_control
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (7 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 08/13] mm: remove unnecessary variable and loop Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 10/13] mm: let sc.nr_scanned/sc.nr_reclaimed accumulate Wu Fengguang
` (3 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-remove-swap-cluster-max-from-scan-control.patch --]
[-- Type: text/plain, Size: 2384 bytes --]
The use of sc.swap_cluster_max is weird and redundant.
The callers should just set sc.priority/sc.nr_to_reclaim, and let
shrink_zone() decide the proper loop parameters.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 15 ++++-----------
1 files changed, 4 insertions(+), 11 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -76,12 +76,6 @@ struct scan_control {
/* Can pages be swapped as part of reclaim? */
int may_swap;
-
- /* This context's SWAP_CLUSTER_MAX. If freeing memory for
- * suspend, we effectively ignore SWAP_CLUSTER_MAX.
- * In this context, it doesn't matter that we scan the
- * whole list at once. */
- int swap_cluster_max;
};
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -1121,7 +1115,6 @@ shrink_zone(struct zone *zone, struct sc
nr_inactive &= ~(SWAP_CLUSTER_MAX - 1);
sc->nr_to_scan = SWAP_CLUSTER_MAX;
- sc->nr_to_reclaim = sc->swap_cluster_max;
while (nr_active >= SWAP_CLUSTER_MAX * 1024 || nr_inactive) {
if (nr_active >= SWAP_CLUSTER_MAX * 1024) {
@@ -1260,7 +1253,7 @@ int try_to_free_pages(struct zone **zone
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
sc.priority = priority;
- sc.swap_cluster_max = SWAP_CLUSTER_MAX;
+ sc.nr_to_reclaim = SWAP_CLUSTER_MAX;
if (!priority)
disable_swap_token();
shrink_caches(zones, &sc);
@@ -1272,7 +1265,7 @@ int try_to_free_pages(struct zone **zone
}
total_scanned += sc.nr_scanned;
total_reclaimed += sc.nr_reclaimed;
- if (total_reclaimed >= sc.swap_cluster_max) {
+ if (total_reclaimed >= SWAP_CLUSTER_MAX) {
ret = 1;
goto out;
}
@@ -1284,7 +1277,7 @@ int try_to_free_pages(struct zone **zone
* that's undesirable in laptop mode, where we *want* lumpy
* writeout. So in laptop mode, write out the whole world.
*/
- if (total_scanned > sc.swap_cluster_max + sc.swap_cluster_max/2) {
+ if (total_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
wakeup_pdflush(laptop_mode ? 0 : total_scanned);
sc.may_writepage = 1;
}
@@ -1365,7 +1358,7 @@ loop_again:
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
sc.priority = priority;
- sc.swap_cluster_max = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
+ sc.nr_to_reclaim = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
/* The swap token gets in the way of swapout... */
if (!priority)
--
* [PATCH 10/13] mm: let sc.nr_scanned/sc.nr_reclaimed accumulate
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (8 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 09/13] mm: remove swap_cluster_max from scan_control Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 11/13] mm: fold sc.may_writepage and sc.may_swap into sc.flags Wu Fengguang
` (2 subsequent siblings)
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-accumulate-nr-scanned-reclaimed-in-scan-control.patch --]
[-- Type: text/plain, Size: 4611 bytes --]
Now that there's no need to keep track of nr_scanned/nr_reclaimed for every
single round of shrink_zone(), remove the total_scanned/total_reclaimed and
let nr_scanned/nr_reclaimed accumulate between shrink_zone() calls.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 36 ++++++++++++++----------------------
1 files changed, 14 insertions(+), 22 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -1225,7 +1225,6 @@ int try_to_free_pages(struct zone **zone
{
int priority;
int ret = 0;
- int total_scanned = 0, total_reclaimed = 0;
struct reclaim_state *reclaim_state = current->reclaim_state;
struct scan_control sc;
int i;
@@ -1235,6 +1234,8 @@ int try_to_free_pages(struct zone **zone
sc.gfp_mask = gfp_mask;
sc.may_writepage = 0;
sc.may_swap = 1;
+ sc.nr_scanned = 0;
+ sc.nr_reclaimed = 0;
inc_page_state(allocstall);
@@ -1250,8 +1251,6 @@ int try_to_free_pages(struct zone **zone
/* The added 10 priorities are for scan rate balancing */
for (priority = DEF_PRIORITY + 10; priority >= 0; priority--) {
sc.nr_mapped = read_page_state(nr_mapped);
- sc.nr_scanned = 0;
- sc.nr_reclaimed = 0;
sc.priority = priority;
sc.nr_to_reclaim = SWAP_CLUSTER_MAX;
if (!priority)
@@ -1263,9 +1262,7 @@ int try_to_free_pages(struct zone **zone
sc.nr_reclaimed += reclaim_state->reclaimed_slab;
reclaim_state->reclaimed_slab = 0;
}
- total_scanned += sc.nr_scanned;
- total_reclaimed += sc.nr_reclaimed;
- if (total_reclaimed >= SWAP_CLUSTER_MAX) {
+ if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX) {
ret = 1;
goto out;
}
@@ -1277,13 +1274,13 @@ int try_to_free_pages(struct zone **zone
* that's undesirable in laptop mode, where we *want* lumpy
* writeout. So in laptop mode, write out the whole world.
*/
- if (total_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
- wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+ if (sc.nr_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
+ wakeup_pdflush(laptop_mode ? 0 : sc.nr_scanned);
sc.may_writepage = 1;
}
/* Take a nap, wait for some writeback to complete */
- if (sc.nr_scanned && priority < DEF_PRIORITY)
+ if (priority < DEF_PRIORITY)
blk_congestion_wait(WRITE, HZ/10);
}
out:
@@ -1329,18 +1326,17 @@ static int balance_pgdat(pg_data_t *pgda
int all_zones_ok;
int priority;
int i;
- int total_scanned, total_reclaimed;
struct reclaim_state *reclaim_state = current->reclaim_state;
struct scan_control sc;
struct zone *prev_zone = pgdat->node_zones;
loop_again:
- total_scanned = 0;
- total_reclaimed = 0;
sc.gfp_mask = GFP_KERNEL;
sc.may_writepage = 0;
sc.may_swap = 1;
sc.nr_mapped = read_page_state(nr_mapped);
+ sc.nr_scanned = 0;
+ sc.nr_reclaimed = 0;
inc_page_state(pageoutrun);
@@ -1355,8 +1351,6 @@ loop_again:
for (priority = DEF_PRIORITY; priority >= 0; priority--) {
all_zones_ok = 1;
- sc.nr_scanned = 0;
- sc.nr_reclaimed = 0;
sc.priority = priority;
sc.nr_to_reclaim = nr_pages ? nr_pages : SWAP_CLUSTER_MAX;
@@ -1417,19 +1411,17 @@ scan_swspd:
reclaim_state->reclaimed_slab = 0;
shrink_slab(prev_zone, priority, GFP_KERNEL);
sc.nr_reclaimed += reclaim_state->reclaimed_slab;
- total_reclaimed += sc.nr_reclaimed;
- total_scanned += sc.nr_scanned;
/*
* If we've done a decent amount of scanning and
* the reclaim ratio is low, start doing writepage
* even in laptop mode
*/
- if (total_scanned > SWAP_CLUSTER_MAX * 2 &&
- total_scanned > total_reclaimed+total_reclaimed/2)
+ if (sc.nr_scanned > SWAP_CLUSTER_MAX * 2 &&
+ sc.nr_scanned > sc.nr_reclaimed + sc.nr_reclaimed / 2)
sc.may_writepage = 1;
- if (nr_pages && to_free > total_reclaimed)
+ if (nr_pages && to_free > sc.nr_reclaimed)
continue; /* swsusp: need to do more work */
if (all_zones_ok)
break; /* kswapd: all done */
@@ -1437,7 +1429,7 @@ scan_swspd:
* OK, kswapd is getting into trouble. Take a nap, then take
* another pass across the zones.
*/
- if (total_scanned && priority < DEF_PRIORITY - 2)
+ if (priority < DEF_PRIORITY - 2)
blk_congestion_wait(WRITE, HZ/10);
/*
@@ -1446,7 +1438,7 @@ scan_swspd:
* matches the direct reclaim path behaviour in terms of impact
* on zone->*_priority.
*/
- if ((total_reclaimed >= SWAP_CLUSTER_MAX) && (!nr_pages))
+ if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX && !nr_pages)
break;
}
for (i = 0; i < pgdat->nr_zones; i++) {
@@ -1459,7 +1451,7 @@ scan_swspd:
goto loop_again;
}
- return total_reclaimed;
+ return sc.nr_reclaimed;
}
/*
--
* [PATCH 11/13] mm: fold sc.may_writepage and sc.may_swap into sc.flags
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (9 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 10/13] mm: let sc.nr_scanned/sc.nr_reclaimed accumulate Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 12/13] mm: add page reclaim debug traces Wu Fengguang
2005-12-06 13:56 ` [PATCH 13/13] mm: fix minor scan count bugs Wu Fengguang
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-fold-bool-variables-into-flags-in-scan-control.patch --]
[-- Type: text/plain, Size: 2436 bytes --]
Fold bool values into flags to make struct scan_control more compact.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 22 ++++++++++------------
1 files changed, 10 insertions(+), 12 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -72,12 +72,12 @@ struct scan_control {
/* This context's GFP mask */
gfp_t gfp_mask;
- int may_writepage;
-
- /* Can pages be swapped as part of reclaim? */
- int may_swap;
+ unsigned long flags;
};
+#define SC_MAY_WRITEPAGE 0x1
+#define SC_MAY_SWAP 0x2 /* Can pages be swapped as part of reclaim? */
+
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
#ifdef ARCH_HAS_PREFETCH
@@ -483,7 +483,7 @@ static int shrink_list(struct list_head
* Try to allocate it some swap space here.
*/
if (PageAnon(page) && !PageSwapCache(page)) {
- if (!sc->may_swap)
+ if (!(sc->flags & SC_MAY_SWAP))
goto keep_locked;
if (!add_to_swap(page, GFP_ATOMIC))
goto activate_locked;
@@ -514,7 +514,7 @@ static int shrink_list(struct list_head
goto keep_locked;
if (!may_enter_fs)
goto keep_locked;
- if (laptop_mode && !sc->may_writepage)
+ if (laptop_mode && !(sc->flags & SC_MAY_WRITEPAGE))
goto keep_locked;
/* Page is dirty, try to write it out here */
@@ -1232,8 +1232,7 @@ int try_to_free_pages(struct zone **zone
delay_prefetch();
sc.gfp_mask = gfp_mask;
- sc.may_writepage = 0;
- sc.may_swap = 1;
+ sc.flags = SC_MAY_SWAP;
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
@@ -1276,7 +1275,7 @@ int try_to_free_pages(struct zone **zone
*/
if (sc.nr_scanned > SWAP_CLUSTER_MAX * 3 / 2) {
wakeup_pdflush(laptop_mode ? 0 : sc.nr_scanned);
- sc.may_writepage = 1;
+ sc.flags |= SC_MAY_WRITEPAGE;
}
/* Take a nap, wait for some writeback to complete */
@@ -1332,8 +1331,7 @@ static int balance_pgdat(pg_data_t *pgda
loop_again:
sc.gfp_mask = GFP_KERNEL;
- sc.may_writepage = 0;
- sc.may_swap = 1;
+ sc.flags = SC_MAY_SWAP;
sc.nr_mapped = read_page_state(nr_mapped);
sc.nr_scanned = 0;
sc.nr_reclaimed = 0;
@@ -1419,7 +1417,7 @@ scan_swspd:
*/
if (sc.nr_scanned > SWAP_CLUSTER_MAX * 2 &&
sc.nr_scanned > sc.nr_reclaimed + sc.nr_reclaimed / 2)
- sc.may_writepage = 1;
+ sc.flags |= SC_MAY_WRITEPAGE;
if (nr_pages && to_free > sc.nr_reclaimed)
continue; /* swsusp: need to do more work */
--
* [PATCH 12/13] mm: add page reclaim debug traces
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (10 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 11/13] mm: fold sc.may_writepage and sc.may_swap into sc.flags Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
2005-12-06 13:56 ` [PATCH 13/13] mm: fix minor scan count bugs Wu Fengguang
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Wu Fengguang
[-- Attachment #1: mm-page-reclaim-debug-traces.patch --]
[-- Type: text/plain, Size: 5231 bytes --]
Show the detailed steps of direct/kswapd page reclaim.
To enable the printk traces:
# echo y > /debug/debug_page_reclaim
Sample lines:
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 32-32, age 2626, active to scan 6542, hot+cold+free pages 8842+283558+352
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2626, active to scan 8018, hot+cold+free pages 1693+200036+10360
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2627, active to scan 7564, hot+cold+free pages 8842+283526+384
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2627, active to scan 8296, hot+cold+free pages 1693+200018+10360
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-63, age 2628, active to scan 8587, hot+cold+free pages 8843+283495+416
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2628, active to scan 8574, hot+cold+free pages 1693+200014+10392
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-63, age 2628, active to scan 9610, hot+cold+free pages 8844+283465+448
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2628, active to scan 8852, hot+cold+free pages 1693+199996+10424
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2629, active to scan 10633, hot+cold+free pages 8844+283433+480
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2629, active to scan 9130, hot+cold+free pages 1693+199992+10456
reclaim zone3 from kswapd for watermark, prio 12, scan-reclaimed 64-64, age 2630, active to scan 11656, hot+cold+free pages 8844+283401+512
reclaim zone2 from kswapd for aging, prio 12, scan-reclaimed 32-32, age 2630, active to scan 9408, hot+cold+free pages 1693+199974+10488
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 67 insertions(+), 1 deletion(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -38,6 +38,7 @@
#include <asm/div64.h>
#include <linux/swapops.h>
+#include <linux/debugfs.h>
/* possible outcome of pageout() */
typedef enum {
@@ -78,6 +79,62 @@ struct scan_control {
#define SC_MAY_WRITEPAGE 0x1
#define SC_MAY_SWAP 0x2 /* Can pages be swapped as part of reclaim? */
+#define SC_RECLAIM_FROM_KSWAPD 0x10
+#define SC_RECLAIM_FROM_DIRECT 0x20
+#define SC_RECLAIM_FOR_WATERMARK 0x40
+#define SC_RECLAIM_FOR_AGING 0x80
+#define SC_RECLAIM_MASK 0xF0
+
+#ifdef CONFIG_DEBUG_FS
+static u32 debug_page_reclaim;
+
+static inline void debug_reclaim(struct scan_control *sc, unsigned long flags)
+{
+ sc->flags = (sc->flags & ~SC_RECLAIM_MASK) | flags;
+}
+
+static inline void debug_reclaim_report(struct scan_control *sc, struct zone *z)
+{
+ if (!debug_page_reclaim)
+ return;
+
+ printk(KERN_DEBUG "reclaim zone%d from %s for %s, "
+ "prio %d, scan-reclaimed %lu-%lu, age %lu, "
+ "active to scan %lu, "
+ "hot+cold+free pages %lu+%lu+%lu\n",
+ zone_idx(z),
+ (sc->flags & SC_RECLAIM_FROM_KSWAPD) ? "kswapd" :
+ ((sc->flags & SC_RECLAIM_FROM_DIRECT) ? "direct" :
+ "early"),
+ (sc->flags & SC_RECLAIM_FOR_AGING) ?
+ "aging" : "watermark",
+ sc->priority, sc->nr_scanned, sc->nr_reclaimed,
+ z->page_age,
+ z->nr_scan_active,
+ z->nr_active, z->nr_inactive, z->free_pages);
+
+ if (atomic_read(&z->reclaim_in_progress))
+ printk(KERN_WARNING "reclaim_in_progress=%d\n",
+ atomic_read(&z->reclaim_in_progress));
+}
+
+static inline void debug_reclaim_init(void)
+{
+ debugfs_create_bool("debug_page_reclaim", 0644, NULL,
+ &debug_page_reclaim);
+}
+#else
+static inline void debug_reclaim(struct scan_control *sc, int flags)
+{
+}
+static inline void debug_reclaim_report(struct scan_control *sc, struct zone *z)
+{
+}
+static inline void debug_reclaim_init(void)
+{
+}
+#endif
+
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
#ifdef ARCH_HAS_PREFETCH
@@ -1137,6 +1194,7 @@ shrink_zone(struct zone *zone, struct sc
atomic_dec(&zone->reclaim_in_progress);
+ debug_reclaim_report(sc, zone);
throttle_vm_writeout();
}
@@ -1201,11 +1259,14 @@ shrink_caches(struct zone **zones, struc
continue;
}
+ debug_reclaim(sc, SC_RECLAIM_FROM_DIRECT);
shrink_zone(zone, sc);
}
- if (z)
+ if (z) {
+ debug_reclaim(sc, SC_RECLAIM_FROM_DIRECT|SC_RECLAIM_FOR_AGING);
shrink_zone(z, sc);
+ }
}
/*
@@ -1377,10 +1438,14 @@ loop_again:
if (zone < prev_zone &&
!zone_watermark_ok(zone, order,
zone->pages_high, 0, 0)) {
+ debug_reclaim(&sc, SC_RECLAIM_FROM_KSWAPD |
+ SC_RECLAIM_FOR_WATERMARK);
} else if (!age_gt(zone, prev_zone) &&
!zone_watermark_ok(zone, order,
zone->pages_high,
pgdat->nr_zones - 1, 0)) {
+ debug_reclaim(&sc, SC_RECLAIM_FROM_KSWAPD |
+ SC_RECLAIM_FOR_AGING);
} else {
prev_zone = zone;
continue;
@@ -1607,6 +1672,7 @@ static int __init kswapd_init(void)
= find_task_by_pid(kernel_thread(kswapd, pgdat, CLONE_KERNEL));
total_memory = nr_free_pagecache_pages();
hotcpu_notifier(cpu_callback, 0);
+ debug_reclaim_init();
return 0;
}
--
* [PATCH 13/13] mm: fix minor scan count bugs
2005-12-06 13:56 [PATCH 00/13] Balancing the scan rate of major caches V2 Wu Fengguang
` (11 preceding siblings ...)
2005-12-06 13:56 ` [PATCH 12/13] mm: add page reclaim debug traces Wu Fengguang
@ 2005-12-06 13:56 ` Wu Fengguang
12 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 13:56 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli,
Wu Fengguang
[-- Attachment #1: mm-scan-accounting-fix.patch --]
[-- Type: text/plain, Size: 1130 bytes --]
- in isolate_lru_pages(): one scan too many is reported. Fix it.
- in shrink_cache(): 0 pages taken does not mean 0 pages scanned, so account the scan before the early return. Fix it.
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
---
mm/vmscan.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c
@@ -916,7 +916,8 @@ static int isolate_lru_pages(int nr_to_s
struct page *page;
int scan = 0;
- while (scan++ < nr_to_scan && !list_empty(src)) {
+ while (scan < nr_to_scan && !list_empty(src)) {
+ scan++;
page = lru_to_page(src);
prefetchw_prev_lru_page(page, src, flags);
@@ -963,14 +964,15 @@ static void shrink_cache(struct zone *zo
update_zone_age(zone, nr_scan);
spin_unlock_irq(&zone->lru_lock);
- if (nr_taken == 0)
- return;
-
sc->nr_scanned += nr_scan;
if (current_is_kswapd())
mod_page_state_zone(zone, pgscan_kswapd, nr_scan);
else
mod_page_state_zone(zone, pgscan_direct, nr_scan);
+
+ if (nr_taken == 0)
+ return;
+
nr_freed = shrink_list(&page_list, sc);
if (current_is_kswapd())
mod_page_state(kswapd_steal, nr_freed);
--
* Re: [PATCH 05/13] mm: balance zone aging in kswapd reclaim path
2005-12-06 13:56 ` [PATCH 05/13] mm: balance zone aging in kswapd " Wu Fengguang
@ 2005-12-06 14:19 ` Wu Fengguang
0 siblings, 0 replies; 15+ messages in thread
From: Wu Fengguang @ 2005-12-06 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: Andrew Morton, Christoph Lameter, Rik van Riel, Peter Zijlstra,
Marcelo Tosatti, Magnus Damm, Nick Piggin, Andrea Arcangeli
Here is a simple test: it concurrently copies two sparse files of 601M and 1.6G
inside qemu.
The balance is not good enough for now, but should improve considerably once
shrink_zone() scans a small enough batch in one run (say, < nr_inactive/100).
RESULTS
=======
root ~# ./show-aging-rate.sh
Linux (none) 2.6.15-rc5-mm1 #4 SMP Tue Dec 6 21:27:36 CST 2005 i686 GNU/Linux
total used free shared buffers cached
Mem: 1138 1119 18 0 0 1104
-/+ buffers/cache: 14 1123
Swap: 0 0 0
---------------------------------------------------------------
active/inactive size ratios:
DMA0: 78 / 1000 = 141 / 1803
Normal0: 372 / 1000 = 58453 / 156728
HighMem0: 415 / 1000 = 19471 / 46875
active/inactive scan rates:
DMA: 45 / 1000 = 3238 / ( 61920 + 9280)
Normal: 279 / 1000 = 1509888 / ( 5272608 + 133024)
HighMem: 437 / 1000 = 645536 / ( 1430112 + 44032)
---------------------------------------------------------------
inactive size ratios:
DMA0 / Normal0: 115 / 10000 = 1803 / 156733
Normal0 / HighMem0: 33420 / 10000 = 156733 / 46896
inactive scan rates:
DMA / Normal: 131 / 10000 = ( 61920 + 9280) / ( 5272608 + 133024)
Normal / HighMem: 36669 / 10000 = ( 5272608 + 133024) / ( 1430112 + 44032)
root ~# grep "age " /proc/zoneinfo
age 3085
age 3072
age 3072
root ~# grep -E '(low|high|free|protection:) ' /proc/zoneinfo
pages free 1161
low 21
high 25
protection: (0, 0, 880, 1140)
pages free 3420
low 1173
high 1408
protection: (0, 0, 0, 2080)
pages free 132
low 134
high 203
protection: (0, 0, 0, 0)
root ~# vmstat 5 10
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 19544 200 1130844 0 0 40 9 1102 61 6 86 7 1
1 4 0 16936 208 1133012 0 0 0 22 1043 30 7 93 0 0
1 0 0 19404 208 1130904 0 0 0 8 994 83 5 95 0 0
1 0 0 19152 160 1130612 0 0 0 8 1018 85 3 93 0 3
2 0 0 19152 160 1130952 0 0 0 8 997 56 5 95 0 0
2 0 0 18372 168 1131896 0 0 0 3 1000 50 5 95 0 0
1 0 0 18672 112 1131544 0 0 0 7 1014 70 5 95 0 0
1 0 0 19320 112 1131204 0 0 0 2 989 81 4 96 0 0
2 0 0 19152 108 1131276 0 0 1 0 996 79 5 95 0 0
2 0 0 18216 96 1132444 0 0 0 8 1015 84 5 95 0 0
pages_high + lowmem_reserve:
203 + 1408 + 2080 + 25 + 1140 = 4856 pages (x4 = 19424 KB)
SCRIPTS
=======
# cat test-aging2
#!/bin/zsh
while true
do
cp cold /dev/null
sleep $((RANDOM%20))
done &
while true
do
cp hot /dev/null
sleep $((RANDOM%10))
done
# cat show-aging-rate.sh
#!/bin/bash
uname -a
free -m
echo
echo ---------------------------------------------------------------
echo active/inactive size ratios:
egrep '(zone|active|inactive)' /proc/zoneinfo |
while true
do
read a b c d
[[ -z $a ]] && break
if [[ $c = "zone" ]]; then
prev_node=$node
prev_zone=$zone
node=${b%,}
zone=$d$node
else
eval $a=$b
if [[ $a = "inactive" ]]; then
printf "%8s: %4d / 1000 = %9d / %9d\n" $zone \
$((active * 1000 / (1 + inactive))) \
$active $inactive
fi
fi
done
while true
do
read name value
[[ -z $name ]] && break
eval $name=$value
done < /proc/vmstat
echo
echo active/inactive scan rates:
printf " DMA: %4d / 1000 = %11d / (%11d + %11d)\n" \
$((pgrefill_dma * 1000 / (1 + pgscan_kswapd_dma + pgscan_direct_dma))) \
$pgrefill_dma $pgscan_kswapd_dma $pgscan_direct_dma
[[ $pgscan_kswapd_dma32 != 0 ]] && \
printf " DMA32: %4d / 1000 = %11d / (%11d + %11d)\n" \
$((pgrefill_dma32 * 1000 / (1 + pgscan_kswapd_dma32 + pgscan_direct_dma32))) \
$pgrefill_dma32 $pgscan_kswapd_dma32 $pgscan_direct_dma32
printf " Normal: %4d / 1000 = %11d / (%11d + %11d)\n" \
$((pgrefill_normal * 1000 / (1 + pgscan_kswapd_normal + pgscan_direct_normal))) \
$pgrefill_normal $pgscan_kswapd_normal $pgscan_direct_normal
[[ $pgscan_kswapd_high != 0 ]] && \
printf " HighMem: %4d / 1000 = %11d / (%11d + %11d)\n" \
$((pgrefill_high * 1000 / (1 + pgscan_kswapd_high + pgscan_direct_high))) \
$pgrefill_high $pgscan_kswapd_high $pgscan_direct_high
echo
echo ---------------------------------------------------------------
echo inactive size ratios:
egrep '(zone|inactive)' /proc/zoneinfo |
while true
do
read a b c d
[[ -z $a ]] && break
if [[ $c = "zone" ]]; then
prev_node=$node
prev_zone=$zone
node=${b%,}
zone=$d$node
else
prev_inactive=$inactive
eval $a=$b
if [[ $prev_node = $node ]]; then
printf "%8s / %8s: %4d / 10000 = %9d / %9d\n" \
$prev_zone $zone \
$((prev_inactive * 10000 / (1 + inactive))) \
$prev_inactive $inactive
fi
fi
done
echo
echo inactive scan rates:
[[ $pgscan_kswapd_dma != 0 ]] && \
printf "%8s / %8s: %4d / 10000 = (%11d + %11d) / (%11d + %11d)\n" \
"DMA" "Normal" \
$(((1 + pgscan_kswapd_dma + pgscan_direct_dma)* 10000 /\
(1 + pgscan_kswapd_normal + pgscan_direct_normal))) \
$pgscan_kswapd_dma $pgscan_direct_dma \
$pgscan_kswapd_normal $pgscan_direct_normal
[[ $pgscan_kswapd_dma32 != 0 ]] && \
printf "%8s / %8s: %4d / 10000 = (%11d + %11d) / (%11d + %11d)\n" \
"DMA32" "Normal" \
$(((1 + pgscan_kswapd_dma32 + pgscan_direct_dma32)* 10000 /\
(1 + pgscan_kswapd_normal + pgscan_direct_normal))) \
$pgscan_kswapd_dma32 $pgscan_direct_dma32 \
$pgscan_kswapd_normal $pgscan_direct_normal
[[ $pgscan_kswapd_high != 0 ]] && \
printf "%8s / %8s: %4d / 10000 = (%11d + %11d) / (%11d + %11d)\n" \
"Normal" "HighMem" \
$(((1 + pgscan_kswapd_normal + pgscan_direct_normal)* 10000 /\
(1 + pgscan_kswapd_high + pgscan_direct_high))) \
$pgscan_kswapd_normal $pgscan_direct_normal \
$pgscan_kswapd_high $pgscan_direct_high