From: Mel Gorman <mel@csn.ul.ie>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Shaohua Li <shaohua.li@intel.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>, Mel Gorman <mel@csn.ul.ie>
Subject: [PATCH 2/2] mm: vmstat: Use a single setter function and callback for adjusting percpu thresholds
Date: Wed, 27 Oct 2010 09:47:36 +0100
Message-ID: <1288169256-7174-3-git-send-email-mel@csn.ul.ie>
In-Reply-To: <1288169256-7174-1-git-send-email-mel@csn.ul.ie>

reduce_pgdat_percpu_threshold() and restore_pgdat_percpu_threshold()
exist to adjust the per-cpu vmstat thresholds while kswapd is awake to
avoid errors due to counter drift. The two functions duplicate some code,
so this patch replaces them with a single set_pgdat_percpu_threshold()
which takes, as a parameter, a callback that calculates the desired
threshold.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
include/linux/vmstat.h | 10 ++++++----
mm/vmscan.c | 6 ++++--
mm/vmstat.c | 30 ++++++------------------------
3 files changed, 16 insertions(+), 30 deletions(-)
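
For readers skimming the diff, the shape of the change is the usual
replace-two-functions-with-one-plus-callback pattern. The sketch below is a
standalone userspace illustration of that pattern only; the zone fields, the
fixed CPU count and the threshold numbers are invented for the example and
are not the kernel's:

	#include <stdio.h>

	#define DEMO_NR_CPUS 4	/* stand-in for num_online_cpus() */

	/* Stand-in for the kernel's zone; fields invented for the demo. */
	struct zone {
		int high_wmark;
		int low_wmark;
		int stat_threshold;
	};

	/* Tight threshold used while kswapd is awake and counters must
	 * stay accurate. */
	static int calc_pressure_threshold(struct zone *z)
	{
		int t = (z->high_wmark - z->low_wmark) / DEMO_NR_CPUS;
		return t > 1 ? t : 1;
	}

	/* Larger threshold used while kswapd sleeps; a constant stands in
	 * for the kernel's memory-size-based calculation. */
	static int calc_normal_threshold(struct zone *z)
	{
		(void)z;
		return 32;
	}

	/* One setter replaces the reduce/restore pair: the policy is the
	 * callback. */
	static void set_percpu_threshold(struct zone *zones, int nr,
					 int (*calc)(struct zone *))
	{
		for (int i = 0; i < nr; i++)
			zones[i].stat_threshold = calc(&zones[i]);
	}

	int main(void)
	{
		struct zone zones[2] = { { 400, 100, 0 }, { 800, 200, 0 } };

		set_percpu_threshold(zones, 2, calc_pressure_threshold);
		printf("pressure thresholds: %d %d\n",
		       zones[0].stat_threshold, zones[1].stat_threshold);

		set_percpu_threshold(zones, 2, calc_normal_threshold);
		printf("normal thresholds:   %d %d\n",
		       zones[0].stat_threshold, zones[1].stat_threshold);
		return 0;
	}

In the patch itself the setter walks the populated zones of the pgdat and
writes the chosen threshold into each online CPU's pageset->stat_threshold,
as the mm/vmstat.c hunk below shows.
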
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index e4cc21c..833e676 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -254,8 +254,11 @@ extern void dec_zone_state(struct zone *, enum zone_stat_item);
extern void __dec_zone_state(struct zone *, enum zone_stat_item);
void refresh_cpu_vm_stats(int);
-void reduce_pgdat_percpu_threshold(pg_data_t *pgdat);
-void restore_pgdat_percpu_threshold(pg_data_t *pgdat);
+
+int calculate_pressure_threshold(struct zone *zone);
+int calculate_normal_threshold(struct zone *zone);
+void set_pgdat_percpu_threshold(pg_data_t *pgdat,
+ int (*calculate_pressure)(struct zone *));
#else /* CONFIG_SMP */
/*
@@ -300,8 +303,7 @@ static inline void __dec_zone_page_state(struct page *page,
#define dec_zone_page_state __dec_zone_page_state
#define mod_zone_page_state __mod_zone_page_state
-static inline void reduce_pgdat_percpu_threshold(pg_data_t *pgdat) { }
-static inline void restore_pgdat_percpu_threshold(pg_data_t *pgdat) { }
+#define set_pgdat_percpu_threshold(pgdat, callback) { }
static inline void refresh_cpu_vm_stats(int cpu) { }
#endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3e71cb1..7966110 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2378,9 +2378,11 @@ static int kswapd(void *p)
*/
if (!sleeping_prematurely(pgdat, order, remaining)) {
trace_mm_vmscan_kswapd_sleep(pgdat->node_id);
- restore_pgdat_percpu_threshold(pgdat);
+ set_pgdat_percpu_threshold(pgdat,
+ calculate_normal_threshold);
schedule();
- reduce_pgdat_percpu_threshold(pgdat);
+ set_pgdat_percpu_threshold(pgdat,
+ calculate_pressure_threshold);
} else {
if (remaining)
count_vm_event(KSWAPD_LOW_WMARK_HIT_QUICKLY);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index cafcc2d..73661e8 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -81,13 +81,13 @@ EXPORT_SYMBOL(vm_stat);
#ifdef CONFIG_SMP
-static int calculate_pressure_threshold(struct zone *zone)
+int calculate_pressure_threshold(struct zone *zone)
{
return max(1, (int)((high_wmark_pages(zone) - low_wmark_pages(zone) /
num_online_cpus())));
}
-static int calculate_threshold(struct zone *zone)
+int calculate_normal_threshold(struct zone *zone)
{
int threshold;
int mem; /* memory in 128 MB units */
@@ -146,7 +146,7 @@ static void refresh_zone_stat_thresholds(void)
for_each_populated_zone(zone) {
unsigned long max_drift, tolerate_drift;
- threshold = calculate_threshold(zone);
+ threshold = calculate_normal_threshold(zone);
for_each_online_cpu(cpu)
per_cpu_ptr(zone->pageset, cpu)->stat_threshold
@@ -165,7 +165,8 @@ static void refresh_zone_stat_thresholds(void)
}
}
-void reduce_pgdat_percpu_threshold(pg_data_t *pgdat)
+void set_pgdat_percpu_threshold(pg_data_t *pgdat,
+ int (*calculate_pressure)(struct zone *))
{
struct zone *zone;
int cpu;
@@ -177,26 +178,7 @@ void reduce_pgdat_percpu_threshold(pg_data_t *pgdat)
if (!zone->percpu_drift_mark)
continue;
- threshold = calculate_pressure_threshold(zone);
- for_each_online_cpu(cpu)
- per_cpu_ptr(zone->pageset, cpu)->stat_threshold
- = threshold;
- }
-}
-
-void restore_pgdat_percpu_threshold(pg_data_t *pgdat)
-{
- struct zone *zone;
- int cpu;
- int threshold;
- int i;
-
- for (i = 0; i < pgdat->nr_zones; i++) {
- zone = &pgdat->node_zones[i];
- if (!zone->percpu_drift_mark)
- continue;
-
- threshold = calculate_threshold(zone);
+ threshold = calculate_pressure(zone);
for_each_online_cpu(cpu)
per_cpu_ptr(zone->pageset, cpu)->stat_threshold
= threshold;
--
1.7.1