From: Lai Jiangshan <laijs-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
To: Mel Gorman <mel-wPRd99KPJ+uzQB+pC5nmwQ@public.gmane.org>
Cc: Christoph Lameter
<cl-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>,
Jiri Kosina <jkosina-AlSwsSmVLrQ@public.gmane.org>,
Dan Magenheimer
<dan.magenheimer-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org>,
Paul Gortmaker
<paul.gortmaker-CWA4WttNNZF54TAoqtyWWQ@public.gmane.org>,
Konstantin Khlebnikov
<khlebnikov-GEFAQzZX7r8dnm+yROfE0A@public.gmane.org>,
"H. Peter Anvin" <hpa-YMNOUZJC4hwAvxtiuMwx3w@public.gmane.org>,
Sam Ravnborg <sam-uyr5N9Q2VtJg9hUCZPvPmw@public.gmane.org>,
Gavin Shan
<shangw-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org>,
Rik van Riel <riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
Hugh Dickins <hughd-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
Ingo Molnar <mingo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
Mel Gorman <mgorman-l3A5Bk7waGM@public.gmane.org>,
KOSAKI Motohiro
<kosaki.motohiro-+CUm20s59erQFUHtdCDX3A@public.gmane.org>,
David Rientjes <rientjes-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
Petr Holasek <pholasek-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
Wanlong Gao <gaowanlong-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>,
Djalal Harouni <tixxdz-Umm1ozX2/EEdnm+yROfE0A@public.gmane.org>,
Rusty Russell <rusty-8n+1lVoiYb80n/F98K4Iww@public.gmane.org>,
Wen Congyang <wency-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>,
Peter Zijlstra <a.p.zijlstra@ch>
Subject: [RFC PATCH 02/23 V2] cpuset: use N_MEMORY instead N_HIGH_MEMORY
Date: Thu, 2 Aug 2012 10:52:50 +0800
Message-ID: <1343875991-7533-3-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1343875991-7533-1-git-send-email-laijs-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>

N_HIGH_MEMORY stands for the nodes that have normal or high memory.
N_MEMORY stands for the nodes that have any memory.

The code here needs to handle the nodes which have memory, so we should
use N_MEMORY instead.
Signed-off-by: Lai Jiangshan <laijs-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
---
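For reference, a minimal sketch of how N_MEMORY is expected to be defined in
include/linux/nodemask.h once the whole series is applied (this is an
illustration based on patch 01/23 and the later CONFIG_MOVABLE_NODE patch, not
the exact hunks). On every configuration without a movable-dedicated node,
N_MEMORY simply aliases N_HIGH_MEMORY, so the conversion below does not change
cpuset behaviour on existing systems:

/*
 * Sketch of the nodemask.h node state bits; the usage function below assumes
 * the usual <linux/nodemask.h> and <linux/printk.h> includes.
 */
enum node_states {
	N_POSSIBLE,		/* The node could become online at some point */
	N_ONLINE,		/* The node is online */
	N_NORMAL_MEMORY,	/* The node has regular memory */
#ifdef CONFIG_HIGHMEM
	N_HIGH_MEMORY,		/* The node has regular or high memory */
#else
	N_HIGH_MEMORY = N_NORMAL_MEMORY,
#endif
#ifdef CONFIG_MOVABLE_NODE
	N_MEMORY,		/* The node has memory of any kind */
#else
	N_MEMORY = N_HIGH_MEMORY,
#endif
	N_CPU,			/* The node has one or more cpus */
	NR_NODE_STATES
};

/*
 * Callers that want "every node with memory" keep using the usual idiom,
 * only against the wider state bit, e.g.:
 */
static void report_memory_nodes(void)
{
	int nid;

	for_each_node_state(nid, N_MEMORY)
		pr_info("node %d has memory\n", nid);
}

guarantee_online_mems() and the top_cpuset tracking below follow the memory
hotplug notifier exactly as before, just against N_MEMORY instead of
N_HIGH_MEMORY.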
 Documentation/cgroups/cpusets.txt |    2 +-
 include/linux/cpuset.h            |    2 +-
 kernel/cpuset.c                   |   32 ++++++++++++++++----------------
3 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index cefd3d8..12e01d4 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -218,7 +218,7 @@ and name space for cpusets, with a minimum of additional kernel code.
The cpus and mems files in the root (top_cpuset) cpuset are
read-only. The cpus file automatically tracks the value of
cpu_online_mask using a CPU hotplug notifier, and the mems file
-automatically tracks the value of node_states[N_HIGH_MEMORY]--i.e.,
+automatically tracks the value of node_states[N_MEMORY]--i.e.,
nodes with memory--using the cpuset_track_online_nodes() hook.
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 838320f..8c8a60d 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -144,7 +144,7 @@ static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
return node_possible_map;
}
-#define cpuset_current_mems_allowed (node_states[N_HIGH_MEMORY])
+#define cpuset_current_mems_allowed (node_states[N_MEMORY])
static inline void cpuset_init_current_mems_allowed(void) {}
static inline int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index f33c715..2b133db 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -302,10 +302,10 @@ static void guarantee_online_cpus(const struct cpuset *cs,
* are online, with memory. If none are online with memory, walk
* up the cpuset hierarchy until we find one that does have some
* online mems. If we get all the way to the top and still haven't
- * found any online mems, return node_states[N_HIGH_MEMORY].
+ * found any online mems, return node_states[N_MEMORY].
*
* One way or another, we guarantee to return some non-empty subset
- * of node_states[N_HIGH_MEMORY].
+ * of node_states[N_MEMORY].
*
* Call with callback_mutex held.
*/
@@ -313,14 +313,14 @@ static void guarantee_online_cpus(const struct cpuset *cs,
static void guarantee_online_mems(const struct cpuset *cs, nodemask_t *pmask)
{
while (cs && !nodes_intersects(cs->mems_allowed,
- node_states[N_HIGH_MEMORY]))
+ node_states[N_MEMORY]))
cs = cs->parent;
if (cs)
nodes_and(*pmask, cs->mems_allowed,
- node_states[N_HIGH_MEMORY]);
+ node_states[N_MEMORY]);
else
- *pmask = node_states[N_HIGH_MEMORY];
- BUG_ON(!nodes_intersects(*pmask, node_states[N_HIGH_MEMORY]));
+ *pmask = node_states[N_MEMORY];
+ BUG_ON(!nodes_intersects(*pmask, node_states[N_MEMORY]));
}
/*
@@ -1100,7 +1100,7 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
return -ENOMEM;
/*
- * top_cpuset.mems_allowed tracks node_stats[N_HIGH_MEMORY];
+ * top_cpuset.mems_allowed tracks node_stats[N_MEMORY];
* it's read-only
*/
if (cs == &top_cpuset) {
@@ -1122,7 +1122,7 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
goto done;
if (!nodes_subset(trialcs->mems_allowed,
- node_states[N_HIGH_MEMORY])) {
+ node_states[N_MEMORY])) {
retval = -EINVAL;
goto done;
}
@@ -2034,7 +2034,7 @@ static struct cpuset *cpuset_next(struct list_head *queue)
* before dropping down to the next. It always processes a node before
* any of its children.
*
- * In the case of memory hot-unplug, it will remove nodes from N_HIGH_MEMORY
+ * In the case of memory hot-unplug, it will remove nodes from N_MEMORY
* if all present pages from a node are offlined.
*/
static void
@@ -2073,7 +2073,7 @@ scan_cpusets_upon_hotplug(struct cpuset *root, enum hotplug_event event)
/* Continue past cpusets with all mems online */
if (nodes_subset(cp->mems_allowed,
- node_states[N_HIGH_MEMORY]))
+ node_states[N_MEMORY]))
continue;
oldmems = cp->mems_allowed;
@@ -2081,7 +2081,7 @@ scan_cpusets_upon_hotplug(struct cpuset *root, enum hotplug_event event)
/* Remove offline mems from this cpuset. */
mutex_lock(&callback_mutex);
nodes_and(cp->mems_allowed, cp->mems_allowed,
- node_states[N_HIGH_MEMORY]);
+ node_states[N_MEMORY]);
mutex_unlock(&callback_mutex);
/* Move tasks from the empty cpuset to a parent */
@@ -2134,8 +2134,8 @@ void cpuset_update_active_cpus(bool cpu_online)
#ifdef CONFIG_MEMORY_HOTPLUG
/*
- * Keep top_cpuset.mems_allowed tracking node_states[N_HIGH_MEMORY].
- * Call this routine anytime after node_states[N_HIGH_MEMORY] changes.
+ * Keep top_cpuset.mems_allowed tracking node_states[N_MEMORY].
+ * Call this routine anytime after node_states[N_MEMORY] changes.
* See cpuset_update_active_cpus() for CPU hotplug handling.
*/
static int cpuset_track_online_nodes(struct notifier_block *self,
@@ -2148,7 +2148,7 @@ static int cpuset_track_online_nodes(struct notifier_block *self,
case MEM_ONLINE:
oldmems = top_cpuset.mems_allowed;
mutex_lock(&callback_mutex);
- top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
+ top_cpuset.mems_allowed = node_states[N_MEMORY];
mutex_unlock(&callback_mutex);
update_tasks_nodemask(&top_cpuset, &oldmems, NULL);
break;
@@ -2177,7 +2177,7 @@ static int cpuset_track_online_nodes(struct notifier_block *self,
void __init cpuset_init_smp(void)
{
cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);
- top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
+ top_cpuset.mems_allowed = node_states[N_MEMORY];
hotplug_memory_notifier(cpuset_track_online_nodes, 10);
@@ -2245,7 +2245,7 @@ void cpuset_init_current_mems_allowed(void)
*
* Description: Returns the nodemask_t mems_allowed of the cpuset
* attached to the specified @tsk. Guaranteed to return some non-empty
- * subset of node_states[N_HIGH_MEMORY], even if this means going outside the
+ * subset of node_states[N_MEMORY], even if this means going outside the
* tasks cpuset.
**/
--
1.7.1