From: Juergen Gross <juergen.gross@ts.fujitsu.com>
To: xen-devel@lists.xensource.com
Subject: [PATCH 3 of 3] reflect cpupool in numa node affinity
Date: Tue, 24 Jan 2012 06:54:12 +0100 [thread overview]
Message-ID: <574dba7570ff785d3351.1327384452@nehalem1> (raw)
In-Reply-To: <patchbomb.1327384449@nehalem1>
[-- Attachment #1: Type: text/plain, Size: 378 bytes --]
In order to prefer node-local memory for a domain, the NUMA node locality
info must be built according to the CPUs belonging to the domain's cpupool.
Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
3 files changed, 27 insertions(+), 8 deletions(-)
xen/common/cpupool.c | 9 +++++++++
xen/common/domain.c | 16 +++++++++++++++-
xen/common/schedule.c | 10 +++-------
[-- Attachment #2: xen-staging.hg-3.patch --]
[-- Type: text/x-patch, Size: 4196 bytes --]
# HG changeset patch
# User Juergen Gross <juergen.gross@ts.fujitsu.com>
# Date 1327384424 -3600
# Node ID 574dba7570ff785d3351051a4a0a724c63066f57
# Parent 08232960ff4bed750d26e5f1ff53972fee9e0130
reflect cpupool in numa node affinity
In order to prefer node-local memory for a domain, the NUMA node locality
info must be built according to the CPUs belonging to the domain's cpupool.
Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
diff -r 08232960ff4b -r 574dba7570ff xen/common/cpupool.c
--- a/xen/common/cpupool.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/cpupool.c Tue Jan 24 06:53:44 2012 +0100
@@ -220,6 +220,7 @@ static int cpupool_assign_cpu_locked(str
{
int ret;
struct cpupool *old;
+ struct domain *d;
if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
return -EBUSY;
@@ -240,6 +241,14 @@ static int cpupool_assign_cpu_locked(str
cpupool_cpu_moving = NULL;
}
cpumask_set_cpu(cpu, c->cpu_valid);
+
+ rcu_read_lock(&domlist_read_lock);
+ for_each_domain_in_cpupool(d, c)
+ {
+ domain_update_node_affinity(d);
+ }
+ rcu_read_unlock(&domlist_read_lock);
+
return 0;
}
diff -r 08232960ff4b -r 574dba7570ff xen/common/domain.c
--- a/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/domain.c Tue Jan 24 06:53:44 2012 +0100
@@ -11,6 +11,7 @@
#include <xen/ctype.h>
#include <xen/errno.h>
#include <xen/sched.h>
+#include <xen/sched-if.h>
#include <xen/domain.h>
#include <xen/mm.h>
#include <xen/event.h>
@@ -334,17 +335,29 @@ void domain_update_node_affinity(struct
void domain_update_node_affinity(struct domain *d)
{
cpumask_var_t cpumask;
+ cpumask_var_t online_affinity;
+ const cpumask_t *online;
nodemask_t nodemask = NODE_MASK_NONE;
struct vcpu *v;
unsigned int node;
if ( !zalloc_cpumask_var(&cpumask) )
return;
+ if ( !alloc_cpumask_var(&online_affinity) )
+ {
+ free_cpumask_var(cpumask);
+ return;
+ }
+
+ online = cpupool_online_cpumask(d->cpupool);
spin_lock(&d->node_affinity_lock);
for_each_vcpu ( d, v )
- cpumask_or(cpumask, cpumask, v->cpu_affinity);
+ {
+ cpumask_and(online_affinity, v->cpu_affinity, online);
+ cpumask_or(cpumask, cpumask, online_affinity);
+ }
for_each_online_node ( node )
if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
@@ -353,6 +366,7 @@ void domain_update_node_affinity(struct
d->node_affinity = nodemask;
spin_unlock(&d->node_affinity_lock);
+ free_cpumask_var(online_affinity);
free_cpumask_var(cpumask);
}
diff -r 08232960ff4b -r 574dba7570ff xen/common/schedule.c
--- a/xen/common/schedule.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/schedule.c Tue Jan 24 06:53:44 2012 +0100
@@ -280,11 +280,12 @@ int sched_move_domain(struct domain *d,
SCHED_OP(VCPU2OP(v), insert_vcpu, v);
}
- domain_update_node_affinity(d);
d->cpupool = c;
SCHED_OP(DOM2OP(d), free_domdata, d->sched_priv);
d->sched_priv = domdata;
+
+ domain_update_node_affinity(d);
domain_unpause(d);
@@ -535,7 +536,6 @@ int cpu_disable_scheduler(unsigned int c
struct cpupool *c;
cpumask_t online_affinity;
int ret = 0;
- bool_t affinity_broken;
c = per_cpu(cpupool, cpu);
if ( c == NULL )
@@ -543,8 +543,6 @@ int cpu_disable_scheduler(unsigned int c
for_each_domain_in_cpupool ( d, c )
{
- affinity_broken = 0;
-
for_each_vcpu ( d, v )
{
vcpu_schedule_lock_irq(v);
@@ -556,7 +554,6 @@ int cpu_disable_scheduler(unsigned int c
printk("Breaking vcpu affinity for domain %d vcpu %d\n",
v->domain->domain_id, v->vcpu_id);
cpumask_setall(v->cpu_affinity);
- affinity_broken = 1;
}
if ( v->processor == cpu )
@@ -580,8 +577,7 @@ int cpu_disable_scheduler(unsigned int c
ret = -EAGAIN;
}
- if ( affinity_broken )
- domain_update_node_affinity(d);
+ domain_update_node_affinity(d);
}
return ret;