* [PATCH 0 of 3] Reflect cpupool in numa node affinity (v4)
@ 2012-01-24 5:54 Juergen Gross
2012-01-24 5:54 ` [PATCH 1 of 3] introduce and use common macros for selecting cpupool based cpumasks Juergen Gross
` (2 more replies)
From: Juergen Gross @ 2012-01-24 5:54 UTC (permalink / raw)
To: xen-devel
In order to prefer node-local memory for a domain, the NUMA node locality
info must be built according to the CPUs belonging to the domain's cpupool.
(A simplified, self-contained sketch of the resulting computation is
included after the diffstat below.)
Changes in v4:
- split into three patches
Changes in v3:
- formatting
- avoid memory leak
Changes in v2:
- switch to dynamically allocated cpumasks in domain_update_node_affinity()
- introduce and use common macros for selecting cpupool based cpumasks
8 files changed, 48 insertions(+), 28 deletions(-)
xen/common/cpupool.c | 9 +++++++++
xen/common/domain.c | 28 +++++++++++++++++++++++-----
xen/common/domctl.c | 2 +-
xen/common/sched_credit.c | 6 ++----
xen/common/sched_credit2.c | 2 --
xen/common/sched_sedf.c | 8 +++-----
xen/common/schedule.c | 16 +++++-----------
xen/include/xen/sched-if.h | 5 +++++
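
To make the intent concrete, below is a small, self-contained model of the
computation the series ends up with. All names and types in it are
illustrative stand-ins (the real code uses cpumask_t, nodemask_t and the Xen
cpumask API visible in the patches that follow); it only demonstrates the
idea of intersecting the vcpu affinities with the cpupool's CPUs before
deriving the node mask.

/*
 * Simplified, self-contained model of the node affinity computation this
 * series introduces.  All names and types are stand-ins for the real
 * cpumask_t/nodemask_t machinery.
 */
#include <stdio.h>

#define NR_CPUS_MODEL   8
#define NR_NODES_MODEL  2

typedef unsigned long cpumask_model_t;    /* one bit per CPU  */
typedef unsigned long nodemask_model_t;   /* one bit per node */

/* CPUs belonging to each NUMA node: node 0 = CPUs 0-3, node 1 = CPUs 4-7. */
static const cpumask_model_t node_to_cpus[NR_NODES_MODEL] = { 0x0f, 0xf0 };

/*
 * Union of all vcpu affinities, restricted to the CPUs of the domain's
 * cpupool, mapped to the set of nodes those CPUs belong to.
 */
static nodemask_model_t node_affinity(const cpumask_model_t *vcpu_affinity,
                                      unsigned int nr_vcpus,
                                      cpumask_model_t pool_cpus)
{
    cpumask_model_t cpus = 0;
    nodemask_model_t nodes = 0;
    unsigned int v, node;

    for ( v = 0; v < nr_vcpus; v++ )
        cpus |= vcpu_affinity[v] & pool_cpus;      /* intersect with cpupool */

    for ( node = 0; node < NR_NODES_MODEL; node++ )
        if ( node_to_cpus[node] & cpus )
            nodes |= 1UL << node;

    return nodes;
}

int main(void)
{
    cpumask_model_t affinity[2] = { 0xff, 0xff };  /* vcpus allowed anywhere */
    cpumask_model_t pool = 0x0f;                   /* cpupool owns CPUs 0-3  */

    /* Only node 0 intersects the cpupool, so only node 0 is preferred. */
    printf("node affinity: %#lx\n", node_affinity(affinity, 2, pool));
    return 0;
}

With both vcpus allowed on all CPUs but the cpupool restricted to CPUs 0-3,
only node 0 ends up in the node affinity.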
* [PATCH 1 of 3] introduce and use common macros for selecting cpupool based cpumasks
2012-01-24 5:54 [PATCH 0 of 3] Reflect cpupool in numa node affinity (v4) Juergen Gross
@ 2012-01-24 5:54 ` Juergen Gross
2012-01-24 5:54 ` [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity() Juergen Gross
2012-01-24 5:54 ` [PATCH 3 of 3] reflect cpupool in numa node affinity Juergen Gross
From: Juergen Gross @ 2012-01-24 5:54 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: Type: text/plain, Size: 444 bytes --]
There are several instances of the same construct for finding the cpumask of
a cpupool. Introduce common macros and use them instead.
Signed-off-by: juergen.gross@ts.fujitsu.com
6 files changed, 13 insertions(+), 16 deletions(-)
xen/common/domctl.c | 2 +-
xen/common/sched_credit.c | 6 ++----
xen/common/sched_credit2.c | 2 --
xen/common/sched_sedf.c | 8 +++-----
xen/common/schedule.c | 6 ++----
xen/include/xen/sched-if.h | 5 +++++
[-- Attachment #2: xen-staging.hg-3.patch --]
[-- Type: text/x-patch, Size: 5750 bytes --]
# HG changeset patch
# User Juergen Gross <juergen.gross@ts.fujitsu.com>
# Date 1327384386 -3600
# Node ID 99f98e501f226825fbf16f6210b4b7834dff5df1
# Parent 370924e204dc29e12cd807dd730974d6b2bc53d3
introduce and use common macros for selecting cpupool based cpumasks
There are several instances of the same construct for finding the cpumask of
a cpupool. Introduce common macros and use them instead.
Signed-off-by: juergen.gross@ts.fujitsu.com
diff -r 370924e204dc -r 99f98e501f22 xen/common/domctl.c
--- a/xen/common/domctl.c Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/common/domctl.c Tue Jan 24 06:53:06 2012 +0100
@@ -518,7 +518,7 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domc
goto maxvcpu_out;
ret = -ENOMEM;
- online = (d->cpupool == NULL) ? &cpu_online_map : d->cpupool->cpu_valid;
+ online = cpupool_online_cpumask(d->cpupool);
if ( max > d->max_vcpus )
{
struct vcpu **vcpus;
diff -r 370924e204dc -r 99f98e501f22 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/common/sched_credit.c Tue Jan 24 06:53:06 2012 +0100
@@ -72,8 +72,6 @@
#define CSCHED_VCPU(_vcpu) ((struct csched_vcpu *) (_vcpu)->sched_priv)
#define CSCHED_DOM(_dom) ((struct csched_dom *) (_dom)->sched_priv)
#define RUNQ(_cpu) (&(CSCHED_PCPU(_cpu)->runq))
-#define CSCHED_CPUONLINE(_pool) \
- (((_pool) == NULL) ? &cpupool_free_cpus : (_pool)->cpu_valid)
/*
@@ -471,7 +469,7 @@ _csched_cpu_pick(const struct scheduler
* Pick from online CPUs in VCPU's affinity mask, giving a
* preference to its current processor if it's in there.
*/
- online = CSCHED_CPUONLINE(vc->domain->cpupool);
+ online = cpupool_scheduler_cpumask(vc->domain->cpupool);
cpumask_and(&cpus, online, vc->cpu_affinity);
cpu = cpumask_test_cpu(vc->processor, &cpus)
? vc->processor
@@ -1230,7 +1228,7 @@ csched_load_balance(struct csched_privat
int peer_cpu;
BUG_ON( cpu != snext->vcpu->processor );
- online = CSCHED_CPUONLINE(per_cpu(cpupool, cpu));
+ online = cpupool_scheduler_cpumask(per_cpu(cpupool, cpu));
/* If this CPU is going offline we shouldn't steal work. */
if ( unlikely(!cpumask_test_cpu(cpu, online)) )
diff -r 370924e204dc -r 99f98e501f22 xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/common/sched_credit2.c Tue Jan 24 06:53:06 2012 +0100
@@ -169,8 +169,6 @@ integer_param("sched_credit2_migrate_res
((struct csched_private *)((_ops)->sched_data))
#define CSCHED_VCPU(_vcpu) ((struct csched_vcpu *) (_vcpu)->sched_priv)
#define CSCHED_DOM(_dom) ((struct csched_dom *) (_dom)->sched_priv)
-#define CSCHED_CPUONLINE(_pool) \
- (((_pool) == NULL) ? &cpupool_free_cpus : (_pool)->cpu_valid)
/* CPU to runq_id macro */
#define c2r(_ops, _cpu) (CSCHED_PRIV(_ops)->runq_map[(_cpu)])
/* CPU to runqueue struct macro */
diff -r 370924e204dc -r 99f98e501f22 xen/common/sched_sedf.c
--- a/xen/common/sched_sedf.c Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/common/sched_sedf.c Tue Jan 24 06:53:06 2012 +0100
@@ -12,9 +12,6 @@
#include <xen/softirq.h>
#include <xen/time.h>
#include <xen/errno.h>
-
-#define SEDF_CPUONLINE(_pool) \
- (((_pool) == NULL) ? &cpupool_free_cpus : (_pool)->cpu_valid)
#ifndef NDEBUG
#define SEDF_STATS
@@ -397,7 +394,7 @@ static int sedf_pick_cpu(const struct sc
cpumask_t online_affinity;
cpumask_t *online;
- online = SEDF_CPUONLINE(v->domain->cpupool);
+ online = cpupool_scheduler_cpumask(v->domain->cpupool);
cpumask_and(&online_affinity, v->cpu_affinity, online);
return cpumask_first(&online_affinity);
}
@@ -801,7 +798,8 @@ static struct task_slice sedf_do_schedul
*/
if ( tasklet_work_scheduled ||
(list_empty(runq) && list_empty(waitq)) ||
- unlikely(!cpumask_test_cpu(cpu, SEDF_CPUONLINE(per_cpu(cpupool, cpu)))) )
+ unlikely(!cpumask_test_cpu(cpu,
+ cpupool_scheduler_cpumask(per_cpu(cpupool, cpu)))) )
{
ret.task = IDLETASK(cpu);
ret.time = SECONDS(1);
diff -r 370924e204dc -r 99f98e501f22 xen/common/schedule.c
--- a/xen/common/schedule.c Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/common/schedule.c Tue Jan 24 06:53:06 2012 +0100
@@ -77,9 +77,7 @@ static struct scheduler __read_mostly op
#define DOM2OP(_d) (((_d)->cpupool == NULL) ? &ops : ((_d)->cpupool->sched))
#define VCPU2OP(_v) (DOM2OP((_v)->domain))
-#define VCPU2ONLINE(_v) \
- (((_v)->domain->cpupool == NULL) ? &cpu_online_map \
- : (_v)->domain->cpupool->cpu_valid)
+#define VCPU2ONLINE(_v) cpupool_online_cpumask((_v)->domain->cpupool)
static inline void trace_runstate_change(struct vcpu *v, int new_state)
{
@@ -1418,7 +1416,7 @@ void schedule_dump(struct cpupool *c)
cpumask_t *cpus;
sched = (c == NULL) ? &ops : c->sched;
- cpus = (c == NULL) ? &cpupool_free_cpus : c->cpu_valid;
+ cpus = cpupool_scheduler_cpumask(c);
printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
SCHED_OP(sched, dump_settings);
diff -r 370924e204dc -r 99f98e501f22 xen/include/xen/sched-if.h
--- a/xen/include/xen/sched-if.h Mon Jan 23 15:10:43 2012 +0000
+++ b/xen/include/xen/sched-if.h Tue Jan 24 06:53:06 2012 +0100
@@ -204,4 +204,9 @@ struct cpupool
atomic_t refcnt;
};
+#define cpupool_scheduler_cpumask(_pool) \
+ (((_pool) == NULL) ? &cpupool_free_cpus : (_pool)->cpu_valid)
+#define cpupool_online_cpumask(_pool) \
+ (((_pool) == NULL) ? &cpu_online_map : (_pool)->cpu_valid)
+
#endif /* __XEN_SCHED_IF_H__ */
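
For orientation: the two macros above differ only in their fallback for the
NULL (no cpupool) case — scheduler code falls back to cpupool_free_cpus,
while general code such as the vcpu allocation path in domctl.c falls back
to cpu_online_map. A small, self-contained illustration of that behaviour
(all identifiers here are simplified stand-ins, not hypervisor code):

#include <stdio.h>

typedef unsigned long mask_t;

struct pool_model { mask_t cpu_valid; };

static mask_t free_cpus_model   = 0x00;   /* stand-in for cpupool_free_cpus */
static mask_t online_cpus_model = 0xff;   /* stand-in for cpu_online_map    */

#define scheduler_cpumask_model(p) \
    (((p) == NULL) ? &free_cpus_model : &(p)->cpu_valid)
#define online_cpumask_model(p) \
    (((p) == NULL) ? &online_cpus_model : &(p)->cpu_valid)

int main(void)
{
    struct pool_model pool = { .cpu_valid = 0x0f };

    /* With a cpupool both macros yield the pool's valid-CPU mask ... */
    printf("%#lx %#lx\n",
           *scheduler_cpumask_model(&pool), *online_cpumask_model(&pool));

    /* ... without one, schedulers see the free CPUs while e.g. the vcpu
     * allocation path sees the whole online map. */
    printf("%#lx %#lx\n",
           *scheduler_cpumask_model(NULL), *online_cpumask_model(NULL));
    return 0;
}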
* [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()
2012-01-24 5:54 [PATCH 0 of 3] Reflect cpupool in numa node affinity (v4) Juergen Gross
2012-01-24 5:54 ` [PATCH 1 of 3] introduce and use common macros for selecting cpupool based cpumasks Juergen Gross
@ 2012-01-24 5:54 ` Juergen Gross
2012-01-24 9:33 ` Ian Campbell
2012-01-24 5:54 ` [PATCH 3 of 3] reflect cpupool in numa node affinity Juergen Gross
From: Juergen Gross @ 2012-01-24 5:54 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: Type: text/plain, Size: 186 bytes --]
cpumasks should be allocated dynamically rather than on the stack.
Signed-off-by: juergen.gross@ts.fujitsu.com
1 file changed, 8 insertions(+), 4 deletions(-)
xen/common/domain.c | 12 ++++++++----
[-- Attachment #2: xen-staging.hg-3.patch --]
[-- Type: text/x-patch, Size: 1397 bytes --]
# HG changeset patch
# User Juergen Gross <juergen.gross@ts.fujitsu.com>
# Date 1327384410 -3600
# Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
# Parent 99f98e501f226825fbf16f6210b4b7834dff5df1
switch to dynamically allocated cpumask in domain_update_node_affinity()
cpumasks should be allocated dynamically rather than on the stack.
Signed-off-by: juergen.gross@ts.fujitsu.com
diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
--- a/xen/common/domain.c Tue Jan 24 06:53:06 2012 +0100
+++ b/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
@@ -333,23 +333,27 @@ struct domain *domain_create(
void domain_update_node_affinity(struct domain *d)
{
- cpumask_t cpumask;
+ cpumask_var_t cpumask;
nodemask_t nodemask = NODE_MASK_NONE;
struct vcpu *v;
unsigned int node;
- cpumask_clear(&cpumask);
+ if ( !zalloc_cpumask_var(&cpumask) )
+ return;
+
spin_lock(&d->node_affinity_lock);
for_each_vcpu ( d, v )
- cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
+ cpumask_or(cpumask, cpumask, v->cpu_affinity);
for_each_online_node ( node )
- if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
+ if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
node_set(node, nodemask);
d->node_affinity = nodemask;
spin_unlock(&d->node_affinity_lock);
+
+ free_cpumask_var(cpumask);
}
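
As an aside, the allocate/check/free pattern adopted above can be modelled
in a few lines of plain C. This is only a sketch with stand-in types — the
real code uses cpumask_var_t, zalloc_cpumask_var() and free_cpumask_var()
as in the hunk — but it shows the shape of the pattern: a potentially large
mask is allocated (zeroed) dynamically, the failure case bails out early,
and the mask is freed on the way out.

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS_MODEL 4096   /* with many CPUs an on-stack mask gets large */

typedef struct {
    unsigned long bits[NR_CPUS_MODEL / (8 * sizeof(unsigned long))];
} cpumask_model_t;

static void update_affinity_model(void)
{
    /* ~ zalloc_cpumask_var(): dynamically allocated and zero-filled */
    cpumask_model_t *mask = calloc(1, sizeof(*mask));

    if ( mask == NULL )      /* allocation can fail ... */
        return;              /* ... so give up instead of using a bad mask */

    /* ... accumulate vcpu affinities into 'mask' here ... */

    free(mask);              /* ~ free_cpumask_var() on every exit path */
}

int main(void)
{
    update_affinity_model();
    puts("done");
    return 0;
}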
* [PATCH 3 of 3] reflect cpupool in numa node affinity
2012-01-24 5:54 [PATCH 0 of 3] Reflect cpupool in numa node affinity (v4) Juergen Gross
2012-01-24 5:54 ` [PATCH 1 of 3] introduce and use common macros for selecting cpupool based cpumasks Juergen Gross
2012-01-24 5:54 ` [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity() Juergen Gross
@ 2012-01-24 5:54 ` Juergen Gross
From: Juergen Gross @ 2012-01-24 5:54 UTC (permalink / raw)
To: xen-devel
[-- Attachment #1: Type: text/plain, Size: 378 bytes --]
In order to prefer node-local memory for a domain, the NUMA node locality
info must be built according to the CPUs belonging to the domain's cpupool.
Signed-off-by: juergen.gross@ts.fujitsu.com
3 files changed, 27 insertions(+), 8 deletions(-)
xen/common/cpupool.c | 9 +++++++++
xen/common/domain.c | 16 +++++++++++++++-
xen/common/schedule.c | 10 +++-------
[-- Attachment #2: xen-staging.hg-3.patch --]
[-- Type: text/x-patch, Size: 4196 bytes --]
# HG changeset patch
# User Juergen Gross <juergen.gross@ts.fujitsu.com>
# Date 1327384424 -3600
# Node ID 574dba7570ff785d3351051a4a0a724c63066f57
# Parent 08232960ff4bed750d26e5f1ff53972fee9e0130
reflect cpupool in numa node affinity
In order to prefer node-local memory for a domain, the NUMA node locality
info must be built according to the CPUs belonging to the domain's cpupool.
Signed-off-by: juergen.gross@ts.fujitsu.com
diff -r 08232960ff4b -r 574dba7570ff xen/common/cpupool.c
--- a/xen/common/cpupool.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/cpupool.c Tue Jan 24 06:53:44 2012 +0100
@@ -220,6 +220,7 @@ static int cpupool_assign_cpu_locked(str
{
int ret;
struct cpupool *old;
+ struct domain *d;
if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
return -EBUSY;
@@ -240,6 +241,14 @@ static int cpupool_assign_cpu_locked(str
cpupool_cpu_moving = NULL;
}
cpumask_set_cpu(cpu, c->cpu_valid);
+
+ rcu_read_lock(&domlist_read_lock);
+ for_each_domain_in_cpupool(d, c)
+ {
+ domain_update_node_affinity(d);
+ }
+ rcu_read_unlock(&domlist_read_lock);
+
return 0;
}
diff -r 08232960ff4b -r 574dba7570ff xen/common/domain.c
--- a/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/domain.c Tue Jan 24 06:53:44 2012 +0100
@@ -11,6 +11,7 @@
#include <xen/ctype.h>
#include <xen/errno.h>
#include <xen/sched.h>
+#include <xen/sched-if.h>
#include <xen/domain.h>
#include <xen/mm.h>
#include <xen/event.h>
@@ -334,17 +335,29 @@ void domain_update_node_affinity(struct
void domain_update_node_affinity(struct domain *d)
{
cpumask_var_t cpumask;
+ cpumask_var_t online_affinity;
+ const cpumask_t *online;
nodemask_t nodemask = NODE_MASK_NONE;
struct vcpu *v;
unsigned int node;
if ( !zalloc_cpumask_var(&cpumask) )
return;
+ if ( !alloc_cpumask_var(&online_affinity) )
+ {
+ free_cpumask_var(cpumask);
+ return;
+ }
+
+ online = cpupool_online_cpumask(d->cpupool);
spin_lock(&d->node_affinity_lock);
for_each_vcpu ( d, v )
- cpumask_or(cpumask, cpumask, v->cpu_affinity);
+ {
+ cpumask_and(online_affinity, v->cpu_affinity, online);
+ cpumask_or(cpumask, cpumask, online_affinity);
+ }
for_each_online_node ( node )
if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
@@ -353,6 +366,7 @@ void domain_update_node_affinity(struct
d->node_affinity = nodemask;
spin_unlock(&d->node_affinity_lock);
+ free_cpumask_var(online_affinity);
free_cpumask_var(cpumask);
}
diff -r 08232960ff4b -r 574dba7570ff xen/common/schedule.c
--- a/xen/common/schedule.c Tue Jan 24 06:53:30 2012 +0100
+++ b/xen/common/schedule.c Tue Jan 24 06:53:44 2012 +0100
@@ -280,11 +280,12 @@ int sched_move_domain(struct domain *d,
SCHED_OP(VCPU2OP(v), insert_vcpu, v);
}
- domain_update_node_affinity(d);
d->cpupool = c;
SCHED_OP(DOM2OP(d), free_domdata, d->sched_priv);
d->sched_priv = domdata;
+
+ domain_update_node_affinity(d);
domain_unpause(d);
@@ -535,7 +536,6 @@ int cpu_disable_scheduler(unsigned int c
struct cpupool *c;
cpumask_t online_affinity;
int ret = 0;
- bool_t affinity_broken;
c = per_cpu(cpupool, cpu);
if ( c == NULL )
@@ -543,8 +543,6 @@ int cpu_disable_scheduler(unsigned int c
for_each_domain_in_cpupool ( d, c )
{
- affinity_broken = 0;
-
for_each_vcpu ( d, v )
{
vcpu_schedule_lock_irq(v);
@@ -556,7 +554,6 @@ int cpu_disable_scheduler(unsigned int c
printk("Breaking vcpu affinity for domain %d vcpu %d\n",
v->domain->domain_id, v->vcpu_id);
cpumask_setall(v->cpu_affinity);
- affinity_broken = 1;
}
if ( v->processor == cpu )
@@ -580,8 +577,7 @@ int cpu_disable_scheduler(unsigned int c
ret = -EAGAIN;
}
- if ( affinity_broken )
- domain_update_node_affinity(d);
+ domain_update_node_affinity(d);
}
return ret;
* Re: [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()
2012-01-24 5:54 ` [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity() Juergen Gross
@ 2012-01-24 9:33 ` Ian Campbell
2012-01-24 9:56 ` Juergen Gross
From: Ian Campbell @ 2012-01-24 9:33 UTC (permalink / raw)
To: Juergen Gross; +Cc: xen-devel@lists.xensource.com
On Tue, 2012-01-24 at 05:54 +0000, Juergen Gross wrote:
> # HG changeset patch
> # User Juergen Gross <juergen.gross@ts.fujitsu.com>
> # Date 1327384410 -3600
> # Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
> # Parent 99f98e501f226825fbf16f6210b4b7834dff5df1
> switch to dynamically allocated cpumask in
> domain_update_node_affinity()
>
> cpumasks should rather be allocated dynamically.
>
> Signed-off-by: juergen.gross@ts.fujitsu.com
>
> diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
> --- a/xen/common/domain.c Tue Jan 24 06:53:06 2012 +0100
> +++ b/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
> @@ -333,23 +333,27 @@ struct domain *domain_create(
>
> void domain_update_node_affinity(struct domain *d)
> {
> - cpumask_t cpumask;
> + cpumask_var_t cpumask;
> nodemask_t nodemask = NODE_MASK_NONE;
> struct vcpu *v;
> unsigned int node;
>
> - cpumask_clear(&cpumask);
> + if ( !zalloc_cpumask_var(&cpumask) )
> + return;
If this ends up always failing, we will never set node_affinity to
anything at all. Granted, that is already a pretty nasty situation to be
in, but perhaps setting the affinity to NODE_MASK_ALL on failure is
slightly more robust?
> +
> spin_lock(&d->node_affinity_lock);
>
> for_each_vcpu ( d, v )
> - cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
> + cpumask_or(cpumask, cpumask, v->cpu_affinity);
>
> for_each_online_node ( node )
> - if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
> + if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> node_set(node, nodemask);
>
> d->node_affinity = nodemask;
> spin_unlock(&d->node_affinity_lock);
> +
> + free_cpumask_var(cpumask);
> }
>
>
* Re: [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()
2012-01-24 9:33 ` Ian Campbell
@ 2012-01-24 9:56 ` Juergen Gross
2012-01-24 10:04 ` Ian Campbell
From: Juergen Gross @ 2012-01-24 9:56 UTC (permalink / raw)
To: Ian Campbell; +Cc: xen-devel@lists.xensource.com
On 01/24/2012 10:33 AM, Ian Campbell wrote:
> On Tue, 2012-01-24 at 05:54 +0000, Juergen Gross wrote:
>> # HG changeset patch
>> # User Juergen Gross<juergen.gross@ts.fujitsu.com>
>> # Date 1327384410 -3600
>> # Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
>> # Parent 99f98e501f226825fbf16f6210b4b7834dff5df1
>> switch to dynamically allocated cpumask in
>> domain_update_node_affinity()
>>
>> cpumasks should rather be allocated dynamically.
>>
>> Signed-off-by: juergen.gross@ts.fujitsu.com
>>
>> diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
>> --- a/xen/common/domain.c Tue Jan 24 06:53:06 2012 +0100
>> +++ b/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
>> @@ -333,23 +333,27 @@ struct domain *domain_create(
>>
>> void domain_update_node_affinity(struct domain *d)
>> {
>> - cpumask_t cpumask;
>> + cpumask_var_t cpumask;
>> nodemask_t nodemask = NODE_MASK_NONE;
>> struct vcpu *v;
>> unsigned int node;
>>
>> - cpumask_clear(&cpumask);
>> + if ( !zalloc_cpumask_var(&cpumask) )
>> + return;
> If this ends up always failing we will never set node_affinity to
> anything at all. Granted that is already a pretty nasty situation to be
> in but perhaps setting the affinity to NODE_MASK_ALL on failure is
> slightly more robust?
Hmm, I really don't know.
node_affinity is only used in alloc_heap_pages(), which will fall back to other
nodes if no memory is found on those nodes.
OTOH this implementation might change in the future.
The question is whether node_affinity should contain a subset or rather a
superset of the nodes the domain is running on.
What should be done if allocating a cpumask fails later? Should node_affinity
be set to NODE_MASK_ALL/NONE, or should it be left untouched, assuming that a
real change is rare?
Juergen
>> +
>> spin_lock(&d->node_affinity_lock);
>>
>> for_each_vcpu ( d, v )
>> - cpumask_or(&cpumask,&cpumask, v->cpu_affinity);
>> + cpumask_or(cpumask, cpumask, v->cpu_affinity);
>>
>> for_each_online_node ( node )
>> - if ( cpumask_intersects(&node_to_cpumask(node),&cpumask) )
>> + if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
>> node_set(node, nodemask);
>>
>> d->node_affinity = nodemask;
>> spin_unlock(&d->node_affinity_lock);
>> +
>> + free_cpumask_var(cpumask);
>> }
>>
>>
>
--
Juergen Gross Principal Developer Operating Systems
PDG ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28 Internet: ts.fujitsu.com
D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html
* Re: [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()
2012-01-24 9:56 ` Juergen Gross
@ 2012-01-24 10:04 ` Ian Campbell
From: Ian Campbell @ 2012-01-24 10:04 UTC (permalink / raw)
To: Juergen Gross; +Cc: xen-devel@lists.xensource.com
On Tue, 2012-01-24 at 09:56 +0000, Juergen Gross wrote:
> On 01/24/2012 10:33 AM, Ian Campbell wrote:
> > On Tue, 2012-01-24 at 05:54 +0000, Juergen Gross wrote:
> >> # HG changeset patch
> >> # User Juergen Gross<juergen.gross@ts.fujitsu.com>
> >> # Date 1327384410 -3600
> >> # Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
> >> # Parent 99f98e501f226825fbf16f6210b4b7834dff5df1
> >> switch to dynamically allocated cpumask in
> >> domain_update_node_affinity()
> >>
> >> cpumasks should rather be allocated dynamically.
> >>
> >> Signed-off-by: juergen.gross@ts.fujitsu.com
> >>
> >> diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
> >> --- a/xen/common/domain.c Tue Jan 24 06:53:06 2012 +0100
> >> +++ b/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
> >> @@ -333,23 +333,27 @@ struct domain *domain_create(
> >>
> >> void domain_update_node_affinity(struct domain *d)
> >> {
> >> - cpumask_t cpumask;
> >> + cpumask_var_t cpumask;
> >> nodemask_t nodemask = NODE_MASK_NONE;
> >> struct vcpu *v;
> >> unsigned int node;
> >>
> >> - cpumask_clear(&cpumask);
> >> + if ( !zalloc_cpumask_var(&cpumask) )
> >> + return;
> > If this ends up always failing we will never set node_affinity to
> > anything at all. Granted that is already a pretty nasty situation to be
> > in but perhaps setting the affinity to NODE_MASK_ALL on failure is
> > slightly more robust?
>
> Hmm, I really don't know.
>
> node_affinity is only used in alloc_heap_pages(), which will fall back to other
> nodes if no memory is found on those nodes.
>
> OTOH this implementation might change in the future.
>
> The question is whether node_affinity should rather contain a subset or a
> superset of the nodes the domain is running on.
If we've had an allocation failure then we are presumably unable to
calculate whether anything is a sub/superset, other than in the trivial
all-nodes or no-nodes cases. IMHO no nodes is likely to lead to
strange/unexpected behaviour, so all nodes is safer.
> What should be done if allocating a cpumask fails later? Should node_affinity
> be set to NODE_MASK_ALL/NONE or should it be left untouched assuming a real
> change is a rare thing to happen?
Leaving it alone seems reasonable, and it would mean this issue could be
fixed by simply initialising d->node_affinity to all nodes on allocation
instead of worrying about it here.
Ian.
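
For illustration, a minimal sketch of the fix Ian suggests (hypothetical,
not part of the posted series; placing it in domain_create() is an
assumption, and NODE_MASK_ALL is the all-nodes counterpart of the
NODE_MASK_NONE already used in the patch):

/* Hypothetical one-liner in domain_create(): start from "all nodes" so
 * that a later allocation failure in domain_update_node_affinity(), which
 * now returns early, merely leaves a safe superset in place. */
d->node_affinity = NODE_MASK_ALL;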